Sample records for efficient implementation techniques

  1. Implementation of GAMMON - An efficient load balancing strategy for a local computer system

    NASA Technical Reports Server (NTRS)

    Baumgartner, Katherine M.; Kling, Ralph M.; Wah, Benjamin W.

    1989-01-01

    GAMMON (Global Allocation from Maximum to Minimum in cONstant time), an efficient load-balancing algorithm, is described. GAMMON uses the available broadcast capability of multiaccess networks to implement an efficient search technique for finding hosts with maximal and minimal loads. The search technique has an average overhead which is independent of the number of participating stations. The transition from the theoretical concept to a practical, reliable, and efficient implementation is described.
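
    As a rough illustration of the max/min search idea (not the GAMMON window protocol itself), the sketch below simulates balancing steps in which the most- and least-loaded hosts are identified and a task migrates between them; the hosts, loads, and migration rule are invented for the example.

    ```python
    # Hypothetical sketch: simulate rounds of a broadcast-based search for the
    # most- and least-loaded hosts, followed by a single task migration.
    import random

    def balance_once(loads):
        """One balancing step: locate max/min load via a (simulated) broadcast."""
        # On a multiaccess network every host hears the same broadcast probe, so
        # the extrema can be resolved with contention windows instead of a scan.
        src = max(range(len(loads)), key=loads.__getitem__)   # maximally loaded
        dst = min(range(len(loads)), key=loads.__getitem__)   # minimally loaded
        if loads[src] - loads[dst] > 1:                       # migrate only if useful
            loads[src] -= 1
            loads[dst] += 1

    random.seed(0)
    loads = [random.randint(0, 20) for _ in range(8)]
    print("before:", loads)
    for _ in range(40):
        balance_once(loads)
    print("after: ", loads)
    ```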

  2. Traditional versus rule-based programming techniques - Application to the control of optional flight information

    NASA Technical Reports Server (NTRS)

    Ricks, Wendell R.; Abbott, Kathy H.

    1987-01-01

    A traditional programming technique for controlling the display of optional flight information in a civil transport cockpit is compared to a rule-based technique for the same function. This application required complex decision logic and a frequently modified rule base. The techniques are evaluated for execution efficiency and ease of implementation. The criterion used to calculate execution efficiency is the total number of steps required to isolate the true hypotheses; the criteria used to evaluate implementability are ease of modification, ease of verification, and explanation capability. It is observed that the traditional program is more efficient than the rule-based program; however, the rule-based programming technique is more applicable for improving programmer productivity.

  3. Compiler-assisted multiple instruction rollback recovery using a read buffer

    NASA Technical Reports Server (NTRS)

    Alewine, N. J.; Chen, S.-K.; Fuchs, W. K.; Hwu, W.-M.

    1993-01-01

    Multiple instruction rollback (MIR) is a technique that has been implemented in mainframe computers to provide rapid recovery from transient processor failures. Hardware-based MIR designs eliminate rollback data hazards by providing data redundancy implemented in hardware. Compiler-based MIR designs have also been developed which remove rollback data hazards directly with data-flow transformations. This paper focuses on compiler-assisted techniques to achieve multiple instruction rollback recovery. We observe that some data hazards resulting from instruction rollback can be resolved efficiently by providing an operand read buffer while others are resolved more efficiently with compiler transformations. A compiler-assisted multiple instruction rollback scheme is developed which combines hardware-implemented data redundancy with compiler-driven hazard removal transformations. Experimental performance evaluations indicate improved efficiency over previous hardware-based and compiler-based schemes.

  4. The improving efficiency frontier of inpatient rehabilitation hospitals.

    PubMed

    Harrison, Jeffrey P; Kirkpatrick, Nicole

    2011-01-01

    This study uses a linear programming technique called data envelopment analysis to identify changes in the efficiency frontier of inpatient rehabilitation hospitals after implementation of the prospective payment system. The study provides a time series analysis of the efficiency frontier for inpatient rehabilitation hospitals in 2003 immediately after implementation of PPS and then again in 2006. Results indicate that the efficiency frontier of inpatient rehabilitation hospitals increased from 84% in 2003 to 85% in 2006. Similarly, an analysis of slack or inefficiency shows improvements in output efficiency over the study period. This clearly documents that efficiency in the inpatient rehabilitation hospital industry after implementation of PPS is improving. Hospital executives, health care policymakers, taxpayers, and other stakeholders benefit from studies that improve health care efficiency.
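
    The underlying linear program can be sketched as follows: a minimal input-oriented CCR (envelopment-form) DEA model solved with scipy.optimize.linprog. The hospital inputs and outputs below are invented for illustration, not taken from the study.

    ```python
    # Illustrative DEA sketch (input-oriented CCR envelopment form); data are
    # made-up stand-ins for hospital inputs/outputs, not from the study.
    import numpy as np
    from scipy.optimize import linprog

    X = np.array([[5.0, 8.0, 4.0, 6.0],       # inputs  (rows) x units (cols)
                  [300, 500, 200, 400]])
    Y = np.array([[40.0, 55.0, 30.0, 50.0]])  # outputs (rows) x units (cols)
    n = X.shape[1]

    def ccr_efficiency(o):
        """Score theta for unit o: min theta s.t.
           X @ lam <= theta * X[:, o],  Y @ lam >= Y[:, o],  lam >= 0."""
        c = np.r_[1.0, np.zeros(n)]                  # variables: [theta, lam]
        A_in = np.hstack([-X[:, [o]], X])            # X lam - theta x_o <= 0
        A_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y])  # -Y lam <= -y_o
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.r_[np.zeros(X.shape[0]), -Y[:, o]]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] + [(0, None)] * n)
        return res.fun

    for o in range(n):
        print(f"unit {o}: efficiency = {ccr_efficiency(o):.3f}")
    ```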

  5. Human exposure assessment in the near field of GSM base-station antennas using a hybrid finite element/method of moments technique.

    PubMed

    Meyer, Frans J C; Davidson, David B; Jakobus, Ulrich; Stuchly, Maria A

    2003-02-01

    A hybrid finite-element method (FEM)/method of moments (MoM) technique is employed for specific absorption rate (SAR) calculations in a human phantom in the near field of a typical group special mobile (GSM) base-station antenna. The MoM is used to model the metallic surfaces and wires of the base-station antenna, and the FEM is used to model the heterogeneous human phantom. The advantages of each of these frequency domain techniques are, thus, exploited, leading to a highly efficient and robust numerical method for addressing this type of bioelectromagnetic problem. The basic mathematical formulation of the hybrid technique is presented. This is followed by a discussion of important implementation details, in particular the linear algebra routines for sparse, complex FEM matrices combined with dense MoM matrices. The implementation is validated by comparing results to MoM (surface equivalence principle implementation) and finite-difference time-domain (FDTD) solutions of human exposure problems. A comparison of the computational efficiency of the different techniques is presented. The FEM/MoM implementation is then used for whole-body and critical-organ SAR calculations in a phantom at different positions in the near field of a base-station antenna. This problem cannot, in general, be solved using the MoM or FDTD due to computational limitations. This paper shows that the specific hybrid FEM/MoM implementation is an efficient numerical tool for accurate assessment of human exposure in the near field of base-station antennas.

  6. Energy efficiency analysis and implementation of AES on an FPGA

    NASA Astrophysics Data System (ADS)

    Kenney, David

    The Advanced Encryption Standard (AES) was developed by Joan Daemen and Vincent Rijmen and endorsed by the National Institute of Standards and Technology in 2001. It was designed to replace the aging Data Encryption Standard (DES) and be useful for a wide range of applications with varying throughput, area, power dissipation and energy consumption requirements. Field Programmable Gate Arrays (FPGAs) are flexible and reconfigurable integrated circuits that are useful for many different applications including the implementation of AES. Though they are highly flexible, FPGAs are often less efficient than Application Specific Integrated Circuits (ASICs); they tend to operate slower, take up more space and dissipate more power. There have been many FPGA AES implementations that focus on obtaining high throughput or low area usage, but very little research has been done in the area of low-power or energy-efficient FPGA-based AES; in fact, it is rare for estimates of power dissipation to be made at all. This thesis presents a methodology to evaluate the energy efficiency of FPGA-based AES designs and proposes a novel FPGA AES implementation which is highly flexible and energy efficient. The proposed methodology is implemented as part of a novel scripting tool, the AES Energy Analyzer, which is able to fully characterize the power dissipation and energy efficiency of FPGA-based AES designs. Additionally, this thesis introduces a new FPGA power reduction technique called Opportunistic Combinational Operand Gating (OCOG) which is used in the proposed energy-efficient implementation. The AES Energy Analyzer was able to estimate the power dissipation and energy efficiency of the proposed AES design during its most commonly performed operations. It was found that the proposed implementation consumes less energy per operation than any previous FPGA-based AES implementations that included power estimations. Finally, the use of Opportunistic Combinational Operand Gating on an AES cipher was found to reduce its dynamic power consumption by up to 17% when compared to an identical design that did not employ the technique.

  7. On finite element implementation and computational techniques for constitutive modeling of high temperature composites

    NASA Technical Reports Server (NTRS)

    Saleeb, A. F.; Chang, T. Y. P.; Wilt, T.; Iskovitz, I.

    1989-01-01

    The research work performed during the past year on finite element implementation and computational techniques pertaining to high temperature composites is outlined. In the present research, two main issues are addressed: efficient geometric modeling of composite structures and expedient numerical integration techniques dealing with constitutive rate equations. In the first issue, mixed finite elements for modeling laminated plates and shells were examined in terms of numerical accuracy, locking property and computational efficiency. Element applications include (currently available) linearly elastic analysis and future extension to material nonlinearity for damage predictions and large deformations. On the material level, various integration methods to integrate nonlinear constitutive rate equations for finite element implementation were studied. These include explicit, implicit and automatic subincrementing schemes. In all cases, examples are included to illustrate the numerical characteristics of various methods that were considered.
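
    The integration schemes mentioned (explicit, implicit, and subincrementing) can be illustrated on a scalar rate equation; the Maxwell-type model and all parameters below are invented assumptions, not taken from the report.

    ```python
    # Hedged illustration of explicit, implicit, and subincrementing integration
    # of a scalar Maxwell-type constitutive rate equation
    #     d(sigma)/dt = E*d(eps)/dt - (E/eta)*sigma.
    # Model and parameters are invented, not from the report.
    import math

    E, eta = 100.0, 5.0              # stiffness and viscosity
    eps_rate = 0.01                  # constant applied strain rate
    T, N = 2.0, 20                   # time span and number of coarse steps
    dt = T / N

    def explicit_step(sig, h):       # forward Euler (conditionally stable)
        return sig + h * (E * eps_rate - (E / eta) * sig)

    def implicit_step(sig, h):       # backward Euler, closed form for this model
        return (sig + h * E * eps_rate) / (1.0 + h * E / eta)

    def subincremented_step(sig, h, nsub=10):   # subincrementing: split a coarse
        for _ in range(nsub):                   # step into stable explicit substeps
            sig = explicit_step(sig, h / nsub)
        return sig

    s_exp = s_imp = s_sub = 0.0
    for _ in range(N):
        s_exp = explicit_step(s_exp, dt)        # dt is at the stability limit here
        s_imp = implicit_step(s_imp, dt)
        s_sub = subincremented_step(s_sub, dt)

    exact = eta * eps_rate * (1.0 - math.exp(-E * T / eta))
    print(f"explicit {s_exp:.4f}  implicit {s_imp:.4f}  "
          f"subincremented {s_sub:.4f}  exact {exact:.4f}")
    ```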

  8. Advanced techniques and technology for efficient data storage, access, and transfer

    NASA Technical Reports Server (NTRS)

    Rice, Robert F.; Miller, Warner

    1991-01-01

    Advanced techniques for efficiently representing most forms of data are being implemented in practical hardware and software form through the joint efforts of three NASA centers. These techniques adapt to local statistical variations to continually provide near optimum code efficiency when representing data without error. Demonstrated in several earlier space applications, these techniques are the basis of initial NASA data compression standards specifications. Since the techniques clearly apply to most NASA science data, NASA invested in the development of both hardware and software implementations for general use. This investment includes high-speed single-chip very large scale integration (VLSI) coding and decoding modules as well as machine-transferrable software routines. The hardware chips were tested in the laboratory at data rates as high as 700 Mbits/s. A coding module's definition includes a predictive preprocessing stage and a powerful adaptive coding stage. The function of the preprocessor is to optimally process incoming data into a standard form data source that the second stage can handle. The built-in preprocessor of the VLSI coder chips is ideal for high-speed sampled data applications such as imaging and high-quality audio, but additionally, the second stage adaptive coder can be used separately with any source that can be externally preprocessed into the 'standard form'. This generic functionality assures that the applicability of these techniques and their recent high-speed implementations should be equally broad outside of NASA.
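
    The two-stage structure described (predictive preprocessor feeding an adaptive entropy coder) can be sketched as follows, assuming a simple previous-sample predictor and a fixed Golomb-Rice parameter k; the flight coders additionally adapt k block by block.

    ```python
    # Sketch of the two-stage coder: a predictive preprocessor maps samples into
    # the nonnegative 'standard form' source, then a Golomb-Rice coder encodes
    # them. A fixed Rice parameter k is an assumption made for brevity.
    def preprocess(samples):
        """Previous-sample prediction plus zigzag mapping to 0, 1, 2, ..."""
        prev, out = samples[0], []
        for s in samples[1:]:
            d = s - prev
            prev = s
            out.append(2 * d if d >= 0 else -2 * d - 1)   # zigzag mapping
        return out

    def rice_encode(values, k):
        """Each value: unary-coded quotient, then k-bit binary remainder."""
        bits = []
        for v in values:
            q, r = v >> k, v & ((1 << k) - 1)
            bits.append("1" * q + "0" + format(r, f"0{k}b"))
        return "".join(bits)

    samples = [100, 102, 101, 104, 104, 103, 107, 110]   # slowly varying data
    residuals = preprocess(samples)                      # [4, 1, 6, 0, 1, 8, 6]
    code = rice_encode(residuals, k=2)
    print(f"{len(code)} bits for {len(residuals)} samples (vs 8 bits/sample raw)")
    ```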

  9. Krylov subspace methods on supercomputers

    NASA Technical Reports Server (NTRS)

    Saad, Youcef

    1988-01-01

    A short survey of recent research on Krylov subspace methods with emphasis on implementation on vector and parallel computers is presented. Conjugate gradient methods have proven very useful on traditional scalar computers, and their popularity is likely to increase as three-dimensional models gain importance. A conservative approach to derive effective iterative techniques for supercomputers has been to find efficient parallel/vector implementations of the standard algorithms. The main source of difficulty in the incomplete factorization preconditionings is in the solution of the triangular systems at each step. A few approaches consisting of implementing efficient forward and backward triangular solutions are described in detail. Polynomial preconditioning as an alternative to standard incomplete factorization techniques is also discussed. Another efficient approach is to reorder the equations so as to improve the structure of the matrix to achieve better parallelism or vectorization. An overview of these and other ideas and their effectiveness or potential for different types of architectures is given.
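
    A minimal sketch of the polynomial-preconditioning alternative follows, assuming a Neumann-series polynomial built from the Jacobi splitting and a toy 1D Laplacian test matrix; applying it needs only matrix-vector products, avoiding the sequential triangular solves noted above.

    ```python
    # Polynomial preconditioning sketch: approximate A^{-1} by a short Neumann
    # series of the Jacobi splitting, M^{-1} = sum_{i<=s} (I - D^{-1}A)^i D^{-1}.
    # Only matrix-vector products are needed, which vectorize/parallelize well.
    import numpy as np

    n = 200
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # SPD test matrix
    b = np.ones(n)
    dinv = 1.0 / np.diag(A)

    def poly_precond(r, s=3):
        """z ~= A^{-1} r via s extra terms of the Neumann series."""
        z = dinv * r
        v = z
        for _ in range(s):
            v = v - dinv * (A @ v)       # v <- (I - D^{-1}A) v
            z = z + v
        return z

    def pcg(A, b, precond, tol=1e-8, maxit=1000):
        x = np.zeros_like(b)
        r = b - A @ x
        z = precond(r)
        p = z.copy()
        it = 0
        while np.linalg.norm(r) > tol and it < maxit:
            Ap = A @ p
            alpha = (r @ z) / (p @ Ap)
            x += alpha * p
            r_new = r - alpha * Ap
            z_new = precond(r_new)
            beta = (r_new @ z_new) / (r @ z)
            p = z_new + beta * p
            r, z = r_new, z_new
            it += 1
        return x, it

    _, it_plain = pcg(A, b, lambda r: r)          # unpreconditioned CG
    _, it_poly = pcg(A, b, poly_precond)
    print(f"CG iterations: {it_plain} plain vs {it_poly} polynomial-preconditioned")
    ```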

  10. Floating-point scaling technique for sources separation automatic gain control

    NASA Astrophysics Data System (ADS)

    Fermas, A.; Belouchrani, A.; Ait-Mohamed, O.

    2012-07-01

    Based on the floating-point representation and taking advantage of the scaling factor indetermination in blind source separation (BSS) processing, we propose a scaling technique applied to the separation matrix, to avoid saturation or weakness in the recovered source signals. This technique performs an automatic gain control in an on-line BSS environment. We demonstrate the effectiveness of this technique using the implementation of a division-free BSS algorithm with two inputs and two outputs. The proposed technique is computationally cheaper and more efficient for hardware implementation than Euclidean normalisation.
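
    A sketch of the general idea, though not the paper's exact algorithm: because BSS leaves the scale of each separated source undetermined, the separation matrix can be rescaled by powers of two, which touches only the floating-point exponents and therefore needs no division or square root.

    ```python
    # Power-of-two scaling sketch: exponent-only rescaling of a separation
    # matrix, permissible because BSS scaling factors are indeterminate.
    # Illustrates the trick in general, not the paper's specific algorithm.
    import numpy as np

    def pow2_row_scale(W):
        """Scale each row of W by 2^-e so its largest |entry| lies in [0.5, 1)."""
        scaled = np.empty_like(W)
        for i, row in enumerate(W):
            _, e = np.frexp(np.max(np.abs(row)))   # exponent of the row maximum
            scaled[i] = np.ldexp(row, -e)          # exponent-only rescaling
        return scaled

    W = np.array([[250.0, -37.5], [0.004, 0.001]])   # badly scaled rows
    Ws = pow2_row_scale(W)
    print(Ws)   # every row max now in [0.5, 1): no saturation, no fade-out
    ```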

  11. Efficient digital implementation of a conductance-based globus pallidus neuron and the dynamics analysis

    NASA Astrophysics Data System (ADS)

    Yang, Shuangming; Wei, Xile; Deng, Bin; Liu, Chen; Li, Huiyan; Wang, Jiang

    2018-03-01

    Balance between the biological plausibility of dynamical activities and computational efficiency is one of the challenging problems in computational neuroscience and neural system engineering. This paper proposes a set of efficient methods for the hardware realization of a conductance-based neuron model with the relevant dynamics, targeting reproduction of biological behaviors with low-cost implementation on a digital programmable platform; the methods can be applied to a wide range of conductance-based neuron models. Modified GP neuron models for efficient hardware implementation are presented to reproduce reliable pallidal dynamics, which decode the information of the basal ganglia and regulate movement-disorder-related voluntary activities. Implementation results on a field-programmable gate array (FPGA) demonstrate that the proposed techniques and models can reduce the resource cost significantly and reproduce the biological dynamics accurately. In addition, biological behaviors under weak network coupling are explored on the proposed platform, and a theoretical analysis is made of the biological characteristics of the structured pallidal oscillator and network. The implementation techniques provide an essential step towards large-scale neural networks for exploring dynamical mechanisms in real time. Furthermore, the proposed methodology makes the FPGA-based system a powerful platform for the investigation of neurodegenerative diseases and real-time control of bio-inspired neuro-robotics.

  12. Implementation and evaluation of ILLIAC 4 algorithms for multispectral image processing

    NASA Technical Reports Server (NTRS)

    Swain, P. H.

    1974-01-01

    Data concerning a multidisciplinary and multi-organizational effort to implement multispectral data analysis algorithms on a revolutionary computer, the Illiac 4, are reported. The effectiveness and efficiency of implementing the digital multispectral data analysis techniques for producing useful land use classifications from satellite collected data were demonstrated.

  13. Efficient implementation of real-time programs under the VAX/VMS operating system

    NASA Technical Reports Server (NTRS)

    Johnson, S. C.

    1985-01-01

    Techniques for writing efficient real-time programs under the VAX/VMS operating system are presented. Basic operations are presented for executing at real-time priority and for avoiding needless processing delays. A highly efficient technique for accessing physical devices by mapping to the input/output space and accessing the device registers directly is described. To illustrate the application of the technique, examples are included of different uses of the technique on three devices in the Langley Avionics Integration Research Lab (AIRLAB): the KW11-K dual programmable real-time clock, the Parallel Communications Link (PCL11-B) communication system, and the Datacom Synchronization Network. Timing data are included to demonstrate the performance improvements realized with these applications of the technique.

  14. An efficient implementation of Forward-Backward Least-Mean-Square Adaptive Line Enhancers

    NASA Technical Reports Server (NTRS)

    Yeh, H.-G.; Nguyen, T. M.

    1995-01-01

    An efficient implementation of the forward-backward least-mean-square (FBLMS) adaptive line enhancer is presented in this article. Without changing the characteristics of the FBLMS adaptive line enhancer, the proposed implementation technique reduces multiplications by 25% and additions by 12.5% over two successive time samples, in comparison with direct implementation, in both prediction and weight control. The proposed FBLMS architecture and algorithm can be applied to digital receivers for enhancing the signal-to-noise ratio to allow fast carrier acquisition and tracking in both stationary and nonstationary environments.
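
    For context, a plain (forward-only) LMS adaptive line enhancer is sketched below; the article's FBLMS variant adds backward prediction to cut the operation count, which this sketch does not reproduce.

    ```python
    # Standard LMS adaptive line enhancer: predict the input from a delayed copy
    # of itself, so the narrowband (correlated) component emerges at the
    # predictor output. Plain forward LMS only -- not the FBLMS optimization.
    import numpy as np

    rng = np.random.default_rng(1)
    n = np.arange(4000)
    x = np.sin(2 * np.pi * 0.05 * n) + rng.normal(0, 1.0, n.size)  # tone + noise

    L, mu, delay = 32, 0.001, 1          # taps, step size, decorrelation delay
    w = np.zeros(L)
    y = np.zeros(x.size)                 # enhanced (predicted) output
    for k in range(L + delay, x.size):
        u = x[k - delay - L:k - delay][::-1]   # delayed tap vector
        y[k] = w @ u
        e = x[k] - y[k]                        # prediction error (noise path)
        w += 2 * mu * e * u                    # LMS weight update

    # After convergence the tone dominates y; the wideband noise stays in e.
    print("input power ", np.mean(x[2000:] ** 2).round(3))
    print("output power", np.mean(y[2000:] ** 2).round(3))
    ```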

  15. Compiler-Assisted Multiple Instruction Rollback Recovery Using a Read Buffer. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Alewine, Neal Jon

    1993-01-01

    Multiple instruction rollback (MIR) is a technique to provide rapid recovery from transient processor failures and was implemented in hardware by researchers and also in mainframe computers. Hardware-based MIR designs eliminate rollback data hazards by providing data redundancy implemented in hardware. Compiler-based MIR designs were also developed which remove rollback data hazards directly with data-flow manipulations, thus eliminating the need for most data redundancy hardware. Compiler-assisted techniques to achieve multiple instruction rollback recovery are addressed. It is observed that some data hazards resulting from instruction rollback can be resolved more efficiently by providing hardware redundancy while others are resolved more efficiently with compiler transformations. A compiler-assisted multiple instruction rollback scheme is developed which combines hardware-implemented data redundancy with compiler-driven hazard removal transformations. Experimental performance evaluations were conducted which indicate improved efficiency over previous hardware-based and compiler-based schemes. Various enhancements to the compiler transformations and to the data redundancy hardware developed for the compiler-assisted MIR scheme are described and evaluated. The final topic deals with the application of compiler-assisted MIR techniques to aid in exception repair and branch repair in a speculative execution architecture.

  16. A Comparison of LBG and ADPCM Speech Compression Techniques

    NASA Astrophysics Data System (ADS)

    Bachu, Rajesh G.; Patel, Jignasa; Barkana, Buket D.

    Speech compression is the technology of converting human speech into an efficiently encoded representation that can later be decoded to produce a close approximation of the original signal. In all speech there is a degree of predictability, and speech coding techniques exploit this to reduce bit rates while still maintaining a suitable level of quality. This paper is a study and implementation of the Linde-Buzo-Gray (LBG) and Adaptive Differential Pulse Code Modulation (ADPCM) algorithms for compressing speech signals. Here we implemented the methods using MATLAB 7.0. The methods gave good results and performance in compressing the speech, and listening tests showed that efficient, high-quality coding is achieved.
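
    A minimal LBG training loop (binary codebook splitting plus Lloyd iterations) might look like the following NumPy sketch; the training vectors are synthetic stand-ins for speech frames, and the paper's MATLAB details are not reproduced.

    ```python
    # Minimal LBG (generalized Lloyd) vector-quantizer training sketch:
    # split the codebook by perturbation, then alternate nearest-neighbour
    # assignment and centroid updates. Training data are synthetic.
    import numpy as np

    def lbg(train, size, eps=0.01, tol=1e-6):
        codebook = train.mean(axis=0, keepdims=True)      # one initial centroid
        while len(codebook) < size:
            codebook = np.vstack([codebook * (1 + eps),   # binary split
                                  codebook * (1 - eps)])
            prev = np.inf
            while True:                                   # Lloyd iterations
                d = ((train[:, None, :] - codebook[None]) ** 2).sum(-1)
                nearest = d.argmin(axis=1)
                dist = d[np.arange(len(train)), nearest].mean()
                if prev - dist < tol * dist:
                    break
                prev = dist
                for j in range(len(codebook)):            # centroid update
                    members = train[nearest == j]
                    if len(members):
                        codebook[j] = members.mean(axis=0)
        return codebook

    rng = np.random.default_rng(0)
    frames = rng.normal(size=(2000, 8))    # stand-in for speech feature vectors
    cb = lbg(frames, size=16)
    print(cb.shape)                        # (16, 8) codebook
    ```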

  17. Energy conservation and management system using efficient building automation

    NASA Astrophysics Data System (ADS)

    Ahmed, S. Faiz; Hazry, D.; Tanveer, M. Hassan; Joyo, M. Kamran; Warsi, Faizan A.; Kamarudin, H.; Wan, Khairunizam; Razlan, Zuradzman M.; Shahriman A., B.; Hussain, A. T.

    2015-05-01

    In countries where the gap between electricity demand and supply is huge and people are forced to endure increasing hours of load shedding, unnecessary consumption of electricity makes matters even worse, so the importance of and need for electricity conservation increase greatly. This paper outlines a step towards the conservation of energy in general and electricity in particular by employing an efficient building automation technique. It should be noted that by careful design and implementation of the building automation system, energy consumption can be reduced by up to 30% to 40%, which makes a huge difference for energy saving. In this study the above-mentioned concept is verified by performing experiments on a prototype experimental room and by implementing an efficient building automation technique. For this automation, a Programmable Logic Controller (PLC) is employed as the main controller, monitoring various system parameters and controlling appliances as required. The hardware test run and experimental findings further clarify and prove the concept. An added advantage of this project is that it can be implemented in both small and medium domestic homes, greatly reducing the overall unnecessary load on the utility provider.

  18. View from ... JSAP Spring meeting 2014: Strive for efficiency

    NASA Astrophysics Data System (ADS)

    Horiuchi, Noriaki

    2014-06-01

    A high energy conversion efficiency and a low fabrication cost are required to make the widespread implementation of solar cells attractive. Researchers are striving to enhance cell performance by developing heterojunction techniques, introducing photonic-crystal structures and proposing new device designs.

  19. Photochemical grid model implementation and application of VOC, NOx, and O3 source apportionment

    EPA Science Inventory

    For the purposes of developing optimal emissions control strategies, efficient approaches are needed to identify the major sources or groups of sources that contribute to elevated ozone (O3) concentrations. Source-based apportionment techniques implemented in photochemical grid m...

  20. A Survey of Shape Parameterization Techniques

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.

    1999-01-01

    This paper provides a survey of shape parameterization techniques for multidisciplinary optimization and highlights some emerging ideas. The survey focuses on the suitability of available techniques for complex configurations, with suitability criteria based on the efficiency, effectiveness, ease of implementation, and availability of analytical sensitivities for geometry and grids. The paper also contains a section on field grid regeneration, grid deformation, and sensitivity analysis techniques.

  1. Speeding Up the Bilateral Filter: A Joint Acceleration Way.

    PubMed

    Dai, Longquan; Yuan, Mengke; Zhang, Xiaopeng

    2016-06-01

    Computational complexity of the brute-force implementation of the bilateral filter (BF) depends on its filter kernel size. To achieve a constant-time BF whose complexity is irrelevant to the kernel size, many techniques have been proposed, such as 2D box filtering, dimension promotion, and the shiftability property. Although each of the above techniques suffers from accuracy and efficiency problems, previous algorithm designers typically adopted only one of them when assembling fast implementations, owing to the difficulty of combining them. Hence, no joint exploitation of these techniques had been proposed to construct a new cutting-edge implementation that solves these problems. Jointly employing five techniques: kernel truncation, best N-term approximation, as well as the previous 2D box filtering, dimension promotion, and shiftability property, we propose a unified framework to transform BF with arbitrary spatial and range kernels into a set of 3D box filters that can be computed in linear time. To the best of our knowledge, our algorithm is the first method that can integrate all these acceleration techniques and, therefore, can draw upon one another's strong points to overcome deficiencies. The strength of our method has been corroborated by several carefully designed experiments. In particular, the filtering accuracy is significantly improved without sacrificing running-time efficiency.
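
    The kernel-size-dependent baseline being accelerated is the brute-force filter, sketched here in 1D for clarity; the constant-time techniques above re-express this computation with box filters.

    ```python
    # Brute-force bilateral filter on a 1D signal: the O(kernel-size) baseline
    # that constant-time methods re-express as box filters.
    import numpy as np

    def bilateral_1d(x, radius, sigma_s, sigma_r):
        y = np.empty_like(x)
        offsets = np.arange(-radius, radius + 1)
        spatial = np.exp(-offsets**2 / (2 * sigma_s**2))         # spatial kernel
        for i in range(len(x)):
            j = np.clip(i + offsets, 0, len(x) - 1)
            range_w = np.exp(-(x[j] - x[i])**2 / (2 * sigma_r**2))  # range kernel
            w = spatial * range_w
            y[i] = (w * x[j]).sum() / w.sum()                    # normalized
        return y

    rng = np.random.default_rng(0)
    step = np.r_[np.zeros(100), np.ones(100)] + rng.normal(0, 0.1, 200)
    smooth = bilateral_1d(step, radius=10, sigma_s=4.0, sigma_r=0.3)
    print(smooth[95:105].round(2))   # edge at index 100 preserved, noise reduced
    ```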

  2. SU-E-I-68: Practical Considerations On Implementation of the Image Gently Pediatric CT Protocols

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, J; Adams, C; Lumby, C

    Purpose: One limitation associated with the Image Gently pediatric CT protocols is practical implementation of the recommended manual techniques. Inconsistency as a result of differing practices among technologists is a possibility. An additional concern is the added risk of data error that would result in over- or underexposure. Automatic Exposure Control (AEC) features automatically reduce radiation for children; however, they do not work efficiently for patients of very small or relatively large size. This study aims to implement the Image Gently pediatric CT protocols in the practical setting while maintaining the use of AEC features for pediatric patients of varying size. Methods: Anthropomorphic abdomen phantoms were scanned in a CT scanner using the Image Gently pediatric protocols, the AEC technique with a fixed adult baseline, and automatic protocols with various baselines. The baselines were adjusted corresponding to patient age, weight and posterior-anterior thickness to match the Image Gently pediatric CT manual techniques. CTDIvol was recorded for each examination. Image noise was measured and recorded for image quality comparison. Clinical images were evaluated by pediatric radiologists. Results: By adjusting vendor default baselines used in the automatic techniques, radiation dose and image quality can match those of the Image Gently manual techniques. In practice, this can be achieved by dividing pediatric patients into three major groups for technologist reference: infant, small child, and large child. Further division can be done but will increase the number of CT protocols. For each group, AEC can efficiently adjust acquisition techniques for children. This implementation significantly overcomes the limitation of the Image Gently manual techniques. Conclusion: Considering the effectiveness in clinical practice, Image Gently pediatric CT protocols can be implemented in accordance with AEC techniques, with adjusted baselines, to achieve the goal of providing the most appropriate radiation dose for pediatric patients of varying sizes.

  3. 622-Mbps Orthogonal Frequency Division Multiplexing (OFDM) Digital Modem Implemented

    NASA Technical Reports Server (NTRS)

    Kifle, Muli; Bizon, Thomas P.; Nguyen, Nam T.; Tran, Quang K.; Mortensen, Dale J.

    2002-01-01

    Future generation space communications systems feature significantly higher data rates and relatively smaller frequency spectrum allocations than systems currently deployed. This requires the application of bandwidth- and power-efficient signal transmission techniques. There are a number of approaches to implementing such techniques, including analog, digital, mixed-signal, single-channel, or multichannel systems. In general, the digital implementations offer more advantages; however, a fully digital implementation is very difficult because of the very high clock speeds required. Multichannel techniques are used to reduce the sampling rate. One such technique, multicarrier modulation, divides the data into a number of low-rate channels that are stacked in frequency. Orthogonal frequency division multiplexing (OFDM), a form of multicarrier modulation, is being proposed for numerous systems, including mobile wireless and digital subscriber link communication systems. In response to this challenge, NASA Glenn Research Center's Communication Technology Division has developed an OFDM digital modem (modulator and demodulator) with an aggregate information throughput of 622 Mbps. The basic OFDM waveform is constructed by dividing an incoming data stream into four channels, each using either 16-ary quadrature amplitude modulation (16-QAM) or 8-phase shift keying (8-PSK). An efficient implementation for an OFDM architecture is being achieved using the combination of a discrete Fourier transform (DFT) at the transmitter to digitally stack the individual carriers, inverse DFT at the receiver to perform the frequency translations, and a polyphase filter to facilitate the pulse shaping.
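
    The DFT-based carrier stacking can be sketched in a few lines; this toy example uses QPSK on four subcarriers over an ideal channel, a deliberate simplification of the modem's 16-QAM/8-PSK channels.

    ```python
    # Toy OFDM modulator/demodulator showing DFT-based carrier stacking.
    # QPSK on 4 subcarriers and an ideal channel are simplifying assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    n_sub, n_sym, cp = 4, 6, 2                   # subcarriers, symbols, prefix
    bits = rng.integers(0, 2, (n_sym, n_sub, 2))
    qpsk = ((2 * bits[..., 0] - 1) + 1j * (2 * bits[..., 1] - 1)) / np.sqrt(2)

    # Transmitter: an inverse DFT stacks the per-channel symbols in frequency,
    # then a cyclic prefix is prepended to each time-domain symbol.
    tx = np.fft.ifft(qpsk, axis=1)
    tx = np.hstack([tx[:, -cp:], tx]).ravel()

    # Receiver: strip the prefix and apply the forward DFT to separate carriers.
    rx = tx.reshape(n_sym, n_sub + cp)[:, cp:]
    recovered = np.fft.fft(rx, axis=1)

    print(np.allclose(recovered, qpsk))          # True over the ideal channel
    ```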

  4. On-Chip Neural Data Compression Based On Compressed Sensing With Sparse Sensing Matrices.

    PubMed

    Zhao, Wenfeng; Sun, Biao; Wu, Tong; Yang, Zhi

    2018-02-01

    On-chip neural data compression is an enabling technique for wireless neural interfaces that suffer from insufficient bandwidth and power budgets to transmit the raw data. The data compression algorithm and its implementation should be power and area efficient and functionally reliable over different datasets. Compressed sensing is an emerging technique that has been applied to compress various neurophysiological data. However, the state-of-the-art compressed sensing (CS) encoders leverage random but dense binary measurement matrices, which incur substantial implementation costs on both power and area that could offset the benefits from the reduced wireless data rate. In this paper, we propose two CS encoder designs based on sparse measurement matrices that could lead to efficient hardware implementation. Specifically, two different approaches for the construction of sparse measurement matrices, i.e., the deterministic quasi-cyclic array code (QCAC) matrix and -sparse random binary matrix [-SRBM] are exploited. We demonstrate that the proposed CS encoders lead to comparable recovery performance. And efficient VLSI architecture designs are proposed for QCAC-CS and -SRBM encoders with reduced area and total power consumption.
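
    The hardware appeal of sparse binary measurement matrices can be seen in a short sketch: encoding reduces to a few additions per incoming sample and needs no multipliers. The matrix below is a generic random sparse binary one, not the paper's QCAC construction.

    ```python
    # Why sparse binary measurement matrices are cheap in hardware: y = Phi @ x
    # becomes a handful of additions per sample, with no multipliers. The
    # matrix here is a generic random sparse binary one, not the paper's QCAC.
    import numpy as np

    rng = np.random.default_rng(0)
    N, M, d = 256, 64, 4         # signal length, measurements, ones per column

    # Each input sample feeds exactly d of the M accumulators.
    phi_rows = np.array([rng.choice(M, size=d, replace=False) for _ in range(N)])

    def cs_encode(x):
        y = np.zeros(M)
        for i, sample in enumerate(x):   # accumulate-only encoding loop
            y[phi_rows[i]] += sample     # d additions per incoming sample
        return y

    x = np.zeros(N)
    x[rng.choice(N, 8, replace=False)] = rng.normal(size=8)  # sparse signal
    y = cs_encode(x)
    print(y.shape, np.count_nonzero(y))  # 64 measurements from 256 samples
    ```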

  5. Traditional versus rule-based programming techniques: Application to the control of optional flight information

    NASA Technical Reports Server (NTRS)

    Ricks, Wendell R.; Abbott, Kathy H.

    1987-01-01

    To the software design community, the concern over the costs associated with a program's execution time and implementation is great. It is always desirable, and sometimes imperative, that the proper programming technique is chosen which minimizes all costs for a given application or type of application. A study is described that compared cost-related factors associated with traditional programming techniques to rule-based programming techniques for a specific application. The results of this study favored the traditional approach regarding execution efficiency, but favored the rule-based approach regarding programmer productivity (implementation ease). Although this study examined a specific application, the results should be widely applicable.

  6. Financial Planning for Energy Efficiency Investments.

    ERIC Educational Resources Information Center

    Business Officer, 1984

    1984-01-01

    Financing options for energy efficiency investments by colleges are outlined by the Energy Task Force of three higher education associations. It is suggested that alternative financing techniques generate a positive cash flow and allow campuses to implement conservation despite fiscal constraints. Since energy conservation saves money, the savings…

  7. On the VLSI design of a pipeline Reed-Solomon decoder using systolic arrays

    NASA Technical Reports Server (NTRS)

    Shao, H. M.; Deutsch, L. J.; Reed, I. S.

    1987-01-01

    A new very large scale integration (VLSI) design of a pipeline Reed-Solomon decoder is presented. The transform decoding technique used in a previous article is replaced by a time domain algorithm through a detailed comparison of their VLSI implementations. A new architecture that implements the time domain algorithm permits efficient pipeline processing with reduced circuitry. Erasure correction capability is also incorporated with little additional complexity. By using a multiplexing technique, a new implementation of Euclid's algorithm maintains the throughput rate with less circuitry. Such improvements result in both enhanced capability and significant reduction in silicon area.

  8. On the VLSI design of a pipeline Reed-Solomon decoder using systolic arrays

    NASA Technical Reports Server (NTRS)

    Shao, Howard M.; Reed, Irving S.

    1988-01-01

    A new very large scale integration (VLSI) design of a pipeline Reed-Solomon decoder is presented. The transform decoding technique used in a previous article is replaced by a time domain algorithm through a detailed comparison of their VLSI implementations. A new architecture that implements the time domain algorithm permits efficient pipeline processing with reduced circuitry. Erasure correction capability is also incorporated with little additional complexity. By using a multiplexing technique, a new implementation of Euclid's algorithm maintains the throughput rate with less circuitry. Such improvements result in both enhanced capability and significant reduction in silicon area.

  9. Portable parallel portfolio optimization in the Aurora Financial Management System

    NASA Astrophysics Data System (ADS)

    Laure, Erwin; Moritsch, Hans

    2001-07-01

    Financial planning problems are formulated as large scale, stochastic, multiperiod, tree structured optimization problems. An efficient technique for solving this kind of problem is the nested Benders decomposition method. In this paper we present a parallel, portable, asynchronous implementation of this technique. To achieve our portability goals we elected the programming language Java for our implementation and used a high level Java based framework, called OpusJava, for expressing the parallelism potential as well as synchronization constraints. Our implementation is embedded within a modular decision support tool for portfolio and asset liability management, the Aurora Financial Management System.

  10. Parallel heterogeneous architectures for efficient OMP compressive sensing reconstruction

    NASA Astrophysics Data System (ADS)

    Kulkarni, Amey; Stanislaus, Jerome L.; Mohsenin, Tinoosh

    2014-05-01

    Compressive Sensing (CS) is a novel scheme, in which a signal that is sparse in a known transform domain can be reconstructed using fewer samples. The signal reconstruction techniques are computationally intensive and have sluggish performance, which makes them impractical for real-time processing applications. The paper presents novel architectures for the Orthogonal Matching Pursuit algorithm, one of the popular CS reconstruction algorithms. We show the implementation results of the proposed architectures on FPGA, ASIC and on a custom many-core platform. For the FPGA and ASIC implementations, a novel thresholding method is used to reduce the processing time for the optimization problem by at least 25%. For the custom many-core platform, efficient parallelization techniques are applied to reconstruct signals with variant signal lengths N and sparsity m. The algorithm is divided into three kernels. Each kernel is parallelized to reduce execution time, whereas efficient reuse of the matrix operators allows us to reduce area. Matrix operations are efficiently parallelized by taking advantage of blocked algorithms. For demonstration purposes, all architectures reconstruct a 256-length signal with maximum sparsity of 8 using 64 measurements. Implementation on a Xilinx Virtex-5 FPGA requires 27.14 μs to reconstruct the signal using basic OMP, whereas with the thresholding method it requires 18 μs. The ASIC implementation reconstructs the signal in 13 μs. However, our custom many-core, operating at 1.18 GHz, takes 18.28 μs to complete. Our results show that, compared to previously published work on the same algorithm and matrix size, the proposed architectures for FPGA and ASIC implementations perform 1.3x and 1.8x faster, respectively. Also, the proposed many-core implementation performs 3000x faster than the CPU and 2000x faster than the GPU.
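
    A compact reference OMP, matching the demonstration setup above (length-256 signal, sparsity 8, 64 measurements) but without the paper's thresholding or parallelization, might look like this:

    ```python
    # Plain Orthogonal Matching Pursuit in NumPy: greedy column selection
    # followed by a least-squares re-fit on the growing support.
    import numpy as np

    def omp(A, y, sparsity):
        residual, support = y.copy(), []
        for _ in range(sparsity):
            # Greedy step: pick the column most correlated with the residual.
            support.append(int(np.argmax(np.abs(A.T @ residual))))
            # Least-squares re-fit on the enlarged support.
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x = np.zeros(A.shape[1])
        x[support] = coef
        return x

    rng = np.random.default_rng(0)
    N, M, m = 256, 64, 8
    A = rng.normal(size=(M, N)) / np.sqrt(M)      # random measurement matrix
    x_true = np.zeros(N)
    x_true[rng.choice(N, m, replace=False)] = rng.normal(size=m)
    x_hat = omp(A, A @ x_true, sparsity=m)
    print("relative error:",
          np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
    ```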

  11. Lean principles optimize on-time vascular surgery operating room starts and decrease resident work hours.

    PubMed

    Warner, Courtney J; Walsh, Daniel B; Horvath, Alexander J; Walsh, Teri R; Herrick, Daniel P; Prentiss, Steven J; Powell, Richard J

    2013-11-01

    Lean process improvement techniques are used in industry to improve efficiency and quality while controlling costs. These techniques are less commonly applied in health care. This study assessed the effectiveness of Lean principles on first case on-time operating room starts and quantified effects on resident work hours. Standard process improvement techniques (DMAIC methodology: define, measure, analyze, improve, control) were used to identify causes of delayed vascular surgery first case starts. Value stream maps and process flow diagrams were created. Process data were analyzed with Pareto and control charts. High-yield changes were identified and simulated in computer and live settings prior to implementation. The primary outcome measure was the proportion of on-time first case starts; secondary outcomes included hospital costs, resident rounding time, and work hours. Data were compared with existing benchmarks. Prior to implementation, 39% of first cases started on time. Process mapping identified late resident arrival in preoperative holding as a cause of delayed first case starts. Resident rounding process inefficiencies were identified and changed through the use of checklists, standardization, and elimination of nonvalue-added activity. Following implementation of process improvements, first case on-time starts improved to 71% at 6 weeks (P = .002). Improvement was sustained with an 86% on-time rate at 1 year (P < .001). Resident rounding time was reduced by 33% (from 70 to 47 minutes). At 9 weeks following implementation, these changes generated an opportunity cost potential of $12,582. Use of Lean principles allowed rapid identification and implementation of perioperative process changes that improved efficiency and resulted in significant cost savings. This improvement was sustained at 1 year. Downstream effects included improved resident efficiency with decreased work hours. Copyright © 2013 Society for Vascular Surgery. Published by Mosby, Inc. All rights reserved.

  12. Experimental demonstration of selective quantum process tomography on an NMR quantum information processor

    NASA Astrophysics Data System (ADS)

    Gaikwad, Akshay; Rehal, Diksha; Singh, Amandeep; Arvind; Dorai, Kavita

    2018-02-01

    We present the NMR implementation of a scheme for selective and efficient quantum process tomography without ancilla. We generalize this scheme such that it can be implemented efficiently using only a set of measurements involving product operators. The method allows us to estimate any element of the quantum process matrix to a desired precision, provided a set of quantum states can be prepared efficiently. Our modified technique requires fewer experimental resources as compared to the standard implementation of selective and efficient quantum process tomography, as it exploits the special nature of NMR measurements to allow us to compute specific elements of the process matrix by a restrictive set of subsystem measurements. To demonstrate the efficacy of our scheme, we experimentally tomograph the processes corresponding to "no operation," a controlled-NOT (CNOT), and a controlled-Hadamard gate on a two-qubit NMR quantum information processor, with high fidelities.

  13. Compiler-assisted multiple instruction rollback recovery using a read buffer

    NASA Technical Reports Server (NTRS)

    Alewine, Neal J.; Chen, Shyh-Kwei; Fuchs, W. Kent; Hwu, Wen-Mei W.

    1995-01-01

    Multiple instruction rollback (MIR) is a technique that has been implemented in mainframe computers to provide rapid recovery from transient processor failures. Hardware-based MIR designs eliminate rollback data hazards by providing data redundancy implemented in hardware. Compiler-based MIR designs have also been developed which remove rollback data hazards directly with data-flow transformations. This paper describes compiler-assisted techniques to achieve multiple instruction rollback recovery. We observe that some data hazards resulting from instruction rollback can be resolved efficiently by providing an operand read buffer while others are resolved more efficiently with compiler transformations. The compiler-assisted scheme presented consists of hardware that is less complex than shadow files, history files, history buffers, or delayed write buffers, while experimental evaluation indicates performance improvement over compiler-based schemes.

  14. Physical-chemical treatment of rainwater runoff in recovery and recycling companies: Pilot-scale optimization.

    PubMed

    Blondeel, Evelyne; Depuydt, Veerle; Cornelis, Jasper; Chys, Michael; Verliefde, Arne; Van Hulle, Stijn Wim Henk

    2015-01-01

    Pilot-scale optimisation of different possible physical-chemical water treatment techniques was performed on the wastewater originating from three different recovery and recycling companies in order to select a (combination of) technique(s) for further full-scale implementation. This implementation is necessary to reduce the concentrations of both common pollutants (such as COD, nutrients and suspended solids) and potentially toxic metals, polyaromatic hydrocarbons and polychlorinated biphenyls to below the discharge limits. The pilot-scale tests (at 250 L h(-1) scale) demonstrate that sand-anthracite filtration and coagulation/flocculation are interesting as first treatment techniques, with removal efficiencies of about 19% to 66% (sand-anthracite filtration) and 18% to 60% (coagulation/flocculation), respectively, for the above-mentioned pollutants (metals, polyaromatic hydrocarbons and polychlorinated biphenyls). If a second treatment step is required, the implementation of an activated carbon filter is recommended (about 46% to 86% additional removal is obtained).

  15. Costing behavioral interventions: a practical guide to enhance translation.

    PubMed

    Ritzwoller, Debra P; Sukhanova, Anna; Gaglio, Bridget; Glasgow, Russell E

    2009-04-01

    Cost and cost effectiveness of behavioral interventions are critical parts of dissemination and implementation into non-academic settings. Due to the lack of indicative data and policy makers' increasing demands for both program effectiveness and efficiency, cost analyses can serve as valuable tools in the evaluation process. To stimulate and promote broader use of practical techniques that can be used to efficiently estimate the implementation costs of behavioral interventions, we propose a set of analytic steps that can be employed across a broad range of interventions. Intervention costs must be distinguished from research, development, and recruitment costs. The inclusion of sensitivity analyses is recommended to understand the implications of implementation of the intervention into different settings using different intervention resources. To illustrate these procedures, we use data from a smoking reduction practical clinical trial to describe the techniques and methods used to estimate and evaluate the costs associated with the intervention. Estimated intervention costs per participant were $419, with a range of $276 to $703, depending on the number of participants.

  16. Obfuscatable multi-recipient re-encryption for secure privacy-preserving personal health record services.

    PubMed

    Shi, Yang; Fan, Hongfei; Xiong, Guoyue

    2015-01-01

    With the rapid development of cloud computing techniques, it is attractive for personal health record (PHR) service providers to deploy their PHR applications and store the personal health data in the cloud. However, there could be a serious privacy leakage if the cloud-based system is intruded by attackers, which makes it necessary for the PHR service provider to encrypt all patients' health data on cloud servers. Existing techniques are insufficiently secure under circumstances where advanced threats are considered, or being inefficient when many recipients are involved. Therefore, the objectives of our solution are (1) providing a secure implementation of re-encryption in white-box attack contexts and (2) assuring the efficiency of the implementation even in multi-recipient cases. We designed the multi-recipient re-encryption functionality by randomness-reusing and protecting the implementation by obfuscation. The proposed solution is secure even in white-box attack contexts. Furthermore, a comparison with other related work shows that the computational cost of the proposed solution is lower. The proposed technique can serve as a building block for supporting secure, efficient and privacy-preserving personal health record service systems.

  17. SIMD Optimization of Linear Expressions for Programmable Graphics Hardware

    PubMed Central

    Bajaj, Chandrajit; Ihm, Insung; Min, Jungki; Oh, Jinsang

    2009-01-01

    The increased programmability of graphics hardware allows efficient graphical processing unit (GPU) implementations of a wide range of general computations on commodity PCs. An important factor in such implementations is how to fully exploit the SIMD computing capacities offered by modern graphics processors. Linear expressions in the form of ȳ = Ax̄ + b̄, where A is a matrix, and x̄, ȳ and b̄ are vectors, constitute one of the most basic operations in many scientific computations. In this paper, we propose a SIMD code optimization technique that enables efficient shader codes to be generated for evaluating linear expressions. It is shown that performance can be improved considerably by efficiently packing arithmetic operations into four-wide SIMD instructions through reordering of the operations in linear expressions. We demonstrate that the presented technique can be used effectively for programming both vertex and pixel shaders for a variety of mathematical applications, including integrating differential equations and solving a sparse linear system of equations using iterative methods. PMID:19946569

  18. Efficient implementation of the many-body Reactive Bond Order (REBO) potential on GPU

    NASA Astrophysics Data System (ADS)

    Trędak, Przemysław; Rudnicki, Witold R.; Majewski, Jacek A.

    2016-09-01

    The second-generation Reactive Bond Order (REBO) empirical potential is commonly used to accurately model a wide range of hydrocarbon materials. It is also extensible to other atom types and interactions. The REBO potential assumes a complex multi-body interaction model that is difficult to represent efficiently in the SIMD or SIMT programming model. Hence, despite its importance, no efficient GPGPU implementation has been developed for this potential. Here we present a detailed description of a highly efficient GPGPU implementation of a molecular dynamics algorithm using the REBO potential. The presented algorithm takes advantage of rarely used properties of the SIMT architecture of a modern GPU to solve difficult synchronization issues that arise in computations of a multi-body potential. Techniques developed for this problem may also be used to achieve efficient solutions of different problems. The performance of the proposed algorithm is assessed using a range of model systems and compared to a highly optimized CPU implementation (both single core and OpenMP) available in the LAMMPS package. These experiments show up to a 6x improvement in force computation time using a single processor of the NVIDIA Tesla K80 compared to a high-end 16-core Intel Xeon processor.

  19. Hardware implementation of hierarchical volume subdivision-based elastic registration.

    PubMed

    Dandekar, Omkar; Walimbe, Vivek; Shekhar, Raj

    2006-01-01

    Real-time, elastic and fully automated 3D image registration is critical to the efficiency and effectiveness of many image-guided diagnostic and treatment procedures relying on multimodality image fusion or serial image comparison. True, real-time performance will make many 3D image registration-based techniques clinically viable. Hierarchical volume subdivision-based image registration techniques are inherently faster than most elastic registration techniques, e.g. free-form deformation (FFD)-based techniques, and are more amenable for achieving real-time performance through hardware acceleration. Our group has previously reported an FPGA-based architecture for accelerating FFD-based image registration. In this article we show how our existing architecture can be adapted to support hierarchical volume subdivision-based image registration. A proof-of-concept implementation of the architecture achieved speedups of 100 for elastic registration against an optimized software implementation on a 3.2 GHz Pentium III Xeon workstation. Due to inherent parallel nature of the hierarchical volume subdivision-based image registration techniques further speedup can be achieved by using several computing modules in parallel.

  20. Implementing a trustworthy cost-accounting model.

    PubMed

    Spence, Jay; Seargeant, Dan

    2015-03-01

    Hospitals and health systems can develop an effective cost-accounting model and maximize the effectiveness of their cost-accounting teams by focusing on six key areas: Implementing an enhanced data model. Reconciling data efficiently. Accommodating multiple cost-modeling techniques. Improving transparency of cost allocations. Securing department manager participation. Providing essential education and training to staff members and stakeholders.

  1. Discrete square root filtering - A survey of current techniques.

    NASA Technical Reports Server (NTRS)

    Kaminski, P. G.; Bryson, A. E., Jr.; Schmidt, S. F.

    1971-01-01

    Current techniques in square root filtering are surveyed and related by applying a duality association. Four efficient square root implementations are suggested, and compared with three common conventional implementations in terms of computational complexity and precision. It is shown that the square root computational burden should not exceed the conventional by more than 50% in most practical problems. An examination of numerical conditioning predicts that the square root approach can yield twice the effective precision of the conventional filter in ill-conditioned problems. This prediction is verified in two examples.
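
    Potter's measurement update is the classic square-root mechanization; a sketch follows, with invented numbers, showing that a covariance rebuilt from its square root stays symmetric positive definite by construction.

    ```python
    # Potter's scalar-measurement square-root update: propagate S with
    # P = S S^T, so the reconstructed covariance cannot lose symmetry or
    # positive definiteness. The numbers below are invented.
    import numpy as np

    def potter_update(x, S, h, z, r):
        """Measurement z = h @ x + v, Var(v) = r; returns updated (x, S)."""
        phi = S.T @ h
        alpha = phi @ phi + r                        # innovation variance
        gamma = 1.0 / (alpha + np.sqrt(alpha * r))
        K = (S @ phi) / alpha                        # Kalman gain
        S_new = S - gamma * np.outer(S @ phi, phi)   # square-root update
        return x + K * (z - h @ x), S_new

    x, S = np.zeros(2), np.diag([1e3, 1e-3])         # ill-conditioned prior sqrt
    h, r = np.array([1.0, 1.0]), 0.01
    x, S = potter_update(x, S, h, z=1.0, r=r)

    P_sqrt = S @ S.T                                 # covariance from sqrt form
    P = np.diag([1e6, 1e-6])                         # same prior, full form
    P_conv = P - np.outer(P @ h, P @ h) / (h @ P @ h + r)  # conventional update
    print(np.allclose(P_sqrt, P_conv))               # same answer, from S alone
    ```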

  2. High-Speed TCP Testing

    NASA Technical Reports Server (NTRS)

    Brooks, David E.; Gassman, Holly; Beering, Dave R.; Welch, Arun; Hoder, Douglas J.; Ivancic, William D.

    1999-01-01

    Transmission Control Protocol (TCP) is the underlying protocol used within the Internet for reliable information transfer. As such, there is great interest in having all implementations of TCP interoperate efficiently. This is particularly important for links exhibiting long bandwidth-delay products. The tools exist to perform TCP analysis at low rates and low delays. However, for extremely high-rate and long-delay links such as 622 Mbps over geosynchronous satellites, new tools and testing techniques are required. This paper describes the tools and techniques used to analyze and debug various TCP implementations over high-speed, long-delay links.

  3. Elongation cutoff technique armed with quantum fast multipole method for linear scaling.

    PubMed

    Korchowiec, Jacek; Lewandowski, Jakub; Makowski, Marcin; Gu, Feng Long; Aoki, Yuriko

    2009-11-30

    A linear-scaling implementation of the elongation cutoff technique (ELG/C) that speeds up Hartree-Fock (HF) self-consistent field calculations is presented. The cutoff method avoids the known bottleneck of the conventional HF scheme, that is, diagonalization, because it operates within the low dimension subspace of the whole atomic orbital space. The efficiency of ELG/C is illustrated for two model systems. The obtained results indicate that the ELG/C is a very efficient sparse matrix algebra scheme. Copyright 2009 Wiley Periodicals, Inc.

  4. Topography Modeling in Atmospheric Flows Using the Immersed Boundary Method

    NASA Technical Reports Server (NTRS)

    Ackerman, A. S.; Senocak, I.; Mansour, N. N.; Stevens, D. E.

    2004-01-01

    Numerical simulation of flow over complex geometry needs accurate and efficient computational methods. Different techniques are available to handle complex geometry. The unstructured grid and multi-block body-fitted grid techniques have been widely adopted for complex geometry in engineering applications. In atmospheric applications, terrain-fitted single grid techniques have found common use. Although these are very effective techniques, their implementation, coupling with the flow algorithm, and efficient parallelization of the complete method are more involved than for a Cartesian grid method. Grid generation can be tedious, and special numerical attention is needed to handle skewed cells for conservation purposes. Researchers have long sought alternative methods to ease the effort involved in simulating flow over complex geometry.

  5. Evaluation of extraction methods for preparative scale obtention of mangiferin and lupeol from mango peels (Mangifera indica L.).

    PubMed

    Ruiz-Montañez, G; Ragazzo-Sánchez, J A; Calderón-Santoyo, M; Velázquez-de la Cruz, G; de León, J A Ramírez; Navarro-Ocaña, A

    2014-09-15

    Bioactive compounds have become very important in the food and pharmaceutical markets, leading to research interest in efficient methods for extracting these bioactive substances. The objective of this research is to implement preparative-scale obtention of mangiferin and lupeol from mango fruit (Mangifera indica L.) of autochthonous and Ataulfo varieties grown in Nayarit, using emerging extraction techniques. Five extraction techniques were evaluated: maceration, Soxhlet, sonication (UAE), microwave (MAE) and high hydrostatic pressures (HHP). Two maturity stages (physiological and consumption) as well as peel and fruit pulp were evaluated for preparative-scale implementation. Peels from Ataulfo mango at the consumption maturity stage can be considered as a source of mangiferin and lupeol using the UAE method, as it improves extraction efficiency by increasing yield and shortening time. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. The Use of Efficient Broadcast Protocols in Asynchronous Distributed Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Schmuck, Frank Bernhard

    1988-01-01

    Reliable broadcast protocols are important tools in distributed and fault-tolerant programming. They are useful for sharing information and for maintaining replicated data in a distributed system. However, a wide range of such protocols has been proposed. These protocols differ in their fault tolerance and delivery ordering characteristics. There is a tradeoff between the cost of a broadcast protocol and how much ordering it provides. It is, therefore, desirable to employ protocols that support only a low degree of ordering whenever possible. This dissertation presents techniques for deciding how strongly ordered a protocol is necessary to solve a given application problem. It is shown that there are two distinct classes of application problems: problems that can be solved with efficient, asynchronous protocols, and problems that require global ordering. The concept of a linearization function that maps partially ordered sets of events to totally ordered histories is introduced. It is shown how to construct an asynchronous implementation that solves a given problem when a linearization function for it can be found. It is proved that in general the question of whether a problem has an asynchronous solution is undecidable. Hence there exists no general algorithm that would automatically construct a suitable linearization function for a given problem. Therefore, an important subclass of problems that have certain commutativity properties is considered. Techniques for constructing asynchronous implementations for this class are presented. These techniques are useful for constructing efficient asynchronous implementations for a broad range of practical problems.

  7. Research on an augmented Lagrangian penalty function algorithm for nonlinear programming

    NASA Technical Reports Server (NTRS)

    Frair, L.

    1978-01-01

    The augmented Lagrangian (ALAG) Penalty Function Algorithm for optimizing nonlinear mathematical models is discussed. The mathematical models of interest are deterministic in nature and finite dimensional optimization is assumed. A detailed review of penalty function techniques in general and the ALAG technique in particular is presented. Numerical experiments are conducted utilizing a number of nonlinear optimization problems to identify an efficient ALAG Penalty Function Technique for computer implementation.

  8. On the VLSI design of a pipeline Reed-Solomon decoder using systolic arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shao, H.M.; Reed, I.S.

    A new VLSI design of a pipeline Reed-Solomon decoder is presented. The transform decoding technique used in a previous paper is replaced by a time domain algorithm through a detailed comparison of their VLSI implementations. A new architecture that implements the time domain algorithm permits efficient pipeline processing with reduced circuitry. Erasure correction capability is also incorporated with little additional complexity. By using a multiplexing technique, a new implementation of Euclid's algorithm maintains the throughput rate with less circuitry. Such improvements result in both enhanced capability and significant reduction in silicon area, therefore making it possible to build a pipeline Reed-Solomon decoder on a single VLSI chip.

  9. Lossless compression techniques for maskless lithography data

    NASA Astrophysics Data System (ADS)

    Dai, Vito; Zakhor, Avideh

    2002-07-01

    Future lithography systems must produce more dense chips with smaller feature sizes, while maintaining the throughput of one wafer per sixty seconds per layer achieved by today's optical lithography systems. To achieve this throughput with a direct-write maskless lithography system, using 25 nm pixels for 50 nm feature sizes, requires data rates of about 10 Tb/s. In a previous paper, we presented an architecture which achieves this data rate contingent on consistent 25 to 1 compression of lithography data, and on implementation of a decoder-writer chip with a real-time decompressor fabricated on the same chip as the massively parallel array of lithography writers. In this paper, we examine the compression efficiency of a spectrum of techniques suitable for lithography data, including two industry standards JBIG and JPEG-LS, a wavelet based technique SPIHT, general file compression techniques ZIP and BZIP2, our own 2D-LZ technique, and a simple list-of-rectangles representation RECT. Layouts rasterized both to black-and-white pixels, and to 32 level gray pixels are considered. Based on compression efficiency, JBIG, ZIP, 2D-LZ, and BZIP2 are found to be strong candidates for application to maskless lithography data, in many cases far exceeding the required compression ratio of 25. To demonstrate the feasibility of implementing the decoder-writer chip, we consider the design of a hardware decoder based on ZIP, the simplest of the four candidate techniques. The basic algorithm behind ZIP compression is Lempel-Ziv 1977 (LZ77), and the design parameters of LZ77 decompression are optimized to minimize circuit usage while maintaining compression efficiency.
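
    Since ZIP's underlying LZ77 algorithm is singled out above for its hardware simplicity, a minimal decoder sketch may help illustrate why: decoding needs only literal emission and back-copies into already-written output, with no search structures. The token format below is hypothetical and chosen for readability; it does not match any real DEFLATE bitstream.

```python
# Minimal LZ77 decoder sketch. Each token is either a literal byte or a
# (distance, length) back-reference into the already-decoded output.
# The token format is hypothetical, for illustration only.

def lz77_decode(tokens):
    """Decode ('lit', byte) and ('ref', distance, length) tokens."""
    out = bytearray()
    for tok in tokens:
        if tok[0] == 'lit':
            out.append(tok[1])
        else:
            _, distance, length = tok
            start = len(out) - distance
            for i in range(length):      # byte-wise copy allows overlapping runs
                out.append(out[start + i])
    return bytes(out)

# "abcabcab": three literals followed by one overlapping back-reference
tokens = [('lit', ord('a')), ('lit', ord('b')), ('lit', ord('c')), ('ref', 3, 5)]
assert lz77_decode(tokens) == b'abcabcab'
```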

  10. Boosting the FM-Index on the GPU: Effective Techniques to Mitigate Random Memory Access.

    PubMed

    Chacón, Alejandro; Marco-Sola, Santiago; Espinosa, Antonio; Ribeca, Paolo; Moure, Juan Carlos

    2015-01-01

    The recent advent of high-throughput sequencing machines producing large amounts of short reads has boosted the interest in efficient string searching techniques. As of today, many mainstream sequence alignment software tools rely on a special data structure, called the FM-index, which allows for fast exact searches in large genomic references. However, such searches translate into a pseudo-random memory access pattern, thus making memory access the limiting factor of all computation-efficient implementations, both on CPUs and GPUs. Here, we show that several strategies can be put in place to remove the memory bottleneck on the GPU: more compact indexes can be implemented by having more threads work cooperatively on larger memory blocks, and a k-step FM-index can be used to further reduce the number of memory accesses. The combination of those and other optimisations yields an implementation that is able to process about two Gbases of queries per second on our test platform, being about 8× faster than a comparable multi-core CPU version, and about 3× to 5× faster than the FM-index implementation on the GPU provided by the recently announced Nvidia NVBIO bioinformatics library.
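
    As a plain illustration of the backward search the abstract refers to, here is a toy FM-index sketch. It shows only the C-array/occurrence-table recursion; the paper's GPU-specific optimisations (bit-packed buckets, cooperative threads, k-step search) are not reproduced.

```python
# Toy FM-index: count pattern occurrences via backward search on the BWT.

def bwt(text):
    """Burrows-Wheeler transform of text (must end with sentinel '$')."""
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return ''.join(rot[-1] for rot in rotations)

def build_fm_index(text):
    last = bwt(text)
    chars = sorted(set(last))
    # C[c]: number of characters in the text strictly smaller than c
    C, total = {}, 0
    for c in chars:
        C[c] = total
        total += last.count(c)
    # occ[c][i]: occurrences of c in last[:i]
    occ = {c: [0] * (len(last) + 1) for c in chars}
    for i, ch in enumerate(last):
        for c in chars:
            occ[c][i + 1] = occ[c][i] + (ch == c)
    return C, occ, len(last)

def count_occurrences(pattern, C, occ, n):
    """Backward search: narrow a suffix-array interval one symbol at a time."""
    lo, hi = 0, n
    for c in reversed(pattern):
        if c not in C:
            return 0
        lo = C[c] + occ[c][lo]
        hi = C[c] + occ[c][hi]
        if lo >= hi:
            return 0
    return hi - lo

C, occ, n = build_fm_index('banana$')
assert count_occurrences('ana', C, occ, n) == 2
```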

  11. Composite SAR imaging using sequential joint sparsity

    NASA Astrophysics Data System (ADS)

    Sanders, Toby; Gelb, Anne; Platte, Rodrigo B.

    2017-06-01

    This paper investigates accurate and efficient ℓ1 regularization methods for generating synthetic aperture radar (SAR) images. Although ℓ1 regularization algorithms are already employed in SAR imaging, their practical and efficient implementation for real-time imaging remains a challenge. Here we demonstrate that fast numerical operators can be used to robustly implement ℓ1 regularization methods that are as efficient as or more efficient than traditional approaches such as back projection, while providing superior image quality. In particular, we develop a sequential joint sparsity model for composite SAR imaging which naturally combines the joint sparsity methodology with composite SAR. Our technique, which can be implemented using standard, fractional, or higher-order total variation regularization, is able to reduce the effects of speckle and other noisy artifacts with little additional computational cost. Finally, we show that generalizing total variation regularization to non-integer and higher orders provides improved flexibility and robustness for SAR imaging.
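
    For readers unfamiliar with how ℓ1-regularized problems are typically solved, the following is a minimal iterative-shrinkage (ISTA) sketch for min 0.5||Ax − b||² + λ||x||₁. A dense matrix stands in for the forward operator; the paper's point is precisely that in SAR this operator can be replaced by fast transforms. This generic solver is an assumption for illustration, not the authors' algorithm.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t*||x||_1 (element-wise shrinkage)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, n_iter=200):
    """Minimise 0.5*||A x - b||^2 + lam*||x||_1 by iterative shrinkage."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 128))       # stand-in for a fast SAR operator
x_true = np.zeros(128); x_true[[5, 40, 90]] = [1.0, -2.0, 1.5]
b = A @ x_true + 0.01 * rng.standard_normal(64)
x_hat = ista(A, b, lam=0.1)              # sparse reconstruction
```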

  12. Hypergraph partitioning implementation for parallelizing matrix-vector multiplication using CUDA GPU-based parallel computing

    NASA Astrophysics Data System (ADS)

    Murni, Bustamam, A.; Ernastuti, Handhika, T.; Kerami, D.

    2017-07-01

    Calculation of matrix-vector multiplication in real-world problems often involves large matrices of arbitrary size. Therefore, parallelization is needed to speed up the calculation process, which usually takes a long time. Graph partitioning techniques discussed in previous studies cannot be used to parallelize matrix-vector multiplication of arbitrary size, because graph partitioning assumes a square, symmetric matrix. Hypergraph partitioning techniques overcome this shortcoming of the graph partitioning technique. This paper addresses the efficient parallelization of matrix-vector multiplication through hypergraph partitioning techniques using CUDA GPU-based parallel computing. CUDA (compute unified device architecture) is a parallel computing platform and programming model that was created by NVIDIA and is implemented on the GPU (graphics processing unit).

  13. An efficient quantum algorithm for spectral estimation

    NASA Astrophysics Data System (ADS)

    Steffens, Adrian; Rebentrost, Patrick; Marvian, Iman; Eisert, Jens; Lloyd, Seth

    2017-03-01

    We develop an efficient quantum implementation of an important signal processing algorithm for line spectral estimation: the matrix pencil method, which determines the frequencies and damping factors of signals consisting of finite sums of exponentially damped sinusoids. Our algorithm provides a quantum speedup in a natural regime where the sampling rate is much higher than the number of sinusoid components. Along the way, we develop techniques that are expected to be useful for other quantum algorithms as well—consecutive phase estimations to efficiently make products of asymmetric low rank matrices classically accessible and an alternative method to efficiently exponentiate non-Hermitian matrices. Our algorithm features an efficient quantum-classical division of labor: the time-critical steps are implemented in quantum superposition, while an interjacent step, requiring much fewer parameters, can operate classically. We show that frequencies and damping factors can be obtained in time logarithmic in the number of sampling points, exponentially faster than known classical algorithms.
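
    The classical (non-quantum) matrix pencil method that the abstract builds on can be sketched in a few lines of numpy: the signal poles are recovered as the dominant eigenvalues of a pencil of two shifted Hankel matrices. The pencil length and component count below are illustrative choices, and this sketch omits the quantum speedup entirely.

```python
import numpy as np

def matrix_pencil(y, M, dt):
    """Estimate poles of y[k] = sum_i a_i * z_i**k from uniform samples.

    M is the assumed number of damped-sinusoid components.
    Returns frequencies (Hz) and damping factors (1/s).
    """
    N = len(y)
    L = N // 2                              # pencil parameter, M <= L <= N - M
    # Two Hankel matrices shifted by one sample
    Y0 = np.array([y[i:i + L] for i in range(N - L)])
    Y1 = np.array([y[i + 1:i + L + 1] for i in range(N - L)])
    # Signal poles are the dominant eigenvalues of pinv(Y0) @ Y1
    z = np.linalg.eigvals(np.linalg.pinv(Y0) @ Y1)
    z = z[np.argsort(-np.abs(z))][:M]
    freqs = np.angle(z) / (2 * np.pi * dt)
    damping = np.log(np.abs(z)) / dt
    return freqs, damping

# One damped sinusoid: 5 Hz, damping -1.0 1/s, sampled at 100 Hz
dt = 0.01
k = np.arange(200)
y = np.exp((-1.0 + 2j * np.pi * 5.0) * k * dt)
freqs, damping = matrix_pencil(y, M=1, dt=dt)   # ~5.0 Hz, ~-1.0 1/s
```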

  14. A proposed technique for the Venus balloon telemetry and Doppler frequency recovery

    NASA Technical Reports Server (NTRS)

    Jurgens, R. F.; Divsalar, D.

    1985-01-01

    A technique is proposed to accurately estimate the Doppler frequency and demodulate the digitally encoded telemetry signal that contains the measurements from balloon instruments. Since the data are prerecorded, one can take advantage of noncausal estimators that are both simpler and more computationally efficient than the usual closed-loop or real-time estimators for signal detection and carrier tracking. Algorithms for carrier frequency estimation, subcarrier demodulation, and bit and frame synchronization are described. A Viterbi decoder algorithm using a branch indexing technique has been devised to decode the constraint-length-6, rate-1/2 convolutional code used by the balloon transmitter. These algorithms are memory efficient and can be implemented on microcomputer systems.
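
    A hard-decision Viterbi decoder sketch is given below for flavour. To keep the trellis small it uses a constraint-length-3 code with the standard (7,5) octal generators rather than the balloon link's constraint-length-6, rate-1/2 code, and it omits the paper's branch indexing technique.

```python
# Rate-1/2 convolutional encoder and hard-decision Viterbi decoder sketch.
G = [0b111, 0b101]          # generator polynomials (7,5) octal
K = 3                       # constraint length (the paper's code uses K = 6)
NSTATES = 1 << (K - 1)

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << (K - 1)) | state
        out += [bin(reg & g).count('1') & 1 for g in G]   # parity per generator
        state = reg >> 1
    return out

def viterbi_decode(received):
    INF = float('inf')
    metric = [0.0] + [INF] * (NSTATES - 1)     # start in the all-zero state
    paths = [[] for _ in range(NSTATES)]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metric = [INF] * NSTATES
        new_paths = [None] * NSTATES
        for s in range(NSTATES):
            if metric[s] == INF:
                continue
            for b in (0, 1):                   # try both input bits
                reg = (b << (K - 1)) | s
                expected = [bin(reg & g).count('1') & 1 for g in G]
                ns = reg >> 1
                m = metric[s] + sum(x != y for x, y in zip(r, expected))
                if m < new_metric[ns]:         # keep the survivor path
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[min(range(NSTATES), key=lambda s: metric[s])]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
coded = encode(msg)
coded[3] ^= 1                                  # inject one channel bit error
assert viterbi_decode(coded) == msg            # error is corrected
```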

  15. An Information System Development Method Connecting Business Process Modeling and its Experimental Evaluation

    NASA Astrophysics Data System (ADS)

    Okawa, Tsutomu; Kaminishi, Tsukasa; Kojima, Yoshiyuki; Hirabayashi, Syuichi; Koizumi, Hisao

    Business process modeling (BPM) is gaining attention as a means of analyzing and improving the business process. BPM analyses the current business process as an AS-IS model and solves problems to improve the current business; moreover, it aims to create a business process that produces value, as a TO-BE model. However, research on techniques that seamlessly connect the business process improvements obtained by BPM to the implementation of the information system is rarely reported. If the business model obtained by BPM is converted into UML, and the implementation is carried out with UML techniques, an improvement in the efficiency of information system implementation can be expected. In this paper, we describe a system development method that converts the process model obtained by BPM into UML; the method is evaluated by modeling a prototype of a parts procurement system. In the evaluation, a comparison is made with the case where the system is implemented by the conventional UML technique without going via BPM.

  16. CERES: An ab initio code dedicated to the calculation of the electronic structure and magnetic properties of lanthanide complexes.

    PubMed

    Calvello, Simone; Piccardo, Matteo; Rao, Shashank Vittal; Soncini, Alessandro

    2018-03-05

    We have developed and implemented a new ab initio code, Ceres (Computational Emulator of Rare Earth Systems), completely written in C++11, which is dedicated to the efficient calculation of the electronic structure and magnetic properties of the crystal field states arising from the splitting of the ground state spin-orbit multiplet in lanthanide complexes. The new code gains efficiency via an optimized implementation of a direct configurational averaged Hartree-Fock (CAHF) algorithm for the determination of 4f quasi-atomic active orbitals common to all multi-electron spin manifolds contributing to the ground spin-orbit multiplet of the lanthanide ion. The new CAHF implementation is based on quasi-Newton convergence acceleration techniques coupled to an efficient library for the direct evaluation of molecular integrals, and problem-specific density matrix guess strategies. After describing the main features of the new code, we compare its efficiency with the current state-of-the-art ab initio strategy to determine crystal field levels and properties, and show that our methodology, as implemented in Ceres, represents a more time-efficient computational strategy for the evaluation of the magnetic properties of lanthanide complexes, also allowing a full representation of non-perturbative spin-orbit coupling effects. © 2017 Wiley Periodicals, Inc.

  17. Chapter 12: Survey Design and Implementation for Estimating Gross Savings Cross-Cutting Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurnik, Charles W; Baumgartner, Robert

    This chapter presents an overview of best practices for designing and executing survey research to estimate gross energy savings in energy efficiency evaluations. A detailed description of the specific techniques and strategies for designing questions, implementing a survey, and analyzing and reporting the survey procedures and results is beyond the scope of this chapter. So for each topic covered below, readers are encouraged to consult articles and books cited in References, as well as other sources that cover the specific topics in greater depth. This chapter focuses on the use of survey methods to collect data for estimating gross savings from energy efficiency programs.

  18. Implementation of a SVWP-based laser beam shaping technique for generation of 100-mJ-level picosecond pulses.

    PubMed

    Adamonis, J; Aleknavičius, A; Michailovas, K; Balickas, S; Petrauskienė, V; Gertus, T; Michailovas, A

    2016-10-01

    We present the implementation of an energy-efficient and flexible laser beam shaping technique in a high-power, high-energy laser amplifier system. The beam shaping is based on a spatially variable wave plate (SVWP) fabricated by femtosecond laser nanostructuring of glass. We reshaped the initially Gaussian beam into a super-Gaussian (SG) beam of the 12th order with an efficiency of about 50%. The 12th-order SG beam provided the best compromise between a large fill factor, low diffraction on the edges of the active media, and moderate intensity distribution modification during free-space propagation. We obtained 150 mJ pulses of 532 nm radiation. The high energy, 85 ps pulse duration, and nearly flat-top spatial profile of the beam make it ideal for pumping optical parametric chirped pulse amplification systems.

  19. Experimental statistical signature of many-body quantum interference

    NASA Astrophysics Data System (ADS)

    Giordani, Taira; Flamini, Fulvio; Pompili, Matteo; Viggianiello, Niko; Spagnolo, Nicolò; Crespi, Andrea; Osellame, Roberto; Wiebe, Nathan; Walschaers, Mattia; Buchleitner, Andreas; Sciarrino, Fabio

    2018-03-01

    Multi-particle interference is an essential ingredient for fundamental quantum mechanics phenomena and for quantum information processing to provide a computational advantage, as recently emphasized by boson sampling experiments. Hence, developing a reliable and efficient technique to witness its presence is pivotal in achieving the practical implementation of quantum technologies. Here, we experimentally identify genuine many-body quantum interference via a recent efficient protocol, which exploits statistical signatures at the output of a multimode quantum device. We successfully apply the test to validate three-photon experiments in an integrated photonic circuit, providing an extensive analysis on the resources required to perform it. Moreover, drawing upon established techniques of machine learning, we show how such tools help to identify the—a priori unknown—optimal features to witness these signatures. Our results provide evidence on the efficacy and feasibility of the method, paving the way for its adoption in large-scale implementations.

  20. Compressive Sensing Image Sensors-Hardware Implementation

    PubMed Central

    Dadkhah, Mohammadreza; Deen, M. Jamal; Shirani, Shahram

    2013-01-01

    The compressive sensing (CS) paradigm uses simultaneous sensing and compression to provide an efficient image acquisition technique. The main advantages of the CS method include high resolution imaging using low resolution sensor arrays and faster image acquisition. Since the imaging philosophy in CS imagers is different from conventional imaging systems, new physical structures have been developed for cameras that use the CS technique. In this paper, a review of different hardware implementations of CS encoding in optical and electrical domains is presented. Considering the recent advances in CMOS (complementary metal–oxide–semiconductor) technologies and the feasibility of performing on-chip signal processing, important practical issues in the implementation of CS in CMOS sensors are emphasized. In addition, the CS coding for video capture is discussed. PMID:23584123

  1. Efficient Kriging via Fast Matrix-Vector Products

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Raykar, Vikas C.; Duraiswami, Ramani; Mount, David M.

    2008-01-01

    Interpolating scattered data points is a problem of wide-ranging interest. Ordinary kriging is an optimal scattered data estimator, widely used in geosciences and remote sensing. A generalized version of this technique, called cokriging, can be used for image fusion of remotely sensed data. However, it is computationally very expensive for large data sets. We demonstrate the time efficiency and accuracy of approximating ordinary kriging through the use of fast matrix-vector products combined with iterative methods. We used methods based on fast multipole methods and nearest-neighbor searching techniques to implement the fast matrix-vector products.
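
    The pairing of an iterative solver with fast matrix-vector products can be sketched as follows: the kriging weights solve a covariance system, and a conjugate-gradient solver only ever touches the matrix through products K @ v. The simple-kriging formulation, Gaussian covariance, nugget value, and dense matvec below are illustrative stand-ins; in the paper the matvec itself is accelerated with fast multipole methods.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def gaussian_cov(X, Y, scale=1.0):
    """Gaussian covariance between two sets of points (illustrative model)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * scale ** 2))

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(500, 2))            # scattered data sites
z = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)
x_star = np.array([[5.0, 5.0]])                  # prediction location

K = gaussian_cov(X, X) + 1e-3 * np.eye(len(X))   # nugget aids conditioning
k = gaussian_cov(X, x_star)[:, 0]

# Matrix-free operator: an FMM-accelerated matvec would replace this lambda
K_op = LinearOperator(K.shape, matvec=lambda v: K @ v)
w, info = cg(K_op, k)                            # kriging weights, iteratively
z_hat = w @ z                                    # kriging prediction at x_star
```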

  2. Battery voltage-balancing applications of disk-type radial mode Pb(Zr • Ti)O3 ceramic resonator

    NASA Astrophysics Data System (ADS)

    Thenathayalan, Daniel; Lee, Chun-gu; Park, Joung-hu

    2017-10-01

    In this paper, we propose a novel technique to build a charge-balancing circuit for series-connected battery strings using various kinds of disk-type ceramic Pb(Zr • Ti)O3 piezoelectric resonators (PRs). The use of PRs replaces the whole external battery voltage-balancer circuit, which consists mainly of a bulky magnetic element. The proposed technique is validated using different ceramic PRs and the results are analyzed in terms of their physical properties. A series-connected battery string with a voltage rating of 61.5 V is set as a hardware prototype under test, then the power transfer efficiency of the system is measured at different imbalance voltages. The performance of the proposed battery voltage-balancer circuit employed with a PR is also validated through hardware implementation. Furthermore, the temperature distribution image of the PR is obtained to compare power transfer efficiency and thermal stress under different operating conditions. The test results show that the battery voltage-balancer circuit can be successfully implemented using PRs with the maximum power conversion efficiency of over 96% for energy storage systems.

  3. Vector quantization for efficient coding of upper subbands

    NASA Technical Reports Server (NTRS)

    Zeng, W. J.; Huang, Y. F.

    1994-01-01

    This paper examines the application of vector quantization (VQ) to exploit both intra-band and inter-band redundancy in subband coding. The focus here is on the exploitation of inter-band dependency. It is shown that VQ is particularly suitable and effective for coding the upper subbands. Three subband decomposition-based VQ coding schemes are proposed here to exploit the inter-band dependency by making full use of the extra flexibility of the VQ approach over scalar quantization. A quadtree-based variable-rate VQ (VRVQ) scheme which takes full advantage of the intra-band and inter-band redundancy is first proposed. Then, a more easily implementable alternative based on an efficient block-based edge estimation technique is employed to overcome the implementational barriers of the first scheme. Finally, a predictive VQ scheme formulated in the context of finite-state VQ is proposed to further exploit the dependency among different subbands. A VRVQ scheme proposed elsewhere is extended to provide an efficient bit allocation procedure. Simulation results show that these three hybrid techniques have advantages, in terms of peak signal-to-noise ratio (PSNR) and complexity, over other existing subband-VQ approaches.
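
    As a concrete, minimal instance of block VQ of a subband (not the authors' VRVQ or finite-state schemes), the sketch below trains a small codebook on 2x2 blocks with scipy's k-means and codes each block as an index; the Laplacian stand-in data, block size, and codebook size are arbitrary choices.

```python
import numpy as np
from scipy.cluster.vq import kmeans, vq

rng = np.random.default_rng(0)
subband = rng.laplace(scale=4.0, size=(64, 64))   # stand-in for an upper subband

# Collect 2x2 blocks as 4-dimensional training vectors
blocks = subband.reshape(32, 2, 32, 2).transpose(0, 2, 1, 3).reshape(-1, 4)

codebook, _ = kmeans(blocks, 16)                  # train a 16-entry codebook
indices, dist = vq(blocks, codebook)              # encode: nearest codeword per block

reconstructed = codebook[indices].reshape(32, 32, 2, 2)
rate_bits_per_pixel = np.log2(len(codebook)) / 4  # ~1 bit/pixel in this sketch
```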

  4. A Novel Scheme for an Energy Efficient Internet of Things Based on Wireless Sensor Networks.

    PubMed

    Rani, Shalli; Talwar, Rajneesh; Malhotra, Jyoteesh; Ahmed, Syed Hassan; Sarkar, Mahasweta; Song, Houbing

    2015-11-12

    One of the emerging networking standards that bridge the gap between the physical world and the cyber one is the Internet of Things (IoT). In the IoT, smart objects communicate with each other, data are gathered, and certain requests of users are satisfied by different queried data. The development of energy-efficient schemes for the IoT is a challenging issue: as the IoT becomes more complex due to its large scale, the current techniques of wireless sensor networks cannot be applied directly to it. To achieve the green networked IoT, this paper addresses energy efficiency issues by proposing a novel deployment scheme. This scheme introduces: (1) a hierarchical network design; (2) a model for the energy-efficient IoT; (3) a minimum energy consumption transmission algorithm to implement the optimal model. The simulation results show that the new scheme is more energy-efficient and flexible than traditional WSN schemes and consequently it can be implemented for efficient communication in the IoT.

  5. A Novel Scheme for an Energy Efficient Internet of Things Based on Wireless Sensor Networks

    PubMed Central

    Rani, Shalli; Talwar, Rajneesh; Malhotra, Jyoteesh; Ahmed, Syed Hassan; Sarkar, Mahasweta; Song, Houbing

    2015-01-01

    One of the emerging networking standards that bridge the gap between the physical world and the cyber one is the Internet of Things (IoT). In the IoT, smart objects communicate with each other, data are gathered, and certain requests of users are satisfied by different queried data. The development of energy-efficient schemes for the IoT is a challenging issue: as the IoT becomes more complex due to its large scale, the current techniques of wireless sensor networks cannot be applied directly to it. To achieve the green networked IoT, this paper addresses energy efficiency issues by proposing a novel deployment scheme. This scheme introduces: (1) a hierarchical network design; (2) a model for the energy-efficient IoT; (3) a minimum energy consumption transmission algorithm to implement the optimal model. The simulation results show that the new scheme is more energy-efficient and flexible than traditional WSN schemes and consequently it can be implemented for efficient communication in the IoT. PMID:26569260

  6. Control designs for low-loss active magnetic bearings: Theory and implementation

    NASA Astrophysics Data System (ADS)

    Wilson, Brian Christopher David

    Active Magnetic Bearings (AMB) have been proposed for use in electromechanical flywheel batteries. In these devices, kinetic energy is stored in a magnetically levitated flywheel which spins in a vacuum. The AMB eliminates all mechanical losses; however, electrical loss, which is proportional to the square of the magnetic flux, is still significant. For efficient operation, the flux bias, which is typically introduced into the electromagnets to improve the AMB stiffness, must be reduced, preferably to zero. This zero-bias (ZB) mode of operation cripples the classical control techniques which are customarily used, and nonlinear control is required. As a compromise between AMB stiffness and efficiency, a new flux bias scheme is proposed, called the generalized complementary flux condition (gcfc). A flux-bias-dependent trade-off exists between AMB stiffness, power consumption, and power loss. This work theoretically develops and experimentally verifies new low-loss AMB control designs which employ the gcfc condition. Particular attention is paid to the removal of the singularity present in the standard nonlinear control techniques when operating in ZB. Experimental verification is conducted on a 6-DOF AMB reaction wheel. Practical aspects of the gcfc implementation, such as flux measurement and flux-bias implementation with voltage-mode amplifiers using IR compensation, are investigated. Comparisons are made between the gcfc bias technique and the standard constant-flux-sum (cfs) bias method. Under typical operating circumstances, theoretical analysis and experimental data show that the new gcfc bias scheme is more efficient in producing the control flux required for rotor stabilization than the ordinary cfs bias strategy.

  7. Technique for Reduction of Environmental Pollution from Construction Wastes

    NASA Astrophysics Data System (ADS)

    Bakaeva, N. V.; Klimenko, M. Y.

    2017-11-01

    The results of research on the negative impact of construction wastes on the urban environment and on construction ecological safety are described. The results are based on statistical data and on indicators calculated using environmental pollution assessment within the system for restoring the technical condition of urban buildings. The technique for reducing environmental pollution from construction wastes is scientifically grounded in a summary of scientific and practical results on ensuring ecological safety during major overhaul and current repairs (reconstruction) of buildings and structures, and in the practical application of probability theory, system analysis, and disperse system theory. Implementing the developed technique requires several stages, ranging from information collection to the formation of a system with optimum performance characteristics that is more resource-saving and energy-efficient for the accumulation of construction wastes from urban construction units. The following tasks are solved in the studies: collection of basic data on construction waste accumulation; definition and comparison of technological combinations at each functional stage of the system intended to reduce the discharge of construction wastes into the environment; calculation of assessment criteria for resource saving and energy efficiency; and determination of optimum working parameters for each implementation stage. Application of the technique in urban construction shows resource-saving criteria from 55.22% to 88.84%; the potential for recycling construction wastes is 450 million tons of damaged construction elements (parts).

  8. Relaxed fault-tolerant hardware implementation of neural networks in the presence of multiple transient errors.

    PubMed

    Mahdiani, Hamid Reza; Fakhraie, Sied Mehdi; Lucas, Caro

    2012-08-01

    Reliability should be identified as the most important challenge in future nano-scale very large scale integration (VLSI) implementation technologies for the development of complex integrated systems. Normally, fault tolerance (FT) in a conventional system is achieved by increasing its redundancy, which also implies higher implementation costs and lower performance, sometimes even making it infeasible. In contrast to custom approaches, a new class of applications is categorized in this paper which is inherently capable of absorbing some degree of vulnerability and providing FT based on its natural properties. Neural networks are good examples of imprecision-tolerant applications. We have also proposed a new class of FT techniques, called relaxed fault-tolerant (RFT) techniques, which are developed for VLSI implementation of imprecision-tolerant applications. The main advantage of RFT techniques with respect to traditional FT solutions is that they exploit the inherent FT of different applications to reduce their implementation costs while improving their performance. To show the applicability as well as the efficiency of the RFT method, experimental results for the implementation of a computationally intensive face-recognition neural network and its corresponding RFT realization are presented in this paper. The results demonstrate the promising higher performance of artificial neural network VLSI solutions for complex applications in faulty nano-scale implementation environments.

  9. Thermal neutron detector based on COTS CMOS imagers and a conversion layer containing Gadolinium

    NASA Astrophysics Data System (ADS)

    Pérez, Martín; Blostein, Juan Jerónimo; Bessia, Fabricio Alcalde; Tartaglione, Aureliano; Sidelnik, Iván; Haro, Miguel Sofo; Suárez, Sergio; Gimenez, Melisa Lucía; Berisso, Mariano Gómez; Lipovetzky, Jose

    2018-06-01

    In this work we introduce a novel, low-cost, position-sensitive thermal neutron detection technique based on a commercial off-the-shelf CMOS image sensor covered with a gadolinium-containing conversion layer. The feasibility of the neutron detection technique implemented in this work has been experimentally demonstrated. A thermal neutron detection efficiency of 11.3% has been obtained experimentally with a conversion layer of 11.6 μm. It was verified experimentally that the thermal neutron detection efficiency of this technique is independent of the intensity of the incident thermal neutron flux, which was confirmed for conversion layers of different thicknesses. Based on the experimental results, a spatial resolution better than 25 μm is expected. This spatial resolution makes the proposed technique especially useful for neutron beam characterization, neutron beam dosimetry, high resolution neutron imaging, and several neutron scattering techniques.

  10. A Regev-type fully homomorphic encryption scheme using modulus switching.

    PubMed

    Chen, Zhigang; Wang, Jian; Chen, Liqun; Song, Xinxia

    2014-01-01

    A critical challenge in a fully homomorphic encryption (FHE) scheme is to manage noise. The modulus switching technique is currently the most efficient noise management technique. When using modulus switching to design and implement an FHE scheme, choosing concrete parameters is an important step, but to the best of our knowledge this step has drawn very little attention in the existing FHE research literature. The contributions of this paper are twofold. On one hand, we propose a function for the lower bound of the dimension value in the switching technique, depending on LWE-specific security levels. On the other hand, as a case study, we modify the Brakerski FHE scheme (in Crypto 2012) by using the modulus switching technique. We recommend concrete parameter values for our proposed scheme and provide a security analysis. Our result shows that the modified FHE scheme is more efficient than the original Brakerski scheme at the same security level.

  11. FPGA implementation of a biological neural network based on the Hodgkin-Huxley neuron model.

    PubMed

    Yaghini Bonabi, Safa; Asgharian, Hassan; Safari, Saeed; Nili Ahmadabadi, Majid

    2014-01-01

    A set of techniques for the efficient implementation of a Hodgkin-Huxley-based (H-H) neural network model on an FPGA (Field Programmable Gate Array) is presented. The central implementation challenge is the complexity of the H-H model, which puts limits on the network size and on the execution speed. However, the basics of the original model cannot be compromised when the effect of synaptic specifications on network behavior is the subject of study. To solve the problem, we used computational techniques such as the CORDIC (Coordinate Rotation Digital Computer) algorithm and step-by-step integration in the implementation of the arithmetic circuits. In addition, we employed techniques such as resource sharing to preserve the details of the model, to increase the network size, and to keep the network execution speed close to real time while maintaining high precision. An implementation of a two-mini-column network with 120/30 excitatory/inhibitory neurons is provided to investigate the characteristics of our method in practice. The implementation techniques provide an opportunity to construct large FPGA-based network models to investigate the effect of different neurophysiological mechanisms, like voltage-gated channels and synaptic activities, on the behavior of a neural network in an appropriate execution time. In addition to the inherent properties of FPGAs, like parallelism and re-configurability, our approach makes the FPGA-based system a proper candidate for studies of neural control in cognitive robots and systems.
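
    For readers unfamiliar with CORDIC, the rotation-mode sketch below computes sine and cosine using only shifts, adds, and a small arctangent table, which is what makes it attractive for FPGA arithmetic circuits; the floating-point Python here only mirrors the fixed-point hardware idea.

```python
import math

# CORDIC rotation mode: rotate the vector (K, 0) toward angle theta by a
# fixed sequence of micro-rotations with tangents of 2^-i.
N_ITER = 16
ANGLES = [math.atan(2.0 ** -i) for i in range(N_ITER)]
K = 1.0
for i in range(N_ITER):
    K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))   # cumulative gain ~0.60725

def cordic_sin_cos(theta):
    """Return (sin, cos) for theta in roughly [-pi/2, pi/2]."""
    x, y, z = K, 0.0, theta
    for i in range(N_ITER):
        d = 1.0 if z >= 0 else -1.0                # rotate toward zero residual
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ANGLES[i]
    return y, x

s, c = cordic_sin_cos(0.7)      # ~ (0.64422, 0.76484)
```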

  12. Novel strategy to implement active-space coupled-cluster methods

    NASA Astrophysics Data System (ADS)

    Rolik, Zoltán; Kállay, Mihály

    2018-03-01

    A new approach is presented for the efficient implementation of coupled-cluster (CC) methods including higher excitations based on a molecular orbital space partitioned into active and inactive orbitals. In the new framework, the string representation of amplitudes and intermediates is used as long as it is beneficial, but the contractions are evaluated as matrix products. Using a new diagrammatic technique, the CC equations are represented in a compact form due to the string notations we introduced. As an application of these ideas, a new automated implementation of the single-reference-based multi-reference CC equations is presented for arbitrary excitation levels. The new program can be considered as an improvement over the previous implementations in many respects; e.g., diagram contributions are evaluated by efficient vectorized subroutines. Timings for test calculations for various complete active-space problems are presented. As an application of the new code, the weak interactions in the Be dimer were studied.

  13. Investigation to realize a computationally efficient implementation of the high-order instantaneous-moments-based fringe analysis method

    NASA Astrophysics Data System (ADS)

    Gorthi, Sai Siva; Rajshekhar, Gannavarpu; Rastogi, Pramod

    2010-06-01

    Recently, a high-order instantaneous moments (HIM)-operator-based method was proposed for accurate phase estimation in digital holographic interferometry. The method relies on piece-wise polynomial approximation of phase and subsequent evaluation of the polynomial coefficients from the HIM operator using single-tone frequency estimation. The work presents a comparative analysis of the performance of different single-tone frequency estimation techniques, like Fourier transform followed by optimization, estimation of signal parameters by rotational invariance technique (ESPRIT), multiple signal classification (MUSIC), and iterative frequency estimation by interpolation on Fourier coefficients (IFEIF) in HIM-operator-based methods for phase estimation. Simulation and experimental results demonstrate the potential of the IFEIF technique with respect to computational efficiency and estimation accuracy.

  14. Improving the efficiency of single and multiple teleportation protocols based on the direct use of partially entangled states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fortes, Raphael; Rigolin, Gustavo, E-mail: rigolin@ifi.unicamp.br

    We push the limits of the direct use of partially entangled pure states to perform quantum teleportation by presenting several protocols in many different scenarios that achieve the optimal efficiency possible. We review and put in a single formalism the three major strategies known to date that allow one to use partially entangled states for direct quantum teleportation (no distillation strategies permitted) and compare their efficiencies in real world implementations. We show how one can improve the efficiency of many direct teleportation protocols by combining these techniques. We then develop new teleportation protocols employing multipartite partially entangled states. The three techniques are also used here in order to achieve the highest efficiency possible. Finally, we prove the upper bound for the optimal success rate for protocols based on partially entangled Bell states and show that some of the protocols developed here achieve such a bound. Highlights: optimal direct teleportation protocols using partially entangled states directly; all strategies of direct teleportation put in a single formalism; extension of these techniques to multipartite partially entangled states; upper bounds for the optimal efficiency of these protocols.

  15. Wideband piezoelectric energy harvester for low-frequency application with plucking mechanism

    NASA Astrophysics Data System (ADS)

    Hiraki, Yasuhiro; Masuda, Arata; Ikeda, Naoto; Katsumura, Hidenori; Kagata, Hiroshi; Okumura, Hidenori

    2015-04-01

    Wireless sensor networks need energy harvesting from the vibrational environment for their power supply. The conventional resonance-type vibration energy harvesters, however, are not always effective for low-frequency applications. The purpose of this paper is to propose a high-efficiency energy harvester for low-frequency applications by utilizing plucking and SSHI techniques, and to investigate the effects of applying those techniques in terms of energy harvesting efficiency. First, we derived an approximate formulation of the energy harvesting efficiency of the plucking device by theoretical analysis. Next, it was confirmed that the predicted efficiency agreed with numerical and experimental results. A parallel SSHI, a switching circuit technique to improve the performance of the harvester, was also introduced and examined by numerical simulations and experiments. Contrary to the simulated results, in which the efficiency was improved from 13.1% to 22.6% by introducing the SSHI circuit, the efficiency obtained in the experiment was only 7.43%. This is likely due to the internal resistance of the inductors and photo MOS relays in the switching circuit; a simulation including this factor revealed its large negative influence. This result suggests that reducing the switching resistance is critically important to the implementation of SSHI.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trędak, Przemysław, E-mail: przemyslaw.tredak@fuw.edu.pl; Rudnicki, Witold R.; Interdisciplinary Centre for Mathematical and Computational Modelling, University of Warsaw, ul. Pawińskiego 5a, 02-106 Warsaw

    The second generation Reactive Bond Order (REBO) empirical potential is commonly used to accurately model a wide range of hydrocarbon materials. It is also extensible to other atom types and interactions. The REBO potential assumes a complex multi-body interaction model that is difficult to represent efficiently in the SIMD or SIMT programming models. Hence, despite its importance, no efficient GPGPU implementation has been developed for this potential. Here we present a detailed description of a highly efficient GPGPU implementation of a molecular dynamics algorithm using the REBO potential. The presented algorithm takes advantage of rarely used properties of the SIMT architecture of a modern GPU to solve the difficult synchronization issues that arise in computations of multi-body potentials. Techniques developed for this problem may also be used to achieve efficient solutions of different problems. The performance of the proposed algorithm is assessed using a range of model systems and is compared to the highly optimized CPU implementation (both single core and OpenMP) available in the LAMMPS package. These experiments show up to a 6x improvement in forces computation time using a single processor of the NVIDIA Tesla K80 compared to a high-end 16-core Intel Xeon processor.

  17. The cost-effectiveness of quality improvement projects: a conceptual framework, checklist and online tool for considering the costs and consequences of implementation-based quality improvement.

    PubMed

    Thompson, Carl; Pulleyblank, Ryan; Parrott, Steve; Essex, Holly

    2016-02-01

    In resource constrained systems, decision makers should be concerned with the efficiency of implementing improvement techniques and technologies. Accordingly, they should consider both the costs and effectiveness of implementation as well as the cost-effectiveness of the innovation to be implemented. An approach to doing this effectively is encapsulated in the 'policy cost-effectiveness' approach. This paper outlines some of the theoretical and practical challenges to assessing policy cost-effectiveness (the cost-effectiveness of implementation projects). A checklist and associated (freely available) online application are also presented to help services develop more cost-effective implementation strategies. © 2015 John Wiley & Sons, Ltd.

  18. Preparing data for analysis using microsoft Excel.

    PubMed

    Elliott, Alan C; Hynan, Linda S; Reisch, Joan S; Smith, Janet P

    2006-09-01

    A critical component essential to good research is the accurate and efficient collection and preparation of data for analysis. Most medical researchers have little or no training in data management, often causing not only excessive time spent cleaning data but also a risk that the data set contains collection or recording errors. The implementation of simple guidelines based on techniques used by professional data management teams will save researchers time and money and result in a data set better suited to answer research questions. Because Microsoft Excel is often used by researchers to collect data, specific techniques that can be implemented in Excel are presented.

  19. Parallel performance investigations of an unstructured mesh Navier-Stokes solver

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.

    2000-01-01

    A Reynolds-averaged Navier-Stokes solver based on unstructured mesh techniques for analysis of high-lift configurations is described. The method makes use of an agglomeration multigrid solver for convergence acceleration. Implicit line-smoothing is employed to relieve the stiffness associated with highly stretched meshes. A GMRES technique is also implemented to speed convergence at the expense of additional memory usage. The solver is cache efficient and fully vectorizable, and is parallelized using a two-level hybrid MPI-OpenMP implementation suitable for shared and/or distributed memory architectures, as well as clusters of shared memory machines. Convergence and scalability results are illustrated for various high-lift cases.

  20. Neural Generalized Predictive Control: A Newton-Raphson Implementation

    NASA Technical Reports Server (NTRS)

    Soloway, Donald; Haley, Pamela J.

    1997-01-01

    An efficient implementation of Generalized Predictive Control using a multi-layer feedforward neural network as the plant's nonlinear model is presented. In using Newton-Raphson as the optimization algorithm, the number of iterations needed for convergence is significantly reduced from other techniques. The main cost of the Newton-Raphson algorithm is in the calculation of the Hessian, but even with this overhead the low iteration numbers make Newton-Raphson faster than other techniques and a viable algorithm for real-time control. This paper presents a detailed derivation of the Neural Generalized Predictive Control algorithm with Newton-Raphson as the minimization algorithm. Simulation results show convergence to a good solution within two iterations and timing data show that real-time control is possible. Comments about the algorithm's implementation are also included.
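
    The core update can be sketched as a Newton-Raphson iteration on the control sequence, u ← u − H⁻¹g. The toy plant, cost weights, and finite-difference derivatives below are assumptions for illustration only; the paper derives the gradient and Hessian analytically from the neural network plant model.

```python
import numpy as np

def cost(u, y_ref):
    """Toy GPC-style cost: tracking error plus a control-move penalty."""
    y = np.tanh(np.cumsum(u))                        # toy nonlinear plant prediction
    return np.sum((y_ref - y) ** 2) + 0.01 * np.sum(np.diff(u, prepend=0) ** 2)

def newton_raphson(u0, y_ref, n_iter=5, eps=1e-4):
    u = u0.copy()
    n = len(u)
    for _ in range(n_iter):
        g = np.zeros(n)
        H = np.zeros((n, n))
        f0 = cost(u, y_ref)
        # Finite-difference gradient and Hessian (analytic in the paper)
        for i in range(n):
            e_i = np.zeros(n); e_i[i] = eps
            g[i] = (cost(u + e_i, y_ref) - cost(u - e_i, y_ref)) / (2 * eps)
        for i in range(n):
            for j in range(n):
                e_i = np.zeros(n); e_i[i] = eps
                e_j = np.zeros(n); e_j[j] = eps
                H[i, j] = (cost(u + e_i + e_j, y_ref) - cost(u + e_i, y_ref)
                           - cost(u + e_j, y_ref) + f0) / eps ** 2
        u -= np.linalg.solve(H + 1e-8 * np.eye(n), g)   # Newton step
    return u

y_ref = 0.5 * np.ones(5)
u_opt = newton_raphson(np.zeros(5), y_ref)           # converges in a few steps
```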

  1. Prospects of detection of the first sources with SKA using matched filters

    NASA Astrophysics Data System (ADS)

    Ghara, Raghunath; Choudhury, T. Roy; Datta, Kanan K.; Mellema, Garrelt; Choudhuri, Samir; Majumdar, Suman; Giri, Sambit K.

    2018-05-01

    The matched filtering technique is an efficient method to detect H ii bubbles and absorption regions in radio interferometric observations of the redshifted 21-cm signal from the epoch of reionization and the Cosmic Dawn. Here, we present an implementation of this technique for upcoming observations, such as with SKA1-low, for a blind search of absorption regions at the Cosmic Dawn. The pipeline explores a four-dimensional parameter space on simulated mock visibilities using an MCMC algorithm. The framework is able to efficiently determine the positions and sizes of the absorption/H ii regions in the field of view.
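
    The essence of the matched filter is correlation against a template normalised by its energy. The 1-D sketch below, with an assumed Gaussian absorption profile buried in a noise background, is only a cartoon of the pipeline, which operates on interferometric visibilities and explores the template parameters with MCMC.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1024
signal = rng.standard_normal(n)                             # noise background
template = np.exp(-0.5 * ((np.arange(64) - 32) / 8.0) ** 2) # absorption-like profile
signal[400:464] -= 3.0 * template                           # bury an absorption dip

# Matched-filter response: correlate and normalise by template energy
snr = np.correlate(signal, template, mode='valid') / np.sqrt(np.sum(template ** 2))
detection = np.argmin(snr)                                  # strongest dip, ~index 400
```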

  2. Optimal focal-plane restoration

    NASA Technical Reports Server (NTRS)

    Reichenbach, Stephen E.; Park, Stephen K.

    1989-01-01

    Image restoration can be implemented efficiently by calculating the convolution of the digital image and a small kernel during image acquisition. Processing the image in the focal-plane in this way requires less computation than traditional Fourier-transform-based techniques such as the Wiener filter and constrained least-squares filter. Here, the values of the convolution kernel that yield the restoration with minimum expected mean-square error are determined using a frequency analysis of the end-to-end imaging system. This development accounts for constraints on the size and shape of the spatial kernel and all the components of the imaging system. Simulation results indicate the technique is effective and efficient.
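
    To make the small-kernel idea concrete, the sketch below computes a 5x5 restoration kernel as a least-squares inverse of an assumed 3x3 blur and applies it with one convolution, as a focal-plane processor could. This is a simplified stand-in: the paper derives the kernel values from a frequency analysis of the full end-to-end system, including noise.

```python
import numpy as np
from scipy.signal import convolve2d

psf = np.outer([1, 2, 1], [1, 2, 1]) / 16.0          # assumed 3x3 blur

# Each 5x5 kernel tap contributes a shifted copy of the PSF to the 7x7 output,
# so a least-squares fit to a delta gives an approximate inverse on that support.
A = np.zeros((49, 25))
for i in range(5):
    for j in range(5):
        out = np.zeros((7, 7))
        out[i:i + 3, j:j + 3] = psf
        A[:, 5 * i + j] = out.ravel()
target = np.zeros((7, 7)); target[3, 3] = 1.0        # ideal: delta response
k = np.linalg.lstsq(A, target.ravel(), rcond=None)[0].reshape(5, 5)

rng = np.random.default_rng(4)
scene = rng.uniform(size=(64, 64))
blurred = convolve2d(scene, psf, mode='same')
restored = convolve2d(blurred, k, mode='same')       # one small-kernel convolution
```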

  3. Strategies for improved efficiency when implementing plant vitrification techniques

    USDA-ARS?s Scientific Manuscript database

    Cryopreservation technologies allow vegetatively propagated genetic resources to be preserved for extended lengths of time. Once successful methods have been established, there is a significant time investment to cryopreserve gene bank collections. Our research seeks to identify methods that could i...

  4. Electromagnetic Counter-Counter Measure (ECCM) Techniques of the Digital Microwave Radio.

    DTIC Science & Technology

    1982-05-01

    Frequency hopping requires special synthesizers and filter banks. Large bandwidth expansion in a microwave radio relay application can best be achieved with... Processing gain; performance as a function of jammer modulation type; pulse jammer performance; emission bandwidth and spectral shaping; spectral efficiency; implementation complexity; and suitability for ECCM techniques will be considered. A summary of the requirements and characteristics of...

  5. Human motion planning based on recursive dynamics and optimal control techniques

    NASA Technical Reports Server (NTRS)

    Lo, Janzen; Huang, Gang; Metaxas, Dimitris

    2002-01-01

    This paper presents an efficient optimal control and recursive dynamics-based computer animation system for simulating and controlling the motion of articulated figures. A quasi-Newton nonlinear programming technique (super-linear convergence) is implemented to solve minimum torque-based human motion-planning problems. The explicit analytical gradients needed in the dynamics are derived using a matrix exponential formulation and Lie algebra. Cubic spline functions are used to make the search space for an optimal solution finite. Based on our formulations, our method is well conditioned and robust, in addition to being computationally efficient. To better illustrate the efficiency of our method, we present results of natural looking and physically correct human motions for a variety of human motion tasks involving open and closed loop kinematic chains.

  6. Parallelization and implementation of approximate root isolation for nonlinear system by Monte Carlo

    NASA Astrophysics Data System (ADS)

    Khosravi, Ebrahim

    1998-12-01

    This dissertation solves a fundamental problem, the isolation of the real roots of nonlinear systems of equations by Monte Carlo, using an algorithm published by Bush Jones. The algorithm requires only function values and can be applied readily to complicated systems of transcendental functions. The implementation of this sequential algorithm provides scientists with the means to utilize function analysis in mathematics or other fields of science. The algorithm, however, is so computationally intensive that the system is limited to a very small set of variables, making it unfeasible for large systems of equations. A computational technique was also needed to prevent the algorithm from converging to the same root along different paths of computation. The research provides techniques for improving the efficiency and correctness of the algorithm. The sequential algorithm for this technique was corrected and a parallel algorithm is presented. The parallel method has been formally analyzed and is compared with other known methods of root isolation. The effectiveness, efficiency, and enhanced overall performance of the parallel version in comparison to sequential processing are discussed. The message passing model was used for this parallel processing; it is presented and implemented on the Intel i860 MIMD architecture. The parallel processing proposed in this research has been applied in an ongoing high energy physics experiment: the algorithm has been used to track neutrinos in the Super-K detector. This experiment is located in Japan, and data can be processed on-line or off-line, locally or remotely.

  7. Wideband energy harvesting for piezoelectric devices with linear resonant behavior.

    PubMed

    Luo, Cheng; Hofmann, Heath F

    2011-07-01

    In this paper, an active energy harvesting technique for a spring-mass-damper mechanical resonator with piezoelectric electromechanical coupling is investigated. This technique applies a square-wave voltage to the terminals of the device at the same frequency as the mechanical excitation. By controlling the magnitude and phase angle of this voltage, an effective impedance matching can be achieved which maximizes the amount of power extracted from the device. Theoretically, the harvested power can be the maximum possible value, even at off-resonance frequencies. However, in actual implementation, the efficiency of the power electronic circuit limits the amount of power harvested. A power electronic full-bridge converter is built to implement the technique. Experimental results show that the active technique can increase the effective bandwidth by a factor of more than 2, and harvests significantly higher power than rectifier-based circuits at off-resonance frequencies.

  8. Processing Functional Near Infrared Spectroscopy Signal with a Kalman Filter to Assess Working Memory during Simulated Flight.

    PubMed

    Durantin, Gautier; Scannella, Sébastien; Gateau, Thibault; Delorme, Arnaud; Dehais, Frédéric

    2015-01-01

    Working memory (WM) is a key executive function for operating aircraft, especially when pilots have to recall series of air traffic control instructions. There is a need to implement tools to monitor WM as its limitation may jeopardize flight safety. An innovative way to address this issue is to adopt a Neuroergonomics approach that merges knowledge and methods from Human Factors, System Engineering, and Neuroscience. A challenge of great importance for Neuroergonomics is to implement efficient brain imaging techniques to measure the brain at work and to design Brain Computer Interfaces (BCI). We used functional near infrared spectroscopy as it has been already successfully tested to measure WM capacity in complex environment with air traffic controllers (ATC), pilots, or unmanned vehicle operators. However, the extraction of relevant features from the raw signal in ecological environment is still a critical issue due to the complexity of implementing real-time signal processing techniques without a priori knowledge. We proposed to implement the Kalman filtering approach, a signal processing technique that is efficient when the dynamics of the signal can be modeled. We based our approach on the Boynton model of hemodynamic response. We conducted a first experiment with nine participants involving a basic WM task to estimate the noise covariances of the Kalman filter. We then conducted a more ecological experiment in our flight simulator with 18 pilots who interacted with ATC instructions (two levels of difficulty). The data was processed with the same Kalman filter settings implemented in the first experiment. This filter was benchmarked with a classical pass-band IIR filter and a Moving Average Convergence Divergence (MACD) filter. Statistical analysis revealed that the Kalman filter was the most efficient to separate the two levels of load, by increasing the observed effect size in prefrontal areas involved in WM. In addition, the use of a Kalman filter increased the performance of the classification of WM levels based on brain signal. The results suggest that Kalman filter is a suitable approach for real-time improvement of near infrared spectroscopy signal in ecological situations and the development of BCI.
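
    To illustrate the recursion involved, here is a scalar Kalman filter sketch with a simple random-walk state model; the noise variances are arbitrary. The study's filter instead embeds the Boynton haemodynamic response model in the state equation, with noise covariances estimated from a calibration experiment.

```python
import numpy as np

def kalman_filter(y, q=1e-4, r=1e-1):
    """Scalar Kalman filter for x_k = x_{k-1} + w_k, y_k = x_k + v_k.

    q: process-noise variance, r: measurement-noise variance (assumed values).
    """
    x_hat = np.zeros_like(y)
    x, p = y[0], 1.0
    for k, yk in enumerate(y):
        p = p + q                        # predict: state variance grows
        gain = p / (p + r)               # update: Kalman gain
        x = x + gain * (yk - x)
        p = (1.0 - gain) * p
        x_hat[k] = x
    return x_hat

rng = np.random.default_rng(3)
t = np.linspace(0, 60, 600)
hrf_like = 0.5 * np.sin(2 * np.pi * t / 30.0)        # slow haemodynamic trend
y = hrf_like + 0.3 * rng.standard_normal(t.size)     # noisy fNIRS-like series
x_hat = kalman_filter(y)                             # denoised estimate
```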

  9. Processing Functional Near Infrared Spectroscopy Signal with a Kalman Filter to Assess Working Memory during Simulated Flight

    PubMed Central

    Durantin, Gautier; Scannella, Sébastien; Gateau, Thibault; Delorme, Arnaud; Dehais, Frédéric

    2016-01-01

    Working memory (WM) is a key executive function for operating aircraft, especially when pilots have to recall series of air traffic control instructions. There is a need to implement tools to monitor WM as its limitation may jeopardize flight safety. An innovative way to address this issue is to adopt a Neuroergonomics approach that merges knowledge and methods from Human Factors, System Engineering, and Neuroscience. A challenge of great importance for Neuroergonomics is to implement efficient brain imaging techniques to measure the brain at work and to design Brain Computer Interfaces (BCI). We used functional near infrared spectroscopy as it has been already successfully tested to measure WM capacity in complex environment with air traffic controllers (ATC), pilots, or unmanned vehicle operators. However, the extraction of relevant features from the raw signal in ecological environment is still a critical issue due to the complexity of implementing real-time signal processing techniques without a priori knowledge. We proposed to implement the Kalman filtering approach, a signal processing technique that is efficient when the dynamics of the signal can be modeled. We based our approach on the Boynton model of hemodynamic response. We conducted a first experiment with nine participants involving a basic WM task to estimate the noise covariances of the Kalman filter. We then conducted a more ecological experiment in our flight simulator with 18 pilots who interacted with ATC instructions (two levels of difficulty). The data was processed with the same Kalman filter settings implemented in the first experiment. This filter was benchmarked with a classical pass-band IIR filter and a Moving Average Convergence Divergence (MACD) filter. Statistical analysis revealed that the Kalman filter was the most efficient to separate the two levels of load, by increasing the observed effect size in prefrontal areas involved in WM. In addition, the use of a Kalman filter increased the performance of the classification of WM levels based on brain signal. The results suggest that Kalman filter is a suitable approach for real-time improvement of near infrared spectroscopy signal in ecological situations and the development of BCI. PMID:26834607

  10. Electrooptical adaptive switching network for the hypercube computer

    NASA Technical Reports Server (NTRS)

    Chow, E.; Peterson, J.

    1988-01-01

    An all-optical network design for the hyperswitch network using regular free-space interconnects between electronic processor nodes is presented. The adaptive routing model used is described, and an adaptive routing control example is presented. The design demonstrates that existing electrooptical techniques are sufficient for implementing efficient parallel architectures without the need for more complex means of implementing arbitrary interconnection schemes. The electrooptical hyperswitch network significantly improves the communication performance of the hypercube computer.

  11. NASA software specification and evaluation system: Software verification/validation techniques

    NASA Technical Reports Server (NTRS)

    1977-01-01

    NASA software requirement specifications were used in the development of a system for validating and verifying computer programs. The software specification and evaluation system (SSES) provides for the effective and efficient specification, implementation, and testing of computer software programs. The system as implemented will produce structured FORTRAN or ANSI FORTRAN programs, but the principles upon which SSES is designed allow it to be easily adapted to other high order languages.

  12. Accelerating image recognition on mobile devices using GPGPU

    NASA Astrophysics Data System (ADS)

    Bordallo López, Miguel; Nykänen, Henri; Hannuksela, Jari; Silvén, Olli; Vehviläinen, Markku

    2011-01-01

    The future multi-modal user interfaces of battery-powered mobile devices are expected to require computationally costly image analysis techniques. Graphics Processing Units are very well suited to parallel processing, and the addition of programmable stages and high-precision arithmetic provides opportunities to implement complete, energy-efficient algorithms. At the moment the first mobile graphics accelerators with programmable pipelines are available, enabling the GPGPU implementation of several image processing algorithms. In this context, we consider a face tracking approach that uses efficient gray-scale invariant texture features and boosting. The solution is based on Local Binary Pattern (LBP) features and makes use of the GPU in the pre-processing and feature extraction phases. We have implemented a series of image processing techniques in the shader language of OpenGL ES 2.0, compiled them for a mobile graphics processing unit and performed tests on a mobile application processor platform (OMAP3530). In our contribution, we describe the challenges of designing on a mobile platform, present the performance achieved and provide measurement results for the actual power consumption in comparison to using the CPU (ARM) on the same platform.

  13. Execution models for mapping programs onto distributed memory parallel computers

    NASA Technical Reports Server (NTRS)

    Sussman, Alan

    1992-01-01

    The problem of exploiting the parallelism available in a program to efficiently employ the resources of the target machine is addressed. The problem is discussed in the context of building a mapping compiler for a distributed memory parallel machine. The paper describes using execution models to drive the process of mapping a program in the most efficient way onto a particular machine. Through analysis of the execution models for several mapping techniques for one class of programs, we show that the selection of the best technique for a particular program instance can make a significant difference in performance. On the other hand, the results of benchmarks from an implementation of a mapping compiler show that our execution models are accurate enough to select the best mapping technique for a given program.

  14. Ultra low-power transceiver with novel FSK modulation technique and efficient FSK-to-ASK demodulation.

    PubMed

    Zgaren, Mohamed; Moradi, Arash; Sawan, Mohamad

    2015-01-01

    Energy efficiency and a high data rate are desired in biomedical device transceivers. A high-performance transmitter (Tx) and an ultra-low-power receiver (Rx) dedicated to medical implant communications, operating in the Industrial, Scientific and Medical (ISM) frequency band, are presented. The Tx benefits from a new efficient Frequency-Shift Keying (FSK) modulation technique which provides up to 20 Mb/s of data rate and consumes only 0.084 nJ/b, as validated through fabrication. The receiver consists of an FSK-to-ASK conversion-based receiver with a fully passive OOK wake-up device (WuRx). This WuRx is battery-less, using an energy-harvesting technique that plays an important role in making the RF transceiver energy-efficient. The Rx is achieved with a reduced hardware architecture which does not use an accurate local oscillator, a high-Q external inductor, or an I/Q signal path. The Rx shows -78 dBm sensitivity for an 8 Mbps data rate while consuming 639 μW. The proposed circuits are implemented in IBM 0.13 μm CMOS technology with a 1.2 V supply voltage.

  15. An Initial Multi-Domain Modeling of an Actively Cooled Structure

    NASA Technical Reports Server (NTRS)

    Steinthorsson, Erlendur

    1997-01-01

    A methodology for the simulation of turbine cooling flows is being developed. The methodology seeks to combine numerical techniques that optimize both accuracy and computational efficiency. Key components of the methodology include the use of multiblock grid systems for modeling complex geometries, and multigrid convergence acceleration for enhancing computational efficiency in highly resolved fluid flow simulations. The use of the methodology has been demonstrated in several turbomachinery flow and heat transfer studies. Ongoing and future work involves implementing additional turbulence models, improving computational efficiency, and adding adaptive mesh refinement (AMR).

  16. FPGA implementation of a biological neural network based on the Hodgkin-Huxley neuron model

    PubMed Central

    Yaghini Bonabi, Safa; Asgharian, Hassan; Safari, Saeed; Nili Ahmadabadi, Majid

    2014-01-01

    A set of techniques for efficient implementation of a Hodgkin-Huxley-based (H-H) model of a neural network on an FPGA (Field Programmable Gate Array) is presented. The central implementation challenge is the H-H model's complexity, which puts limits on the network size and on the execution speed. However, the basics of the original model cannot be compromised when the effect of synaptic specifications on network behavior is the subject of study. To solve the problem, we used computational techniques such as the CORDIC (Coordinate Rotation Digital Computer) algorithm and step-by-step integration in the implementation of the arithmetic circuits. In addition, we employed techniques such as resource sharing to preserve the details of the model, increase the network size, and keep the network execution speed close to real time while maintaining high precision. An implementation of a two-mini-column network with 120/30 excitatory/inhibitory neurons is provided to investigate the characteristics of our method in practice. The implementation techniques provide an opportunity to construct large FPGA-based network models to investigate the effect of different neurophysiological mechanisms, like voltage-gated channels and synaptic activities, on the behavior of a neural network in an appropriate execution time. In addition to inherent properties of FPGAs, like parallelism and re-configurability, our approach makes the FPGA-based system a proper candidate for studies of neural control of cognitive robots and systems as well. PMID:25484854
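
    The "step-by-step integration" the abstract mentions can be illustrated in ordinary floating-point software before committing it to fixed-point FPGA arithmetic. The sketch below integrates the classic squid-axon H-H equations with forward Euler; the constants are the textbook values, not the paper's network or fixed-point parameters.

```python
import numpy as np

# Classic squid-axon Hodgkin-Huxley constants (mV, ms, mS/cm^2, uA/cm^2).
C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3
ENa, EK, EL = 50.0, -77.0, -54.4

def rates(V):
    an = 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    bn = 0.125 * np.exp(-(V + 65.0) / 80.0)
    am = 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    bm = 4.0 * np.exp(-(V + 65.0) / 18.0)
    ah = 0.07 * np.exp(-(V + 65.0) / 20.0)
    bh = 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    return an, bn, am, bm, ah, bh

def simulate(I=10.0, dt=0.01, steps=5000):
    V, n, m, h = -65.0, 0.317, 0.053, 0.596   # resting-state values
    trace = np.empty(steps)
    for k in range(steps):
        an, bn, am, bm, ah, bh = rates(V)
        # Step-by-step (forward Euler) integration of the gating variables.
        n += dt * (an * (1.0 - n) - bn * n)
        m += dt * (am * (1.0 - m) - bm * m)
        h += dt * (ah * (1.0 - h) - bh * h)
        INa = gNa * m**3 * h * (V - ENa)
        IK = gK * n**4 * (V - EK)
        IL = gL * (V - EL)
        V += dt * (I - INa - IK - IL) / C
        trace[k] = V
    return trace
```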

  17. Fast and Adaptive Lossless On-Board Hyperspectral Data Compression System for Space Applications

    NASA Technical Reports Server (NTRS)

    Aranki, Nazeeh; Bakhshi, Alireza; Keymeulen, Didier; Klimesh, Matthew

    2009-01-01

    Efficient on-board lossless hyperspectral data compression reduces the data volume necessary to meet NASA and DoD limited downlink capabilities. The technique also improves signature extraction, object recognition and feature classification capabilities by providing exact reconstructed data on constrained downlink resources. At JPL, a novel, adaptive and predictive technique for lossless compression of hyperspectral data was recently developed. This technique uses an adaptive filtering method and achieves a combination of low complexity and compression effectiveness that far exceeds state-of-the-art techniques currently in use. The JPL-developed 'Fast Lossless' algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. It is of low computational complexity and thus well suited for implementation in hardware, which makes it practical for flight implementations of pushbroom instruments. A prototype of the compressor (and decompressor) of the algorithm is available in software, but this implementation may not meet the speed and real-time requirements of some space applications. Hardware acceleration provides performance improvements of 10x-100x vs. the software implementation (about 1M samples/sec on a Pentium IV machine). This paper describes a hardware implementation of the JPL-developed 'Fast Lossless' compression algorithm on a Field Programmable Gate Array (FPGA). The FPGA implementation targets current state-of-the-art FPGAs (Xilinx Virtex IV and V families) and compresses one sample every clock cycle to provide a fast and practical real-time solution for space applications.
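
    As a rough illustration of the adaptive-filtering flavor of such predictive compressors, the sketch below runs a sign-LMS linear predictor along one band and counts the bits a Rice coder would spend on the mapped residuals. This is a generic stand-in, not the actual JPL Fast Lossless specification; nw, mu and k_rice are illustrative choices.

```python
import numpy as np

def compress_band_size(x, nw=3, mu=0.05, k_rice=4):
    """Generic adaptive predictive coder sketch for one 1-D sample
    stream: predict each sample from its nw predecessors, update the
    weights with a normalized sign-LMS rule, and tally the Rice-code
    length of the zigzag-mapped residuals. Not the JPL algorithm."""
    w = np.zeros(nw)
    bits = 0
    for i in range(nw, len(x)):
        ctx = x[i - nw:i].astype(float)         # causal context
        e = int(round(x[i] - w @ ctx))          # prediction residual
        w += mu * np.sign(e) * ctx / (1.0 + ctx @ ctx)
        u = 2 * e if e >= 0 else -2 * e - 1     # map residual to >= 0
        bits += (u >> k_rice) + 1 + k_rice      # unary + k binary bits
    return bits

samples = (1000 + 50 * np.sin(np.arange(2000) / 20.0)).astype(int)
print(compress_band_size(samples), "bits for", samples.size, "samples")
```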

  18. A novel unsplit perfectly matched layer for the second-order acoustic wave equation.

    PubMed

    Ma, Youneng; Yu, Jinhua; Wang, Yuanyuan

    2014-08-01

    When solving acoustic field equations by numerical approximation techniques, absorbing boundary conditions (ABCs) are widely used to truncate the simulation to a finite space. The perfectly matched layer (PML) technique has exhibited excellent absorbing efficiency as an ABC for the acoustic wave equation formulated as a first-order system. However, as the PML was originally designed for the first-order equation system, it cannot be applied to the second-order equation system directly. In this article, we aim to extend the unsplit PML to the second-order equation system. We developed an efficient unsplit implementation of the PML for the second-order acoustic wave equation based on an auxiliary-differential-equation (ADE) scheme. The proposed method can benefit the use of the PML in simulations based on second-order equations. Compared with existing PMLs, it has a simpler implementation and requires less extra storage. Numerical results from finite-difference time-domain models are provided to illustrate the validity of the approach.

  19. Extending the Multi-level Method for the Simulation of Stochastic Biological Systems.

    PubMed

    Lester, Christopher; Baker, Ruth E; Giles, Michael B; Yates, Christian A

    2016-08-01

    The multi-level method for discrete-state systems, first introduced by Anderson and Higham (SIAM Multiscale Model Simul 10(1):146-179, 2012), is a highly efficient simulation technique that can be used to elucidate statistical characteristics of biochemical reaction networks. A single point estimator is produced in a cost-effective manner by combining a number of estimators of differing accuracy in a telescoping sum, and, as such, the method has the potential to revolutionise the field of stochastic simulation. In this paper, we present several refinements of the multi-level method which render it easier to understand and implement, and also more efficient. Given the substantial and complex nature of the multi-level method, the first part of this work reviews existing literature, with the aim of providing a practical guide to the use of the multi-level method. The second part provides the means for a deft implementation of the technique and concludes with a discussion of a number of open problems.
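
    The telescoping sum at the heart of the method is easy to sketch. Below, sample_level is a hypothetical stand-in for a pair of coarse/fine stochastic simulations; the refinements the paper develops concern how such pairs are coupled and how the per-level sample counts are chosen.

```python
import numpy as np

def mlmc_estimate(sample_level, L, n_per_level, seed=0):
    """Multi-level Monte Carlo telescoping estimator:
        E[X_L] ~= E[X_0] + sum_{l=1}^{L} E[X_l - X_{l-1}].
    sample_level(l, rng) must return a pair (fine, coarse), with
    coarse == 0.0 at l == 0; it is a hypothetical stand-in for
    stochastic simulations of differing accuracy."""
    rng = np.random.default_rng(seed)
    estimate = 0.0
    for l in range(L + 1):
        draws = [sample_level(l, rng) for _ in range(n_per_level[l])]
        estimate += np.mean([fine - coarse for fine, coarse in draws])
    return estimate

# Toy stand-in: "level l" estimates E[Z^2] with 4**l averaged squares.
# (A faithful coupling would share randomness between fine and coarse.)
def toy_level(l, rng):
    fine = np.mean(rng.standard_normal(4 ** l) ** 2)
    coarse = np.mean(rng.standard_normal(4 ** (l - 1)) ** 2) if l else 0.0
    return fine, coarse

print(mlmc_estimate(toy_level, L=4, n_per_level=[4000, 1000, 250, 60, 15]))
```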

  20. Efficient Device-Independent Entanglement Detection for Multipartite Systems

    NASA Astrophysics Data System (ADS)

    Baccari, F.; Cavalcanti, D.; Wittek, P.; Acín, A.

    2017-04-01

    Entanglement is one of the most studied properties of quantum mechanics for its application in quantum information protocols. Nevertheless, detecting the presence of entanglement in large multipartite states continues to be a great challenge both from the theoretical and the experimental point of view. Most of the known methods either have computational costs that scale inefficiently with the number of particles or require more information on the state than what is attainable in everyday experiments. We introduce a new technique for entanglement detection that provides several important advantages in these respects. First, it scales efficiently with the number of particles, thus allowing for application to systems composed of up to a few tens of particles. Second, it needs only the knowledge of a subset of all possible measurements on the state, therefore being apt for experimental implementation. Moreover, since it is based on the detection of nonlocality, our method is device-independent. We report several examples of its implementation for well-known multipartite states, showing that the introduced technique has a promising range of applications.

  1. Parallel optoelectronic trinary signed-digit division

    NASA Astrophysics Data System (ADS)

    Alam, Mohammad S.

    1999-03-01

    The trinary signed-digit (TSD) number system has been found to be very useful for parallel addition and subtraction of any arbitrary length operands in constant time. Using the TSD addition and multiplication modules as the basic building blocks, we develop an efficient algorithm for performing parallel TSD division in constant time. The proposed division technique uses one TSD subtraction and two TSD multiplication steps. An optoelectronic correlator based architecture is suggested for implementation of the proposed TSD division algorithm, which fully exploits the parallelism and high processing speed of optics. An efficient spatial encoding scheme is used to ensure better utilization of the space-bandwidth product of the spatial light modulators used in the optoelectronic implementation.
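
    The count of "one subtraction and two multiplications" per step matches the classic Newton-Raphson reciprocal iteration x ← x(2 − bx), so a plausible scalar analogue of the scheme (conventional floats standing in for TSD operands) is:

```python
import math

def newton_divide(a, b, iters=6):
    """Approximate a / b with the reciprocal iteration x <- x*(2 - b*x):
    one subtraction and two multiplications per step, echoing the
    operation count in the abstract. Plain floats stand in for TSD
    operands; b is assumed positive and is pre-scaled into [0.5, 1)."""
    e = math.frexp(b)[1]              # b = f * 2**e with f in [0.5, 1)
    bs = b / 2.0 ** e
    x = 2.9142 - 2.0 * bs             # classic linear initial estimate
    for _ in range(iters):
        x = x * (2.0 - bs * x)        # quadratic convergence to 1/bs
    return a * x / 2.0 ** e

assert abs(newton_divide(355.0, 113.0) - 355.0 / 113.0) < 1e-12
```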

  2. Implementation of efficient trajectories for an ultrasonic scanner using chaotic maps

    NASA Astrophysics Data System (ADS)

    Almeda, A.; Baltazar, A.; Treesatayapun, C.; Mijarez, R.

    2012-05-01

    Typical ultrasonic methodology for nondestructive scanning evaluation uses systematic scanning paths. In many cases, this approach is time-inefficient and consumes excessive energy and computational power. Here, a methodology for the scanning of defects using an ultrasonic echo-pulse scanning technique combined with chaotic trajectory generation is proposed. It is implemented on a Cartesian-coordinate robotic system developed in our lab. To cover the entire search area, a chaotic function and a proposed mirror mapping were incorporated. To improve detection probability, our proposed scanning methodology is complemented with a probabilistic approach to discontinuity detection. The developed methodology was found to be more efficient than traditional ones used to localize and characterize hidden flaws.
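
    The flavor of chaotic coverage can be sketched with a logistic map driving the scan coordinates. The fold-back step below is a hypothetical reading of the paper's "mirror mapping", used here simply to keep amplified steps inside the part; the stretch factor is illustrative.

```python
import numpy as np

def chaotic_scan_points(n, width=100.0, height=60.0, x0=0.123, y0=0.457):
    """Generate n scan positions from two logistic maps (r = 4, the
    fully chaotic regime) stretched over the scan area. The fold-back
    step is a hypothetical reading of the paper's 'mirror mapping':
    overshoots are reflected back so every point stays on the part."""
    pts = np.empty((n, 2))
    x, y = x0, y0
    for i in range(n):
        x = 4.0 * x * (1.0 - x)          # logistic map on [0, 1]
        y = 4.0 * y * (1.0 - y)
        u, v = 1.2 * x, 1.2 * y          # stretch (illustrative factor)
        u = 2.0 - u if u > 1.0 else u    # mirror back into [0, 1]
        v = 2.0 - v if v > 1.0 else v
        pts[i] = (u * width, v * height)
    return pts

waypoints = chaotic_scan_points(500)     # feed to the XY stage planner
```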

  3. Evaluation of smartphone-based interaction techniques in a CAVE in the context of immersive digital project review

    NASA Astrophysics Data System (ADS)

    George, Paul; Kemeny, Andras; Colombet, Florent; Merienne, Frédéric; Chardonnet, Jean-Rémy; Thouvenin, Indira Mouttapa

    2014-02-01

    Immersive digital project reviews consist of using virtual reality (VR) as a tool for discussion between the various stakeholders of a project. In the automotive industry, the digital car prototype model is the common thread that binds them. It is used during immersive digital project reviews between designers, engineers, ergonomists, etc. The digital mockup is also used to assess future car architecture, habitability or perceived quality requirements, with the aim of reducing the use of physical mockups to optimize cost, delay and quality. Among the difficulties identified by the users, handling the mockup is a major one. Inspired by current uses of nomad devices (multi-touch gestures, IPhone UI look'n'feel and AR applications), we designed a navigation technique taking advantage of these popular input devices: Space scrolling allows moving around the mockup. In this paper, we present the results of a study we conducted on the usability and acceptability of the proposed smartphone-based interaction metaphor compared to a traditional technique, and we provide indications of the most efficient choices for different use-cases accordingly. It was carried out in a traditional 4-sided CAVE, and its purpose was to assess a chosen set of interaction techniques to be implemented in Renault's new 5-sided 4K x 4K wall high-performance CAVE. The proposed new metaphor using nomad devices is well accepted by novice VR users, and future implementation should allow efficient industrial use. Their use is an easy and user-friendly alternative to existing traditional control devices such as the joystick.

  4. Parallel processors and nonlinear structural dynamics algorithms and software

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted

    1990-01-01

    Techniques are discussed for the implementation and improvement of vectorization and concurrency in nonlinear explicit structural finite element codes. In explicit integration methods, the computation of the element internal force vector consumes the bulk of the computer time. The program can be efficiently vectorized by subdividing the elements into blocks and executing all computations in vector mode. The structuring of elements into blocks also provides a convenient way to implement concurrency by creating tasks which can be assigned to available processors for evaluation. The techniques were implemented in a 3-D nonlinear program with one-point quadrature shell elements. Concurrency and vectorization were first implemented in a single time step version of the program. Techniques were developed to minimize processor idle time and to select the optimal vector length. A comparison of run times between the program executed in scalar, serial mode and the fully vectorized code executed concurrently using eight processors shows speed-ups of over 25. Conjugate gradient methods for solving nonlinear algebraic equations are also readily adapted to a parallel environment. A new technique for improving convergence properties of conjugate gradients in nonlinear problems is developed in conjunction with other techniques such as diagonal scaling. A significant reduction in the number of iterations required for convergence is shown for a statically loaded rigid bar suspended by three equally spaced springs.

  5. High-speed technique based on a parallel projection correlation procedure for digital image correlation

    NASA Astrophysics Data System (ADS)

    Zaripov, D. I.; Renfu, Li

    2018-05-01

    The implementation of high-efficiency digital image correlation methods based on a zero-normalized cross-correlation (ZNCC) procedure for high-speed, time-resolved measurements using a high-resolution digital camera involves big data processing and is often time-consuming. In order to speed up ZNCC computation, a high-speed technique based on a parallel projection correlation procedure is proposed. The proposed technique involves the use of interrogation window projections instead of the window's two-dimensional field of luminous intensity. This simplification allows acceleration of ZNCC computation by up to 28.8 times compared to directly calculated ZNCC, depending on the size of the interrogation window and region of interest. The results of three synthetic test cases, a one-dimensional uniform flow, a linear shear flow and a turbulent boundary-layer flow, are discussed in terms of accuracy. In the latter case, the proposed technique is implemented together with an iterative window-deformation technique. On the basis of the results of the present work, the proposed technique is recommended for initial velocity field calculation, with further correction using more accurate techniques.
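
    The projection idea itself is compact: instead of correlating the full 2-D window, correlate its row and column sums, which reduces the per-candidate cost from O(N^2) to O(N). The sketch below is a simplified stand-in for the authors' full procedure; combining the two axes by averaging is an illustrative choice.

```python
import numpy as np

def zncc_1d(a, b):
    """Zero-normalized cross-correlation of two equal-length 1-D signals."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def projection_zncc(win, ref):
    """Compare two image patches via their axis projections instead of
    the full 2-D field: row sums and column sums are correlated in 1-D.
    Simplified stand-in for the paper's parallel projection procedure."""
    px = zncc_1d(win.sum(axis=0), ref.sum(axis=0))  # column projections
    py = zncc_1d(win.sum(axis=1), ref.sum(axis=1))  # row projections
    return 0.5 * (px + py)
```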

  6. Spatially variant apodization for squinted synthetic aperture radar images.

    PubMed

    Castillo-Rubio, Carlos F; Llorente-Romano, Sergio; Burgos-García, Mateo

    2007-08-01

    Spatially variant apodization (SVA) is a nonlinear sidelobe reduction technique that improves sidelobe level and preserves resolution at the same time. The method implements a bidimensional finite impulse response filter with adaptive taps depending on image information. Previously published papers analyze SVA at the Nyquist rate or at higher rates, focused on stripmap synthetic aperture radar (SAR). This paper shows that traditional SVA techniques are useless when the sensor operates with a squint angle. The reasons for this behaviour are analyzed, and a new implementation that largely improves the results is presented. The algorithm is applied to simulated SAR images in order to demonstrate the good quality achieved along with efficient computation.
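
    For reference, the standard (non-squinted) SVA kernel on a real, Nyquist-sampled cut is only a few lines: each output sample selects its own raised-cosine weight between uniform (a = 0) and Hanning (a = 0.5). The squinted-geometry modifications the paper introduces are not reproduced here.

```python
import numpy as np

def sva_real(x):
    """Spatially variant apodization for a real, Nyquist-sampled signal x
    (1-D NumPy array). Each sample minimizes |x[n] + a*(x[n-1]+x[n+1])|
    over the raised-cosine weight a in [0, 0.5]."""
    y = x.copy()
    for n in range(1, len(x) - 1):
        s = x[n - 1] + x[n + 1]
        if s == 0.0:
            continue
        a = -x[n] / s                  # unconstrained minimizer
        if a <= 0.0:
            y[n] = x[n]                # keep uniform weighting
        elif a >= 0.5:
            y[n] = x[n] + 0.5 * s      # full Hanning weighting
        else:
            y[n] = 0.0                 # pure sidelobe: nulled exactly
    return y

# Complex imagery is typically handled by filtering I and Q separately:
# sva_complex = sva_real(z.real) + 1j * sva_real(z.imag)
```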

  7. Requirements and principles for the implementation and construction of large-scale geographic information systems

    NASA Technical Reports Server (NTRS)

    Smith, Terence R.; Menon, Sudhakar; Star, Jeffrey L.; Estes, John E.

    1987-01-01

    This paper provides a brief survey of the history, structure and functions of 'traditional' geographic information systems (GIS), and then suggests a set of requirements that large-scale GIS should satisfy, together with a set of principles for their satisfaction. These principles, which include the systematic application of techniques from several subfields of computer science to the design and implementation of GIS and the integration of techniques from computer vision and image processing into standard GIS technology, are discussed in some detail. In particular, the paper provides a detailed discussion of questions relating to appropriate data models, data structures and computational procedures for the efficient storage, retrieval and analysis of spatially-indexed data.

  8. Timely Diagnostic Feedback for Database Concept Learning

    ERIC Educational Resources Information Center

    Lin, Jian-Wei; Lai, Yuan-Cheng; Chuang, Yuh-Shy

    2013-01-01

    To efficiently learn database concepts, this work adopts association rules to provide diagnostic feedback for drawing an Entity-Relationship Diagram (ERD). Using association rules and Asynchronous JavaScript and XML (AJAX) techniques, this work implements a novel Web-based Timely Diagnosis System (WTDS), which provides timely diagnostic feedback…

  9. Improving patient safety and optimizing nursing teamwork using crew resource management techniques.

    PubMed

    West, Priscilla; Sculli, Gary; Fore, Amanda; Okam, Nwoha; Dunlap, Cleveland; Neily, Julia; Mills, Peter

    2012-01-01

    This project describes the application of the "sterile cockpit rule," a crew resource management (CRM) technique, targeted to improve efficacy and safety for nursing assistants in the performance of patient care duties. Crew resource management techniques have been successfully implemented in the aviation industry to improve flight safety. Application of these techniques can improve patient safety in medical settings. The Veterans Affairs (VA) National Center for Patient Safety conducted a CRM training program in select VA nursing units. One unit developed a novel application of the sterile cockpit rule to create protected time for certified nursing assistants (CNAs) while they collected vital signs and blood glucose data at the beginning of each shift. The typical nursing authority structure was reversed, with senior nurses protecting CNAs from distractions. This process led to improvements in efficiency and communication among nurses, with the added benefit of increased staff morale. Crew resource management techniques can be used to improve efficiency, morale, and patient safety in the healthcare setting.

  10. A Regev-Type Fully Homomorphic Encryption Scheme Using Modulus Switching

    PubMed Central

    Chen, Zhigang; Wang, Jian; Song, Xinxia

    2014-01-01

    A critical challenge in a fully homomorphic encryption (FHE) scheme is to manage noise. The modulus switching technique is currently the most efficient noise management technique. When using modulus switching to design and implement an FHE scheme, choosing concrete parameters is an important step, but, to the best of our knowledge, this step has drawn very little attention in the existing FHE research literature. The contributions of this paper are twofold. On one hand, we propose a function for the lower bound of the dimension value in the switching technique, depending on the LWE-specific security levels. On the other hand, as a case study, we modify the Brakerski FHE scheme (in Crypto 2012) by using the modulus switching technique. We recommend concrete parameter values for our proposed scheme and provide security analysis. Our result shows that the modified FHE scheme is more efficient than the original Brakerski scheme at the same security level. PMID:25093212
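
    The switching operation itself is a short piece of arithmetic: scale the ciphertext by p/q, round to an integer of the right parity so decryption modulo 2 is preserved, and reduce. A toy sketch with insecure, illustrative parameters:

```python
def mod_switch(c, q, p):
    """Switch a ciphertext vector from modulus q to a smaller modulus p:
    scale each coefficient by p/q, round to the nearest integer with the
    same parity, and reduce mod p. Shrinking the modulus shrinks the
    noise proportionally, which is the point of the technique. Toy
    parameters only; nothing here is a secure instantiation."""
    switched = []
    for ci in c:
        s = round(ci * p / q)
        if (s - ci) % 2:                       # wrong parity: step to the
            s += 1 if s * q < ci * p else -1   # closest matching integer
        switched.append(s % p)
    return switched

print(mod_switch([12345, 6789, 4242], q=2**16, p=2**10))
```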

  11. MIROS: A Hybrid Real-Time Energy-Efficient Operating System for the Resource-Constrained Wireless Sensor Nodes

    PubMed Central

    Liu, Xing; Hou, Kun Mean; de Vaulx, Christophe; Shi, Hongling; Gholami, Khalid El

    2014-01-01

    Operating system (OS) technology is significant for the proliferation of the wireless sensor network (WSN). With an outstanding OS, the constrained WSN resources (processor, memory and energy) can be utilized efficiently. Moreover, the user application development can be served soundly. In this article, a new hybrid, real-time, memory-efficient, energy-efficient, user-friendly and fault-tolerant WSN OS MIROS is designed and implemented. MIROS implements the hybrid scheduler and the dynamic memory allocator. Real-time scheduling can thus be achieved with low memory consumption. In addition, it implements a mid-layer software EMIDE (Efficient Mid-layer Software for User-Friendly Application Development Environment) to decouple the WSN application from the low-level system. The application programming process can consequently be simplified and the application reprogramming performance improved. Moreover, it combines both the software and the multi-core hardware techniques to conserve the energy resources, improve the node reliability, as well as achieve a new debugging method. To evaluate the performance of MIROS, it is compared with the other WSN OSes (TinyOS, Contiki, SOS, openWSN and mantisOS) from different OS concerns. The final evaluation results prove that MIROS is suitable to be used even on the tight resource-constrained WSN nodes. It can support the real-time WSN applications. Furthermore, it is energy efficient, user friendly and fault tolerant. PMID:25248069

  13. Efficient design of CMOS TSC checkers

    NASA Technical Reports Server (NTRS)

    Biddappa, Anita; Shamanna, Manjunath K.; Maki, Gary; Whitaker, Sterling

    1990-01-01

    This paper considers the design of an efficient, robustly testable, CMOS Totally Self-Checking (TSC) Checker for k-out-of-2k codes. Most existing implementations use primitive gates and assume the single stuck-at fault model. The self-testing property has been found to fail for CMOS TSC checkers under the stuck-open fault model due to timing skews and arbitrary delays in the circuit. A new four-level design using CMOS primitive gates (NAND, NOR, INVERTERS) is presented. This design retains its properties under the stuck-open fault model. Additionally, this method offers an impressive reduction (greater than 70 percent) in gate count, gate inputs, and test set size when compared to the existing method. This implementation is easily realizable and is based on Anderson's technique. A thorough comparative study of the proposed implementation and Kundu's implementation has been made, and the results indicate that the proposed one is better than Kundu's in all respects for k-out-of-2k codes.

  14. Ergodicity of Traffic Flow with Constant Penetration Rate for Traffic Monitoring via Floating Vehicle Technique

    NASA Astrophysics Data System (ADS)

    Gunawan, Fergyanto E.; Abbas, Bahtiar S.; Atmadja, Wiedjaja; Yoseph Chandra, Fajar; Agung, Alexander AS; Kusnandar, Erwin

    2014-03-01

    Traffic congestion in Asian megacities has become extremely severe, and any means to lessen the congestion level is urgently needed. Building an efficient mass transportation system is clearly necessary. However, implementing Intelligent Transportation Systems (ITS) has also been demonstrated to be effective in various advanced countries. Recently, the floating vehicle technique (FVT), an ITS implementation, has become a cost-effective way to provide real-time traffic information, given the proliferation of smartphones. Although many publications have discussed various issues related to the technique, none of them elaborates on the discrepancy between a single floating car data (FCD) stream and the associated fleet data. This work addresses the issue based on an analysis of Sugiyama et al.'s experimental data. The results indicate that there is an optimum averaging time interval such that the velocity estimated by the FVT reasonably represents the traffic velocity.

  15. Design and optimization of resonance-based efficient wireless power delivery systems for biomedical implants.

    PubMed

    Ramrakhyani, A K; Mirabbasi, S; Mu Chiao

    2011-02-01

    Resonance-based wireless power delivery is an efficient technique for transferring power over a relatively long distance. This technique typically uses four coils as opposed to the two coils used in conventional inductive links. In the four-coil system, the adverse effects of a low coupling coefficient between primary and secondary coils are compensated by using high quality factor (Q) coils, and the efficiency of the system is improved. Unlike its two-coil counterpart, the efficiency profile of the power transfer is not a monotonically decreasing function of the operating distance and is less sensitive to changes in the distance between the primary and secondary coils. A four-coil energy transfer system can be optimized to provide maximum efficiency at a given operating distance. We have analyzed four-coil energy transfer systems and outlined the effect of design parameters on power-transfer efficiency. Design steps to obtain an efficient power-transfer system are presented, and a design example is provided. A proof-of-concept prototype system was implemented and confirms the validity of the proposed analysis and design techniques. In the prototype system, for a power-link frequency of 700 kHz and a coil distance range of 10 to 20 mm, using a 22-mm-diameter implantable coil, the resonance-based system shows a power-transfer efficiency of more than 80% with an enhanced operating range, compared to the ~40% efficiency achieved by a conventional two-coil system.

  16. Efficient Translation of LTL Formulae into Buchi Automata

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Lerda, Flavio

    2001-01-01

    Model checking is a fully automated technique for checking that a system satisfies a set of required properties. With explicit-state model checkers, properties are typically defined in linear-time temporal logic (LTL), and are translated into Büchi automata in order to be checked. This report presents how we have combined and improved existing techniques to obtain an efficient LTL to Büchi automata translator. In particular, we optimize the core of existing tableau-based approaches to generate significantly smaller automata. Our approach has been implemented and is being released as part of the Java PathFinder software (JPF), an explicit-state model checker under development at the NASA Ames Research Center.

  17. An efficient three-dimensional Poisson solver for SIMD high-performance-computing architectures

    NASA Technical Reports Server (NTRS)

    Cohl, H.

    1994-01-01

    We present an algorithm that solves the three-dimensional Poisson equation on a cylindrical grid. The technique uses a finite-difference scheme with operator splitting. This splitting maps the banded structure of the operator matrix into a two-dimensional set of tridiagonal matrices, which are then solved in parallel. Our algorithm couples FFT techniques with the well-known ADI (Alternating Direction Implicit) method for solving elliptic PDEs, and the implementation is extremely well suited for a massively parallel environment like the SIMD architecture of the MasPar MP-1. Due to the highly recursive nature of our problem, we believe that our method is highly efficient, as it avoids excessive interprocessor communication.

  18. Approximation concepts for efficient structural synthesis

    NASA Technical Reports Server (NTRS)

    Schmit, L. A., Jr.; Miura, H.

    1976-01-01

    It is shown that efficient structural synthesis capabilities can be created by using approximation concepts to mesh finite element structural analysis methods with nonlinear mathematical programming techniques. The history of the application of mathematical programming techniques to structural design optimization problems is reviewed. Several rather general approximation concepts are described along with the technical foundations of the ACCESS 1 computer program, which implements several approximation concepts. A substantial collection of structural design problems involving truss and idealized wing structures is presented. It is concluded that since the basic ideas employed in creating the ACCESS 1 program are rather general, its successful development supports the contention that the introduction of approximation concepts will lead to the emergence of a new generation of practical and efficient, large scale, structural synthesis capabilities in which finite element analysis methods and mathematical programming algorithms will play a central role.

  19. Biasing anisotropic scattering kernels for deep-penetration Monte Carlo calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carter, L.L.; Hendricks, J.S.

    1983-01-01

    The exponential transform is often used to improve the efficiency of deep-penetration Monte Carlo calculations. This technique is usually implemented by biasing the distance-to-collision kernel of the transport equation, but leaving the scattering kernel unchanged. Dwivedi obtained significant improvements in efficiency by biasing an isotropic scattering kernel as well as the distance-to-collision kernel. This idea is extended to anisotropic scattering, particularly the highly forward Klein-Nishina scattering of gamma rays.
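
    The distance-to-collision half of the transform fits in a few lines. The toy below estimates transmission through a pure absorber by sampling stretched path lengths and carrying the likelihood-ratio weight; the cross-section, slab thickness and stretching parameter are illustrative, and the scattering-kernel biasing the paper extends is not shown.

```python
import math, random

def slab_transmission(sigma_t=1.0, thickness=10.0, stretch=0.5, n=200_000):
    """Pure-absorber deep-penetration toy: estimate exp(-sigma_t * T)
    by biasing only the distance-to-collision kernel, as in the
    classical exponential transform. Flights are drawn from the
    stretched density sigma_b * exp(-sigma_b * s), sigma_b < sigma_t,
    and each score carries the likelihood ratio f(s) / g(s)."""
    rng = random.Random(1)
    sigma_b = stretch * sigma_t
    total = 0.0
    for _ in range(n):
        s = -math.log(rng.random()) / sigma_b        # biased free flight
        if s > thickness:                            # particle escapes
            total += (sigma_t / sigma_b) * math.exp(-(sigma_t - sigma_b) * s)
    return total / n

print(slab_transmission(), math.exp(-10.0))   # both ~4.5e-5
```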

  20. Hardware realization of an SVM algorithm implemented in FPGAs

    NASA Astrophysics Data System (ADS)

    Wiśniewski, Remigiusz; Bazydło, Grzegorz; Szcześniak, Paweł

    2017-08-01

    The paper proposes a technique for hardware realization of space vector modulation (SVM) of state function switching in a matrix converter (MC), oriented toward implementation in a single field programmable gate array (FPGA). In an MC, the SVM method is based on the instantaneous space-vector representation of input currents and output voltages. Traditional computation algorithms usually involve digital signal processors (DSPs), which consume a large number of power transistors (18 transistors and 18 independent PWM outputs) and involve "non-standard positions of control pulses" during the switching sequence. Recently, hardware implementations have become popular, since computed operations may be executed much faster and more efficiently due to the nature of digital devices (especially their concurrency). In the paper, we propose a hardware algorithm for SVM computation. In contrast to existing techniques, the presented solution applies the COordinate Rotation DIgital Computer (CORDIC) method to solve the trigonometric operations. Furthermore, adequate arithmetic modules (that is, sub-devices) used for intermediate calculations, such as code converters or proper sector selectors (for output voltages and input currents), are presented in detail. The proposed technique has been implemented as a design described in the Verilog hardware description language. The preliminary results of logic implementation oriented toward Xilinx FPGAs (particularly, a low-cost device from the Xilinx Artix-7 family) are also presented.
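
    The CORDIC kernel such a design relies on is simple to prototype in software before fixing word lengths: with a small arctangent table, each iteration uses only shift-and-add-style operations to rotate a vector toward the target angle. A floating-point sketch (an FPGA version would use fixed-point words):

```python
import math

def cordic_sin_cos(angle, iters=24):
    """Rotation-mode CORDIC: returns (cos(angle), sin(angle)) for
    angle in roughly [-pi/2, pi/2], using a pre-scaled start vector
    so no final multiplication by the gain is needed."""
    atans = [math.atan(2.0 ** -i) for i in range(iters)]
    k = 1.0                              # cascade gain: prod cos(atan(2^-i))
    for i in range(iters):
        k *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = k, 0.0, angle              # start from the pre-scaled vector
    for i in range(iters):
        d = 1.0 if z >= 0.0 else -1.0    # rotate toward the residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * atans[i]
    return x, y                          # (cos, sin)

c, s = cordic_sin_cos(math.pi / 5)
assert abs(c - math.cos(math.pi / 5)) < 1e-6
```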

  1. Efficient Execution Methods of Pivoting for Bulk Extraction of Entity-Attribute-Value-Modeled Data

    PubMed Central

    Luo, Gang; Frey, Lewis J.

    2017-01-01

    Entity-attribute-value (EAV) tables are widely used to store data in electronic medical records and clinical study data management systems. Before they can be used by various analytical (e.g., data mining and machine learning) programs, EAV-modeled data usually must be transformed into conventional relational table format through pivot operations. This time-consuming and resource-intensive process is often performed repeatedly on a regular basis, e.g., to provide a daily refresh of the content in a clinical data warehouse. Thus, it would be beneficial to make pivot operations as efficient as possible. In this paper, we present three techniques for improving the efficiency of pivot operations: 1) filtering out EAV tuples related to unneeded clinical parameters early on; 2) supporting pivoting across multiple EAV tables; and 3) conducting multi-query optimization. We demonstrate the effectiveness of our techniques through implementation. We show that our optimized execution method of pivoting using these techniques significantly outperforms the current basic execution method of pivoting. Our techniques can be used to build a data extraction tool to simplify the specification of and improve the efficiency of extracting data from the EAV tables in electronic medical records and clinical study data management systems. PMID:25608318
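
    In pandas terms (a toy stand-in for the paper's database-engine implementation), the pivot and the early attribute filtering look like the following; the table and column names are invented for illustration.

```python
import pandas as pd

# Toy EAV table as stored in a clinical system; attribute filtering is
# applied *before* pivoting, mirroring the paper's first optimization.
eav = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2, 3],
    "attribute":  ["age", "sbp", "hr", "age", "sbp", "age"],
    "value":      [54, 132, 71, 61, 140, 47],
})

needed = {"age", "sbp"}                        # drop unneeded parameters early
filtered = eav[eav["attribute"].isin(needed)]

wide = filtered.pivot_table(index="patient_id",
                            columns="attribute",
                            values="value",
                            aggfunc="first")   # one tuple per (entity, attr)
print(wide)
```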

  2. Deterministic implementation of a bright, on-demand single photon source with near-unity indistinguishability via quantum dot imaging.

    PubMed

    He, Yu-Ming; Liu, Jin; Maier, Sebastian; Emmerling, Monika; Gerhardt, Stefan; Davanço, Marcelo; Srinivasan, Kartik; Schneider, Christian; Höfling, Sven

    2017-07-20

    Deterministic techniques enabling the implementation and engineering of bright and coherent solid-state quantum light sources are key for the reliable realization of a next generation of quantum devices. Such a technology, at best, should allow one to significantly scale up the number of implemented devices within a given processing time. In this work, we discuss a possible technology platform for such a scaling procedure, relying on the application of nanoscale quantum dot imaging to the pillar microcavity architecture, which promises to combine very high photon extraction efficiency and indistinguishability. We discuss the alignment technology in detail, and present the optical characterization of a selected device which features a strongly Purcell-enhanced emission output. This device, which yields an extraction efficiency of η = (49 ± 4) %, facilitates the emission of photons with (94 ± 2.7) % indistinguishability.

  3. A novel reversible logic gate and its systematic approach to implement cost-efficient arithmetic logic circuits using QCA.

    PubMed

    Ahmad, Peer Zahoor; Quadri, S M K; Ahmad, Firdous; Bahar, Ali Newaz; Wani, Ghulam Mohammad; Tantary, Shafiq Maqbool

    2017-12-01

    Quantum-dot cellular automata (QCA) is an extremely small-size, ultra-low-power nanotechnology. It is a possible alternative to current CMOS technology. Reversible QCA logic is currently the most important approach to reducing power losses. This paper presents a novel reversible logic gate called the F-Gate. It is simple in design and a powerful technique for implementing reversible logic. A systematic approach has been used to implement a novel single-layer reversible Full-Adder, Full-Subtractor and a Full Adder-Subtractor using the F-Gate. The proposed Full Adder-Subtractor has achieved significant improvements in terms of overall circuit parameters among the most cost-efficient previous designs that exploit the inevitable nano-level issues to perform arithmetic computing. The proposed designs have been verified and simulated using the QCADesigner tool ver. 2.0.3.
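
    The abstract does not reproduce the F-Gate's truth table, so the snippet below uses the well-known Fredkin (controlled-swap) gate to illustrate the defining check for any reversible gate: its truth table must be a bijection from input patterns to output patterns.

```python
from itertools import product

def fredkin(a, b, c):
    """Fredkin (controlled-swap) gate: swaps b and c when a == 1.
    Stands in for the paper's F-Gate, whose table the abstract omits."""
    return (a, c, b) if a else (a, b, c)

outputs = [fredkin(*bits) for bits in product((0, 1), repeat=3)]
# Reversible <=> the 8 input patterns map to 8 distinct output patterns.
assert len(set(outputs)) == 8
print("bijective, hence reversible:", sorted(outputs))
```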

  4. Three-dimensional near-field MIMO array imaging using range migration techniques.

    PubMed

    Zhuge, Xiaodong; Yarovoy, Alexander G

    2012-06-01

    This paper presents a 3-D near-field imaging algorithm that is formulated for a 2-D wideband multiple-input-multiple-output (MIMO) imaging array topology. The proposed MIMO range migration technique performs the image reconstruction procedure in the frequency-wavenumber domain. The algorithm is able to completely compensate for the curvature of the wavefront in the near-field through a specifically defined interpolation process and provides extremely high computational efficiency through the application of the fast Fourier transform. The implementation aspects of the algorithm and the sampling criteria of a MIMO aperture are discussed. The image reconstruction performance and computational efficiency of the algorithm are demonstrated both with numerical simulations and with measurements using 2-D MIMO arrays. Real-time 3-D near-field imaging can be achieved with a real-aperture array by applying the proposed MIMO range migration technique.

  5. Technique adaptation, strategic replanning, and team learning during implementation of MR-guided brachytherapy for cervical cancer.

    PubMed

    Skliarenko, Julia; Carlone, Marco; Tanderup, Kari; Han, Kathy; Beiki-Ardakani, Akbar; Borg, Jette; Chan, Kitty; Croke, Jennifer; Rink, Alexandra; Simeonov, Anna; Ujaimi, Reem; Xie, Jason; Fyles, Anthony; Milosevic, Michael

    MR-guided brachytherapy (MRgBT) with interstitial needles is associated with improved outcomes in cervical cancer patients. However, there are implementation barriers, including magnetic resonance (MR) access, practitioner familiarity/comfort, and efficiency. This study explores a graded MRgBT implementation strategy that included the adaptive use of needles, strategic use of MR imaging/planning, and team learning. Twenty patients with cervical cancer were treated with high-dose-rate MRgBT (28 Gy in four fractions, two insertions, daily MR imaging/planning). A tandem/ring applicator alone was used for the first insertion in most patients. Needles were added for the second insertion based on evaluation of the initial dosimetry. An interdisciplinary expert team reviewed and discussed the MR images and treatment plans. Dosimetry-triggered technique adaptation, with the addition of needles for the second insertion, improved target coverage in all patients with initially suboptimal dosimetry, without compromising organ-at-risk (OAR) sparing. Target and OAR planning objectives were achieved in most patients. There were small or no systematic differences in tumor or OAR dosimetry between imaging/planning once per insertion vs. daily, and only small random variations. Peer review and discussion of images, contours, and plans promoted learning and process development. Technique adaptation based on the initial dosimetry is an efficient approach to implementing MRgBT while gaining comfort with the use of needles. MR imaging and planning once per insertion is safe in most patients as long as applicator shifts and large anatomical changes are excluded. Team learning is essential to building individual and programmatic competencies.

  6. Constraint treatment techniques and parallel algorithms for multibody dynamic analysis. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chiou, Jin-Chern

    1990-01-01

    Computational procedures for kinematic and dynamic analysis of three-dimensional multibody dynamic (MBD) systems are developed from the differential-algebraic equations (DAE's) viewpoint. To minimize constraint violations during the time integration process, penalty constraint stabilization techniques and partitioning schemes are developed. The governing equations of motion are treated with a two-stage staggered explicit-implicit numerical algorithm that takes advantage of a partitioned solution procedure. A robust and parallelizable integration algorithm is developed. This algorithm uses a two-stage staggered central difference algorithm to integrate the translational coordinates and the angular velocities. The angular orientations of bodies in MBD systems are then obtained by using an implicit algorithm via the kinematic relationship between Euler parameters and angular velocities. It is shown that the combination of the present solution procedures yields a computationally more accurate solution. To speed up the computational procedures, parallel implementation of the present constraint treatment techniques and the two-stage staggered explicit-implicit numerical algorithm was efficiently carried out. The DAE's and the constraint treatment techniques were transformed into arrowhead matrices, from which the Schur complement form was derived. By fully exploiting sparse matrix structural analysis techniques, a parallel preconditioned conjugate gradient numerical algorithm is used to solve the system equations written in Schur complement form. A software testbed was designed and implemented on both sequential and parallel computers. This testbed was used to demonstrate the robustness and efficiency of the constraint treatment techniques, the accuracy of the two-stage staggered explicit-implicit numerical algorithm, and the speed-up of the Schur-complement-based parallel preconditioned conjugate gradient algorithm on a parallel computer.

  7. Barriers impacting the utilization of supervision techniques in genetic counseling.

    PubMed

    Masunga, Abigail; Wusik, Katie; He, Hua; Yager, Geoffrey; Atzinger, Carrie

    2014-12-01

    Clinical supervision is an essential element in training genetic counselors. Although live supervision has been identified as the most common supervision technique utilized in genetic counseling, there is limited information on factors influencing its use as well as the use of other techniques. The purpose of this study was to identify barriers supervisors face when implementing supervision techniques. All participants (N = 141) reported utilizing co-counseling. This was most used with novice students (96.1%) and intermediate students (93.7%). Other commonly used techniques included live supervision where the supervisor is silent during session (98.6%) which was used most frequently with advanced students (94.0%), and student self-report (64.7%) used most often with advanced students (61.2%). Though no barrier to these commonly used techniques was identified by a majority of participants, the most frequently reported barriers included time and concern about patient's welfare. The remaining supervision techniques (live remote observation, video, and audio recording) were each used by less than 10% of participants. Barriers that significantly influenced use of these techniques included lack of facilities/equipment and concern about patient reactions to technique. Understanding barriers to implementation of supervisory techniques may allow students to be efficiently trained in the future by reducing supervisor burnout and increasing the diversity of techniques used.

  8. Implementation of a block Lanczos algorithm for Eigenproblem solution of gyroscopic systems

    NASA Technical Reports Server (NTRS)

    Gupta, Kajal K.; Lawson, Charles L.

    1987-01-01

    The details of implementation of a general numerical procedure developed for the accurate and economical computation of natural frequencies and associated modes of any elastic structure rotating along an arbitrary axis are described. A block version of the Lanczos algorithm is derived for the solution; it fully exploits the associated matrix sparsity and employs only real numbers in all relevant computations. It is also capable of determining multiple roots and proves to be most efficient when compared to other similar existing techniques.

  9. Scientific Discovery through Advanced Computing (SciDAC-3) Partnership Project Annual Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoffman, Forest M.; Bochev, Pavel B.; Cameron-Smith, Philip J.

    The Applying Computationally Efficient Schemes for BioGeochemical Cycles (ACES4BGC) project is advancing the predictive capabilities of Earth System Models (ESMs) by reducing two of the largest sources of uncertainty, aerosols and biospheric feedbacks, with a highly efficient computational approach. In particular, this project is implementing and optimizing new computationally efficient tracer advection algorithms for large numbers of tracer species; adding important biogeochemical interactions between the atmosphere, land, and ocean models; and applying uncertainty quantification (UQ) techniques to constrain process parameters and evaluate uncertainties in feedbacks between biogeochemical cycles and the climate system.

  10. In-Flight System Identification

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1998-01-01

    A method is proposed and studied whereby the system identification cycle consisting of experiment design and data analysis can be repeatedly implemented aboard a test aircraft in real time. This adaptive in-flight system identification scheme has many advantages, including increased flight test efficiency, adaptability to dynamic characteristics that are imperfectly known a priori, in-flight improvement of data quality through iterative input design, and immediate feedback of the quality of flight test results. The technique uses equation error in the frequency domain with a recursive Fourier transform for real-time data analysis, and simple design methods employing square wave input forms to design the test inputs in flight. Simulation examples are used to demonstrate that the technique produces increasingly accurate model parameter estimates resulting from sequentially designed and implemented flight test maneuvers. The method has reasonable computational requirements, and could be implemented aboard an aircraft in real time.
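
    The recursive Fourier transform that enables the real-time analysis amounts to accumulating one complex term per tracked frequency at each new sample, rather than recomputing a full transform. A sketch, where the frequency list, time step and class interface are illustrative choices:

```python
import numpy as np

class RecursiveFourier:
    """Running Fourier transform for real-time frequency-domain
    analysis: each new sample adds one complex term per tracked
    frequency, so the spectra update in O(#frequencies) per step."""
    def __init__(self, freqs_hz, dt):
        self.w = 2.0 * np.pi * np.asarray(freqs_hz)
        self.dt = dt
        self.n = 0
        self.X = np.zeros(len(freqs_hz), dtype=complex)

    def update(self, sample):
        t = self.n * self.dt
        self.X += sample * np.exp(-1j * self.w * t) * self.dt
        self.n += 1
        return self.X

# Usage: feed measured signals sample-by-sample during the maneuver,
# then form frequency-domain equation-error regressions from the X's.
rf = RecursiveFourier(freqs_hz=[0.1, 0.2, 0.5, 1.0], dt=0.02)
for k in range(500):
    rf.update(np.sin(2 * np.pi * 0.5 * k * 0.02))
```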

  11. Application of hierarchical cascading technique to finite element method simulation in bulk acoustic wave devices

    NASA Astrophysics Data System (ADS)

    Li, Xinyi; Bao, Jingfu; Huang, Yulin; Zhang, Benfeng; Omori, Tatsuya; Hashimoto, Ken-ya

    2018-07-01

    In this paper, we propose the use of the hierarchical cascading technique (HCT) for finite element method (FEM) analysis of bulk acoustic wave (BAW) devices. First, the implementation of this technique is presented for the FEM analysis of BAW devices. It is shown that the traveling-wave excitation sources proposed by the authors are fully compatible with the HCT. Furthermore, an HCT-based absorbing mechanism is also proposed to replace the perfectly matched layer (PML). Finally, it is demonstrated that the technique is much more efficient in terms of memory consumption and execution time than full FEM analysis.

  12. Memory efficient solution of the primitive equations for numerical weather prediction on the CYBER 205

    NASA Technical Reports Server (NTRS)

    Tuccillo, J. J.

    1984-01-01

    Numerical Weather Prediction (NWP), for both operational and research purposes, requires not only fast computational speed but also large memory. A technique for solving the Primitive Equations for atmospheric motion on the CYBER 205, as implemented in the Mesoscale Atmospheric Simulation System, is discussed; it is fully vectorized and requires substantially less memory than other techniques such as the Leapfrog or Adams-Bashforth schemes. The technique presented uses the Euler-backward time marching scheme. Also discussed are several techniques for reducing the computational time of the model by replacing slow intrinsic routines with faster algorithms that use only hardware vector instructions.
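
    The memory advantage comes from the time-marching scheme: the Euler-backward (Matsuno) predictor-corrector needs only the current time level, whereas leapfrog must also keep the u(t - dt) level in storage. A sketch on the standard oscillation-equation test, with illustrative settings:

```python
def matsuno_step(u, f, dt):
    """One Euler-backward (Matsuno) step for du/dt = f(u):
    predict with the current tendency, then correct by re-evaluating
    the tendency at the predicted state. Unlike leapfrog, no u(t - dt)
    level must be kept in memory."""
    u_star = u + dt * f(u)        # forward (predictor) sub-step
    return u + dt * f(u_star)     # backward (corrector) sub-step

# Standard oscillation-equation test: du/dt = i * omega * u.
omega, dt = 1.0, 0.05
f = lambda u: 1j * omega * u
u = 1.0 + 0j
for _ in range(200):
    u = matsuno_step(u, f, dt)
# |u| decays slightly: the scheme damps high frequencies, one reason
# it has been used in NWP.
print(abs(u))
```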

  13. A simple white noise analysis of neuronal light responses.

    PubMed

    Chichilnisky, E J

    2001-05-01

    A white noise technique is presented for estimating the response properties of spiking visual system neurons. The technique is simple, robust, efficient and well suited to simultaneous recordings from multiple neurons. It provides a complete and easily interpretable model of light responses even for neurons that display a common form of response nonlinearity that precludes classical linear systems analysis. A theoretical justification of the technique is presented that relies only on elementary linear algebra and statistics. Implementation is described with examples. The technique and the underlying model of neural responses are validated using recordings from retinal ganglion cells, and in principle are applicable to other neurons. Advantages and disadvantages of the technique relative to classical approaches are discussed.
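
    The heart of such a white noise analysis is the spike-triggered average. A minimal synthetic-data sketch follows; the filter shape, static nonlinearity and firing rates are invented for illustration.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def spike_triggered_average(stimulus, spike_times, depth):
    """Average the `depth` stimulus samples preceding each spike; for a
    linear-nonlinear neuron this recovers the linear filter up to scale."""
    sta = np.zeros(depth)
    for t in spike_times:
        sta += stimulus[t - depth:t]
    return sta / max(len(spike_times), 1)

# Synthetic linear-nonlinear cell with a known 15-sample filter.
rng = np.random.default_rng(0)
depth = 15
stim = rng.standard_normal(200_000)
kern = np.exp(-np.arange(depth)[::-1] / 4.0)       # "true" filter
drive = sliding_window_view(stim, depth) @ kern    # drive[i] uses stim[i:i+depth]
p = 0.1 / (1.0 + np.exp(-(drive - 2.0)))           # static nonlinearity
spikes = np.flatnonzero(rng.random(p.size) < p) + depth
sta = spike_triggered_average(stim, spikes, depth)
# np.corrcoef(sta, kern)[0, 1] should be close to 1.
```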

  14. Design-for-Six-Sigma To Develop a Bioprocess Knowledge Management Framework.

    PubMed

    Junker, Beth; Maheshwari, Gargi; Ranheim, Todd; Altaras, Nedim; Stankevicz, Michael; Harmon, Lori; Rios, Sandra; D'anjou, Marc

    2011-01-01

    Owing to the high costs associated with biopharmaceutical development, considerable pressure has developed for the biopharmaceutical industry to increase productivity by becoming more lean and flexible. The ability to reuse knowledge was identified as one key advantage to streamline productivity, efficiently use resources, and ultimately perform better than the competition. A knowledge management (KM) strategy was assembled for bioprocess-related information using the technique of Design-for-Six-Sigma (DFSS). This strategy supported quality-by-design and process validation efforts for pipeline as well as licensed products. The DFSS technique was selected because it was both streamlined and efficient. These characteristics permitted development of a KM strategy with minimized team leader and team member resources. DFSS also placed a high emphasis on the voice of the customer, information considered crucial to the selection of solutions most appropriate for the current knowledge-based challenges of the organization. The KM strategy developed was comprised of nine workstreams, constructed from related solution buckets which in turn were assembled from the individual solution tasks that were identified. Each workstream's detailed design was evaluated against published and established best practices, as well as the KM strategy project charter and design inputs. Gaps and risks were identified and mitigated as necessary to improve the robustness of the proposed strategy. Aggregated resources (specifically expense/capital funds and staff) and timing were estimated to obtain vital management sponsorship for implementation. Where possible, existing governance and divisional/corporate information technology efforts were leveraged to minimize the additional bioprocess resources required for implementation. Finally, leading and lagging indicator metrics were selected to track the success of pilots and eventual implementation.

  15. Efficient grid-based techniques for density functional theory

    NASA Astrophysics Data System (ADS)

    Rodriguez-Hernandez, Juan Ignacio

    Understanding the chemical and physical properties of molecules and materials at a fundamental level often requires quantum-mechanical models of these substances' electronic structure. Such many-body quantum-mechanical calculations are computationally demanding, hindering their application to substances with more than a few hundred atoms. The overarching goal of much research in quantum chemistry, and the topic of this dissertation, is to develop more efficient computational algorithms for electronic structure calculations. In particular, this dissertation develops two new numerical integration techniques for computing molecular and atomic properties within conventional Kohn-Sham Density Functional Theory (KS-DFT) of molecular electronic structure. The first of these grid-based techniques is based on the transformed sparse grid construction. In this construction, a sparse grid is generated in the unit cube and then mapped to real space according to the pro-molecular density using the conditional distribution transformation. The transformed sparse grid was implemented in the program deMon2k, where it is used as the numerical integrator for the exchange-correlation energy and potential in the KS-DFT procedure. We tested our grid by computing ground-state energies, equilibrium geometries, and atomization energies. The accuracy of these test calculations shows that our grid is more efficient than some previous integration methods: our grids use fewer points to obtain the same accuracy. The transformed sparse grids were also tested for integrating, interpolating, and differentiating in different dimensions (n = 1, 2, 3, 6). The second technique is a grid-based method for computing atomic properties within QTAIM. It was also implemented in deMon2k. The performance of the method was tested by computing QTAIM atomic energies, charges, dipole moments, and quadrupole moments. For medium accuracy, our method is the fastest one we know of.
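
    The conditional-distribution transformation at the heart of the first technique can be illustrated in one dimension: quadrature points placed uniformly in the unit interval are mapped through the inverse cumulative distribution of a model density, concentrating points where the density (and typically the integrand) is large. The sketch below is minimal and hypothetical; the model density and integrand are stand-ins for pro-molecular and exchange-correlation quantities.

```python
import numpy as np

# Hypothetical "pro-molecular" model density: two Gaussians standing in for
# atom-centred densities.
def rho(x):
    return np.exp(-((x + 2.0) ** 2)) + 0.5 * np.exp(-2.0 * (x - 1.5) ** 2)

# Integrand that, like exchange-correlation quantities, peaks where rho does.
def f(x):
    return rho(x) ** (4.0 / 3.0)

# Normalized density and its CDF on a fine reference grid.
xs = np.linspace(-10.0, 10.0, 20001)
dx = xs[1] - xs[0]
p = rho(xs)
p /= p.sum() * dx
cdf = np.cumsum(p) * dx
cdf /= cdf[-1]

# Change of variables u = F(x): du = p(x) dx, hence
#   integral of f(x) dx  =  integral over (0,1) of f(F^-1(u)) / p(F^-1(u)) du,
# so uniform points in u concentrate real-space points where p is large.
n = 64
u = (np.arange(n) + 0.5) / n          # midpoint rule in the unit interval
x_mapped = np.interp(u, cdf, xs)      # inverse CDF by interpolation
estimate = np.mean(f(x_mapped) / np.interp(x_mapped, xs, p))

reference = f(xs).sum() * dx          # brute-force fine-grid reference
print(f"transformed grid ({n} points): {estimate:.6f}  reference: {reference:.6f}")
```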

  16. The Series Connected Buck Boost Regulator Concept for High Efficiency Light Weight DC Voltage Regulation

    NASA Technical Reports Server (NTRS)

    Birchenough, Arthur G.

    2003-01-01

    Improvements in the efficiency and size of DC-DC converters have resulted from advances in components, primarily semiconductors, and improved topologies. One topology, which has shown very high potential in limited applications, is the Series Connected Boost Unit (SCBU), wherein a small DC-DC converter's output is connected in series with the input bus to provide an output voltage equal to or greater than the input voltage. Since the DC-DC converter switches only a fraction of the power throughput, the overall system efficiency is very high. This technique, however, is limited to applications where the output voltage is always greater than the input voltage. The Series Connected Buck Boost Regulator (SCBBR) concept extends the partial power processing technique used in the SCBU to operation where the desired output voltage is higher or lower than the input voltage, and the implementation described can even act as a conventional buck converter at very low output-to-input voltage ratios. This paper describes the operation and performance of an SCBBR configured as a bus voltage regulator providing a 50 percent voltage regulation range, bus switching, and overload limiting, operating above 98 percent efficiency. The technique does not provide input-output isolation.
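
    The efficiency argument can be made concrete with a little arithmetic: because the series converter processes only the boost fraction of the output power, its losses are scaled down by that fraction. A minimal sketch, assuming an idealized lossless direct path:

```python
def scbu_efficiency(eta_converter: float, boost_ratio: float) -> float:
    """Overall efficiency of a series-connected boost unit.

    The load current flows straight from the input bus (assumed lossless
    here), while only the series "boost" fraction of the output voltage is
    processed by the DC-DC converter:
        P_out = (V_in + V_boost) * I
        P_in  = V_in * I + (V_boost * I) / eta_converter
    With r = V_boost / V_in this reduces to (1 + r) / (1 + r / eta).
    """
    r = boost_ratio
    return (1.0 + r) / (1.0 + r / eta_converter)

# Even a mediocre 85%-efficient converter yields ~97% overall efficiency
# when it only has to supply a 20% series boost.
print(f"{scbu_efficiency(0.85, 0.20):.3f}")
```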

  17. Rare-Earth Doped Gallium Nitride (GaN)- An Innovative Path Toward Area-scalable Solid-state High Energy Lasers Without Thermal Distortion

    DTIC Science & Technology

    2009-04-01

    technique and its efficiency, the gain medium itself is the bottleneck for non-distortive heat removal—due to the low thermal conductivity of known gain...photoluminescence (PL), electroluminescence (EL), and/or cathodoluminescence (CL) (2,3). As the RE dopant, Nd is an excellent candidate due to its success...highest level of laser efficiency due to the pump and signal mode confinement within a crystalline-guided structure). The successful implementation of

  18. A robust and scalable neuromorphic communication system by combining synaptic time multiplexing and MIMO-OFDM.

    PubMed

    Srinivasa, Narayan; Zhang, Deying; Grigorian, Beayna

    2014-03-01

    This paper describes a novel architecture for enabling robust and efficient neuromorphic communication. The architecture combines two concepts: 1) synaptic time multiplexing (STM), which trades space for speed of processing to create an intragroup communication approach that is firing-rate independent and offers more flexibility in connectivity than cross-bar architectures, and 2) wired multiple-input multiple-output (MIMO) communication with orthogonal frequency-division multiplexing (OFDM) techniques to enable robust and efficient intergroup communication for neuromorphic systems. The MIMO-OFDM concept for the proposed architecture was analyzed by simulating a large-scale spiking neural network architecture. The analysis shows that the neuromorphic system with MIMO-OFDM exhibits robust and efficient communication while operating in real time with a high bit rate. By combining STM with MIMO-OFDM techniques, the resulting system offers flexible and scalable connectivity as well as a power- and area-efficient solution for the implementation of very large-scale spiking neural architectures in hardware.

  19. Eco Assist Techniques through Real-time Monitoring of BEV Energy Usage Efficiency

    PubMed Central

    Kim, Younsun; Lee, Ingeol; Kang, Sungho

    2015-01-01

    Energy efficiency enhancement has become an increasingly important issue for battery electric vehicles. Although efficiency can be improved in many ways, the driver's driving pattern strongly influences the battery energy consumption of a vehicle. In this paper, eco assist techniques that simply implement an energy-efficient driving assistant system are introduced, including eco guide, eco control, and eco monitoring methods. The eco guide is provided to control the vehicle speed and accelerator pedal stroke, and eco control is suggested to limit the output power of the battery. For eco monitoring, the eco indicator and eco report are suggested to teach eco-friendly driving habits. Vehicle tests were performed over four driving cycles, namely the Federal Test Procedure (FTP-75), the New European Driving Cycle (NEDC), and city and highway cycles, and visual feedback with audible warnings is provided to encourage the driver's voluntary participation. The vehicle test results show that energy usage efficiency can be increased by up to 19.41%. PMID:26121611

  20. Column-coupling strategies for multidimensional electrophoretic separation techniques.

    PubMed

    Kler, Pablo A; Sydes, Daniel; Huhn, Carolin

    2015-01-01

    Multidimensional electrophoretic separations represent one of the most common strategies for dealing with the analysis of complex samples. In recent years we have witnessed the explosive growth of separation techniques for the analysis of complex samples in applications ranging from the life sciences to industry. Electrophoretic separations offer several strategic advantages, such as excellent separation efficiency, a broad range of separation mechanisms across different methods, and low liquid consumption generating fewer waste effluents and lower costs per analysis, among others. Despite their impressive separation efficiency, multidimensional electrophoretic separations present some drawbacks that have delayed their extensive use: the volumes of the columns, and consequently of the injected sample, are significantly smaller compared with other analytical techniques, so the coupling interfaces between two separation components must be very efficient in terms of providing geometrical precision with low dead volume. Likewise, very sensitive detection systems are required. Additionally, in electrophoretic separation techniques, the surface properties of the columns play a fundamental role for electroosmosis as well as for the unwanted adsorption of proteins or other complex biomolecules. In this sense, the requirements for an efficient coupling of electrophoretic separation techniques involve several aspects related to microfluidics and the physicochemical interactions of the electrolyte solutions with the solid capillary walls. It is interesting to see how these multidimensional electrophoretic separation techniques have been used jointly with different detection techniques, for intermediate detection as well as for final identification and quantification, the latter particularly important in the case of mass spectrometry. In this work we present a critical review of the different strategies for coupling two or more electrophoretic separation techniques and the different intermediate and final detection methods implemented for such separations.

  1. Efficient Numerical Methods for Nonlinear-Facilitated Transport and Exchange in a Blood-Tissue Exchange Unit

    PubMed Central

    Poulain, Christophe A.; Finlayson, Bruce A.; Bassingthwaighte, James B.

    2010-01-01

    The analysis of experimental data obtained by the multiple-indicator method requires complex mathematical models for which capillary blood-tissue exchange (BTEX) units are the building blocks. This study presents a new, nonlinear, two-region, axially distributed, single-capillary BTEX model. A facilitated-transporter model is used to describe mass transfer between plasma and intracellular spaces. To provide fast and accurate solutions, numerical techniques suited to nonlinear convection-dominated problems are implemented. These techniques are the random choice method, an explicit Euler-Lagrange scheme, and the MacCormack method with and without flux correction. The accuracy of the numerical techniques is demonstrated, and their efficiencies are compared. The random choice, Euler-Lagrange, and plain MacCormack methods are the best numerical techniques for BTEX modeling. However, the random choice and Euler-Lagrange methods are preferred over the MacCormack method because they allow for the derivation of a heuristic criterion that makes the numerical methods stable without degrading their efficiency. Numerical solutions are also used to illustrate some nonlinear behaviors of the model and to show how the new BTEX model can be used to estimate parameters from experimental data. PMID:9146808
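
    For readers unfamiliar with the MacCormack scheme, the sketch below applies its predictor-corrector steps to plain linear advection, a simplified stand-in for the convection-dominated BTEX transport equations; the grid parameters are illustrative.

```python
import numpy as np

def maccormack_advection(u, c, dx, dt, steps):
    """MacCormack predictor-corrector for u_t + c u_x = 0 (periodic BCs)."""
    nu = c * dt / dx                      # Courant number, must be <= 1
    for _ in range(steps):
        # Predictor: forward difference.
        u_star = u - nu * (np.roll(u, -1) - u)
        # Corrector: backward difference on the predicted field.
        u = 0.5 * (u + u_star - nu * (u_star - np.roll(u_star, 1)))
    return u

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u0 = np.exp(-200.0 * (x - 0.3) ** 2)      # Gaussian concentration pulse
u1 = maccormack_advection(u0, c=1.0, dx=x[1] - x[0], dt=0.004, steps=100)
# After t = 0.4 the pulse should sit near x = 0.7 with little distortion.
print(x[np.argmax(u1)])
```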

  2. Real-time monitoring of CO2 storage sites: Application to Illinois Basin-Decatur Project

    USGS Publications Warehouse

    Picard, G.; Berard, T.; Chabora, E.; Marsteller, S.; Greenberg, S.; Finley, R.J.; Rinck, U.; Greenaway, R.; Champagnon, C.; Davard, J.

    2011-01-01

    Optimization of carbon dioxide (CO2) storage operations for efficiency and safety requires use of monitoring techniques and implementation of control protocols. The monitoring techniques consist of permanent sensors and tools deployed for measurement campaigns. Large amounts of data are thus generated. These data must be managed and integrated for interpretation at different time scales. A fast interpretation loop involves combining continuous measurements from permanent sensors as they are collected to enable a rapid response to detected events; a slower loop requires combining large datasets gathered over longer operational periods from all techniques. The purpose of this paper is twofold. First, it presents an analysis of the monitoring objectives to be performed in the slow and fast interpretation loops. Second, it describes the implementation of the fast interpretation loop with a real-time monitoring system at the Illinois Basin-Decatur Project (IBDP) in Illinois, USA. © 2011 Published by Elsevier Ltd.

  3. Assume-Guarantee Verification of Source Code with Design-Level Assumptions

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Pasareanu, Corina S.; Cobleigh, Jamieson M.

    2004-01-01

    Model checking is an automated technique that can be used to determine whether a system satisfies certain required properties. To address the 'state explosion' problem associated with this technique, we propose to integrate assume-guarantee verification at different phases of system development. During design, developers build abstract behavioral models of the system components and use them to establish key properties of the system. To increase the scalability of model checking at this level, we have developed techniques that automatically decompose the verification task by generating component assumptions for the properties to hold. The design-level artifacts are subsequently used to guide the implementation of the system, but also to enable more efficient reasoning at the source-code level. In particular, we propose to use design-level assumptions to similarly decompose the verification of the actual system implementation. We demonstrate our approach on a significant NASA application, where design-level models were used to identify and correct a safety property violation, and design-level assumptions allowed us to check successfully that the property was preserved by the implementation.

  4. Parallel halftoning technique using dot diffusion optimization

    NASA Astrophysics Data System (ADS)

    Molina-Garcia, Javier; Ponomaryov, Volodymyr I.; Reyes-Reyes, Rogelio; Cruz-Ramos, Clara

    2017-05-01

    In this paper, a novel approach to halftoning is proposed and implemented for images obtained by the dot diffusion (DD) method. The designed technique is based on an optimization of the so-called class matrix used in the DD algorithm: new versions of the class matrix are generated that contain no baron or near-baron positions, in order to minimize inconsistencies during the distribution of the error. Two class matrices with different properties are proposed, each designed for a different application: applications where inverse halftoning is necessary, and applications where it is not required. The proposed method has been implemented on a GPU (NVIDIA GeForce GTX 750 Ti) and on multicore processors (an AMD FX(tm)-6300 six-core processor and an Intel Core i5-4200U), using CUDA and OpenCV on a PC running Linux. Experimental results have shown that the novel framework generates good-quality halftone images and inverse halftone images. The simulation results using parallel architectures have demonstrated the efficiency of the novel technique when implemented in real-time processing.
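
    A minimal sketch of the dot-diffusion idea follows: pixels are binarized in the order given by a class matrix, and each pixel's quantization error is diffused only to neighbours with larger class values, i.e. those not yet processed. The 4x4 class matrix here is an arbitrary permutation chosen for illustration, not the optimized baron-free matrix the paper constructs.

```python
import numpy as np

def dot_diffusion(img, class_matrix):
    """Dot-diffusion halftoning: process pixels in class-matrix order and
    push the quantization error to not-yet-processed neighbours."""
    h, w = img.shape
    out = np.zeros_like(img)
    work = img.astype(float).copy()
    n = class_matrix.shape[0]
    cls = class_matrix[np.arange(h)[:, None] % n, np.arange(w)[None, :] % n]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    for k in range(n * n):                       # class-matrix processing order
        for i, j in zip(*np.nonzero(cls == k)):
            out[i, j] = 255 if work[i, j] >= 128 else 0
            err = work[i, j] - out[i, j]
            # Diffuse only to neighbours processed later (larger class value);
            # orthogonal neighbours get weight 2, diagonal neighbours weight 1.
            nbrs = [(i + di, j + dj, 1 if di and dj else 2)
                    for di, dj in offsets
                    if 0 <= i + di < h and 0 <= j + dj < w
                    and cls[i + di, j + dj] > k]
            total = sum(wt for _, _, wt in nbrs)
            for ni, nj, wt in nbrs:
                work[ni, nj] += err * wt / total
    return out

# Illustrative 4x4 class matrix (a permutation of 0..15); the paper instead
# optimizes an 8x8 matrix to remove baron/near-baron positions.
cm = np.array([[0, 8, 2, 10], [12, 4, 14, 6], [3, 11, 1, 9], [15, 7, 13, 5]])
gray = np.full((64, 64), 128, dtype=np.uint8)
print(dot_diffusion(gray, cm).mean())            # ~128: average tone preserved
```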

  5. Efficient Scalable Median Filtering Using Histogram-Based Operations.

    PubMed

    Green, Oded

    2018-05-01

    Median filtering is a smoothing technique for noise removal in images. While there are various implementations of median filtering for a single-core CPU, there are few implementations for accelerators and multi-core systems. Many parallel implementations of median filtering use a sorting algorithm for rearranging the values within a filtering window and taking the median of the sorted values. While using sorting algorithms allows for simple parallel implementations, the cost of the sorting becomes prohibitive as the filtering windows grow, making such algorithms, sequential and parallel alike, inefficient. In this work, we introduce the first software parallel median filter that is not sorting-based. The new algorithm uses efficient histogram-based operations, which reduce its computational requirements while also accessing the image fewer times. We show an implementation of our algorithm for both the CPU and NVIDIA's CUDA-supported graphics processing unit (GPU). The new algorithm is compared with several other leading CPU and GPU implementations. The CPU implementation shows near-perfect linear scaling, with a speedup on a quad-core system. The GPU implementation is several orders of magnitude faster than the other GPU implementations for mid-size median filters. For small kernels, comparison-based approaches are preferable, as fewer operations are required. Lastly, the new algorithm is open source and can be found in the OpenCV library.
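
    The following is a minimal sketch of the running-histogram idea (in the spirit of Huang's classic algorithm) underlying such non-sorting median filters: sliding the window one pixel updates a 256-bin histogram in O(window height), and the median is read off the cumulative counts. This is a scalar Python illustration, not the paper's parallel algorithm; the scipy comparison is only for verification.

```python
import numpy as np
from scipy.ndimage import median_filter

def median_filter_histogram(img, radius):
    """Sliding-window median for an 8-bit image using a running histogram."""
    h, w = img.shape
    out = np.empty_like(img)
    k = radius
    pad = np.pad(img, k, mode="edge")
    half = (2 * k + 1) ** 2 // 2 + 1          # rank of the median
    for i in range(h):
        hist = np.zeros(256, dtype=np.int32)
        # Initialize the histogram for the first window of this row.
        for r in range(2 * k + 1):
            for c in range(2 * k + 1):
                hist[pad[i + r, c]] += 1
        for j in range(w):
            if j > 0:   # slide right: remove left column, add right column
                for r in range(2 * k + 1):
                    hist[pad[i + r, j - 1]] -= 1
                    hist[pad[i + r, j + 2 * k]] += 1
            # Median = first intensity whose cumulative count reaches `half`.
            count = 0
            for v in range(256):
                count += hist[v]
                if count >= half:
                    out[i, j] = v
                    break
    return out

img = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(np.uint8)
print(np.array_equal(median_filter_histogram(img, 1),
                     median_filter(img, size=3, mode="nearest")))   # True
```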

  6. Solar Panel System for Street Light Using Maximum Power Point Tracking (MPPT) Technique

    NASA Astrophysics Data System (ADS)

    Wiedjaja, A.; Harta, S.; Josses, L.; Winardi; Rinda, H.

    2014-03-01

    Solar energy is one form of renewable energy which is very abundant in regions close to the equator. One application of solar energy is street lighting. This research focuses on using the maximum power point tracking (MPPT) technique, particularly the perturb and observe (P&O) algorithm, to charge the battery of a street-light system. The proposed charger circuit achieves 20.73% higher power efficiency compared with a non-MPPT charger. We also develop the LED driver circuit for the system, which achieves power efficiency of up to 91.9% at a current of 1.06 A. The proposed street lighting system can be implemented at relatively low cost for public areas.
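
    For reference, the perturb and observe logic is only a few lines: perturb the operating voltage, observe whether power increased, and reverse direction when it did not. The PV panel model below is hypothetical, included just to make the sketch runnable.

```python
import math

def perturb_and_observe(measure, v_ref, dv=0.1, steps=200):
    """P&O MPPT: nudge the operating voltage, keep the perturbation
    direction while power increases, reverse it when power drops."""
    p_prev, direction = 0.0, 1.0
    for _ in range(steps):
        v, i = measure(v_ref)
        p = v * i
        if p < p_prev:
            direction = -direction          # overshot the peak: reverse
        v_ref += direction * dv
        p_prev = p
    return v_ref

def panel(v_ref, i_sc=5.0, v_oc=21.0):
    """Hypothetical PV panel I-V curve (single-diode-like saturation)."""
    v = max(0.0, min(v_ref, v_oc))
    i = max(0.0, i_sc * (1.0 - math.exp((v - v_oc) / 2.0)))
    return v, i

print(f"operating point after P&O: {perturb_and_observe(panel, 10.0):.1f} V")
```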

  7. DEVELOPMENTS IN GRworkbench

    NASA Astrophysics Data System (ADS)

    Moylan, Andrew; Scott, Susan M.; Searle, Anthony C.

    2006-02-01

    The software tool GRworkbench is an ongoing project in visual, numerical General Relativity at The Australian National University. Recently, GRworkbench has been significantly extended to facilitate numerical experimentation in analytically-defined space-times. The numerical differential geometric engine has been rewritten using functional programming techniques, enabling objects which are normally defined as functions in the formalism of differential geometry and General Relativity to be directly represented as function variables in the C++ code of GRworkbench. The new functional differential geometric engine allows for more accurate and efficient visualisation of objects in space-times and makes new, efficient computational techniques available. Motivated by the desire to investigate a recent scientific claim using GRworkbench, new tools for numerical experimentation have been implemented, allowing for the simulation of complex physical situations.

  8. Computer enhancement through interpretive techniques

    NASA Technical Reports Server (NTRS)

    Foster, G.; Spaanenburg, H. A. E.; Stumpf, W. E.

    1972-01-01

    The improvement in the usage of the digital computer through the use of the technique of interpretation rather than the compilation of higher ordered languages was investigated by studying the efficiency of coding and execution of programs written in FORTRAN, ALGOL, PL/I and COBOL. FORTRAN was selected as the high level language for examining programs which were compiled, and A Programming Language (APL) was chosen for the interpretive language. It is concluded that APL is competitive, not because it and the algorithms being executed are well written, but rather because the batch processing is less efficient than has been admitted. There is not a broad base of experience founded on trying different implementation strategies which have been targeted at open competition with traditional processing methods.

  9. A Hands-On Approach for Teaching Denial of Service Attacks: A Case Study

    ERIC Educational Resources Information Center

    Trabelsi, Zouheir; Ibrahim, Walid

    2013-01-01

    Nowadays, many academic institutions are including ethical hacking in their information security and Computer Science programs. Information security students need to experiment with common ethical hacking techniques in order to be able to implement the appropriate security solutions. This will allow them to more efficiently protect the confidentiality,…

  10. Efficient parallel implementations of QM/MM-REMD (quantum mechanical/molecular mechanics-replica-exchange MD) and umbrella sampling: isomerization of H2O2 in aqueous solution.

    PubMed

    Fedorov, Dmitri G; Sugita, Yuji; Choi, Cheol Ho

    2013-07-03

    An efficient parallel implementation of QM/MM-based replica-exchange molecular dynamics (REMD) as well as umbrella sampling techniques was proposed by adopting the generalized distributed data interface (GDDI). A parallelization speed-up of 40.5 on 48 cores was achieved, making our QM/MM-MD engine a robust tool for studying complex chemical dynamics in solution. The two techniques were used comparatively to study the torsional isomerization of hydrogen peroxide in aqueous solution. QM/MM-REMD and QM/MM umbrella sampling yielded nearly identical potentials of mean force (PMFs) regardless of the particular QM theory used for the solute, showing that the overall dynamics are mainly determined by solvation. Although an entropic penalty from solvent rearrangement exists for cisoid conformers, it was found that both strong intermolecular hydrogen bonding and dipole-dipole interactions preferentially stabilize them in solution, reducing the torsional free-energy barrier at 0° by about 3 kcal/mol compared with the gas phase.
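
    The replica-exchange step that such an implementation parallelizes is compact enough to sketch: neighbouring replicas at different inverse temperatures attempt to swap configurations under a Metropolis criterion. The ladder and energies below are illustrative numbers, not data from the paper.

```python
import math
import random

def attempt_swap(beta_i, beta_j, e_i, e_j):
    """Metropolis criterion for exchanging configurations between replicas
    at inverse temperatures beta_i, beta_j with potential energies e_i, e_j:
        P(accept) = min(1, exp[(beta_i - beta_j) * (e_i - e_j)])
    """
    delta = (beta_i - beta_j) * (e_i - e_j)
    return delta >= 0.0 or random.random() < math.exp(delta)

# Neighbour exchanges in a 4-replica ladder; swapping the energies stands in
# for swapping the full configurations.
betas = [1.0, 0.8, 0.6, 0.4]
energies = [-105.2, -98.7, -91.3, -84.0]
for k in range(len(betas) - 1):
    if attempt_swap(betas[k], betas[k + 1], energies[k], energies[k + 1]):
        energies[k], energies[k + 1] = energies[k + 1], energies[k]
print(energies)
```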

  11. High-Speed Photonic Reservoir Computing Using a Time-Delay-Based Architecture: Million Words per Second Classification

    NASA Astrophysics Data System (ADS)

    Larger, Laurent; Baylón-Fuentes, Antonio; Martinenghi, Romain; Udaltsov, Vladimir S.; Chembo, Yanne K.; Jacquot, Maxime

    2017-01-01

    Reservoir computing, originally referred to as an echo state network or a liquid state machine, is a brain-inspired paradigm for processing temporal information. It involves learning a "read-out" interpretation for the nonlinear transients developed by a high-dimensional dynamical system when the latter is excited by the information signal to be processed. This computational paradigm is derived from recurrent neural network and machine learning techniques. It has recently been implemented in photonic hardware as a dynamical system, opening the path to ultrafast brain-inspired computing. We report on a novel implementation involving an electro-optic phase-delay dynamics designed with off-the-shelf optoelectronic telecom devices, thus providing the targeted wide bandwidth. Computational efficiency is demonstrated experimentally with speech-recognition tasks. State-of-the-art speed performance reaches one million words per second, with a very low word error rate. Additionally, beyond record-speed processing, our investigations have revealed computing-efficiency improvements through previously unexplored temporal-information-processing techniques, such as simultaneous multisample injection and pitched sampling at the read-out compared with the information "write-in".
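
    Although the paper's reservoir is an optoelectronic delay system, the reservoir-computing training principle is easy to sketch in software: the high-dimensional dynamics are fixed and only a linear read-out is fitted, typically by ridge regression. A minimal echo-state-network sketch with illustrative sizes and task:

```python
import numpy as np

rng = np.random.default_rng(1)

# Random fixed reservoir; only the linear read-out is trained.
n_res, n_in, T = 200, 1, 2000
w_res = rng.standard_normal((n_res, n_res))
w_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(w_res)))   # spectral radius < 1
w_in = rng.standard_normal((n_res, n_in))

u = rng.uniform(-1, 1, (T, n_in))                 # input signal
y_target = np.roll(u[:, 0], 2)                    # toy task: remember u(t-2)

x = np.zeros(n_res)
states = np.empty((T, n_res))
for t in range(T):
    x = np.tanh(w_res @ x + w_in @ u[t])          # nonlinear transient
    states[t] = x

# Ridge-regression read-out on the collected reservoir states.
lam, skip = 1e-6, 50                              # discard warm-up states
X, Y = states[skip:], y_target[skip:]
w_out = np.linalg.solve(X.T @ X + lam * np.eye(n_res), X.T @ Y)

mse = np.mean((X @ w_out - Y) ** 2)
print(f"read-out training MSE: {mse:.4f}")
```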

  12. Development and Evaluation of a Casualty Evacuation Model for a European Conflict.

    DTIC Science & Technology

    1985-12-01

    EVAC, the computer code which implements our technique, has been used to solve a series of test problems in less time and requiring less memory than...the order of 1/K the amount of main memory for a K-commodity problem, so it can solve significantly larger problems than MCNF. ...technique may require only half the memory of the general L.P. package [6]. These advances are due to the efficient data structures which have been

  13. Applications of process improvement techniques to improve workflow in abdominal imaging.

    PubMed

    Tamm, Eric Peter

    2016-03-01

    Major changes in the management and funding of healthcare are underway that will markedly change the way radiology studies will be reimbursed. The result will be the need to deliver radiology services in a highly efficient manner while maintaining quality. The science of process improvement provides a practical approach to improve the processes utilized in radiology. This article will address in a step-by-step manner how to implement process improvement techniques to improve workflow in abdominal imaging.

  14. Reducing and Analyzing the PHAT Survey with the Cloud

    NASA Astrophysics Data System (ADS)

    Williams, Benjamin F.; Olsen, Knut; Khan, Rubab; Pirone, Daniel; Rosema, Keith

    2018-05-01

    We discuss the technical challenges we faced and the techniques we used to overcome them when reducing the Panchromatic Hubble Andromeda Treasury (PHAT) photometric data set on the Amazon Elastic Compute Cloud (EC2). We first describe the architecture of our photometry pipeline, which we found particularly efficient for reducing the data in multiple ways for different purposes. We then describe the features of EC2 that make this architecture both efficient to use and challenging to implement. We describe the techniques we adopted to process our data, and suggest ways these techniques may be improved for those interested in trying such reductions in the future. Finally, we summarize the output photometry data products, which are now hosted publicly in two places and two formats: as simple FITS tables among the high-level science products on MAST, and in a queryable database available through the NOAO Data Lab.

  15. Theoretical and software considerations for nonlinear dynamic analysis

    NASA Technical Reports Server (NTRS)

    Schmidt, R. J.; Dodds, R. H., Jr.

    1983-01-01

    In the finite element method for structural analysis, it is generally necessary to discretize the structural model into a very large number of elements to accurately evaluate displacements, strains, and stresses. As the complexity of the model increases, the number of degrees of freedom can easily exceed the capacity of present-day software systems. Improvements to structural analysis software, including more efficient use of existing hardware and improved structural modeling techniques, are discussed. One modeling technique that has been used successfully in static linear and nonlinear analysis is multilevel substructuring. This research extends the use of multilevel substructure modeling to include dynamic analysis and defines the requirements for a general-purpose software system capable of efficient nonlinear dynamic analysis. The multilevel substructuring technique is presented, the analytical formulations and computational procedures for dynamic analysis and nonlinear mechanics are reviewed, and an approach to the design and implementation of a general-purpose structural software system is presented.
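
    The core reduction step in substructuring, static condensation of a substructure's interior degrees of freedom onto its boundary, is compact enough to sketch; the spring-chain example below is illustrative.

```python
import numpy as np

def condense(K, f, boundary):
    """Static condensation of a substructure: eliminate interior DOFs,
    leaving a reduced stiffness/load pair on the boundary DOFs.

        K_c = K_bb - K_bi K_ii^{-1} K_ib
        f_c = f_b  - K_bi K_ii^{-1} f_i
    """
    n = K.shape[0]
    b = np.asarray(boundary)
    i = np.setdiff1d(np.arange(n), b)
    Kbb, Kbi = K[np.ix_(b, b)], K[np.ix_(b, i)]
    Kib, Kii = K[np.ix_(i, b)], K[np.ix_(i, i)]
    X = np.linalg.solve(Kii, np.column_stack([Kib, f[i]]))
    K_c = Kbb - Kbi @ X[:, :-1]
    f_c = f[b] - Kbi @ X[:, -1]
    return K_c, f_c

# Three-spring chain (4 DOFs); condense out the two interior DOFs.
k = 1000.0
K = k * np.array([[ 1, -1,  0,  0],
                  [-1,  2, -1,  0],
                  [ 0, -1,  2, -1],
                  [ 0,  0, -1,  1]])
f = np.array([0.0, 5.0, 0.0, 0.0])
K_c, f_c = condense(K, f, boundary=[0, 3])
print(K_c)   # exact 2x2 stiffness relating only the end DOFs (k/3 series)
```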

  16. Multiprocessor smalltalk: Implementation, performance, and analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pallas, J.I.

    1990-01-01

    Multiprocessor Smalltalk demonstrates the value of object-oriented programming on a multiprocessor. Its implementation and analysis shed light on three areas: concurrent programming in an object-oriented language without special extensions, implementation techniques for adapting to multiprocessors, and performance factors in the resulting system. Adding parallelism to Smalltalk code is easy, because programs already use control abstractions like iterators. Smalltalk's basic control and concurrency primitives (lambda expressions, processes, and semaphores) can be used to build parallel control abstractions, including parallel iterators, parallel objects, atomic objects, and futures. Language extensions for concurrency are not required. This implementation demonstrates that it is possible to build an efficient parallel object-oriented programming system and illustrates techniques for doing so. Three modification tools (serialization, replication, and reorganization) adapted the Berkeley Smalltalk interpreter to the Firefly multiprocessor. Multiprocessor Smalltalk's performance shows that the combination of multiprocessing and object-oriented programming can be effective: speedups (relative to the original serial version) exceed 2.0 for five processors on all the benchmarks; the median efficiency is 48%. Analysis shows both where performance is lost and how to improve and generalize the experimental results. Changes in the interpreter to support concurrency add at most 12% overhead; better access to per-process variables could eliminate much of that. Changes in the user code to express concurrency add as much as 70% overhead; this overhead could be reduced to 54% if blocks (lambda expressions) were reentrant. Performance is also lost when the program cannot keep all five processors busy.

  17. A cloud-based X73 ubiquitous mobile healthcare system: design and implementation.

    PubMed

    Ji, Zhanlin; Ganchev, Ivan; O'Droma, Máirtín; Zhang, Xin; Zhang, Xueji

    2014-01-01

    Based on the user-centric paradigm for next-generation networks, this paper describes a ubiquitous mobile healthcare (uHealth) system based on the ISO/IEEE 11073 personal health data (PHD) standards (X73) and cloud computing techniques. A number of design issues associated with the system implementation are outlined. The system includes a middleware on the user side, providing a plug-and-play environment for heterogeneous wireless sensors and mobile terminals utilizing different communication protocols, and a distributed "big data" processing subsystem in the cloud. The design and implementation of this system are envisaged as an efficient solution for the next generation of uHealth systems.

  18. An efficient and accurate molecular alignment and docking technique using ab initio quality scoring

    PubMed Central

    Füsti-Molnár, László; Merz, Kenneth M.

    2008-01-01

    An accurate and efficient molecular alignment technique is presented based on first-principles electronic structure calculations. This new scheme maximizes quantum similarity matrices in the relative orientation of the molecules and uses Fourier transform techniques for two purposes. First, building up the numerical representation of true ab initio electronic densities and their Coulomb potentials is accelerated by the previously described Fourier transform Coulomb method. Second, the Fourier convolution technique is applied to accelerate optimization in the translational coordinates. In order to avoid any interpolation error, the necessary analytical formulas are derived for the transformation of the ab initio wavefunctions in the rotational coordinates. The results of our first implementation for a small test set are analyzed in detail and compared with published results from the literature. A new way of refining existing shape-based alignments is also proposed, using Fourier convolutions of ab initio or other approximate electron densities. This new alignment technique is generally applicable to overlap, Coulomb, kinetic energy, and other quantum similarity measures, and can be extended to a genuine docking solution with ab initio scoring. PMID:18624561
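
    The translational part of such a search can be sketched in one dimension: by the correlation theorem, the overlap similarity for every shift is obtained from a single FFT product rather than a separate integral per shift. The densities below are synthetic illustrations, not ab initio quantities.

```python
import numpy as np

# Two 1D "densities" on a periodic grid; find the shift maximizing their
# overlap similarity  S(t) = sum_x rho1(x) * rho2(x - t)  for all t at once.
n = 512
x = np.arange(n)
rho2 = np.exp(-0.01 * (x - 150.0) ** 2) + 0.5 * np.exp(-0.02 * (x - 200.0) ** 2)
true_shift = 87
rho1 = np.roll(rho2, true_shift)                   # shifted copy of rho2

# Correlation theorem: all N overlaps in O(N log N) instead of O(N^2).
overlap = np.fft.ifft(np.fft.fft(rho1) * np.conj(np.fft.fft(rho2))).real
best = int(np.argmax(overlap))
print(best == true_shift)                          # True
```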

  19. Efficient methods and readily customizable libraries for managing complexity of large networks.

    PubMed

    Dogrusoz, Ugur; Karacelik, Alper; Safarli, Ilkin; Balci, Hasan; Dervishi, Leonard; Siper, Metin Can

    2018-01-01

    One common problem in visualizing real-life networks, including biological pathways, is the large size of these networks. Oftentimes, users find themselves facing slow, non-scaling operations due to network size, if not a "hairball" network, hindering effective analysis. One extremely useful method for reducing the complexity of large networks is the use of hierarchical clustering and nesting, applying expand-collapse operations on demand during analysis. Another such method is hiding currently unnecessary details, to be gradually revealed later on demand. Major challenges when applying complexity reduction operations on large networks include efficiency and maintaining the user's mental map of the drawing. We developed specialized incremental layout methods for preserving a user's mental map while managing the complexity of large networks through expand-collapse and hide-show operations. We also developed open-source JavaScript libraries as plug-ins to the web-based graph visualization library Cytoscape.js to implement these methods as complexity management operations. Through efficient specialized algorithms provided by these extensions, one can collapse or hide desired parts of a network, yielding potentially much smaller networks that are more suitable for interactive visual analysis. This work fills an important gap by making efficient implementations of some already known complexity management techniques freely available to tool developers through a couple of open-source, customizable software libraries, and by introducing heuristics that can be applied alongside such complexity management techniques to preserve the user's mental map.
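
    The bookkeeping behind an expand-collapse operation can be sketched compactly: collapsing replaces a cluster with a meta-node wired to the cluster's external neighbours, while remembering enough to expand later. The actual libraries are Cytoscape.js plug-ins in JavaScript; the Python/networkx sketch below only illustrates the operation.

```python
import networkx as nx

def collapse(g: nx.Graph, cluster, meta):
    """Collapse `cluster` nodes into a single `meta` node, keeping one edge
    to every external neighbour; returns what is needed to expand again."""
    saved_nodes = list(cluster)
    saved_edges = list(g.edges(cluster))
    g.add_node(meta)
    for u, v in saved_edges:
        outside = v if u in cluster else u
        if outside not in cluster:          # skip cluster-internal edges
            g.add_edge(meta, outside)
    g.remove_nodes_from(saved_nodes)
    return saved_nodes, saved_edges

g = nx.Graph([(1, 2), (2, 3), (3, 4), (4, 5), (2, 5)])
collapse(g, cluster={1, 2, 3}, meta="c1")
print(list(g.edges()))   # [(4, 5), ('c1', 4), ('c1', 5)] (order may vary)
```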

  20. A novel process for introducing a new intraoperative program: a multidisciplinary paradigm for mitigating hazards and improving patient safety.

    PubMed

    Rodriguez-Paz, Jose M; Mark, Lynette J; Herzer, Kurt R; Michelson, James D; Grogan, Kelly L; Herman, Joseph; Hunt, David; Wardlow, Linda; Armour, Elwood P; Pronovost, Peter J

    2009-01-01

    Since the Institute of Medicine's report, To Err is Human, was published, numerous interventions have been designed and implemented to correct the defects that lead to medical errors and adverse events; however, most efforts were largely reactive. Safety, communication, team performance, and efficiency are areas of care that attract a great deal of attention, especially regarding the introduction of new technologies, techniques, and procedures. We describe a multidisciplinary process that was implemented at our hospital to identify and mitigate hazards before the introduction of a new technique: high-dose-rate intraoperative radiation therapy (HDR-IORT). A multidisciplinary team of surgeons, anesthesiologists, radiation oncologists, physicists, nurses, hospital risk managers, and equipment specialists used a structured process that included in situ clinical simulation to uncover concerns among care providers and to prospectively identify and mitigate defects for patients who would undergo surgery using the HDR-IORT technique. We identified and corrected 20 defects in the simulated patient care process before application to actual patients. Subsequently, eight patients underwent surgery using the HDR-IORT technique with no recurrence of simulation-identified or unanticipated defects. Multiple benefits were derived from the use of this systematic process to introduce the HDR-IORT technique; namely, the safety and efficiency of care for this select patient population was optimized, and harmful or adverse events were mitigated before the inclusion of actual patients. Further work is needed, but the process outlined in this paper can be universally applied to the introduction of any new technologies, treatments, or procedures.

  1. An Efficient VLSI Architecture of the Enhanced Three Step Search Algorithm

    NASA Astrophysics Data System (ADS)

    Biswas, Baishik; Mukherjee, Rohan; Saha, Priyabrata; Chakrabarti, Indrajit

    2016-09-01

    The intense computational complexity of any video codec is largely due to the motion estimation unit. The Enhanced Three Step Search is a popular technique that can be adopted for fast motion estimation. This paper proposes a novel VLSI architecture for the implementation of the Enhanced Three Step Search Technique. A new addressing mechanism has been introduced which enhances the speed of operation and reduces the area requirements. The proposed architecture when implemented in Verilog HDL on Virtex-5 Technology and synthesized using Xilinx ISE Design Suite 14.1 achieves a critical path delay of 4.8 ns while the area comes out to be 2.9K gate equivalent. It can be incorporated in commercial devices like smart-phones, camcorders, video conferencing systems etc.
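
    The search strategy itself is simple to sketch in software: probe nine candidate positions, recentre on the best, and halve the step size (4, 2, 1). The sketch below implements the classic three step search; the enhanced variant's extra centre-biased checks are omitted, and the smoothed test frames are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sad(cur, ref, ci, cj, ri, rj, n=8):
    """Sum of absolute differences between an n x n block of the current
    frame at (ci, cj) and a candidate block of the reference at (ri, rj)."""
    h, w = ref.shape
    if ri < 0 or rj < 0 or ri + n > h or rj + n > w:
        return np.inf
    return np.abs(cur[ci:ci + n, cj:cj + n] - ref[ri:ri + n, rj:rj + n]).sum()

def three_step_search(cur, ref, ci, cj, n=8):
    """Classic TSS: probe 9 positions, recentre, halve the step (4 -> 2 -> 1)."""
    mi, mj = ci, cj
    step = 4
    while step >= 1:
        best, bi, bj = sad(cur, ref, ci, cj, mi, mj, n), mi, mj
        for di in (-step, 0, step):
            for dj in (-step, 0, step):
                cost = sad(cur, ref, ci, cj, mi + di, mj + dj, n)
                if cost < best:
                    best, bi, bj = cost, mi + di, mj + dj
        mi, mj = bi, bj
        step //= 2
    return mi - ci, mj - cj       # motion vector of the best match

rng = np.random.default_rng(0)
ref = gaussian_filter(rng.standard_normal((64, 64)), sigma=3.0)  # smooth frame
cur = np.roll(ref, (3, -5), axis=(0, 1))       # simulated global motion
print(three_step_search(cur, ref, 24, 24))     # (-3, 5): match sits at (21, 29)
```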

  2. Efficient implementation of a real-time estimation system for thalamocortical hidden Parkinsonian properties

    NASA Astrophysics Data System (ADS)

    Yang, Shuangming; Deng, Bin; Wang, Jiang; Li, Huiyan; Liu, Chen; Fietkiewicz, Chris; Loparo, Kenneth A.

    2017-01-01

    Real-time estimation of the dynamical characteristics of thalamocortical (TC) cells, such as the dynamics of ion channels and membrane potentials, is useful and essential in the study of the thalamus in the Parkinsonian state. However, measuring the dynamical properties of ion channels is extremely challenging experimentally and even impossible in clinical applications. This paper presents and evaluates a real-time estimation system for thalamocortical hidden properties. For the sake of efficiency, we use a field-programmable gate array (FPGA) for strictly hardware-based computation and algorithm optimization. In the proposed system, an FPGA-based unscented Kalman filter is applied to a conductance-based TC neuron model. Since the complexity of the TC neuron model constrains a fully parallel hardware implementation, a cost-efficient model is proposed to reduce the resource cost while retaining the relevant ionic dynamics. Experimental results demonstrate the real-time capability of estimating thalamocortical hidden properties with high precision under both normal and Parkinsonian states. While it is applied here to estimate the hidden properties of the thalamus and explore the mechanism of the Parkinsonian state, the proposed method can also be useful in the dynamic clamp technique of electrophysiological experiments, neural control engineering, and brain-machine interface studies.
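
    The heart of such an estimator is the unscented transform: a small, deterministic set of sigma points is propagated through the nonlinear model and re-summarized as a mean and covariance. A minimal sketch with illustrative scaling parameters and nonlinearity (the paper's FPGA pipeline and neuron model are far more elaborate):

```python
import numpy as np

def unscented_transform(f, mean, cov, alpha=0.5, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a nonlinearity f using
    sigma points: the core step of the unscented Kalman filter."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)
    sigma = np.vstack([mean, mean + S.T, mean - S.T])      # 2n+1 points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1 - alpha**2 + beta)
    y = np.array([f(p) for p in sigma])
    y_mean = wm @ y
    d = y - y_mean
    y_cov = (wc[:, None] * d).T @ d
    return y_mean, y_cov

# Example: a saturating, channel-like nonlinearity applied to a 2D state.
f = lambda x: np.tanh(x)
m, P = np.array([0.2, -0.1]), np.diag([0.05, 0.02])
print(unscented_transform(f, m, P))
```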

  3. A Novel Implementation of Massively Parallel Three Dimensional Monte Carlo Radiation Transport

    NASA Astrophysics Data System (ADS)

    Robinson, P. B.; Peterson, J. D. L.

    2005-12-01

    The goal of our summer project was to implement the difference formulation for radiation transport into Cosmos++, a multidimensional, massively parallel, magnetohydrodynamics code for astrophysical applications (Peter Anninos - AX). The difference formulation is a new method for Symbolic Implicit Monte Carlo thermal transport (Brooks and Szöke - PAT). Formerly, simultaneous implementation of fully implicit Monte Carlo radiation transport in multiple dimensions on multiple processors had not been convincingly demonstrated. We found that a combination of the difference formulation and the inherent structure of Cosmos++ makes such an implementation both accurate and straightforward. We developed a "nearly nearest neighbor physics" technique to allow each processor to work independently, even with a fully implicit code. This technique, coupled with the increased accuracy of an implicit Monte Carlo solution and the efficiency of parallel computing systems, allows us to demonstrate the possibility of massively parallel thermal transport. This work was performed under the auspices of the U.S. Department of Energy by University of California Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48

  4. Automatic finite element generators

    NASA Technical Reports Server (NTRS)

    Wang, P. S.

    1984-01-01

    The design and implementation of a software system for generating finite elements and related computations are described. Exact symbolic computational techniques are employed to derive strain-displacement matrices and element stiffness matrices. Methods for dealing with the excessive growth of symbolic expressions are discussed. Automatic FORTRAN code generation is described with emphasis on improving the efficiency of the resultant code.
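
    A modern sketch of this exact-symbolic workflow, using SymPy rather than the system described (which predates it): derive a two-node bar element's stiffness matrix in closed form, then emit code for its entries. The element choice is illustrative.

```python
import sympy as sp

# Two-node linear bar element: derive K = integral of B^T E A B dx over the
# element symbolically, then generate code for the resulting entries.
xi, L, E, A = sp.symbols("xi L E A", positive=True)

N = sp.Matrix([[(1 - xi) / 2, (1 + xi) / 2]])   # shape functions on [-1, 1]
B = N.diff(xi) * (2 / L)                        # strain-displacement row
integrand = E * A * (B.T * B) * (L / 2)
K = integrand.applyfunc(lambda e: sp.integrate(e, (xi, -1, 1)))

print(sp.simplify(K))       # Matrix([[A*E/L, -A*E/L], [-A*E/L, A*E/L]])
print(sp.ccode(K[0, 0]))    # C code for one entry (sympy.fcode gives FORTRAN)
```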

  5. Efficient state initialization by a quantum spectral filtering algorithm

    NASA Astrophysics Data System (ADS)

    Fillion-Gourdeau, François; MacLean, Steve; Laflamme, Raymond

    2017-04-01

    An algorithm that initializes a quantum register to a state with a specified energy range is given, corresponding to a quantum implementation of the celebrated Feit-Fleck method. This is performed by introducing a nondeterministic quantum implementation of a standard spectral filtering procedure combined with an apodization technique, allowing for accurate state initialization. It is shown that the implementation requires only two ancilla qubits. A lower bound for the total probability of success of this algorithm is derived, showing that this scheme can be realized using a finite, relatively low number of trials. Assuming the time evolution can be performed efficiently and using a trial state polynomially close to the desired states, it is demonstrated that the number of operations required scales polynomially with the number of qubits. Tradeoffs between accuracy and performance are demonstrated in a simple example: the harmonic oscillator. This algorithm would be useful for the initialization phase of the simulation of quantum systems on digital quantum computers.
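
    The classical computation that the quantum algorithm implements can be sketched directly: propagate a state, accumulate windowed, phase-shifted snapshots, and the sum converges to a state filtered around the target energy, with the apodization window suppressing spectral leakage. The matrix sizes and window below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small Hermitian "Hamiltonian"; exact propagation via its eigenbasis stands
# in for the efficient time evolution assumed by the quantum algorithm.
n = 64
H = rng.standard_normal((n, n))
H = (H + H.T) / 2
evals, evecs = np.linalg.eigh(H)

def propagate(psi, t):
    return evecs @ (np.exp(-1j * evals * t) * (evecs.T.conj() @ psi))

def energy_stats(psi):
    w = np.abs(evecs.T.conj() @ psi) ** 2
    m = w @ evals
    return m, np.sqrt(w @ (evals - m) ** 2)

# Feit-Fleck-style spectral filter: accumulate  sum_k w_k e^{iE t_k} psi(t_k),
# with a Hann window w as the apodization that sharpens the energy filter.
E_target, steps, dt = 0.0, 400, 0.15
psi0 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
psi0 /= np.linalg.norm(psi0)

w = np.hanning(steps)
filtered = sum(w[k] * np.exp(1j * E_target * k * dt) * propagate(psi0, k * dt)
               for k in range(steps))
filtered /= np.linalg.norm(filtered)

# The filtered state concentrates on eigenstates with energy near E_target.
print("before:", energy_stats(psi0))
print("after :", energy_stats(filtered))
```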

  6. Parallel Computing Strategies for Irregular Algorithms

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Oliker, Leonid; Shan, Hongzhang; Biegel, Bryan (Technical Monitor)

    2002-01-01

    Parallel computing promises several orders of magnitude increase in our ability to solve realistic computationally-intensive problems, but relies on their efficient mapping and execution on large-scale multiprocessor architectures. Unfortunately, many important applications are irregular and dynamic in nature, making their effective parallel implementation a daunting task. Moreover, with the proliferation of parallel architectures and programming paradigms, the typical scientist is faced with a plethora of questions that must be answered in order to obtain an acceptable parallel implementation of the solution algorithm. In this paper, we consider three representative irregular applications: unstructured remeshing, sparse matrix computations, and N-body problems, and parallelize them using various popular programming paradigms on a wide spectrum of computer platforms ranging from state-of-the-art supercomputers to PC clusters. We present the underlying problems, the solution algorithms, and the parallel implementation strategies. Smart load-balancing, partitioning, and ordering techniques are used to enhance parallel performance. Overall results demonstrate the complexity of efficiently parallelizing irregular algorithms.

  7. Automated Software Acceleration in Programmable Logic for an Efficient NFFT Algorithm Implementation: A Case Study.

    PubMed

    Rodríguez, Manuel; Magdaleno, Eduardo; Pérez, Fernando; García, Cristhian

    2017-03-28

    Non-equispaced Fast Fourier Transform (NFFT) is a very important algorithm in several technological and scientific areas such as synthetic aperture radar, computational photography, medical imaging, telecommunications, seismic analysis and so on. However, its computational complexity is high. In this paper, we describe an efficient NFFT implementation with a hardware coprocessor using an All-Programmable System-on-Chip (APSoC). This is a hybrid device that employs an Advanced RISC Machine (ARM) as the Processing System, with Programmable Logic for high-performance digital signal processing through parallelism and pipeline techniques. The algorithm has been coded in C with pragma directives to optimize the architecture of the system. We have used the novel Software Defined System-on-Chip (SDSoC) development tool, which simplifies the interface and partitioning between hardware and software. This provides shorter development cycles and iterative improvements by exploring several architectures of the global system. The computational results show that hardware acceleration significantly outperforms the software-based implementation.

  8. Automated Software Acceleration in Programmable Logic for an Efficient NFFT Algorithm Implementation: A Case Study

    PubMed Central

    Rodríguez, Manuel; Magdaleno, Eduardo; Pérez, Fernando; García, Cristhian

    2017-01-01

    Non-equispaced Fast Fourier Transform (NFFT) is a very important algorithm in several technological and scientific areas such as synthetic aperture radar, computational photography, medical imaging, telecommunications, seismic analysis and so on. However, its computational complexity is high. In this paper, we describe an efficient NFFT implementation with a hardware coprocessor using an All-Programmable System-on-Chip (APSoC). This is a hybrid device that employs an Advanced RISC Machine (ARM) as the Processing System, with Programmable Logic for high-performance digital signal processing through parallelism and pipeline techniques. The algorithm has been coded in C with pragma directives to optimize the architecture of the system. We have used the novel Software Defined System-on-Chip (SDSoC) development tool, which simplifies the interface and partitioning between hardware and software. This provides shorter development cycles and iterative improvements by exploring several architectures of the global system. The computational results show that hardware acceleration significantly outperforms the software-based implementation. PMID:28350358
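
    For orientation, the transform in question is the sum below; the naive evaluation costs O(N·M), which the NFFT reduces to roughly O(M log M + N) by spreading samples onto an oversampled grid with a window function, applying an ordinary FFT, and deconvolving the window. The sketch shows only the reference definition, with illustrative sizes.

```python
import numpy as np

def ndft(x, c, n_modes):
    """Direct non-equispaced DFT:  f_hat[k] = sum_j c_j exp(-2*pi*i*k*x_j).
    This O(N*M) reference sum is what the NFFT approximates quickly."""
    ks = np.arange(-(n_modes // 2), n_modes - n_modes // 2)
    return np.exp(-2j * np.pi * np.outer(ks, x)) @ c

# Nonuniform sample locations in [0, 1) with random coefficients.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 1000))
c = rng.standard_normal(1000)
print(ndft(x, c, n_modes=16).shape)    # (16,) Fourier coefficients
```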

  9. A fast new algorithm for a robot neurocontroller using inverse QR decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, A.S.; Khemaissia, S.

    2000-01-01

    A new adaptive neural network controller for robots is presented. The controller is based on direct adaptive techniques. Unlike many neural network controllers in the literature, inverse dynamical model evaluation is not required. A numerically robust, computationally efficient processing scheme for neural network weight estimation is described, namely, the inverse QR decomposition (INVQR). The INVQR and a weighted recursive least-squares (WRLS) method for neural network weight estimation are derived using Cholesky factorization of the data matrix. The algorithm that performs the efficient INVQR of the underlying space-time data matrix may be implemented in parallel on a triangular array. Furthermore, its systolic architecture is well suited for VLSI implementation. Another important benefit of the INVQR decomposition is that it solves directly for the time-recursive least-squares filter vector, while avoiding the sequential back-substitution step required by QR decomposition approaches.

  10. Lean management in academic surgery.

    PubMed

    Collar, Ryan M; Shuman, Andrew G; Feiner, Sandra; McGonegal, Amy K; Heidel, Natalie; Duck, Mary; McLean, Scott A; Billi, John E; Healy, David W; Bradford, Carol R

    2012-06-01

    Lean is a management system designed to enhance productivity by eliminating waste. Surgical practice offers many opportunities for improving efficiency. Our objective was to determine whether systematic implementation of lean thinking in an academic otolaryngology operating room improves efficiency and profitability and preserves team morale and educational opportunities. In an 18-month prospective quasi-experimental study, a multidisciplinary task force systematically implemented lean thinking within an otolaryngology operating room of an academic health system. Operating room turnover time and turnaround time were measured during a baseline period; an observer-effect period in which workers were made aware that their efficiency was being measured but before implementing lean changes; and an intervention period after redesign principles had been used. The impact on teamwork, morale, and surgical resident education were measured during the baseline and intervention periods through validated surveys. A profit model was applied to estimate the financial implications of the study. There was no difference between the baseline and observer-effect periods of the study for turnover time (p = 0.98) or turnaround time (p = 0.20). During the intervention period, the mean turnover time and turnaround time were significantly shorter than during the baseline period (29 vs 38 minutes; p < 0.001 and 69 vs 89 minutes; p < 0.001, respectively). The composite morale score suggested improved morale after implementation (p = 0.011). Educational metrics were unchanged before and after implementation. The annual opportunity revenue for the involved operating room is $330,000; when extrapolated throughout the operating rooms, lean thinking could create 6,500 hours of capacity annually. Application of lean management techniques to a single operating room and surgical service improved operating room efficiency and morale, sustained resident education, and can provide considerable financial gains when scaled to an entire academic surgical suite. Copyright © 2012. Published by Elsevier Inc.

  11. Efficient Project Delivery Using Lean Principles - An Indian Case Study

    NASA Astrophysics Data System (ADS)

    Kovvuri, P. Ramachandra Reddy; Sawhney, Anil; Ahuja, Ritu; Sreekumar, Aiswarya

    2016-03-01

    The construction industry in India is growing at a rapid pace. Along with this growth, the industry faces numerous challenges that make project delivery inefficient. Experts believe that capacity constraints in the industry need to be addressed immediately. The government has recommended the `introduction of efficient technologies and modern management techniques' to increase the productivity of the industry. In this context, lean principles can act as a lever to make project delivery more efficient and provide the much-needed impetus to the Indian construction sector. Around the globe, lean principles are showing positive results on projects. Project teams report improvements in construction time, cost, and quality, along with the softer benefits of enhanced collaboration, coordination, and trust. Can adoption of lean principles provide similar benefits in the Indian construction sector? This research was conducted to answer this question. Using an action research approach, a key lean construction tool called the Last Planner System (LPS) was tested on a large Indian construction project. This work investigates the improvements in project delivery achieved by adopting LPS in the Indian construction sector. Comparison of pre- and post-implementation data demonstrates increased workflow certainty and improved schedule compliance, measured through a simple LPS metric called percent plan complete. Explicit improvements in schedule performance were seen during the 8-week LPS implementation, along with implicit improvements in coordination, collaboration, and trust in the project team. This work reports the findings of the LPS implementation on the case study project, outlining the barriers and drivers to adoption, the strategies needed to ensure successful implementation, and a roadmap for implementation. Based on the findings, the authors envision that lean construction can make project delivery more efficient in India.

  12. Fast global image smoothing based on weighted least squares.

    PubMed

    Min, Dongbo; Choi, Sunghwan; Lu, Jiangbo; Ham, Bumsub; Sohn, Kwanghoon; Do, Minh N

    2014-12-01

    This paper presents an efficient technique for performing spatially inhomogeneous edge-preserving image smoothing, called the fast global smoother. Focusing on sparse Laplacian matrices consisting of a data term and a prior term (typically defined using four or eight neighbors for a 2D image), our approach efficiently solves such global objective functions. In particular, we approximate the solution of the memory- and computation-intensive large linear system, defined over a d-dimensional spatial domain, by solving a sequence of 1D subsystems. Our separable implementation enables applying a linear-time tridiagonal matrix algorithm to solve d three-point Laplacian matrices iteratively. Our approach combines the best of two paradigms, i.e., efficient edge-preserving filters and optimization-based smoothing. Our method has a runtime comparable to that of the fast edge-preserving filters, but its global optimization formulation overcomes many limitations of local filtering approaches. Our method also achieves results of a quality as high as that of the state-of-the-art optimization-based techniques, but runs ∼10-30 times faster. Furthermore, considering the flexibility in defining an objective function, we propose generalized fast algorithms that perform Lγ-norm smoothing (0 < γ < 2) and support an aggregated (robust) data term for handling imprecise data constraints. We demonstrate the effectiveness and efficiency of our techniques in a range of image processing and computer graphics applications.
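
    A minimal sketch of one such 1D subsystem: solving (I + λL) u = f with a three-point Laplacian via the linear-time Thomas algorithm. This homogeneous version omits the spatially varying, edge-aware weights of the actual smoother.

```python
import numpy as np

def wls_smooth_1d(f, lam):
    """Solve (I + lam*L) u = f for one row/column, where L is the three-point
    second-difference Laplacian; a linear-time Thomas (tridiagonal) solve.
    A 2D smoother applies such 1D passes alternately along each axis."""
    n = f.size
    # Tridiagonal bands of I + lam*L (Neumann-like boundaries).
    lower = np.full(n, -lam); lower[0] = 0.0
    upper = np.full(n, -lam); upper[-1] = 0.0
    diag = np.full(n, 1.0 + 2.0 * lam)
    diag[0] = diag[-1] = 1.0 + lam

    # Forward elimination, then back substitution.
    c, d = upper.copy(), f.astype(float).copy()
    c[0] /= diag[0]; d[0] /= diag[0]
    for i in range(1, n):
        m = diag[i] - lower[i] * c[i - 1]
        c[i] = upper[i] / m
        d[i] = (d[i] - lower[i] * d[i - 1]) / m
    for i in range(n - 2, -1, -1):
        d[i] -= c[i] * d[i + 1]
    return d

rng = np.random.default_rng(0)
noisy = np.sin(np.linspace(0, 3, 200)) + 0.2 * rng.standard_normal(200)
print(np.round(wls_smooth_1d(noisy, lam=5.0)[:5], 3))
```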

  13. egs_brachy: a versatile and fast Monte Carlo code for brachytherapy

    NASA Astrophysics Data System (ADS)

    Chamberland, Marc J. P.; Taylor, Randle E. P.; Rogers, D. W. O.; Thomson, Rowan M.

    2016-12-01

    egs_brachy is a versatile and fast Monte Carlo (MC) code for brachytherapy applications. It is based on the EGSnrc code system, enabling simulation of photons and electrons. Complex geometries are modelled using the EGSnrc C++ class library and egs_brachy includes a library of geometry models for many brachytherapy sources, in addition to eye plaques and applicators. Several simulation efficiency enhancing features are implemented in the code. egs_brachy is benchmarked by comparing TG-43 source parameters of three source models to previously published values. 3D dose distributions calculated with egs_brachy are also compared to ones obtained with the BrachyDose code. Well-defined simulations are used to characterize the effectiveness of many efficiency improving techniques, both as an indication of the usefulness of each technique and to find optimal strategies. Efficiencies and calculation times are characterized through single source simulations and simulations of idealized and typical treatments using various efficiency improving techniques. In general, egs_brachy shows agreement within uncertainties with previously published TG-43 source parameter values. 3D dose distributions from egs_brachy and BrachyDose agree at the sub-percent level. Efficiencies vary with radionuclide and source type, number of sources, phantom media, and voxel size. The combined effects of efficiency-improving techniques in egs_brachy lead to short calculation times: simulations approximating prostate and breast permanent implant (both with (2 mm)³ voxels) and eye plaque (with (1 mm)³ voxels) treatments take between 13 and 39 s, on a single 2.5 GHz Intel Xeon E5-2680 v3 processor core, to achieve 2% average statistical uncertainty on doses within the PTV. egs_brachy will be released as free and open source software to the research community.

  14. egs_brachy: a versatile and fast Monte Carlo code for brachytherapy.

    PubMed

    Chamberland, Marc J P; Taylor, Randle E P; Rogers, D W O; Thomson, Rowan M

    2016-12-07

    egs_brachy is a versatile and fast Monte Carlo (MC) code for brachytherapy applications. It is based on the EGSnrc code system, enabling simulation of photons and electrons. Complex geometries are modelled using the EGSnrc C++ class library and egs_brachy includes a library of geometry models for many brachytherapy sources, in addition to eye plaques and applicators. Several simulation efficiency enhancing features are implemented in the code. egs_brachy is benchmarked by comparing TG-43 source parameters of three source models to previously published values. 3D dose distributions calculated with egs_brachy are also compared to ones obtained with the BrachyDose code. Well-defined simulations are used to characterize the effectiveness of many efficiency improving techniques, both as an indication of the usefulness of each technique and to find optimal strategies. Efficiencies and calculation times are characterized through single source simulations and simulations of idealized and typical treatments using various efficiency improving techniques. In general, egs_brachy shows agreement within uncertainties with previously published TG-43 source parameter values. 3D dose distributions from egs_brachy and BrachyDose agree at the sub-percent level. Efficiencies vary with radionuclide and source type, number of sources, phantom media, and voxel size. The combined effects of efficiency-improving techniques in egs_brachy lead to short calculation times: simulations approximating prostate and breast permanent implant (both with (2 mm)³ voxels) and eye plaque (with (1 mm)³ voxels) treatments take between 13 and 39 s, on a single 2.5 GHz Intel Xeon E5-2680 v3 processor core, to achieve 2% average statistical uncertainty on doses within the PTV. egs_brachy will be released as free and open source software to the research community.

  15. Parameterization of spectra

    NASA Technical Reports Server (NTRS)

    Cornish, C. R.

    1983-01-01

    Following reception and analog-to-digital (A/D) conversion, atmospheric radar backscatter echoes need to be processed to obtain the desired information about atmospheric processes and to eliminate or minimize contaminating contributions from other sources. Various signal processing techniques have been implemented at mesosphere-stratosphere-troposphere (MST) radar facilities to estimate parameters of interest from the received spectra. Such estimation techniques need to be both accurate and sufficiently efficient to be within the capabilities of the particular data-processing system. The various techniques used to parameterize the spectra of received signals are reviewed herein. Noise estimation, electromagnetic interference, data smoothing, correlation, and the Doppler effect are among the specific points addressed.
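
    A common parameterization computes low-order moments of the Doppler spectrum after subtracting an estimated noise floor; signal power, mean radial velocity, and spectrum width follow directly. The sketch below uses a crude percentile-based noise estimate (objective methods such as Hildebrand-Sekhon are used in practice) and a synthetic spectrum.

```python
import numpy as np

def spectral_moments(power, velocities, noise_fraction=25):
    """Estimate noise floor, signal power, mean Doppler velocity, and
    spectrum width from a single Doppler power spectrum."""
    # Crude noise-floor estimate from the quietest bins.
    n_quiet = max(1, power.size * noise_fraction // 100)
    noise = np.mean(np.sort(power)[:n_quiet])
    s = np.clip(power - noise, 0.0, None)       # noise-subtracted spectrum
    p0 = s.sum()                                # zeroth moment: signal power
    v_mean = (s * velocities).sum() / p0        # first moment: mean velocity
    width = np.sqrt((s * (velocities - v_mean) ** 2).sum() / p0)  # second
    return noise, p0, v_mean, width

# Synthetic spectrum: Gaussian echo at +3 m/s over a flat noise floor.
v = np.linspace(-15, 15, 128)
spec = 1.0 + 8.0 * np.exp(-0.5 * ((v - 3.0) / 1.5) ** 2)
print(spectral_moments(spec, v))   # v_mean near 3 m/s, width near 1.5 m/s
```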

  16. Design and implementation of digital controllers for smart structures using field-programmable gate arrays

    NASA Astrophysics Data System (ADS)

    Kelly, Jamie S.; Bowman, Hiroshi C.; Rao, Vittal S.; Pottinger, Hardy J.

    1997-06-01

    Implementation issues represent an unfamiliar challenge to most control engineers, and many techniques for controller design ignore these issues outright. Consequently, the design of controllers for smart structural systems usually proceeds without regard for their eventual implementation, thus resulting either in serious performance degradation or in hardware requirements that squander power, complicate integration, and drive up cost. The level of integration assumed by the Smart Patch further exacerbates these difficulties, and any design inefficiency may render the realization of a single-package sensor-controller-actuator system infeasible. The goal of this research is to automate the controller implementation process and to relieve the design engineer of implementation concerns like quantization, computational efficiency, and device selection. We specifically target Field Programmable Gate Arrays (FPGAs) as our hardware platform because these devices are highly flexible, power efficient, and reprogrammable. The current study develops an automated implementation sequence that minimizes hardware requirements while maintaining controller performance. Beginning with a state space representation of the controller, the sequence automatically generates a configuration bitstream for a suitable FPGA implementation. MATLAB functions optimize and simulate the control algorithm before translating it into the VHSIC hardware description language. These functions improve power efficiency and simplify integration in the final implementation by performing a linear transformation that renders the controller computationally friendly. The transformation favors sparse matrices in order to reduce multiply operations and the hardware necessary to support them; simultaneously, the remaining matrix elements take on values that minimize limit cycles and parameter sensitivity. The proposed controller design methodology is implemented on a simple cantilever beam test structure using FPGA hardware. The experimental closed loop response is compared with that of an automated FPGA controller implementation. Finally, we explore the integration of FPGA based controllers into a multi-chip module, which we believe represents the next step towards the realization of the Smart Patch.

  17. Rapid execution of fan beam image reconstruction algorithms using efficient computational techniques and special-purpose processors

    NASA Astrophysics Data System (ADS)

    Gilbert, B. K.; Robb, R. A.; Chu, A.; Kenue, S. K.; Lent, A. H.; Swartzlander, E. E., Jr.

    1981-02-01

    Rapid advances during the past ten years of several forms of computer-assisted tomography (CT) have resulted in the development of numerous algorithms to convert raw projection data into cross-sectional images. These reconstruction algorithms are either 'iterative,' in which a large matrix algebraic equation is solved by successive approximation techniques; or 'closed form'. Continuing evolution of the closed form algorithms has allowed the newest versions to produce excellent reconstructed images in most applications. This paper will review several computer software and special-purpose digital hardware implementations of closed form algorithms, either proposed during the past several years by a number of workers or actually implemented in commercial or research CT scanners. The discussion will also cover a number of recently investigated algorithmic modifications which reduce the amount of computation required to execute the reconstruction process, as well as several new special-purpose digital hardware implementations under development in laboratories at the Mayo Clinic.

  18. Tri-state delta modulation system for Space Shuttle digital TV downlink

    NASA Technical Reports Server (NTRS)

    Udalov, S.; Huth, G. K.; Roberts, D.; Batson, B. H.

    1981-01-01

    Future requirements for Shuttle Orbiter downlink communication may include transmission of digital video which, in addition to black and white, may also be either field-sequential or NTSC color format. The use of digitized video could provide for picture privacy at the expense of additional onboard hardware, together with an increased bandwidth due to the digitization process. A general objective for the Space Shuttle application is to develop a digitization technique that is compatible with data rates in the 20-30 Mbps range but still provides good quality pictures. This paper describes a tri-state delta modulation/demodulation (TSDM) technique which is a good compromise between implementation complexity and performance. The unique feature of TSDM is that it provides for efficient run-length encoding of constant-intensity segments of a TV picture. Axiomatix has developed a hardware implementation of a high-speed TSDM transmitter and receiver for black-and-white TV and field-sequential color. The hardware complexity of this TSDM implementation is summarized in the paper.
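
    The essence of tri-state delta modulation is that each sample is coded as one of three symbols: step up, step down, or no change, and the long "no change" runs produced by constant-intensity picture segments compress well under run-length encoding. The Python sketch below illustrates only that principle; the step size, threshold, and run-length format are illustrative assumptions, not the Axiomatix hardware design.

        import numpy as np

        def tsdm_encode(samples, step=4, threshold=2):
            """Tri-state delta modulation sketch: emit +1/-1 steps or 0 for
            'no change'; runs of 0 are cheap to run-length encode."""
            symbols, estimate = [], 0
            for x in samples:
                err = x - estimate
                if err > threshold:
                    symbols.append(+1); estimate += step
                elif err < -threshold:
                    symbols.append(-1); estimate -= step
                else:
                    symbols.append(0)          # constant-intensity segment
            return symbols

        def run_length(symbols):
            """Run-length encode the symbol stream as [symbol, count] pairs."""
            out = []
            for s in symbols:
                if out and out[-1][0] == s:
                    out[-1][1] += 1
                else:
                    out.append([s, 1])
            return out

        line = np.r_[np.full(20, 50), np.full(40, 120)]   # a flat-stepped scan line
        print(run_length(tsdm_encode(line)))              # long [0, n] runs dominate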

  19. SU-F-T-47: MRI T2 Exclusive Based Planning Using the Endocavitary/interstitial Gynecological Benidorm Applicator: A Proposed TPS Library and Preplan Efficient Methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richart, J; Otal, A; Rodriguez, S

    Purpose: The ABS and GEC-ESTRO have recommended T2-weighted MRI for image-guided brachytherapy. Recently, a new applicator (the Benidorm Template, TB) has been developed in our Department (Rodriguez et al 2015). The TB is fully MRI compatible because of its titanium needles, and it allows the use of an intrauterine tandem. TPS applicator libraries are not currently available for non-rigid applicators with an interstitial component such as the TB. The purpose of this work is to present the development of a library for the TB, together with its use in a pre-planning technique. Both new goals allow a very efficient, exclusively T2 MRI based clinical TB planning implementation. Methods: The developed library has been implemented in the Oncentra Brachytherapy TPS, version 4.3.0 (Elekta), and is now being implemented in the Sagiplan v2.0 TPS (Eckert & Ziegler BEBIG). To model the TB, the free and open-source software packages FreeCAD and MeshLab were used. The reconstruction process is based on three inserted vitamin A pellets together with the data provided by the free length. The implemented pre-planning procedure is as follows: 1) an MRI T2 acquisition is performed with the template in place with just the vaginal cylinder (no uterine tube or needles); 2) the CTV is drawn and the required needles are selected using a purpose-developed Java-based application; and 3) a post-implant MRI T2 acquisition is performed. Results: The library procedure has so far been successfully applied in 25 patients. In this work, the use of the developed library is illustrated with clinical examples. The pre-planning procedure has so far been applied in 6 patients, with significant advantages: needle depth estimation, a priori optimization of needle number and positions, time saving, etc. Conclusion: The TB library and pre-plan techniques are feasible and very efficient, and their use is illustrated in this work.

  20. Efficient integration method for fictitious domain approaches

    NASA Astrophysics Data System (ADS)

    Duczek, Sascha; Gabbert, Ulrich

    2015-10-01

    In the current article, we present an efficient and accurate numerical method for the integration of the system matrices in fictitious domain approaches such as the finite cell method (FCM). In the framework of the FCM, the physical domain is embedded in a geometrically larger domain of simple shape which is discretized using a regular Cartesian grid of cells. Therefore, a spacetree-based adaptive quadrature technique is normally deployed to resolve the geometry of the structure. Depending on the complexity of the structure under investigation this method accounts for most of the computational effort. To reduce the computational costs for computing the system matrices an efficient quadrature scheme based on the divergence theorem (Gauß-Ostrogradsky theorem) is proposed. Using this theorem the dimension of the integral is reduced by one, i.e. instead of solving the integral for the whole domain only its contour needs to be considered. In the current paper, we present the general principles of the integration method and its implementation. The results to several two-dimensional benchmark problems highlight its properties. The efficiency of the proposed method is compared to conventional spacetree-based integration techniques.
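
    As a minimal illustration of the divergence-theorem reduction, the Python sketch below evaluates two domain integrals over a polygonal cell purely from its contour: choosing F with div F = f turns the area integral of f into a boundary flux. The polygonal setting is an assumption for illustration; the paper treats general spacetree-resolved FCM geometries.

        import numpy as np

        def contour_area(verts):
            """Area via the divergence theorem: div F = 1 for F = (x, y)/2,
            so area = (1/2) * closed-contour integral of (x dy - y dx)."""
            x, y = verts[:, 0], verts[:, 1]
            xn, yn = np.roll(x, -1), np.roll(y, -1)
            return 0.5 * np.sum(x * yn - y * xn)

        def contour_x_moment(verts):
            """Integral of x over the polygon using F = (x**2 / 2, 0):
            div F = x, so the domain integral becomes a boundary flux.
            The edge-wise flux below is exact for straight segments."""
            x, y = verts[:, 0], verts[:, 1]
            xn, yn = np.roll(x, -1), np.roll(y, -1)
            return np.sum((x**2 + x * xn + xn**2) / 6.0 * (yn - y))

        cell = np.array([[0., 0.], [2., 0.], [2., 1.], [0., 1.]])
        print(contour_area(cell), contour_x_moment(cell))   # 2.0 2.0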

  1. Machine learning on-a-chip: a high-performance low-power reusable neuron architecture for artificial neural networks in ECG classifications.

    PubMed

    Sun, Yuwen; Cheng, Allen C

    2012-07-01

    Artificial neural networks (ANNs) are a promising machine learning technique in classifying non-linear electrocardiogram (ECG) signals and recognizing abnormal patterns suggesting risks of cardiovascular diseases (CVDs). In this paper, we propose a new reusable neuron architecture (RNA) enabling a performance-efficient and cost-effective silicon implementation for ANNs. The RNA architecture consists of a single layer of physical RNA neurons, each of which is designed to use minimal hardware resource (e.g., a single 2-input multiplier-accumulator is used to compute the dot product of two vectors). By carefully applying the principle of time sharing, RNA can multiplex this single layer of physical neurons to efficiently execute both feed-forward and back-propagation computations of an ANN while conserving the area and reducing the power dissipation of the silicon. A three-layer 51-30-12 ANN is implemented in RNA to perform the ECG classification for CVD detection. This RNA hardware also allows on-chip automatic training update. A quantitative design space exploration in area, power dissipation, and execution speed between RNA and three other implementations representative of different reusable hardware strategies is presented and discussed. Compared with an equivalent software implementation in C executed on an embedded microprocessor, the RNA ASIC achieves three orders of magnitude improvements in both the execution speed and the energy efficiency. Copyright © 2012 Elsevier Ltd. All rights reserved.

  2. A Cloud-Based X73 Ubiquitous Mobile Healthcare System: Design and Implementation

    PubMed Central

    Ji, Zhanlin; O'Droma, Máirtín; Zhang, Xin; Zhang, Xueji

    2014-01-01

    Based on the user-centric paradigm for next generation networks, this paper describes a ubiquitous mobile healthcare (uHealth) system based on the ISO/IEEE 11073 personal health data (PHD) standards (X73) and cloud computing techniques. A number of design issues associated with the system implementation are outlined. The system includes middleware on the user side, providing a plug-and-play environment for heterogeneous wireless sensors and mobile terminals utilizing different communication protocols, and a distributed “big data” processing subsystem in the cloud. The design and implementation of this system are envisaged as an efficient solution for the next generation of uHealth systems. PMID:24737958

  3. A novel shape-based coding-decoding technique for an industrial visual inspection system.

    PubMed

    Mukherjee, Anirban; Chaudhuri, Subhasis; Dutta, Pranab K; Sen, Siddhartha; Patra, Amit

    2004-01-01

    This paper describes a unique single camera-based dimension storage method for image-based measurement. The system has been designed and implemented in one of the integrated steel plants of India. The purpose of the system is to encode the frontal cross-sectional area of an ingot. The encoded data will be stored in a database to facilitate the future manufacturing diagnostic process. The compression efficiency and reconstruction error of the lossy encoding technique have been reported and found to be quite encouraging.

  4. An efficient nonlinear finite-difference approach in the computational modeling of the dynamics of a nonlinear diffusion-reaction equation in microbial ecology.

    PubMed

    Macías-Díaz, J E; Macías, Siegfried; Medina-Ramírez, I E

    2013-12-01

    In this manuscript, we present a computational model to approximate the solutions of a partial differential equation which describes the growth dynamics of microbial films. The numerical technique reported in this work is an explicit, nonlinear finite-difference methodology which is computationally implemented using Newton's method. Our scheme is compared numerically against an implicit, linear finite-difference discretization of the same partial differential equation, whose computer coding requires an implementation of the stabilized bi-conjugate gradient method. Our numerical results evince that the nonlinear approach results in a more efficient approximation to the solutions of the biofilm model considered, and demands less computer memory. Moreover, the positivity of initial profiles is preserved in practice by the proposed nonlinear scheme. Copyright © 2013 Elsevier Ltd. All rights reserved.
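
    To make the role of Newton's method concrete, the Python sketch below advances a one-dimensional diffusion-reaction equation by solving the nonlinear update equations of each time step with Newton iteration. A Fisher-type model stands in for the biofilm equation, and the implicit update and end-of-step clipping are assumptions of this sketch, not the authors' scheme (which preserves positivity by construction).

        import numpy as np

        def newton_step(u, dt=0.1, dx=0.5, D=1.0, r=1.0, iters=20, tol=1e-10):
            """One time step of u_t = D*u_xx + r*u*(1 - u), with the
            nonlinear update equations solved by Newton's method."""
            n = len(u)
            L = (np.diag(np.ones(n - 1), -1) - 2 * np.eye(n)
                 + np.diag(np.ones(n - 1), 1)) / dx**2      # Dirichlet Laplacian
            unew = u.copy()
            for _ in range(iters):
                F = unew - u - dt * (D * (L @ unew) + r * unew * (1 - unew))
                J = np.eye(n) - dt * (D * L + r * np.diag(1 - 2 * unew))
                delta = np.linalg.solve(J, -F)              # Newton correction
                unew += delta
                if np.linalg.norm(delta) < tol:
                    break
            return np.clip(unew, 0.0, 1.0)   # sketch-only positivity safeguard

        u = np.zeros(50); u[:5] = 1.0        # initial colony at one end
        for _ in range(100):
            u = newton_step(u)               # an invasion front propagates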

  5. Tempest - Efficient Computation of Atmospheric Flows Using High-Order Local Discretization Methods

    NASA Astrophysics Data System (ADS)

    Ullrich, P. A.; Guerra, J. E.

    2014-12-01

    The Tempest Framework composes several compact numerical methods to easily facilitate intercomparison of atmospheric flow calculations on the sphere and in rectangular domains. This framework includes the implementations of Spectral Elements, Discontinuous Galerkin, Flux Reconstruction, and Hybrid Finite Element methods with the goal of achieving optimal accuracy in the solution of atmospheric problems. Several advantages of this approach are discussed such as: improved pressure gradient calculation, numerical stability by vertical/horizontal splitting, arbitrary order of accuracy, etc. The local numerical discretization allows for high performance parallel computation and efficient inclusion of parameterizations. These techniques are used in conjunction with a non-conformal, locally refined, cubed-sphere grid for global simulations and standard Cartesian grids for simulations at the mesoscale. A complete implementation of the methods described is demonstrated in a non-hydrostatic setting.

  6. Impact of an electronic medication administration record on medication administration efficiency and errors.

    PubMed

    McComas, Jeffery; Riingen, Michelle; Chae Kim, Son

    2014-12-01

    The study aims were to evaluate the impact of electronic medication administration record implementation on medication administration efficiency and the occurrence of medication errors, as well as to identify the predictors of medication administration efficiency in an acute care setting. A prospective, observational study utilizing a time-and-motion technique was conducted before and after electronic medication administration record implementation in November 2011. A total of 156 cases of medication administration activities (78 pre- and 78 post-electronic medication administration record) involving 38 nurses were observed at the point of care. A separate retrospective review of the hospital Midas+ medication error database was also performed to collect the rates and origin of medication errors for 6 months before and after electronic medication administration record implementation. The mean medication administration time actually increased from 11.3 to 14.4 minutes post-electronic medication administration record (P = .039). In a multivariate analysis, electronic medication administration record was not a predictor of medication administration time, but the distractions/interruptions during the medication administration process were significant predictors. The mean hospital-wide medication errors significantly decreased from 11.0 to 5.3 events per month post-electronic medication administration record (P = .034). Although no improvement in medication administration efficiency was observed, electronic medication administration record improved the quality of care with a significant decrease in medication errors.

  7. Low dose scatter correction for digital chest tomosynthesis

    NASA Astrophysics Data System (ADS)

    Inscoe, Christina R.; Wu, Gongting; Shan, Jing; Lee, Yueh Z.; Zhou, Otto; Lu, Jianping

    2015-03-01

    Digital chest tomosynthesis (DCT) provides superior image quality and depth information for thoracic imaging at relatively low dose, though the presence of strong photon scatter degrades the image quality. In most chest radiography, anti-scatter grids are used. However, the grid also blocks a large fraction of the primary beam photons, requiring a significantly higher imaging dose for patients. Previously, we proposed an efficient low dose scatter correction technique using a primary beam sampling apparatus. We implemented the technique in stationary digital breast tomosynthesis, and found the method to be efficient in correcting patient-specific scatter with only a 3% increase in dose. In this paper we report a feasibility study of applying the same technique to chest tomosynthesis. This investigation was performed utilizing phantom and cadaver subjects. The method involves an initial tomosynthesis scan of the object. A lead plate with an array of holes, or primary sampling apparatus (PSA), was placed above the object. A second tomosynthesis scan was performed to measure the primary (scatter-free) transmission. This PSA data was used with the full-field projections to compute the scatter, which was then interpolated to full-field scatter maps unique to each projection angle. Full-field projection images were scatter corrected prior to reconstruction. Projections and reconstruction slices were evaluated, and the correction method was found to be effective at improving image quality and practical for clinical implementation.
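
    The correction itself reduces to simple arithmetic on the two scans: at each PSA hole, scatter = (full-field signal) - (primary-only signal), and the sparse scatter samples are then interpolated into a full-field scatter map that is subtracted from the projection. The Python sketch below illustrates this idea; the hole coordinates and the cubic interpolation are hypothetical choices, not the authors' exact processing chain.

        import numpy as np
        from scipy.interpolate import griddata

        def scatter_correct(full_field, psa_image, hole_rows, hole_cols):
            """Estimate and remove the scatter in one projection from
            primary-beam samples measured through the PSA holes.

            full_field : projection without the PSA (primary + scatter)
            psa_image  : projection through the PSA (primary only at holes)
            """
            # Scatter at each hole = total signal minus scatter-free primary.
            samples = (full_field[hole_rows, hole_cols]
                       - psa_image[hole_rows, hole_cols])
            # Interpolate sparse samples to a smooth full-field scatter map.
            rows, cols = np.mgrid[0:full_field.shape[0], 0:full_field.shape[1]]
            pts = np.column_stack([hole_rows, hole_cols])
            scatter_map = griddata(pts, samples, (rows, cols), method='cubic',
                                   fill_value=float(samples.mean()))
            return full_field - scatter_map      # corrected projection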

  8. Efficient random access high resolution region-of-interest (ROI) image retrieval using backward coding of wavelet trees (BCWT)

    NASA Astrophysics Data System (ADS)

    Corona, Enrique; Nutter, Brian; Mitra, Sunanda; Guo, Jiangling; Karp, Tanja

    2008-03-01

    Efficient retrieval of high quality Regions-Of-Interest (ROI) from high resolution medical images is essential for reliable interpretation and accurate diagnosis. Random access to high quality ROI from codestreams is becoming an essential feature in many still image compression applications, particularly in viewing diseased areas from large medical images. This feature is easier to implement in block based codecs because of the inherent spatial independency of the code blocks. This independency implies that the decoding order of the blocks is unimportant as long as the position for each is properly identified. In contrast, wavelet-tree based codecs naturally use some interdependency that exploits the decaying spectrum model of the wavelet coefficients. Thus one must keep track of the decoding order from level to level with such codecs. We have developed an innovative multi-rate image subband coding scheme using "Backward Coding of Wavelet Trees (BCWT)" which is fast, memory efficient, and resolution scalable. It offers far less complexity than many other existing codecs, including both wavelet-tree and block-based algorithms. The ROI feature in BCWT is implemented through a transcoder stage that generates a new BCWT codestream containing only the information associated with the user-defined ROI. This paper presents an efficient technique that locates a particular ROI within the BCWT coded domain, and decodes it back to the spatial domain. This technique allows better access and proper identification of pathologies in high resolution images since only a small fraction of the codestream is required to be transmitted and analyzed.

  9. Fast Solvers for Moving Material Interfaces

    DTIC Science & Technology

    2008-01-01

    interface method—with the semi-Lagrangian contouring method developed in References [16–20]. We are now finalizing portable C/C++ codes for fast adaptive … stepping scheme couples a CIR predictor with a trapezoidal corrector using the velocity evaluated from the CIR approximation. It combines the … formula with efficient geometric algorithms and fast accurate contouring techniques. A modular adaptive implementation with fast new geometry modules

  10. A Novel Patchwork Model Used in Lecture and Laboratory to Teach the Three-Dimensional Organization of Mesenteries

    ERIC Educational Resources Information Center

    Noel, Geoffroy P.J.C.

    2013-01-01

    Anatomy teaching is seeing a decline in both lecture and laboratory hours across many medical schools in North America. New strategies are therefore needed to not only make anatomy teaching more clinically integrated, but also to implement new interactive teaching techniques to help students more efficiently grasp the complex organization of the…

  11. Supply-chain management: exceeding the customer's expectations.

    PubMed

    Ramsay, B

    2000-10-01

    Driven by increasing competition, manufacturers are desperate to cut costs and are looking for increased efficiency and customer service from their supply chains. E-commerce offers a new model of supply and demand, but many companies do not have the processes in place to support this new model. By implementing the techniques discussed here they can achieve substantial improvements in performance.

  12. Real-Time Onboard Global Nonlinear Aerodynamic Modeling from Flight Data

    NASA Technical Reports Server (NTRS)

    Brandon, Jay M.; Morelli, Eugene A.

    2014-01-01

    Flight test and modeling techniques were developed to accurately identify global nonlinear aerodynamic models onboard an aircraft. The techniques were developed and demonstrated during piloted flight testing of an Aermacchi MB-326M Impala jet aircraft. Advanced piloting techniques and nonlinear modeling techniques based on fuzzy logic and multivariate orthogonal function methods were implemented with efficient onboard calculations and flight operations to achieve real-time maneuver monitoring and analysis, and near-real-time global nonlinear aerodynamic modeling and prediction validation testing in flight. Results demonstrated that global nonlinear aerodynamic models for a large portion of the flight envelope were identified rapidly and accurately using piloted flight test maneuvers during a single flight, with the final identified and validated models available before the aircraft landed.

  13. Deep Learning for Flow Sculpting: Insights into Efficient Learning using Scientific Simulation Data

    NASA Astrophysics Data System (ADS)

    Stoecklein, Daniel; Lore, Kin Gwn; Davies, Michael; Sarkar, Soumik; Ganapathysubramanian, Baskar

    2017-04-01

    A new technique for shaping microfluid flow, known as flow sculpting, offers an unprecedented level of passive fluid flow control, with potential breakthrough applications in advancing manufacturing, biology, and chemistry research at the microscale. However, efficiently solving the inverse problem of designing a flow sculpting device for a desired fluid flow shape remains a challenge. Current approaches struggle with the many-to-one design space, requiring substantial user interaction and the necessity of building intuition, all of which are time and resource intensive. Deep learning has emerged as an efficient function approximation technique for high-dimensional spaces, and presents a fast solution to the inverse problem, yet the science of its implementation in similarly defined problems remains largely unexplored. We propose that deep learning methods can completely outpace current approaches for scientific inverse problems while delivering comparable designs. To this end, we show how intelligent sampling of the design space inputs can make deep learning methods more competitive in accuracy, while illustrating their generalization capability to out-of-sample predictions.

  14. Developing a Vacuum Electrospray Source To Implement Efficient Atmospheric Sampling for Miniature Ion Trap Mass Spectrometer.

    PubMed

    Yu, Quan; Zhang, Qian; Lu, Xinqiong; Qian, Xiang; Ni, Kai; Wang, Xiaohao

    2017-12-05

    The performance of a miniature mass spectrometer in atmospheric analysis is closely related to the design of its sampling system. In this study, a simplified vacuum electrospray ionization (VESI) source was developed based on a combination of several techniques, including the discontinuous atmospheric pressure interface, direct capillary sampling, and pneumatic-assisted electrospray. Pulsed air was used as a vital factor to facilitate the operation of electrospray ionization in the vacuum chamber. This VESI device can be used as an efficient atmospheric sampling interface when coupled with a miniature rectilinear ion trap (RIT) mass spectrometer. The developed VESI-RIT instrument enables regular ESI analysis of liquids, and its qualitative and quantitative capabilities have been characterized by using various solution samples. A limit of detection of 8 ppb could be attained for arginine in a methanol solution. In addition, extractive electrospray ionization of organic compounds can be implemented by using the same VESI device, as long as the gas analytes are injected with the pulsed auxiliary air. This methodology can extend the use of the proposed VESI technique to rapid and online analysis of gaseous and volatile samples.

  15. Efficient ICCG on a shared memory multiprocessor

    NASA Technical Reports Server (NTRS)

    Hammond, Steven W.; Schreiber, Robert

    1989-01-01

    Different approaches are discussed for exploiting parallelism in the ICCG (Incomplete Cholesky Conjugate Gradient) method for solving large sparse symmetric positive definite systems of equations on a shared memory parallel computer. Techniques for efficiently solving triangular systems and computing sparse matrix-vector products are explored. Three methods for scheduling the tasks in solving triangular systems are implemented on the Sequent Balance 21000. Sample problems that are representative of a large class of problems solved using iterative methods are used. We show that a static analysis to determine data dependences in the triangular solve can greatly improve its parallel efficiency. We also show that ignoring symmetry and storing the whole matrix can reduce solution time substantially.
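
    The static dependence analysis mentioned in the abstract is commonly realized as level scheduling: rows of the sparse triangular factor are grouped into levels so that each row depends only on rows in strictly earlier levels, and all rows within one level can be solved concurrently. The Python sketch below illustrates the idea under the assumption of a lower-triangular CSR matrix with a stored diagonal; the Sequent-specific task scheduling is not reproduced, and the inner loop marks what would run in parallel.

        import numpy as np
        from scipy.sparse import csr_matrix

        def level_schedule(L):
            """Assign each row of a sparse lower-triangular matrix to a
            level; rows sharing a level have no mutual dependences."""
            L = csr_matrix(L)
            level = np.zeros(L.shape[0], dtype=int)
            for i in range(L.shape[0]):
                cols = L.indices[L.indptr[i]:L.indptr[i + 1]]
                deps = cols[cols < i]            # off-diagonal dependences
                if deps.size:
                    level[i] = level[deps].max() + 1
            return [np.where(level == k)[0] for k in range(level.max() + 1)]

        def solve_by_levels(L, b):
            """Forward solve L x = b level by level."""
            L = csr_matrix(L); x = b.astype(float).copy()
            for rows in level_schedule(L):
                for i in rows:                   # parallelizable across rows
                    cols = L.indices[L.indptr[i]:L.indptr[i + 1]]
                    vals = L.data[L.indptr[i]:L.indptr[i + 1]]
                    off = cols < i
                    x[i] = (b[i] - vals[off] @ x[cols[off]]) / vals[~off][0]
            return x

        A = np.tril(np.random.default_rng(3).random((6, 6))) + 2 * np.eye(6)
        b = np.arange(6.0)
        print(np.allclose(solve_by_levels(A, b), np.linalg.solve(A, b)))  # True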

  16. Deterministic compressive sampling for high-quality image reconstruction of ultrasound tomography.

    PubMed

    Huy, Tran Quang; Tue, Huynh Huu; Long, Ton That; Duc-Tan, Tran

    2017-05-25

    Ultrasound tomography is a well-known diagnostic imaging modality developed for the detection of very small tumors, with sizes below the wavelength of the incident pressure wave, without the ionizing radiation of the current gold standard, X-ray mammography. Based on the inverse scattering technique, ultrasound tomography uses material properties such as sound contrast or attenuation to detect small targets. The Distorted Born Iterative Method (DBIM), based on the first-order Born approximation, is an efficient diffraction tomography approach. One of the challenges for a high-quality reconstruction is obtaining many measurements from a limited number of transmitters and receivers. Given that biomedical images are often sparse, the compressed sensing (CS) technique can therefore be applied effectively to ultrasound tomography, reducing the number of transmitters and receivers while maintaining high image reconstruction quality. Several existing works on CS place the measurement devices at randomly distributed locations, but this random configuration is relatively difficult to implement in practice. Instead, a methodology is needed that determines the locations of the measurement devices in a deterministic way. For this, we develop the novel DCS-DBIM algorithm, which is highly applicable in practice. It is inspired by the deterministic compressed sensing (DCS) technique introduced by the authors a few years ago, with the image reconstruction process implemented using l1 regularization. Simulation results demonstrate the high performance of the proposed approach: the normalized error is reduced by approximately 90% compared to the conventional approach, while only half the number of measurements and two iterations are used. The universal image quality index is also evaluated to confirm the efficiency of the proposed approach. Numerical simulation results indicate that the CS and DCS techniques offer equivalent image reconstruction quality, with the latter having a simpler practical implementation. This makes it a very promising approach for practical applications of modern biomedical imaging technology.
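
    Within each DBIM iteration, the sparse contrast update is recovered from reduced measurements by l1-regularized least squares. The Python sketch below solves that generic subproblem with ISTA (iterative shrinkage-thresholding) on a toy random operator; the operator, sizes, and regularization weight are illustrative assumptions, not the paper's deterministic measurement design.

        import numpy as np

        def ista(A, y, lam=0.05, iters=300):
            """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by iterative
            shrinkage-thresholding (a standard l1 reconstruction solver)."""
            step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
            x = np.zeros(A.shape[1])
            for _ in range(iters):
                z = x - step * (A.T @ (A @ x - y))   # gradient step
                x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)
            return x

        rng = np.random.default_rng(0)
        A = rng.standard_normal((64, 256))           # reduced measurements
        x_true = np.zeros(256); x_true[[10, 100, 200]] = [1.0, -0.5, 0.8]
        x_hat = ista(A, A @ x_true)
        print(x_hat[[10, 100, 200]].round(2))        # close to the true spikes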

  17. Design of a memory-access controller with 3.71-times-enhanced energy efficiency for Internet-of-Things-oriented nonvolatile microcontroller unit

    NASA Astrophysics Data System (ADS)

    Natsui, Masanori; Hanyu, Takahiro

    2018-04-01

    In realizing a nonvolatile microcontroller unit (MCU) for sensor nodes in Internet-of-Things (IoT) applications, it is important to solve the data-transfer bottleneck between the central processing unit (CPU) and the nonvolatile memory constituting the MCU. As one circuit-oriented approach to solving this problem, we propose a memory access minimization technique for magnetoresistive-random-access-memory (MRAM)-embedded nonvolatile MCUs. In addition to multiplexing and prefetching of memory access, the proposed technique realizes efficient instruction fetch by eliminating redundant memory access while considering the code length of the instruction to be fetched and the transition of the memory address to be accessed. As a result, the performance of the MCU can be improved while relaxing the performance requirement for the embedded MRAM, and compact and low-power implementation can be performed as compared with the conventional cache-based one. Through the evaluation using a system consisting of a general purpose 32-bit CPU and embedded MRAM, it is demonstrated that the proposed technique increases the peak efficiency of the system up to 3.71 times, while a 2.29-fold area reduction is achieved compared with the cache-based one.

  18. Design and evaluation of a DAMQ multiprocessor network with self-compacting buffers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, J.; O`Krafka, B.W.O.; Vassiliadis, S.

    1994-12-31

    This paper describes a new approach to implementing Dynamically Allocated Multi-Queue (DAMQ) switching elements using a technique called "self-compacting buffers". This technique is efficient in that the amount of hardware required to manage the buffers is relatively small; it offers high performance since it is an implementation of a DAMQ. The first part of this paper describes the self-compacting buffer architecture in detail and compares it against a competing DAMQ switch design. The second part presents extensive simulation results comparing the performance of a self-compacting buffer switch against an ideal switch, including several examples of k-ary n-cubes and delta networks. In addition, simulation results show how the performance of an entire network can be quickly and accurately approximated by simulating just a single switching element.

  19. Triangular covariance factorizations for Kalman filtering. Ph.D. Thesis - Calif. Univ.

    NASA Technical Reports Server (NTRS)

    Thornton, C. L.

    1976-01-01

    An improved computational form of the discrete Kalman filter is derived using an upper triangular factorization of the error covariance matrix. The covariance P is factored such that P = UDUᵀ, where U is unit upper triangular and D is diagonal. Recursions are developed for propagating the U-D covariance factors together with the corresponding state estimate. The resulting algorithm, referred to as the U-D filter, combines the superior numerical precision of square root filtering techniques with an efficiency comparable to that of Kalman's original formula. Moreover, this method is easily implemented and involves no more computer storage than the Kalman algorithm. These characteristics make the U-D method an attractive real-time filtering technique. A new covariance error analysis technique is obtained from an extension of the U-D filter equations. This evaluation method is flexible and efficient and may provide significantly improved numerical results. Cost comparisons show that for a large class of problems the U-D evaluation algorithm is noticeably less expensive than conventional error analysis methods.
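
    For reference, the factorization itself can be computed by peeling off one rank-one contribution per column, working from the last column backwards. The Python sketch below illustrates only the P = UDUᵀ decomposition; the U-D filter's propagation and measurement-update recursions are not reproduced.

        import numpy as np

        def udu_factor(P):
            """Factor a symmetric positive-definite P as U @ diag(d) @ U.T
            with U unit upper triangular and d the diagonal factors."""
            P = P.astype(float).copy()
            n = P.shape[0]
            U, d = np.eye(n), np.zeros(n)
            for j in range(n - 1, -1, -1):
                d[j] = P[j, j]
                U[:j, j] = P[:j, j] / d[j]
                # Remove column j's rank-one contribution from the rest.
                P[:j, :j] -= d[j] * np.outer(U[:j, j], U[:j, j])
            return U, d

        rng = np.random.default_rng(1)
        A = rng.standard_normal((4, 4))
        P = A @ A.T + 4 * np.eye(4)                   # an SPD covariance
        U, d = udu_factor(P)
        print(np.allclose(U @ np.diag(d) @ U.T, P))   # True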

  20. Medical image denoising via optimal implementation of non-local means on hybrid parallel architecture.

    PubMed

    Nguyen, Tuan-Anh; Nakib, Amir; Nguyen, Huy-Nam

    2016-06-01

    The non-local means denoising filter has been established as a gold standard for the image denoising problem in general, and particularly in medical imaging, due to its efficiency. However, its computation time has limited its application in real-world settings, especially in medical imaging. In this paper, a distributed version on a parallel hybrid architecture is proposed to solve the computation time problem, and a new method to compute the filter's coefficients is also proposed, where we focused on the implementation and the enhancement of the filter's parameters by taking the neighborhood of the current voxel more accurately into account. In terms of implementation, our key contribution consists in reducing the number of shared memory accesses. The different tests of the proposed method were performed on the BrainWeb database for different levels of noise. Performance and sensitivity were quantified in terms of speedup, peak signal-to-noise ratio, execution time, and the number of floating-point operations. The obtained results demonstrate the efficiency of the proposed method. Moreover, the implementation is compared to that of other techniques recently published in the literature. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
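
    As background for the parallelization, the sketch below is a naive single-threaded Python reference of the non-local means filter: each pixel becomes a weighted average of pixels whose surrounding patches look similar. The patch and search radii and the Gaussian weighting are conventional illustrative choices; the paper's hybrid CPU/GPU decomposition and shared-memory access reduction are not reproduced.

        import numpy as np

        def nlm_denoise(img, search=5, patch=1, h=0.1):
            """Non-local means reference: weight ~ exp(-patch distance / h^2).
            The nested loops are what a parallel version distributes."""
            pad = search + patch
            p = np.pad(img, pad, mode='reflect')
            out = np.zeros_like(img, dtype=float)
            for i in range(img.shape[0]):
                for j in range(img.shape[1]):
                    ci, cj = i + pad, j + pad
                    ref = p[ci - patch:ci + patch + 1, cj - patch:cj + patch + 1]
                    wsum = vsum = 0.0
                    for di in range(-search, search + 1):
                        for dj in range(-search, search + 1):
                            ni, nj = ci + di, cj + dj
                            cand = p[ni - patch:ni + patch + 1,
                                     nj - patch:nj + patch + 1]
                            w = np.exp(-np.mean((ref - cand) ** 2) / h**2)
                            wsum += w
                            vsum += w * p[ni, nj]
                    out[i, j] = vsum / wsum
            return out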

  1. Least reliable bits coding (LRBC) for high data rate satellite communications

    NASA Technical Reports Server (NTRS)

    Vanderaar, Mark; Budinger, James; Wagner, Paul

    1992-01-01

    LRBC, a bandwidth efficient multilevel/multistage block-coded modulation technique, is analyzed. LRBC uses simple multilevel component codes that provide increased error protection on increasingly unreliable modulated bits in order to maintain an overall high code rate that increases spectral efficiency. Soft-decision multistage decoding is used to make decisions on unprotected bits through corrections made on more protected bits. Analytical expressions and tight performance bounds are used to show that LRBC can achieve increased spectral efficiency and maintain equivalent or better power efficiency compared to that of BPSK. The relative simplicity of Galois field algebra vs the Viterbi algorithm and the availability of high-speed commercial VLSI for block codes indicates that LRBC using block codes is a desirable method for high data rate implementations.

  2. Improving X-Ray Optics via Differential Deposition

    NASA Technical Reports Server (NTRS)

    Kilaru, Kiranmayee; Ramsey, Brian D.; Atkins, Carolyn

    2017-01-01

    Differential deposition, a post-fabrication figure correction technique, has the potential to significantly improve the imaging quality of grazing-incidence X-ray optics. DC magnetron sputtering is used to selectively coat the mirror in order to minimize the figure deviations. Custom vacuum chambers have been developed at NASA MSFC that will enable the implementation of the deposition on X-ray optics. A factor of two improvement has been achieved in the angular resolution of the full-shell X-ray optics with first stage correction of differential deposition. Current efforts are focused on achieving higher improvements through efficient implementation of differential deposition.

  3. Genetic algorithms using SISAL parallel programming language

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tejada, S.

    1994-05-06

    Genetic algorithms are a mathematical optimization technique developed by John Holland at the University of Michigan [1]. The SISAL programming language possesses many of the characteristics desired to implement genetic algorithms. SISAL is a deterministic, functional programming language which is inherently parallel. Because SISAL is functional and based on mathematical concepts, genetic algorithms can be efficiently translated into the language. Several of the steps involved in genetic algorithms, such as mutation, crossover, and fitness evaluation, can be parallelized using SISAL. In this paper I discuss the implementation and performance of parallel genetic algorithms in SISAL.
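
    The steps named in the abstract (fitness evaluation, crossover, mutation) are naturally data-parallel, which is what makes a functional language such as SISAL a good fit. Since SISAL toolchains are scarce today, the sketch below uses Python with vectorized numpy operations as a stand-in, with comments marking the parallelizable steps; the bit-matching fitness function is purely illustrative.

        import numpy as np

        rng = np.random.default_rng(42)

        def fitness(pop):
            """One score per individual: bits matching a target pattern
            (an embarrassingly parallel map over the population)."""
            target = np.tile([1, 0], pop.shape[1] // 2)
            return (pop == target).sum(axis=1)

        def evolve(pop, generations=60, p_mut=0.02):
            for _ in range(generations):
                f = fitness(pop)                          # parallel map
                parents = pop[rng.choice(len(pop), len(pop), p=f / f.sum())]
                for k in range(len(pop) // 2):            # independent pairs
                    c = rng.integers(1, pop.shape[1])     # crossover point
                    a, b = parents[2 * k], parents[2 * k + 1]
                    a[c:], b[c:] = b[c:].copy(), a[c:].copy()
                flip = rng.random(pop.shape) < p_mut      # parallel mutation
                pop = np.where(flip, 1 - parents, parents)
            return pop

        pop = rng.integers(0, 2, size=(40, 16))
        print(fitness(evolve(pop)).max())                 # approaches 16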

  4. Digital CODEC for real-time processing of broadcast quality video signals at 1.8 bits/pixel

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Mary JO; Whyte, Wayne A., Jr.

    1989-01-01

    Advances in very large-scale integration and recent work in the field of bandwidth efficient digital modulation techniques have combined to make digital video processing technically feasible and potentially cost competitive for broadcast quality television transmission. A hardware implementation was developed for a DPCM-based digital television bandwidth compression algorithm which processes standard NTSC composite color television signals and produces broadcast quality video in real time at an average of 1.8 bits/pixel. The data compression algorithm and the hardware implementation of the CODEC are described, and performance results are provided.

  5. Digital CODEC for real-time processing of broadcast quality video signals at 1.8 bits/pixel

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Mary JO; Whyte, Wayne A.

    1991-01-01

    Advances in very large scale integration and recent work in the field of bandwidth efficient digital modulation techniques have combined to make digital video processing technically feasible and potentially cost competitive for broadcast quality television transmission. A hardware implementation was developed for a DPCM (differential pulse code modulation)-based digital television bandwidth compression algorithm which processes standard NTSC composite color television signals and produces broadcast quality video in real time at an average of 1.8 bits/pixel. The data compression algorithm and the hardware implementation of the codec are described, and performance results are provided.
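
    For readers unfamiliar with DPCM, the Python sketch below shows the principle the codec builds on: transmit quantized prediction errors rather than raw samples, with the encoder tracking the decoder's reconstruction so quantization errors do not accumulate. The previous-sample predictor and uniform quantizer are simplifying assumptions; the flight codec's actual predictor, quantizer, and 1.8 bits/pixel coding are not reproduced.

        import numpy as np

        def dpcm_encode(samples, step=8):
            """Closed-loop DPCM: code the quantized prediction error."""
            codes, recon = [], 0
            for x in samples:
                q = int(round((x - recon) / step))   # quantized error symbol
                codes.append(q)
                recon += q * step                    # mirror the decoder
            return codes

        def dpcm_decode(codes, step=8):
            out, recon = [], 0
            for q in codes:
                recon += q * step
                out.append(recon)
            return out

        line = (128 + 60 * np.sin(np.linspace(0, 3, 64))).astype(int)
        rec = np.array(dpcm_decode(dpcm_encode(line)))
        print(np.abs(rec - line).max() <= 4)         # error bounded by step/2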

  6. Nonleaky Population Transfer in a Transmon Qutrit via Largely-Detuned Drivings

    NASA Astrophysics Data System (ADS)

    Yan, Run-Ying; Feng, Zhi-Bo

    2018-06-01

    We propose an efficient scheme to implement nonleaky population transfer in a transmon qutrit via largely-detuned drivings. Due to the weak level anharmonicity of the transmon system, significant quantum leakage needs to be considered in coherent quantum operations. Under the conditions of two-photon resonance and large detunings, robust population transfer within a qutrit can be implemented via the technique of stimulated Raman adiabatic passage. Based on accessible parameters, the feasible approach can remove the leakage error effectively, and thus provides a potential approach for experimentally enhancing the transfer fidelity with transmon-regime artificial atoms.

  7. [The implementation of strategy of medicinal support in multi-type hospital].

    PubMed

    Ludupova, E Yu

    2016-01-01

    The article presents a brief review of the implementation of the strategy of medicinal support of the population of the Russian Federation, and the experience of its application at the level of a regional hospital. The necessity and importance of implementing in hospital practice a methodology of pharmaco-economic management of medicinal care, using modern technologies of XYZ-, ABC- and VEN-analysis, is demonstrated. The stages of development and implementation of the process of medicinal support of a multi-field hospital, applying the principles of a quality management system (process and system approaches, risk management) on the basis of the ISO 9001 standards, are described. The significance of monitoring the results of the process of medicinal support on the basis of implementation of priority target programs (prevention of venous thromboembolic complications, a system of control of anti-bacterial therapy) is demonstrated in relation to a multi-field hospital, using the ATC/DDD-analysis technique for evaluating indices of effectiveness and efficiency.
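
    Of the methods named above, ABC analysis is the simplest to illustrate: items are ranked by annual expenditure and classified by cumulative cost share. The Python sketch below uses the conventional 80%/95% cut-offs and hypothetical drug spending figures; it illustrates the method only, not the hospital's data.

        def abc_analysis(costs, a_cut=0.80, b_cut=0.95):
            """Classify items by cumulative share of total cost:
            A = first 80%, B = next 15%, C = the long cheap tail."""
            total = sum(costs.values())
            running, classes = 0.0, {}
            for name, cost in sorted(costs.items(), key=lambda kv: -kv[1]):
                running += cost
                share = running / total
                classes[name] = ('A' if share <= a_cut
                                 else 'B' if share <= b_cut else 'C')
            return classes

        # Hypothetical annual expenditures for part of a formulary.
        spend = {'enoxaparin': 90000, 'meropenem': 60000, 'ondansetron': 20000,
                 'omeprazole': 15000, 'paracetamol': 5000, 'loratadine': 2000}
        print(abc_analysis(spend))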

  8. A hardware implementation of the discrete Pascal transform for image processing

    NASA Astrophysics Data System (ADS)

    Goodman, Thomas J.; Aburdene, Maurice F.

    2006-02-01

    The discrete Pascal transform is a polynomial transform with applications in pattern recognition, digital filtering, and digital image processing. It already has been shown that the Pascal transform matrix can be decomposed into a product of binary matrices. Such a factorization leads to a fast and efficient hardware implementation without the use of multipliers, which consume large amounts of hardware. We recently developed a field-programmable gate array (FPGA) implementation to compute the Pascal transform. Our goal was to demonstrate the computational efficiency of the transform while keeping hardware requirements at a minimum. Images are uploaded into memory from a remote computer prior to processing, and the transform coefficients can be offloaded from the FPGA board for analysis. Design techniques like as-soon-as-possible scheduling and adder sharing allowed us to develop a fast and efficient system. An eight-point, one-dimensional transform completes in 13 clock cycles and requires only four adders. An 8x8 two-dimensional transform completes in 240 cycles and requires only a top-level controller in addition to the one-dimensional transform hardware. Finally, through minor modifications to the controller, the transform operations can be pipelined to achieve 100% utilization of the four adders, allowing one eight-point transform to complete every seven clock cycles.
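
    A small software model helps fix ideas before the hardware mapping. The Python sketch below builds the transform matrix from signed binomial coefficients, one common definition under which the matrix is its own inverse, so the same datapath can serve the forward and inverse transforms; the binary-matrix factorization that eliminates multipliers in the FPGA design is not reproduced here.

        import numpy as np
        from math import comb

        def pascal_matrix(n):
            """Signed lower-triangular binomial matrix:
            P[i, j] = (-1)**j * C(i, j). With this sign convention
            P @ P = I, i.e. the transform is an involution."""
            P = np.zeros((n, n))
            for i in range(n):
                for j in range(i + 1):
                    P[i, j] = (-1) ** j * comb(i, j)
            return P

        P8 = pascal_matrix(8)
        x = np.arange(8.0) ** 2                     # a sampled polynomial
        print(np.allclose(P8 @ (P8 @ x), x))        # True: self-inverse
        print(np.allclose(P8 @ P8, np.eye(8)))      # True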

  9. An efficient finite element technique for sound propagation in axisymmetric hard wall ducts carrying high subsonic Mach number flows

    NASA Technical Reports Server (NTRS)

    Tag, I. A.; Lumsdaine, E.

    1978-01-01

    The general non-linear three-dimensional equation for acoustic potential is derived by using a perturbation technique. The linearized axisymmetric equation is then solved by using a finite element algorithm based on the Galerkin formulation for a harmonic time dependence. The solution is carried out in complex number notation for the acoustic velocity potential. Linear, isoparametric, quadrilateral elements with non-uniform distribution across the duct section are implemented. The resultant global matrix is stored in banded form and solved by using a modified Gauss elimination technique. Sound pressure levels and acoustic velocities are calculated from post element solutions. Different duct geometries are analyzed and compared with experimental results.

  10. A Review of Recent Developments in X-Ray Diagnostics for Turbulent and Optically Dense Rocket Sprays

    NASA Technical Reports Server (NTRS)

    Radke, Christopher; Halls, Benjamin; Kastengren, Alan; Meyer, Terrence

    2017-01-01

    Highly efficient mixing and atomization of fuel and oxidizers is an important factor in many propulsion and power generating applications. To better quantify breakup and mixing in atomizing sprays, several diagnostic techniques have been developed to collect droplet information and spray statistics. Several optical techniques, such as ballistic imaging and SLIPI, have previously demonstrated qualitative measurements in optically dense sprays; however, these techniques have produced limited quantitative information in the near-injector region. To complement these advances, a recent wave of developments utilizing synchrotron-based x-rays has been successfully implemented, facilitating the collection of quantitative measurements in optically dense sprays.

  11. Domestic water conservation potential in Saudi Arabia

    NASA Astrophysics Data System (ADS)

    Abdulrazzak, Mohammed J.; Khan, Muhammad Z. A.

    1990-03-01

    Domestic water conservation in arid climates can result in efficient utilization of existing water supplies. The impacts of conservation measures such as the installation of water-saving devices, water metering and pricing schemes, water rationing and public awareness programs, strict plumbing codes, penalties for wasting water, programs designed to reduce leakage from public water lines and within the home, water-efficient landscaping, economic and ethical incentives are addressed in detail. Cost savings in arid climates, with particular reference to Saudi Arabia, in relation to some conservation techniques, are presented. Water conservation technology and tentative demonstration and implementation of water conservation programs are discussed.

  12. Analysis and optimization of gyrokinetic toroidal simulations on homogenous and heterogenous platforms

    DOE PAGES

    Ibrahim, Khaled Z.; Madduri, Kamesh; Williams, Samuel; ...

    2013-07-18

    The Gyrokinetic Toroidal Code (GTC) uses the particle-in-cell method to efficiently simulate plasma microturbulence. This paper presents novel analysis and optimization techniques to enhance the performance of GTC on large-scale machines. We introduce cell access analysis to better manage locality vs. synchronization tradeoffs on CPU- and GPU-based architectures. Our optimized hybrid parallel implementation of GTC uses MPI, OpenMP, and NVIDIA CUDA; achieves up to a 2× speedup over the reference Fortran version on multiple parallel systems; and scales efficiently to tens of thousands of cores.

  13. Efficiency Analysis of Integrated Public Hospital Networks in Outpatient Internal Medicine.

    PubMed

    Ortíz-Barrios, Miguel Angel; Escorcia-Caballero, Juan P; Sánchez-Sánchez, Fabián; De Felice, Fabio; Petrillo, Antonella

    2017-09-07

    Healthcare systems are evolving towards a complex network of interconnected services due to increasing costs and rising expectations for high service levels. The literature evidences the importance of implementing management techniques and sophisticated methods to improve the efficiency of healthcare systems, especially in emerging economies. This paper proposes an integrated collaboration model between two public hospitals to reduce the weighted average lead time in the outpatient internal medicine department. A strategic framework based on value stream mapping and collaborative practices has been developed in a real case study set in Colombia.

  14. Photochemical grid model implementation and application of ...

    EPA Pesticide Factsheets

    For the purposes of developing optimal emissions control strategies, efficient approaches are needed to identify the major sources or groups of sources that contribute to elevated ozone (O3) concentrations. Source-based apportionment techniques implemented in photochemical grid models track sources through the physical and chemical processes important to the formation and transport of air pollutants. Photochemical model source apportionment has been used to track source impacts of specific sources, groups of sources (sectors), sources in specific geographic areas, and stratospheric and lateral boundary inflow on O3. The implementation and application of a source apportionment technique for O3 and its precursors, nitrogen oxides (NOx) and volatile organic compounds (VOCs), for the Community Multiscale Air Quality (CMAQ) model are described here. The Integrated Source Apportionment Method (ISAM) O3 approach is a hybrid of source apportionment and source sensitivity in that O3 production is attributed to precursor sources based on O3 formation regime (e.g., for a NOx-sensitive regime, O3 is apportioned to participating NOx emissions). This implementation is illustrated by tracking multiple emissions source sectors and lateral boundary inflow. NOx, VOC, and O3 attribution to tracked sectors in the application are consistent with spatial and temporal patterns of precursor emissions. The O3 ISAM implementation is further evaluated through comparisons of apportioned am

  15. An Energy-Efficient Underground Localization System Based on Heterogeneous Wireless Networks

    PubMed Central

    Yuan, Yazhou; Chen, Cailian; Guan, Xinping; Yang, Qiuling

    2015-01-01

    A precision positioning system with energy efficiency is of great necessity for guaranteeing personnel safety in underground mines. The miners' location information should be transmitted to the control center timely and reliably; therefore, a heterogeneous network with a backbone based on high-speed Industrial Ethernet is deployed. Since the mobile wireless nodes are working in an irregular tunnel, a specific wireless propagation model cannot fit all situations. In this paper, an underground localization system is designed not only to adapt to various kinds of harsh tunnel environments, but also to reduce the energy consumption and thus prolong the lifetime of the network. Three key techniques are developed and implemented to improve the system performance, including a step counting algorithm with accelerometers, a power control algorithm, and an adaptive packet scheduling scheme. The simulation study and experimental results show the effectiveness of the proposed algorithms and the implementation. PMID:26016918
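
    Of the three techniques, the step-counting algorithm is the easiest to sketch: a threshold-crossing detector on the accelerometer magnitude with a refractory period to reject bounces. The Python version below is a minimal illustration with hypothetical parameters (50 Hz sampling, 1.2 g threshold, 0.3 s refractory time), not the authors' implementation.

        import numpy as np

        def count_steps(accel, fs=50, thresh=1.2, refractory=0.3):
            """Count upward crossings of |a| above `thresh` (in g),
            ignoring crossings closer together than `refractory` s."""
            mag = np.linalg.norm(accel, axis=1)
            min_gap = int(refractory * fs)
            steps, last = 0, -min_gap
            for k in range(1, len(mag)):
                if mag[k - 1] < thresh <= mag[k] and k - last >= min_gap:
                    steps, last = steps + 1, k
            return steps

        # Synthetic walk: 2 Hz stride oscillation around 1 g, 10 s at 50 Hz.
        t = np.arange(0, 10, 1 / 50)
        a = np.column_stack([np.zeros_like(t), np.zeros_like(t),
                             1.0 + 0.4 * np.sin(2 * np.pi * 2.0 * t)])
        print(count_steps(a))        # about 20 steps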

  16. On-board processing concepts for future satellite communications systems

    NASA Technical Reports Server (NTRS)

    Brandon, W. T. (Editor); White, B. E. (Editor)

    1980-01-01

    The initial definition of on-board processing for an advanced satellite communications system to service domestic markets in the 1990's is discussed. An exemplar system with both RF on-board switching and demodulation/remodulation baseband processing is used to identify important issues related to system implementation, cost, and technology development. Analyses of spectrum-efficient modulation, coding, and system control techniques are summarized. Implementations for an RF switch and baseband processor are described. Among the major conclusions listed is the need for high gain satellites capable of handling tens of simultaneous beams for the efficient reuse of the 2.5 GHz 30/20 frequency band. Several scanning beams are recommended in addition to the fixed beams. Low power solid state 20 GHz GaAs FET power amplifiers in the 5W range and a general purpose digital baseband processor with gigahertz logic speeds and megabits of memory are also recommended.

  17. A low power, area efficient fpga based beamforming technique for 1-D CMUT arrays.

    PubMed

    Joseph, Bastin; Joseph, Jose; Vanjari, Siva Rama Krishna

    2015-08-01

    A low-power, area-efficient digital beamformer targeting a low-frequency (2 MHz) 1-D linear Capacitive Micromachined Ultrasonic Transducer (CMUT) array is developed. While designing the beamforming logic, the symmetry of the CMUT array is exploited to reduce area and power consumption. The proposed method is verified in Matlab by clocking an Arbitrary Waveform Generator (AWG). The architecture is successfully implemented on a Xilinx Spartan 3E FPGA kit to check its functionality. The beamforming logic is implemented for 8-, 16-, 32-, and 64-element CMUTs targeting an Application Specific Integrated Circuit (ASIC) platform at a Vdd of 1.62 V in UMC 90 nm technology. It is observed that the proposed architecture consumes significantly less power and area (1.2895 mW and 47134.4 μm² for a 64-element digital beamforming circuit) compared to the conventional square-root-based algorithm.
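
    The symmetry the design exploits is visible directly in the focusing geometry: for a focal point on the axis of a 1-D array, the per-element delay profile mirrors about the array centre, so only half the delays need to be computed or stored. The Python sketch below illustrates this with assumed element pitch, focal depth, and sound speed; it is not the paper's FPGA logic.

        import numpy as np

        def focus_delays(n_elem=64, pitch=200e-6, depth=0.03, c=1540.0):
            """Transmit-focusing delays for a 1-D array focused on axis;
            outer elements (longer path to focus) fire first."""
            xs = (np.arange(n_elem) - (n_elem - 1) / 2) * pitch
            path = np.sqrt(xs**2 + depth**2)       # element-to-focus distance
            return (path.max() - path) / c         # delay per element, seconds

        d = focus_delays()
        print(np.allclose(d, d[::-1]))   # True: mirror symmetry halves the work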

  18. User-Defined Data Distributions in High-Level Programming Languages

    NASA Technical Reports Server (NTRS)

    Diaconescu, Roxana E.; Zima, Hans P.

    2006-01-01

    One of the characteristic features of today's high performance computing systems is a physically distributed memory. Efficient management of locality is essential for meeting key performance requirements for these architectures. The standard technique for dealing with this issue has involved the extension of traditional sequential programming languages with explicit message passing, in the context of a processor-centric view of parallel computation. This has resulted in complex and error-prone assembly-style codes in which algorithms and communication are inextricably interwoven. This paper presents a high-level approach to the design and implementation of data distributions. Our work is motivated by the need to improve the current parallel programming methodology by introducing a paradigm supporting the development of efficient and reusable parallel code. This approach is currently being implemented in the context of a new programming language called Chapel, which is designed in the HPCS project Cascade.

  19. Highly efficient periodically poled KTP-isomorphs with large apertures and extreme domain aspect-ratios

    NASA Astrophysics Data System (ADS)

    Canalias, Carlota; Zukauskas, Andrius; Tjörnhamman, Staffan; Viotti, Anne-Lise; Pasiskevicius, Valdas; Laurell, Fredrik

    2018-02-01

    Since the early 1990s, a substantial effort has been devoted to the development of quasi-phase-matched (QPM) nonlinear devices, not only in ferroelectric oxides like LiNbO3, LiTaO3 and KTiOPO4 (KTP), but also in semiconductors such as GaAs and GaP. The technology to implement QPM structures in ferroelectric oxides has by now matured enough to satisfy the most basic frequency-conversion schemes without substantial modification of the poling procedures. Here, we present a qualitative leap in periodic poling techniques that allows us to demonstrate devices and frequency conversion schemes that were deemed unfeasible just a few years ago. Thanks to our short-pulse poling and coercive-field engineering techniques, we are able to demonstrate large-aperture (5 mm) periodically poled Rb-doped KTP devices with a highly uniform conversion efficiency over the whole aperture. These devices allow parametric conversion with energies larger than 60 mJ. Moreover, by employing our coercive-field engineering technique we fabricate highly efficient sub-µm periodically poled devices, with periodicities as short as 500 nm, uniform over 1 mm-thick crystals, which allow us to realize mirrorless optical parametric oscillators with counter-propagating signal and idler waves. These novel devices present unique spectral and tuning properties, superior to those of conventional OPOs. Furthermore, our techniques are compatible with KTA, a KTP isomorph with extended transparency in the mid-IR range. We demonstrate that our highly efficient PPKTA is superior both for mid-IR and for green light generation, as a result of improved transmission properties in the visible range. Our KTP-isomorph poling techniques leading to highly efficient QPM devices will be presented. Their optical performance and attractive damage thresholds will be discussed.

  20. Current-mode subthreshold MOS implementation of the Herault-Jutten autoadaptive network

    NASA Astrophysics Data System (ADS)

    Cohen, Marc H.; Andreou, Andreas G.

    1992-05-01

    The translinear circuits in subthreshold MOS technology and current-mode design techniques for the implementation of neuromorphic analog network processing are investigated. The architecture, also known as the Herault-Jutten network, performs an independent component analysis and is essentially a continuous-time recursive linear adaptive filter. Analog I/O interface, weight coefficients, and adaptation blocks are all integrated on the chip. A small network with six neurons and 30 synapses was fabricated in a 2-micron n-well double-polysilicon, double-metal CMOS process. Circuit designs at the transistor level yield area-efficient implementations for neurons, synapses, and the adaptation blocks. The design methodology and constraints as well as test results from the fabricated chips are discussed.

  1. Scaling Support Vector Machines On Modern HPC Platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    You, Yang; Fu, Haohuan; Song, Shuaiwen

    2015-02-01

    We designed and implemented MIC-SVM, a highly efficient parallel SVM for x86-based multi-core and many-core architectures, such as Intel Ivy Bridge CPUs and the Intel Xeon Phi co-processor (MIC). We propose various novel analysis methods and optimization techniques to fully utilize the multilevel parallelism provided by these architectures; these can also serve as general optimization methods for other machine learning tools.

  2. The Strategic Thinking Process: Efficient Mobilization of Human Resources for System Definition

    PubMed Central

    Covvey, H. D.

    1987-01-01

    This paper describes the application of several group management techniques to the creation of needs specifications and information systems strategic plans in health care institutions. The overall process is called the “Strategic Thinking Process”. It is a formal methodology that can reduce the time and cost of creating key documents essential for the successful implementation of health care information systems.

  3. Same Planet, Different Worlds: Why Projects Continue to Fail. A Generalist Review of Project Management with Special Reference to Electronic Research Administration

    ERIC Educational Resources Information Center

    McCormick, Ian

    2006-01-01

    Implementation of IT "solutions" in the context of changes to business processes and efficiency is a common trigger for using formalised project management techniques. The trigger may include topical activities such as job evaluation schemes or quality assurance accreditation. This has led to a blurring of the boundary between projects and…

  4. Development of AN Innovative Three-Dimensional Complete Body Screening Device - 3D-CBS

    NASA Astrophysics Data System (ADS)

    Crosetto, D. B.

    2004-07-01

    This article describes an innovative technological approach that increases the efficiency with which a large number of particles (photons) can be detected and analyzed. The three-dimensional complete body screening (3D-CBS) combines the functional imaging capability of Positron Emission Tomography (PET) with the anatomical imaging capability of Computed Tomography (CT). The novel techniques provide better images in a shorter time with less radiation to the patient. A primary means of accomplishing this is the use of a larger solid angle, but this requires new electronics capable of handling the increased data rate. This technique, combined with an improved and simplified detector assembly, enables the execution of complex real-time algorithms and allows more efficient use of economical crystals. These are the principal features of this invention. A good synergy of advanced techniques in particle detection, technological progress in industry (the latest FPGA technology), and simple but cost-effective ideas yields a revolutionary design. This technology enables an over 400-fold improvement in PET efficiency at once, compared with the two- to three-fold improvements achieved every five years during the past decades. Details of the electronics are provided, including an IBM PC board with a parallel-processing architecture implemented in FPGAs, enabling the execution of a programmable complex real-time algorithm for best detection of photons.

  5. Object tracking on mobile devices using binary descriptors

    NASA Astrophysics Data System (ADS)

    Savakis, Andreas; Quraishi, Mohammad Faiz; Minnehan, Breton

    2015-03-01

    With the growing ubiquity of mobile devices, advanced applications are relying on computer vision techniques to provide novel experiences for users. Currently, few tracking approaches take into consideration the resource constraints on mobile devices. Designing efficient tracking algorithms and optimizing performance for mobile devices can result in better and more efficient tracking for applications such as augmented reality. In this paper, we use binary descriptors, including Fast Retina Keypoint (FREAK), Oriented FAST and Rotated BRIEF (ORB), Binary Robust Independent Elementary Features (BRIEF), and Binary Robust Invariant Scalable Keypoints (BRISK), to obtain real-time tracking performance on mobile devices. We consider both Google's Android and Apple's iOS operating systems to implement our tracking approach. The Android implementation uses Android's Native Development Kit (NDK), which gives the performance benefits of native code as well as access to legacy libraries. The iOS implementation was created using both native Objective-C and the C++ programming languages. We also introduce simplified versions of the BRIEF and BRISK descriptors that improve processing speed without compromising tracking accuracy.
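
    To make the descriptor-matching step concrete, a minimal OpenCV sketch using ORB with Hamming-distance brute-force matching; the synthetic frames below stand in for real video, and FREAK or BRISK descriptors would slot into the same pipeline:

        import cv2
        import numpy as np

        # Two synthetic frames: a textured shape, then the same shape shifted,
        # standing in for consecutive video frames of a moving object.
        frame1 = np.zeros((240, 320), dtype=np.uint8)
        cv2.rectangle(frame1, (80, 60), (160, 140), 255, -1)
        cv2.circle(frame1, (120, 100), 20, 64, -1)
        frame2 = np.roll(frame1, (10, 15), axis=(0, 1))

        orb = cv2.ORB_create(nfeatures=200)                # binary descriptor extractor
        kp1, des1 = orb.detectAndCompute(frame1, None)
        kp2, des2 = orb.detectAndCompute(frame2, None)

        # Hamming distance is the natural metric for binary descriptors.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des1, des2)
        print(f"{len(matches)} matches between frames")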

  6. Exploiting the chaotic behaviour of atmospheric models with reconfigurable architectures

    NASA Astrophysics Data System (ADS)

    Russell, Francis P.; Düben, Peter D.; Niu, Xinyu; Luk, Wayne; Palmer, T. N.

    2017-12-01

    Reconfigurable architectures are becoming mainstream: Amazon, Microsoft and IBM are supporting such architectures in their data centres. The computationally intensive nature of atmospheric modelling is an attractive target for hardware acceleration using reconfigurable computing. Performance of hardware designs can be improved through the use of reduced-precision arithmetic, but maintaining appropriate accuracy is essential. We explore reduced-precision optimisation for simulating chaotic systems, targeting atmospheric modelling, in which even minor changes in arithmetic behaviour will cause simulations to diverge quickly. The possibility of equally valid simulations having differing outcomes means that standard techniques for comparing numerical accuracy are inappropriate. We use the Hellinger distance to compare statistical behaviour between reduced-precision CPU implementations to guide reconfigurable designs of a chaotic system, then analyse accuracy, performance and power efficiency of the resulting implementations. Our results show that with only a limited loss in accuracy corresponding to less than 10% uncertainty in input parameters, the throughput and energy efficiency of a single-precision chaotic system implemented on a Xilinx Virtex-6 SX475T Field Programmable Gate Array (FPGA) can be more than doubled.
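
    As a rough sketch of the comparison metric used above, the Hellinger distance between two discrete distributions (e.g., histograms of a model diagnostic at two precision levels) is only a few lines:

        import numpy as np

        def hellinger(p, q):
            """Hellinger distance between two discrete probability distributions."""
            p = np.asarray(p, float); q = np.asarray(q, float)
            p /= p.sum(); q /= q.sum()                     # normalize histograms
            return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

        # Identical distributions give 0; disjoint ones give 1.
        print(hellinger([1, 2, 3], [1, 2, 3]))   # 0.0
        print(hellinger([1, 0], [0, 1]))         # 1.0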

  7. An FPGA-Based People Detection System

    NASA Astrophysics Data System (ADS)

    Nair, Vinod; Laprise, Pierre-Olivier; Clark, James J.

    2005-12-01

    This paper presents an FPGA-based system for detecting people from video. The system is designed to use JPEG-compressed frames from a network camera. Unlike previous approaches that use techniques such as background subtraction and motion detection, we use a machine-learning-based approach to train an accurate detector. We address the hardware design challenges involved in implementing such a detector, along with JPEG decompression, on an FPGA. We also present an algorithm that efficiently combines JPEG decompression with the detection process. This algorithm carries out the inverse DCT step of JPEG decompression only partially. Therefore, it is computationally more efficient and simpler to implement, and it takes up less space on the chip than the full inverse DCT algorithm. The system is demonstrated on an automated video surveillance application and the performance of both hardware and software implementations is analyzed. The results show that the system can detect people accurately at a rate of about [value not reproduced in this abstract] frames per second on a Virtex-II 2V1000 using a MicroBlaze processor running at [clock rate not reproduced], communicating with dedicated hardware over FSL links.
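
    The partial inverse DCT idea can be sketched with SciPy: keeping only the low-frequency quarter of an 8x8 JPEG block and inverting it at 4x4 size yields a half-resolution patch directly, cheaper than a full inverse followed by downsampling. This is a simplified software stand-in for the paper's hardware algorithm:

        import numpy as np
        from scipy.fft import dctn, idctn

        block = np.random.default_rng(1).integers(0, 256, (8, 8)).astype(float)
        coeffs = dctn(block, norm='ortho')          # full 8x8 DCT, as in JPEG

        # Partial inverse: invert only the top-left 4x4 (low-frequency) block.
        # The factor 1/2 compensates for the 8 -> 4 change of transform size.
        half_res = idctn(coeffs[:4, :4], norm='ortho') / 2.0

        full = idctn(coeffs, norm='ortho')          # reference: full reconstruction
        approx = full.reshape(4, 2, 4, 2).mean(axis=(1, 3))   # 2x2-averaged reference
        print(np.abs(half_res - approx).max())      # discrepancy comes only from the
                                                    # discarded high frequencies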

  8. Experience with a Genetic Algorithm Implemented on a Multiprocessor Computer

    NASA Technical Reports Server (NTRS)

    Plassman, Gerald E.; Sobieszczanski-Sobieski, Jaroslaw

    2000-01-01

    Numerical experiments were conducted to find out the extent to which a Genetic Algorithm (GA) may benefit from a multiprocessor implementation, considering, on one hand, that analyses of individual designs in a population are independent of each other and may therefore be executed concurrently on separate processors, and, on the other hand, that some operations in a GA cannot be so distributed. The algorithm experimented with was based on a Gaussian distribution rather than bit exchange in the GA reproductive mechanism, and the test case was a hub frame structure of up to 1080 design variables. The experiments, engaging up to 128 processors, confirmed expectations of radical elapsed-time reductions compared with a conventional single-processor implementation. They also demonstrated that the time spent in the non-distributable parts of the algorithm and the attendant cross-processor communication may have a very detrimental effect on the efficient utilization of the multiprocessor machine and on the number of processors that can be used effectively in a concurrent manner. Three techniques were devised and tested to mitigate that effect, raising efficiency to over 99 percent.
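
    For illustration, the pattern the experiments exploited, independent fitness evaluations farmed out to worker processes while selection and Gaussian-perturbation reproduction stay serial, can be sketched as follows; the fitness function and parameter values are toy placeholders:

        import numpy as np
        from multiprocessing import Pool

        def fitness(x):                 # stand-in for an expensive structural analysis
            return -np.sum((x - 0.5) ** 2)

        def evolve(pop_size=32, n_var=10, n_gen=20, sigma=0.1, seed=0):
            rng = np.random.default_rng(seed)
            pop = rng.random((pop_size, n_var))
            with Pool() as pool:
                for _ in range(n_gen):
                    # Distributable part: independent evaluations run concurrently.
                    scores = pool.map(fitness, list(pop))
                    # Serial part: rank, select parents, Gaussian perturbation.
                    order = np.argsort(scores)[::-1]
                    parents = pop[order[: pop_size // 2]]
                    children = parents + rng.normal(0.0, sigma, parents.shape)
                    pop = np.vstack([parents, children])
            return pop[np.argmax([fitness(x) for x in pop])]

        if __name__ == "__main__":      # guard required by multiprocessing
            print(np.round(evolve(), 2))   # best design approaches 0.5 per variable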

  9. Symbolic manipulation techniques for vibration analysis of laminated elliptic plates

    NASA Technical Reports Server (NTRS)

    Andersen, C. M.; Noor, A. K.

    1977-01-01

    A computational scheme is presented for the free vibration analysis of laminated composite elliptic plates. The scheme is based on Hamilton's principle, the Rayleigh-Ritz technique and symmetry considerations, and is implemented with the aid of the MACSYMA symbolic manipulation system. The MACSYMA system, through differentiation, integration, and simplification of analytic expressions, produces highly efficient FORTRAN code for the evaluation of the stiffness and mass coefficients. Multiple use is made of this code to obtain not only the frequencies and mode shapes of the plate, but also the derivatives of the frequencies with respect to various material and geometric parameters.
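
    The symbolic-to-FORTRAN workflow described here can be reproduced today with SymPy, a modern stand-in for MACSYMA; the expression and output name below are hypothetical:

        import sympy as sp

        x, y = sp.symbols('x y')
        # Symbolic energy-like expression; differentiate and simplify symbolically...
        expr = sp.sin(x * y) * x**2
        stiffness_term = sp.simplify(sp.diff(expr, x, 2))

        # ...then emit Fortran for the generated coefficient.
        print(sp.fcode(stiffness_term, assign_to='k11', standard=95))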

  10. Perspective: Rapid synthesis of complex oxides by combinatorial molecular beam epitaxy

    DOE PAGES

    A. T. Bollinger; Wu, J.; Bozovic, I.

    2016-03-15

    In this study, the molecular beam epitaxy (MBE) technique, well known for producing atomically smooth thin films as well as impeccable interfaces in multilayers of many different materials, is considered. In particular, molecular beam epitaxy is well suited to the growth of complex oxides, materials that hold promise for many applications. Rapid synthesis and high-throughput characterization techniques are needed to tap into that potential most efficiently. We discuss our approach to doing so, leaving behind the traditional one-growth-one-compound scheme and instead implementing combinatorial oxide molecular beam epitaxy in a custom-built system.

  11. Analysis of multigrid methods on massively parallel computers: Architectural implications

    NASA Technical Reports Server (NTRS)

    Matheson, Lesley R.; Tarjan, Robert E.

    1993-01-01

    We study the potential performance of multigrid algorithms running on massively parallel computers with the intent of discovering whether presently envisioned machines will provide an efficient platform for such algorithms. We consider the domain parallel version of the standard V cycle algorithm on model problems, discretized using finite difference techniques in two and three dimensions on block structured grids of size 10(exp 6) and 10(exp 9), respectively. Our models of parallel computation were developed to reflect the computing characteristics of the current generation of massively parallel multicomputers. These models are based on an interconnection network of 256 to 16,384 message-passing, 'workstation-size' processors executing in an SPMD mode. The first model accomplishes interprocessor communications through a multistage permutation network. The communication cost is a logarithmic function which is similar to the costs in a variety of different topologies. The second model allows single-stage communication costs only. Both models were designed with information provided by machine developers and utilize implementation-derived parameters. With the medium grain parallelism of the current generation and the high fixed cost of an interprocessor communication, our analysis suggests an efficient implementation requires the machine to support the efficient transmission of long messages (up to 1000 words), or the high initiation cost of a communication must be significantly reduced through an alternative optimization technique. Furthermore, with variable-length message capability, our analysis suggests the low-diameter multistage networks provide little or no advantage over a simple single-stage communications network.
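
    As an illustrative aside, the kind of latency/bandwidth (alpha-beta) cost model used in such analyses fits in a few lines; the parameter values below are placeholders, not the paper's machine parameters:

        # Simple alpha-beta model: time to send a message of m words.
        def msg_time(m, alpha=50e-6, beta=10e-9):   # placeholder: 50 us startup, 10 ns/word
            return alpha + beta * m

        # Halo-exchange cost per V-cycle for an n x n local block (4 neighbors/level).
        def vcycle_comm(n, levels):
            return sum(4 * msg_time(n // 2**l) for l in range(levels) if n // 2**l >= 1)

        # The fixed cost alpha dominates once messages get short on coarse levels.
        print(f"{vcycle_comm(1024, 10):.6f} s per V-cycle (model)")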

  12. Parallel computing techniques for rotorcraft aerodynamics

    NASA Astrophysics Data System (ADS)

    Ekici, Kivanc

    The modification of unsteady three-dimensional Navier-Stokes codes for application on massively parallel and distributed computing environments is investigated. The Euler/Navier-Stokes code TURNS (Transonic Unsteady Rotor Navier-Stokes) was chosen as a test bed because of its wide use by universities and industry. For the efficient implementation of TURNS on parallel computing systems, two algorithmic changes are developed. First, modifications to the implicit operator, Lower-Upper Symmetric Gauss-Seidel (LU-SGS), originally used in TURNS, are performed. Second, an inexact Newton method, coupled with a Krylov subspace iterative method (Newton-Krylov method), is applied. Both techniques have been tried previously for the Euler equations mode of the code. In this work, we have extended the methods to the Navier-Stokes mode. Several new implicit operators were tried because of convergence problems of traditional operators with the high cell aspect ratio (CAR) grids needed for viscous calculations on structured grids. Promising results for both Euler and Navier-Stokes cases are presented for these operators. For the efficient implementation of Newton-Krylov methods in the Navier-Stokes mode of TURNS, efficient preconditioners must be used. The parallel implicit operators used in the previous step are employed as preconditioners and the results are compared. The Message Passing Interface (MPI) protocol has been used because of its portability to various parallel architectures. It should be noted that the proposed methodology is general and can be applied to several other CFD codes (e.g. OVERFLOW).
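
    To make the Newton-Krylov idea concrete, a minimal Jacobian-free sketch with SciPy: the Jacobian-vector product is approximated by finite differences and handed to a Krylov solver, so the Jacobian is never formed. The residual below is a generic toy system, not the TURNS equations:

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, gmres

        def residual(u):                  # toy nonlinear residual, F(u) = 0
            return u ** 3 + 2.0 * u - 1.0

        u = np.zeros(50)
        for it in range(20):              # inexact Newton outer loop
            F = residual(u)
            if np.linalg.norm(F) < 1e-10:
                break
            eps = 1e-7
            # Jacobian-free product: J v ~ (F(u + eps v) - F(u)) / eps
            J = LinearOperator((u.size, u.size),
                               matvec=lambda v: (residual(u + eps * v) - F) / eps)
            du, info = gmres(J, -F)       # Krylov inner solve
            u += du

        print(round(float(u[0]), 4))      # root of u^3 + 2u - 1 = 0, about 0.4534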

  13. Efficient implementation of core-excitation Bethe-Salpeter equation calculations

    NASA Astrophysics Data System (ADS)

    Gilmore, K.; Vinson, John; Shirley, E. L.; Prendergast, D.; Pemmaraju, C. D.; Kas, J. J.; Vila, F. D.; Rehr, J. J.

    2015-12-01

    We present an efficient implementation of the Bethe-Salpeter equation (BSE) method for obtaining core-level spectra including X-ray absorption (XAS), X-ray emission (XES), and both resonant and non-resonant inelastic X-ray scattering spectra (N/RIXS). Calculations are based on density functional theory (DFT) electronic structures generated either by ABINIT or QuantumESPRESSO, both plane-wave basis, pseudopotential codes. This electronic structure is improved through the inclusion of a GW self-energy. The projector augmented wave technique is used to evaluate transition matrix elements between core-level and band states. Final two-particle scattering states are obtained with the NIST core-level BSE solver (NBSE). We have previously reported this implementation, which we refer to as OCEAN (Obtaining Core Excitations from Ab initio electronic structure and NBSE) (Vinson et al., 2011). Here, we present additional efficiencies that enable us to evaluate spectra for systems ten times larger than previously possible, containing up to a few thousand electrons. These improvements include the implementation of optimal basis functions that reduce the cost of the initial DFT calculations, more complete parallelization of the screening calculation and of the action of the BSE Hamiltonian, and various memory reductions. Scaling is demonstrated on supercells of SrTiO3 and example spectra for the organic light-emitting molecule Tris-(8-hydroxyquinoline)aluminum (Alq3) are presented. The ability to perform large-scale spectral calculations is particularly advantageous for investigating dilute or non-periodic systems such as doped materials, amorphous systems, or complex nano-structures.

  14. Efficiency of different methods of extra-cavity second harmonic generation of continuous wave single-frequency radiation.

    PubMed

    Khripunov, Sergey; Kobtsev, Sergey; Radnatarov, Daba

    2016-01-20

    This work presents, for the first time to the best of our knowledge, a comparative efficiency analysis among various techniques of extra-cavity second harmonic generation (SHG) of continuous-wave single-frequency radiation in non-periodically poled nonlinear crystals within a broad range of power levels. The efficiency of nonlinear radiation conversion at powers from 1 W to 10 kW was studied in three different configurations: with an external power-enhancement cavity, and without the cavity in the case of a single or double radiation pass through a nonlinear crystal. It is demonstrated that at power levels exceeding 1 kW the efficiencies of the methods with and without external power-enhancement cavities become comparable, whereas at even higher powers SHG by a single or double pass through a nonlinear crystal becomes preferable because of its relatively high conversion efficiency and fairly simple implementation.
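
    The conclusion about high powers follows from the standard undepleted-pump scaling for single-pass SHG (a textbook relation, not specific to this record), with \eta_{nl} the normalized conversion coefficient of the crystal:

        P_{2\omega} \approx \eta_{nl} \, P_{\omega}^{2}, \qquad
        \frac{P_{2\omega}}{P_{\omega}} \approx \eta_{nl} \, P_{\omega}

    Single-pass conversion efficiency therefore grows linearly with fundamental power and, at kilowatt-level powers, approaches what an enhancement cavity otherwise provides.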

  15. Numerical simulation of coupled electrochemical and transport processes in battery systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liaw, B.Y.; Gu, W.B.; Wang, C.Y.

    1997-12-31

    Advanced numerical modeling to simulate dynamic battery performance characteristics for several types of advanced batteries is being conducted using computational fluid dynamics (CFD) techniques. The CFD techniques provide efficient algorithms to solve a large set of highly nonlinear partial differential equations that represent the complex battery behavior governed by coupled electrochemical reactions and transport processes. The authors have recently successfully applied such techniques to model advanced lead-acid, Ni-Cd and Ni-MH cells. In this paper, the authors briefly discuss how the governing equations were numerically implemented, show some preliminary modeling results, and compare them with other modeling or experimental data reported in the literature. The authors describe the advantages and implications of using the CFD techniques and their capabilities in future battery applications.

  16. CORDIC-based digital signal processing (DSP) element for adaptive signal processing

    NASA Astrophysics Data System (ADS)

    Bolstad, Gregory D.; Neeld, Kenneth B.

    1995-04-01

    The High Performance Adaptive Weight Computation (HAWC) processing element is a CORDIC based application specific DSP element that, when connected in a linear array, can perform extremely high throughput (100s of GFLOPS) matrix arithmetic operations on linear systems of equations in real time. In particular, it very efficiently performs the numerically intense computation of optimal least squares solutions for large, over-determined linear systems. Most techniques for computing solutions to these types of problems have used either a hard-wired, non-programmable systolic array approach, or more commonly, programmable DSP or microprocessor approaches. The custom logic methods can be efficient, but are generally inflexible. Approaches using multiple programmable generic DSP devices are very flexible, but suffer from poor efficiency and high computation latencies, primarily due to the large number of DSP devices that must be utilized to achieve the necessary arithmetic throughput. The HAWC processor is implemented as a highly optimized systolic array, yet retains some of the flexibility of a programmable data-flow system, allowing efficient implementation of algorithm variations. This provides flexible matrix processing capabilities that are one to three orders of magnitude less expensive and more dense than the current state of the art, and more importantly, allows a realizable solution to matrix processing problems that were previously considered impractical to physically implement. HAWC has direct applications in RADAR, SONAR, communications, and image processing, as well as in many other types of systems.
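
    As an illustrative aside, the CORDIC primitive at the heart of such processing elements rotates a vector using only shifts and adds; a minimal floating-point sketch (hardware versions use fixed-point arithmetic and fold the gain K into the datapath):

        import math

        def cordic_rotate(x, y, theta, iters=24):
            """Rotate (x, y) by theta using shift-and-add CORDIC iterations."""
            angles = [math.atan(2.0 ** -i) for i in range(iters)]
            K = math.prod(1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i)) for i in range(iters))
            for i, a in enumerate(angles):
                d = 1.0 if theta >= 0.0 else -1.0    # steer residual angle toward zero
                x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
                theta -= d * a
            return K * x, K * y

        print(cordic_rotate(1.0, 0.0, math.pi / 3))  # ~ (cos 60deg, sin 60deg) = (0.5, 0.866)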

  17. Application of parallel distributed Lagrange multiplier technique to simulate coupled Fluid-Granular flows in pipes with varying Cross-Sectional area

    DOE PAGES

    Kanarska, Yuliya; Walton, Otis

    2015-11-30

    Fluid-granular flows are common phenomena in nature and industry. Here, an efficient computational technique based on the distributed Lagrange multiplier method is utilized to simulate complex fluid-granular flows. Each particle is explicitly resolved on an Eulerian grid as a separate domain, using solid volume fractions. The fluid equations are solved through the entire computational domain; however, Lagrange multiplier constraints are applied inside the particle domain such that the fluid within any volume associated with a solid particle moves as an incompressible rigid body. The particle-particle interactions are implemented using explicit force-displacement interactions for frictional inelastic particles, similar to the DEM method, with some modifications using the volume of the overlapping region as an input to the contact forces. Here, the parallel implementation of the method is based on the SAMRAI (Structured Adaptive Mesh Refinement Application Infrastructure) library.

  18. A 3D finite-difference BiCG iterative solver with the Fourier-Jacobi preconditioner for the anisotropic EIT/EEG forward problem.

    PubMed

    Turovets, Sergei; Volkov, Vasily; Zherdetsky, Aleksej; Prakonina, Alena; Malony, Allen D

    2014-01-01

    The Electrical Impedance Tomography (EIT) and electroencephalography (EEG) forward problems in anisotropic inhomogeneous media like the human head belong to the class of three-dimensional boundary value problems for elliptic equations with mixed derivatives. We introduce and explore the performance of several new promising numerical techniques, which seem to be more suitable for solving these problems. The proposed numerical schemes combine the fictitious domain approach with the finite-difference method and an optimally preconditioned Conjugate Gradient (CG)-type iterative method for treatment of the discrete model. The numerical scheme involves only the standard operations of summation and multiplication of sparse matrices and vectors, as well as the FFT, making it easy to implement and well suited to effective parallel implementation. Some typical use cases for the EIT/EEG problems are considered, demonstrating the high efficiency of the proposed numerical technique.
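
    The structure of such a preconditioned iterative scheme is easy to sketch with SciPy: the system matrix and the preconditioner are both supplied as operators. Here a 1D Laplacian and a simple Jacobi preconditioner stand in for the anisotropic EIT/EEG operator and the paper's Fourier-Jacobi preconditioner:

        import numpy as np
        from scipy.sparse import diags
        from scipy.sparse.linalg import LinearOperator, bicgstab

        n = 200
        A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format='csr')  # 1D Laplacian
        b = np.ones(n)

        # Jacobi preconditioner: multiply residuals by the inverse of diag(A).
        d_inv = 1.0 / A.diagonal()
        M = LinearOperator((n, n), matvec=lambda r: d_inv * r)

        x, info = bicgstab(A, b, M=M)
        print(info, np.linalg.norm(A @ x - b))   # info == 0 means converged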

  19. BFPTool: a software tool for analysis of Biomembrane Force Probe experiments.

    PubMed

    Šmít, Daniel; Fouquet, Coralie; Doulazmi, Mohamed; Pincet, Frédéric; Trembleau, Alain; Zapotocky, Martin

    2017-01-01

    The Biomembrane Force Probe is an approachable experimental technique commonly used for single-molecule force spectroscopy and experiments on biological interfaces. The technique operates in the range of forces from 0.1 pN to 1000 pN. Experiments are typically repeated many times, conditions are often not optimal, the captured video can be unstable and lose focus; this makes efficient analysis challenging, while out-of-the-box non-proprietary solutions are not freely available. This dedicated tool was developed to integrate and simplify the image processing and analysis of videomicroscopy recordings from BFP experiments. A novel processing feature, allowing the tracking of the pipette, was incorporated to address a limitation of preceding methods. Emphasis was placed on versatility and comprehensible user interface implemented in a graphical form. An integrated analytical tool was implemented to provide a faster, simpler and more convenient way to process and analyse BFP experiments.

  20. Development and Evaluation of a Performance Modeling Flight Test Approach Based on Quasi Steady-State Maneuvers

    NASA Technical Reports Server (NTRS)

    Yechout, T. R.; Braman, K. B.

    1984-01-01

    The development, implementation, and flight test evaluation of a performance modeling technique are described which requires only a limited amount of quasi-steady-state flight test data to predict the overall one-g performance characteristics of an aircraft. The concept definition phase of the program included development of: (1) the relationships for defining aerodynamic characteristics from quasi-steady-state maneuvers; (2) a simplified in-flight thrust and airflow prediction technique; (3) a flight test maneuvering sequence which efficiently provided definition of baseline aerodynamic and engine characteristics, including power effects on lift and drag; and (4) the algorithms necessary for cruise and flight trajectory predictions. Implementation of the concept included design of the overall flight test data flow, definition of instrumentation system and ground test requirements, development and verification of all applicable software, and consolidation of the overall requirements in a flight test plan.

  1. A parallel graded-mesh FDTD algorithm for human-antenna interaction problems.

    PubMed

    Catarinucci, Luca; Tarricone, Luciano

    2009-01-01

    The finite difference time domain (FDTD) method is frequently used for the numerical solution of a wide variety of electromagnetic (EM) problems and, among them, those concerning human exposure to EM fields. In many practical cases related to the assessment of occupational EM exposure, large simulation domains are modeled and high spatial resolution adopted, so that strong memory and central processing unit power requirements have to be satisfied. To better handle the computational effort, the use of parallel computing is a winning approach; alternatively, subgridding techniques are often implemented. However, the simultaneous use of subgridding schemes and parallel algorithms is very new. In this paper, an easy-to-implement and highly efficient parallel graded-mesh (GM) FDTD scheme is proposed and applied to human-antenna interaction problems, demonstrating its appropriateness in dealing with complex occupational tasks and showing its capability to guarantee the advantages of a traditional subgridding technique without affecting parallel FDTD performance.
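
    For illustration, the FDTD update being parallelized and graded here reduces, in one dimension and normalized units, to a pair of interleaved loops; a minimal uniform-mesh sketch (a graded mesh would make the cell size, and hence the update coefficients, position-dependent):

        import numpy as np

        nx, nt, c = 400, 600, 0.5             # cells, time steps, Courant number
        ez = np.zeros(nx)                     # electric field at cell centers
        hy = np.zeros(nx - 1)                 # magnetic field on the staggered Yee grid

        for t in range(nt):
            hy += c * (ez[1:] - ez[:-1])              # update H from the curl of E
            ez[1:-1] += c * (hy[1:] - hy[:-1])        # update E from the curl of H
            ez[nx // 2] += np.exp(-((t - 40) / 12.0) ** 2)   # soft Gaussian source

        print(f"peak |Ez| after {nt} steps: {np.abs(ez).max():.3f}")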

  2. Issues regarding the usage of MPPT techniques in micro grid systems

    NASA Astrophysics Data System (ADS)

    Szeidert, I.; Filip, I.; Dragan, F.; Gal, A.

    2018-01-01

    The main objective of the control strategies applied to hybrid micro grid systems (wind/hydro/solar) that operate on maximum power point tracking (MPPT) techniques is to improve the conversion system's efficiency and to preserve the quality of the generated electrical energy (voltage and power factor). One of the main goals of a maximum power point tracking strategy is to harvest the maximum possible energy within a certain time period. To implement the control strategies for a micro grid, specific transducers are typically required (wind speed sensors, optical rotational transducers, etc.). In the technical literature, several variants of MPPT techniques are presented and particularized to specific applications (wind energy conversion systems, solar systems, hydro plants, micro grid hybrid systems). Maximum power point tracking implementations are mainly based on a two-level architecture: the lower level controls the main variable, and the superior level implements the MPPT control structure. The paper presents micro grid structures developed at Politehnica University Timisoara (PUT) within the framework of a research grant. The paper focuses on the application of MPPT strategies to hybrid micro grid systems. Several structures and control strategies are presented, and their advantages and disadvantages are highlighted, together with practical implementation guidelines.
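
    As an illustrative aside, the most common of the MPPT techniques surveyed in this literature, perturb-and-observe, fits in a few lines; the PV-curve model below is a toy placeholder:

        def pv_power(v):                 # toy PV curve with a maximum near v = 30
            return max(0.0, 8.0 * v - 0.133 * v * v)

        def perturb_and_observe(v, v_prev, p, p_prev, step=0.5):
            """Move the operating voltage in the direction that increased power."""
            if (p - p_prev) * (v - v_prev) > 0:
                return v + step          # power rose: keep going the same way
            return v - step              # power dropped: reverse direction

        v_prev, v = 20.0, 20.5
        p_prev = pv_power(v_prev)
        for _ in range(100):
            p = pv_power(v)
            v, v_prev, p_prev = perturb_and_observe(v, v_prev, p, p_prev), v, p
        print(f"settled near v = {v:.1f} (true maximum at ~30.1)")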

  3. Implementation issues of the nearfield equivalent source imaging microphone array

    NASA Astrophysics Data System (ADS)

    Bai, Mingsian R.; Lin, Jia-Hong; Tseng, Chih-Wen

    2011-01-01

    This paper revisits a nearfield microphone-array technique, termed nearfield equivalent source imaging (NESI), proposed previously. In particular, various issues concerning the implementation of the NESI algorithm are examined. The NESI can be implemented in both the time domain and the frequency domain. Acoustical variables including sound pressure, particle velocity, active intensity and sound power are calculated by using multichannel inverse filters. Issues concerning sensor deployment are also investigated for the nearfield array. The uniform array outperformed a random array previously optimized for far-field imaging, which contradicts the conventional wisdom in far-field arrays. For applications in which only a patch array with scarce sensors is available, a virtual microphone approach is employed to ameliorate edge effects using extrapolation and to improve imaging resolution using interpolation. To enhance the processing efficiency of the time-domain NESI, an eigensystem realization algorithm (ERA) is developed. Several filtering methods are compared in terms of computational complexity. Significant savings in computation can be achieved using the ERA and the frequency-domain NESI, as compared to the traditional method. The NESI technique was also experimentally validated using practical sources, including a 125 cc scooter and a wooden box model with a loudspeaker fitted inside, and proved effective in identifying the broadband and non-stationary noise they produced.

  4. A Case for Application Oblivious Energy-Efficient MPI Runtime

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Venkatesh, Akshay; Vishnu, Abhinav; Hamidouche, Khaled

    Power has become the major impediment in designing large-scale high-end systems. The Message Passing Interface (MPI) is the de facto communication interface used as the back-end for designing applications, programming models and runtimes for these systems. Slack, the time spent by an MPI process in a single MPI call, provides a potential for energy and power savings if an appropriate power-reduction technique such as core idling or Dynamic Voltage and Frequency Scaling (DVFS) can be applied without perturbing the application's execution time. Existing techniques that exploit slack for power savings assume that application behavior repeats across iterations/executions. However, an increasing use of adaptive, data-dependent workloads combined with system factors (OS noise, congestion) makes this assumption invalid. This paper proposes and implements Energy Aware MPI (EAM), an application-oblivious energy-efficient MPI runtime. EAM uses a combination of communication models of common MPI primitives (point-to-point, collective, progress, blocking/non-blocking) and an online observation of slack for maximizing energy efficiency. Each power lever incurs a time overhead, which must be amortized over slack to minimize degradation. When predicted communication time exceeds a lever overhead, the lever is used as soon as possible, to maximize energy efficiency. When mis-prediction occurs, the lever(s) are used automatically at specific intervals for amortization. We implement EAM using MVAPICH2 and evaluate it on ten applications using up to 4096 processes. Our performance evaluation on an InfiniBand cluster indicates that EAM can reduce energy consumption by 5-41% in comparison to the default approach, with negligible (less than 4% in all cases) performance loss.
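
    The notion of slack that EAM exploits, time a rank spends waiting inside an MPI call, can be observed with a few lines of mpi4py; the power lever itself (DVFS or core idling) is platform-specific and only indicated by a comment. Run with two ranks. This sketches the concept only, not EAM's internals:

        # mpirun -n 2 python slack_demo.py
        import time
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        if rank == 0:
            time.sleep(0.2)              # simulated load imbalance
            comm.send(b'x' * 1024, dest=1)
        else:
            t0 = time.perf_counter()
            comm.recv(source=0)          # blocking call: the waiting time is slack
            slack = time.perf_counter() - t0
            # An EAM-style runtime would compare predicted slack against the
            # overhead of a power lever (e.g., DVFS) and engage it when worthwhile.
            print(f"rank 1 observed {slack * 1e3:.1f} ms of slack")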

  5. Adaptive gain, equalization, and wavelength stabilization techniques for silicon photonic microring resonator-based optical receivers

    NASA Astrophysics Data System (ADS)

    Palermo, Samuel; Chiang, Patrick; Yu, Kunzhi; Bai, Rui; Li, Cheng; Chen, Chin-Hui; Fiorentino, Marco; Beausoleil, Ray; Li, Hao; Shafik, Ayman; Titriku, Alex

    2016-03-01

    Interconnect architectures based on high-Q silicon photonic microring resonator devices offer a promising solution to address the dramatic increase in datacenter I/O bandwidth demands due to their ability to realize wavelength-division multiplexing (WDM) in a compact and energy-efficient manner. However, challenges exist in realizing efficient receivers for these systems due to varying per-channel link budgets, sensitivity requirements, and ring resonance wavelength shifts. This paper reports on adaptive optical receiver design techniques which address these issues and have been demonstrated in two hybrid-integrated prototypes based on microring drop filters and waveguide photodetectors implemented in a 130 nm SOI process and high-speed optical front-ends designed in 65 nm CMOS. A 10 Gb/s power-scalable architecture employs supply-voltage scaling of a three-inverter-stage transimpedance amplifier (TIA) that is adapted with an eye-monitor control loop to yield the necessary sensitivity for a given channel. As reduction of TIA input-referred noise is more critical at higher data rates, a 25 Gb/s design utilizes a large input-stage feedback resistor TIA cascaded with a continuous-time linear equalizer (CTLE) that compensates for the increased input pole. When tested with a waveguide Ge PD with 0.45 A/W responsivity, this topology achieves 25 Gb/s operation with -8.2 dBm sensitivity at a BER of 10^-12. In order to address the sensitivity of microring drop filters to fabrication tolerances and thermal variations, efficient wavelength-stabilization control loops are necessary. A peak-power-based monitoring loop which locks the drop filter to the input wavelength, while achieving compatibility with the high-speed TIA offset-correction feedback loop, is implemented with a 0.7 nm tuning range at 43 μW/GHz efficiency.

  6. Efficient privacy-preserving string search and an application in genomics

    PubMed Central

    Shimizu, Kana; Nuida, Koji; Rätsch, Gunnar

    2016-01-01

    Motivation: Personal genomes carry inherent privacy risks and protecting privacy poses major social and technological challenges. We consider the case where a user searches for genetic information (e.g. an allele) on a server that stores a large genomic database and aims to receive allele-associated information. The user would like to keep the query and result private, and the server would likewise like to keep the database private. Approach: We propose a novel approach that combines efficient string data structures such as the Burrows-Wheeler transform with cryptographic techniques based on additive homomorphic encryption. We assume that the sequence data is searchable via efficient iterative query operations over a large indexed dictionary, for instance one built from large genome collections using the (positional) Burrows-Wheeler transform. We use a technique called oblivious transfer, based on additive homomorphic encryption, to conceal the sequence query and the genomic region of interest in positional queries. Results: We designed and implemented an efficient algorithm for searching sequences of SNPs in large genome databases. During search, the user can only identify the longest match while the server does not learn which sequence of SNPs the user queried. In an experiment based on 2184 aligned haploid genomes from the 1000 Genomes Project, our algorithm was able to perform typical queries within ≈ 4.6 s and ≈ 10.8 s for the client and server side, respectively, on laptop computers. The presented algorithm is at least one order of magnitude faster than an exhaustive baseline algorithm. Availability and implementation: https://github.com/iskana/PBWT-sec and https://github.com/ratschlab/PBWT-sec. Contacts: shimizu-kana@aist.go.jp or Gunnar.Ratsch@ratschlab.org Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27153731
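
    The additive homomorphic property on which such protocols build can be demonstrated with the python-paillier library; this shows only the cryptographic primitive, not the paper's PBWT-based search protocol:

        from phe import paillier   # pip install phe

        public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

        # A server can add and scale ciphertexts without ever decrypting them.
        enc_a = public_key.encrypt(5)
        enc_b = public_key.encrypt(12)
        enc_sum = enc_a + enc_b            # homomorphic addition
        enc_scaled = enc_a * 3             # multiplication by a plaintext constant

        print(private_key.decrypt(enc_sum))     # 17
        print(private_key.decrypt(enc_scaled))  # 15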

  7. Steam generator tubing NDE performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Henry, G.; Welty, C.S. Jr.

    1997-02-01

    Steam generator (SG) non-destructive examination (NDE) is a fundamental element in the broader SG in-service inspection (ISI) process, a cornerstone in the management of PWR steam generators. Based on objective performance measures (tube leak forced outages and SG-related capacity factor loss), ISI performance has shown a continually improving trend over the years. Performance of the NDE element is a function of the fundamental capability of the technique, and the ability of the analysis portion of the process in field implementation of the technique. The technology continues to improve in several areas, e.g. system sensitivity, data collection rates, probe/coil design, and data analysis software. With these improvements comes the attendant requirement for qualification of the technique on the damage form(s) to which it will be applied, and for training and qualification of the data analysis element of the ISI process on the field implementation of the technique. The introduction of data transfer via fiber optic line allows for remote data acquisition and analysis, thus improving the efficiency of analysis for a limited pool of data analysts. This paper provides an overview of the current status of SG NDE, and identifies several important issues to be addressed.

  8. Closed-form Static Analysis with Inertia Relief and Displacement-Dependent Loads Using a MSC/NASTRAN DMAP Alter

    NASA Technical Reports Server (NTRS)

    Barnett, Alan R.; Widrick, Timothy W.; Ludwiczak, Damian R.

    1995-01-01

    Solving for the displacements of free-free coupled systems acted upon by static loads is commonly performed throughout the aerospace industry. Many times, these problems are solved using static analysis with inertia relief. This solution technique allows for a free-free static analysis by balancing the applied loads with inertia loads generated by the applied loads. For some engineering applications, the displacements of the free-free coupled system induce additional static loads. Hence, the applied loads are equal to the original loads plus displacement-dependent loads. Solving for the final displacements of such systems is commonly performed using iterative solution techniques. Unfortunately, these techniques can be time-consuming and labor-intensive. Since the coupled system equations for free-free systems with displacement-dependent loads can be written in closed-form, it is advantageous to solve for the displacements in this manner. Implementing closed-form equations in static analysis with inertia relief is analogous to implementing transfer functions in dynamic analysis. Using a MSC/NASTRAN DMAP Alter, displacement-dependent loads have been included in static analysis with inertia relief. Such an Alter has been used successfully to solve efficiently a common aerospace problem typically solved using an iterative technique.
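
    The closed form referred to above can be written compactly: if the applied load consists of the original load plus a load linear in the displacements, F = F_0 + A x, then static equilibrium (with K denoting the inertia-relief-balanced stiffness; the notation is assumed here for illustration) gives

        K x = F_0 + A x \;\Longrightarrow\; (K - A)\,x = F_0 \;\Longrightarrow\; x = (K - A)^{-1} F_0

    replacing the iterative update x_{k+1} = K^{-1}(F_0 + A x_k) with a single solve.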

  9. A robust adaptive flightpath reconstruction technique

    NASA Technical Reports Server (NTRS)

    Verhaegen, M. H.

    1986-01-01

    Computational schemes are presented that allow accurate reconstruction of an aircraft's flightpath in real time. The reconstruction of the flightpath is formulated as a linear state reconstruction problem, which can be solved via Kalman filtering (KF) techniques; this imposes some conditions upon the flight-test equipment. A reliable square-root covariance KF (SRCF) implementation is chosen and further developed into a fully adaptive flightpath reconstruction scheme. The basic SRCF is therefore modified to cope with several practical problems such as: automatic control of the convergence of the recursive KF calculations, time-varying zero-bias errors on the input signal of the system model used in the KF, and changing aircraft dynamics owing to a change in reference flight condition. The solutions developed for these problems are all implemented in a numerically stable way, which guarantees that the overall flightpath reconstruction scheme is robust. Furthermore, some special features of the system model used are exploited to make the algorithmic implementation very efficient. An experimental simulation study using simulated flight test data demonstrated these different capabilities.
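
    As a rough sketch, the predict/update cycle underlying such a reconstructor is compact in its conventional (non-square-root) form; the constant-velocity model and noise values below are toy assumptions, and the paper's SRCF variant would propagate a Cholesky factor of P instead, for numerical robustness:

        import numpy as np

        dt = 0.1
        F = np.array([[1, dt], [0, 1]])       # constant-velocity state transition
        H = np.array([[1.0, 0.0]])            # we observe position only
        Q = 1e-4 * np.eye(2)                  # process noise covariance
        R = np.array([[0.04]])                # measurement noise covariance

        x = np.zeros(2)                       # state: [position, velocity]
        P = np.eye(2)
        rng = np.random.default_rng(0)
        for k in range(100):
            z = 0.5 * k * dt + rng.normal(0, 0.2)   # noisy position measurement
            x = F @ x                               # predict
            P = F @ P @ F.T + Q
            S = H @ P @ H.T + R                     # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
            x = x + (K @ (z - H @ x)).ravel()       # update
            P = (np.eye(2) - K @ H) @ P
        print(np.round(x, 2))   # estimated [position, velocity]; velocity near 0.5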

  10. PLUM: Parallel Load Balancing for Unstructured Adaptive Meshes. Degree awarded by Colorado Univ.

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid

    1998-01-01

    Dynamic mesh adaption on unstructured grids is a powerful tool for computing large-scale problems that require grid modifications to efficiently resolve solution features. By locally refining and coarsening the mesh to capture physical phenomena of interest, such procedures make standard computational methods more cost effective. Unfortunately, an efficient parallel implementation of these adaptive methods is rather difficult to achieve, primarily due to the load imbalance created by the dynamically-changing nonuniform grid. This requires significant communication at runtime, leading to idle processors and adversely affecting the total execution time. Nonetheless, it is generally thought that unstructured adaptive-grid techniques will constitute a significant fraction of future high-performance supercomputing. Various dynamic load balancing methods have been reported to date; however, most of them either lack a global view of loads across processors or do not apply their techniques to realistic large-scale applications.

  11. Refractory metals for ARPS AMTEC cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Svedberg, R.C.; Sievers, R.C.

    1998-07-01

    Alkali Metal Thermal-to-Electric Converter (AMTEC) cells for the Advanced Radioisotope Power Systems (ARPS) program are being developed with refractory metals and alloys as the basic structural materials. AMTEC cell efficiency increases with cell operating temperature. For space applications, long-term reliability and high efficiency are essential, and refractory metals were selected because of their high-temperature strength, low vapor pressure, and compatibility with sodium. However, refractory metals are sensitive to oxygen, nitrogen and hydrogen contamination, and refractory metal cells cannot be processed in air. Because of this sensitivity, new manufacturing and processing techniques are being developed. In addition to structural elements, development of other refractory metal components for the AMTEC cells, such as the artery and evaporator wicks, pinchoff tubes and feedthroughs, is required. Changes in cell fabrication techniques and processing procedures being implemented to manufacture refractory metal cells are discussed.

  12. Noise reduction in optically controlled quantum memory

    NASA Astrophysics Data System (ADS)

    Ma, Lijun; Slattery, Oliver; Tang, Xiao

    2018-05-01

    Quantum memory is an essential tool for quantum communications systems and quantum computers. An important category of quantum memory, called optically controlled quantum memory, uses a strong classical beam to control the storage and re-emission of a single-photon signal through an atomic ensemble. In this type of memory, the residual light from the strong classical control beam can cause severe noise and degrade the system performance significantly. Efficiently suppressing this noise is a requirement for the successful implementation of optically controlled quantum memories. In this paper, we briefly introduce the latest and most common approaches to quantum memory and review the various noise-reduction techniques used in implementing them.

  13. Multiplexed genome engineering and genotyping methods: applications for synthetic biology and metabolic engineering.

    PubMed

    Wang, Harris H; Church, George M

    2011-01-01

    Engineering at the scale of whole genomes requires fundamentally new molecular biology tools. Recent advances in recombineering using synthetic oligonucleotides enable the rapid generation of mutants at high efficiency and specificity and can be implemented at the genome scale. With these techniques, libraries of mutants can be generated, from which individuals with functionally useful phenotypes can be isolated. Furthermore, populations of cells can be evolved in situ by directed evolution using complex pools of oligonucleotides. Here, we discuss ways to utilize these multiplexed genome engineering methods, with special emphasis on experimental design and implementation. Copyright © 2011 Elsevier Inc. All rights reserved.

  14. Coupling efficiency of laser beam to multimode fiber for free space optical communication

    NASA Astrophysics Data System (ADS)

    Arisa, Suguru; Takayama, Yoshihisa; Endo, Hiroyuki; Shimizu, Ryosuke; Fujiwara, Mikio; Sasaki, Masahide

    2017-11-01

    Recently, free space optical (FSO) communications have been widely studied as an alternative for large-capacity communications and for possible implementation in satellite and terrestrial laser links. In satellite communications, clouds can strongly attenuate the laser signal, which can lead to high bit-error rates or temporary unavailability of the link. To overcome cloud coverage effects, a site diversity technique is often implemented. When using multiple ground stations, though, a simplified optical system is required to allow the use of more flexible approaches. In terrestrial laser communications, several methods for optical system simplification using a multimode fiber (MMF) have been proposed.

  15. A single chip VLSI Reed-Solomon decoder

    NASA Technical Reports Server (NTRS)

    Shao, H. M.; Truong, T. K.; Hsu, I. S.; Deutsch, L. J.; Reed, I. S.

    1986-01-01

    A new VLSI design of a pipeline Reed-Solomon decoder is presented. The transform decoding technique used in a previous design is replaced by a time-domain algorithm. A new architecture that implements such an algorithm permits efficient pipeline processing with minimum circuitry. A systolic array is also developed to perform erasure corrections in the new design. A modified form of Euclid's algorithm is implemented by a new architecture that maintains the throughput rate with less circuitry. These improvements result in both enhanced capability and a significant reduction in silicon area, making it possible to build a pipeline (31,15) RS decoder on a single VLSI chip.

  16. MSIX - A general and user-friendly platform for RAM analysis

    NASA Astrophysics Data System (ADS)

    Pan, Z. J.; Blemel, Peter

    The authors present a CAD (computer-aided design) platform supporting RAM (reliability, availability, and maintainability) analysis with efficient system description and alternative evaluation. The design concepts, implementation techniques, and application results are described. This platform is user-friendly owing to its graphic environment, drawing facilities, object orientation, self-tutoring, and access to the operating system. The programs' independence and portability make them generally applicable to various analysis tasks.

  17. Current and Future Applications of Machine Learning for the US Army

    DTIC Science & Technology

    2018-04-13

    designing from the unwieldy application of the first principles of flight controls, aerodynamics, blade propulsion, and so on, the designers turned...when the number of features runs into millions can become challenging. To overcome these issues, regularization techniques have been developed which...and compiled to run efficiently on either CPU or GPU architectures. 5) Keras63 is a library that contains numerous implementations of commonly used

  18. Using multi-level remote sensing and ground data to estimate forest biomass resources in remote regions: a case study in the boreal forests of interior Alaska

    Treesearch

    Hans-Erik Andersen; Strunk Jacob; Hailemariam Temesgen; Donald Atwood; Ken Winterberger

    2012-01-01

    The emergence of a new generation of remote sensing and geopositioning technologies, as well as increased capabilities in image processing, computing, and inferential techniques, have enabled the development and implementation of increasingly efficient and cost-effective multilevel sampling designs for forest inventory. In this paper, we (i) describe the conceptual...

  19. Apparatus for high flux photocatalytic pollution control using a rotating fluidized bed reactor

    DOEpatents

    Tabatabaie-Raissi, Ali; Muradov, Nazim Z.; Martin, Eric

    2003-06-24

    An apparatus based on optimizing photoprocess energetics by decoupling the process energy efficiency from the destruction and removal efficiency (DRE) for target contaminants. The technique is applicable to both low- and high-flux photoreactor design and scale-up. An apparatus for high-flux photocatalytic pollution control is based on the implementation of multifunctional metal oxide aerogels and other media in conjunction with a novel rotating fluidized particle-bed reactor.

  20. The experience sampling method: Investigating students' affective experience

    NASA Astrophysics Data System (ADS)

    Nissen, Jayson M.; Stetzer, MacKenzie R.; Shemwell, Jonathan T.

    2013-01-01

    Improving non-cognitive outcomes such as attitudes, efficacy, and persistence in physics courses is an important goal of physics education. This investigation implemented an in-the-moment surveying technique called the Experience Sampling Method (ESM) [1] to measure students' affective experience in physics. Measurements included: self-efficacy, cognitive efficiency, activation, intrinsic motivation, and affect. Data are presented that show contrasts in students' experiences (e.g., in physics vs. non-physics courses).

  1. Solar Trees: First Large-Scale Demonstration of Fully Solution Coated, Semitransparent, Flexible Organic Photovoltaic Modules.

    PubMed

    Berny, Stephane; Blouin, Nicolas; Distler, Andreas; Egelhaaf, Hans-Joachim; Krompiec, Michal; Lohr, Andreas; Lozman, Owen R; Morse, Graham E; Nanson, Lana; Pron, Agnieszka; Sauermann, Tobias; Seidler, Nico; Tierney, Steve; Tiwana, Priti; Wagner, Michael; Wilson, Henry

    2016-05-01

    The technology behind a large-area array of flexible solar cells with a unique design and semitransparent blue appearance is presented. These modules are implemented in a solar tree installation at the German pavilion at EXPO2015 in Milan, Italy. The modules show power conversion efficiencies of 4.5% and are produced exclusively using standard printing techniques for large-scale production.

  2. Energy-saving drying and its application

    NASA Astrophysics Data System (ADS)

    Kovbasyuk, V. I.

    2015-09-01

    Superheated steam is efficiently applied as a drying agent for the intensification of drying, which is an important component of many up-to-date technologies. However, traditional drying is extremely energy-consuming, and many drying apparatus are environmentally unfriendly. It is therefore important to implement the proposed drying technique, which uses superheated steam at a pressure significantly higher than atmospheric, with the steam subsequently routed to a turbine for electric power generation to compensate the energy cost of drying. This paper includes a brief thermodynamic analysis of the technique, its environmental advantages, and the possible benefits of using wet wastes and obtaining high-quality fuels from wet raw materials. A scheme is developed for protecting the turbine from impurities that can enter the steam during drying. Additional potential advantages of the technique are the absence of heating surfaces in contact with wet media, the absence of emissions to the atmosphere, and the use of low-potential heat for desalination and water purification. The new drying technique can also play an extremely important part in the thermal destruction of anthropogenic wastes. In spite of the promotion of waste sorting to obtain valuable secondary raw materials, the main problem of big cities is non-utilizable waste, which makes up no less than 85% of the starting quantity; this can only be fully addressed by combustion, which applies even more strongly to sewage sludge utilization. Wastes can be safely and efficiently combusted only if they are free of moisture. Combustion temperature optimization makes possible the full destruction of dioxins and their toxic analogues.

  3. Comment on "An Efficient and Stable Hydrodynamic Model With Novel Source Term Discretization Schemes for Overland Flow and Flood Simulations" by Xilin Xia et al.

    NASA Astrophysics Data System (ADS)

    Lu, Xinhua; Mao, Bing; Dong, Bingjiang

    2018-01-01

    Xia et al. (2017) proposed a novel, fully implicit method for the discretization of the bed friction terms for solving the shallow-water equations. The friction terms contain h^(-7/3) (h denotes the water depth), which may become extremely large and introduce machine error as h approaches zero. To address this problem, Xia et al. (2017) introduce auxiliary variables (their equations (37) and (38)) so that h^(-4/3) rather than h^(-7/3) is calculated, and solve a transformed equation (their equation (39)). The introduced auxiliary variables require extra storage. We analyzed the magnitude of the friction terms and found that, taken as a whole, these terms do not exceed the machine floating-point range; we therefore propose a simple-to-implement technique that splits h^(-7/3) across different parts of the friction terms to avoid introducing machine error. This technique needs no extra storage and no transformed equation, and is thus more efficient for simulations. We also show that the surface reconstruction method proposed by Xia et al. (2017) may lead to predictions with spurious wiggles because the reconstructed Riemann states may misrepresent the water gravitational effect.
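
    For a Manning-type friction law (the common closure in such models, assumed here for illustration), a splitting of this kind amounts to distributing the depth powers across factors:

        S_f = -\,\frac{g\,n^2\,q\,\lvert q\rvert}{h^{7/3}}
            = -\,g\,n^2 \left(\frac{q}{h^{7/6}}\right)\!\left(\frac{\lvert q\rvert}{h^{7/6}}\right)

    Each factor q / h^{7/6} remains within floating-point range as h approaches zero, because the discharge q itself vanishes with the depth.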

  4. Circulating tumor cell detection: A direct comparison between negative and unbiased enrichment in lung cancer.

    PubMed

    Xu, Yan; Liu, Biao; Ding, Fengan; Zhou, Xiaodie; Tu, Pin; Yu, Bo; He, Yan; Huang, Peilin

    2017-06-01

    Circulating tumor cells (CTCs), isolated as a 'liquid biopsy', may provide important diagnostic and prognostic information. Therefore, rapid, reliable and unbiased detection of CTCs is required for routine clinical analyses. It has been demonstrated that negative enrichment, an epithelial marker-independent technique for isolating CTCs, exhibits better efficiency in the detection of CTCs compared with positive enrichment techniques that rely only on specific anti-epithelial cell adhesion molecule antibodies. However, negative enrichment techniques incur significant cell loss during the isolation procedure, and as methods that use only one type of antibody they are inherently biased. The detection procedure and identification of cell types also rely on skilled and experienced technicians. In the present study, the detection sensitivities of negative enrichment and of a previously described unbiased detection method were compared. The results revealed that the unbiased detection method may efficiently detect >90% of cancer cells in blood samples containing CTCs. By contrast, only 40-60% of CTCs were detected by negative enrichment. Additionally, CTCs were identified in >65% of patients with stage I/II lung cancer. This simple yet efficient approach may achieve a high level of sensitivity, demonstrating potential for the large-scale clinical implementation of CTC-based diagnostic and prognostic strategies.

  5. Hospitals Productivity Measurement Using Data Envelopment Analysis Technique.

    PubMed

    Torabipour, Amin; Najarzadeh, Maryam; Arab, Mohammad; Farzianpour, Freshteh; Ghasemzadeh, Roya

    2014-11-01

    This study aimed to measure hospital productivity using the data envelopment analysis (DEA) technique and Malmquist indices. This is a cross-sectional study in which panel data were used over a 4-year period from 2007 to 2010. The research was implemented in 12 teaching and non-teaching hospitals of Ahvaz County. The data envelopment analysis technique and the Malmquist indices with an input-orientation approach were used to analyze the data and estimate productivity. Data were analyzed using the SPSS 18 and DEAP 2 software. Six hospitals (50%) had a value lower than 1, which represents an increase in total productivity; the other hospitals were non-productive. The average total productivity factor (TPF) was 1.024 for all hospitals, which represents a decrease in efficiency of 2.4% from 2007 to 2010. The average technical, technological, scale and managerial efficiency changes were 0.989, 1.008, 1.028, and 0.996, respectively. There was no significant difference in mean productivity changes between teaching and non-teaching hospitals (P>0.05), except in 2009. The productivity of the hospitals generally showed an increasing trend; however, the total average productivity decreased. Moreover, among the several components of total productivity, variation in technological efficiency had the greatest impact on the reduction of the total average productivity.
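
    As an illustrative aside, the input-oriented DEA efficiency score used in such studies comes from one small linear program per hospital (the CCR envelopment form); a minimal SciPy sketch with made-up data:

        import numpy as np
        from scipy.optimize import linprog

        # Toy data: columns are hospitals (DMUs); rows are inputs / outputs.
        X = np.array([[5.0, 8.0, 6.0, 9.0],      # e.g., beds
                      [300, 500, 400, 600]])     # e.g., staff
        Y = np.array([[200, 280, 260, 290]])     # e.g., discharges

        def ccr_input_efficiency(j0):
            m, n = X.shape[0], X.shape[1]
            c = np.r_[1.0, np.zeros(n)]                   # minimize theta
            # inputs: sum_j lam_j * x_ij <= theta * x_i,j0
            A_in = np.hstack([-X[:, [j0]], X])
            # outputs: sum_j lam_j * y_rj >= y_r,j0
            A_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y])
            res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                          b_ub=np.r_[np.zeros(m), -Y[:, j0]],
                          bounds=[(None, None)] + [(0, None)] * n, method='highs')
            return res.fun                                # theta = efficiency score

        for j in range(X.shape[1]):
            print(f"hospital {j}: efficiency {ccr_input_efficiency(j):.3f}")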

  6. Pyroelectric Energy Scavenging Techniques for Self-Powered Nuclear Reactor Wireless Sensor Networks

    DOE PAGES

    Hunter, Scott Robert; Lavrik, Nickolay V; Datskos, Panos G; ...

    2014-11-01

    Recent advances in technologies for harvesting waste thermal energy from ambient environments present an opportunity to implement truly wireless sensor nodes in nuclear power plants. These sensors could continue to operate during extended station blackouts and during periods when operation of the plant's internal power distribution system has been disrupted. The energy required to power the wireless sensors must be generated using energy harvesting techniques from locally available energy sources, and the energy consumption within the sensor circuitry must therefore be low to minimize power and hence the size requirements of the energy harvester. Harvesting electrical energy from thermal energy sources can be achieved using pyroelectric or thermoelectric conversion techniques. Recent modeling and experimental studies have shown that pyroelectric techniques can be cost competitive with thermoelectrics in self-powered wireless sensor applications and, using new temperature cycling techniques, have the potential to be several times as efficient as thermoelectrics under comparable operating conditions. The development of a new thermal energy harvester concept, based on temperature-cycled pyroelectric thermal-to-electrical energy conversion, is outlined. This paper outlines the modeling of cantilever and pyroelectric structures and single element devices that demonstrate the potential of this technology for the development of high efficiency thermal-to-electrical energy conversion devices.

  7. Pyroelectric Energy Scavenging Techniques for Self-Powered Nuclear Reactor Wireless Sensor Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunter, Scott Robert; Lavrik, Nickolay V; Datskos, Panos G

    Recent advances in technologies for harvesting waste thermal energy from ambient environments present an opportunity to implement truly wireless sensor nodes in nuclear power plants. These sensors could continue to operate during extended station blackouts and during periods when operation of the plant's internal power distribution system has been disrupted. The energy required to power the wireless sensors must be generated using energy harvesting techniques from locally available energy sources, and the energy consumption within the sensor circuitry must therefore be low to minimize power and hence the size requirements of the energy harvester. Harvesting electrical energy from thermal energy sources can be achieved using pyroelectric or thermoelectric conversion techniques. Recent modeling and experimental studies have shown that pyroelectric techniques can be cost competitive with thermoelectrics in self-powered wireless sensor applications and, using new temperature cycling techniques, have the potential to be several times as efficient as thermoelectrics under comparable operating conditions. The development of a new thermal energy harvester concept, based on temperature-cycled pyroelectric thermal-to-electrical energy conversion, is outlined. This paper outlines the modeling of cantilever and pyroelectric structures and single element devices that demonstrate the potential of this technology for the development of high efficiency thermal-to-electrical energy conversion devices.

  8. Review of combined isotopic and optical nanoscopy

    PubMed Central

    Richter, Katharina N.; Rizzoli, Silvio O.; Jähne, Sebastian; Vogts, Angela; Lovric, Jelena

    2017-01-01

    Investigating the detailed substructure of the cell is beyond the ability of conventional optical microscopy. Electron microscopy, therefore, has been the only option for such studies for several decades. The recent implementation of several super-resolution optical microscopy techniques has rendered the investigation of cellular substructure easier and more efficient. Nevertheless, optical microscopy only provides an image of the present structure of the cell, without any information on its changes over time. These can be investigated by combining super-resolution optics with a nonoptical imaging technique, nanoscale secondary ion mass spectrometry, which investigates the isotopic composition of the samples. The resulting technique, combined isotopic and optical nanoscopy, enables the investigation of both the structure and the “history” of the cellular elements. The age and the turnover of cellular organelles can be read by isotopic imaging, while the structure can be analyzed by optical (fluorescence) approaches. We present these technologies, and we discuss their implementation for the study of biological samples. We conclude that, albeit complex, this type of technology is reliable enough for mass application to cell biology. PMID:28466025

  9. Surgical virtual reality - highlights in developing a high performance surgical haptic device.

    PubMed

    Custură-Crăciun, D; Cochior, D; Constantinoiu, S; Neagu, C

    2013-01-01

    Just as simulators are a standard in aviation and aerospace sciences, we expect surgical simulators to soon become a standard in medical applications. These will correctly instruct future doctors in surgical techniques without the need for hands-on patient instruction. Using virtual reality to digitally transpose surgical procedures changes surgery in a revolutionary manner by offering possibilities for implementing new, much more efficient learning methods, by allowing the practice of new surgical techniques, and by improving surgeons' abilities and skills. Perfecting haptic devices has opened the door to a series of opportunities in the fields of research, industry, nuclear science and medicine. Concepts purely theoretical at first, such as telerobotics, telepresence or telerepresentation, have become a practical reality as computing techniques, telecommunications and haptic devices evolved, allowing virtual reality to take a new leap. In the field of surgery, barriers and controversies still remain regarding the implementation and generalization of surgical virtual simulators. These obstacles remain connected to the high costs of this not yet sufficiently developed technology, especially in the domain of haptic devices.

  10. Agents Based e-Commerce and Securing Exchanged Information

    NASA Astrophysics Data System (ADS)

    Al-Jaljouli, Raja; Abawajy, Jemal

    Mobile agents have been implemented in e-Commerce to search and filter information of interest from electronic markets. When the information is very sensitive and critical, it is important to develop a novel security protocol that can efficiently protect the information from malicious tampering as well as unauthorized disclosure, or at least detect any malicious act of intruders. In this chapter, we describe robust security techniques that ensure sound security of information gathered throughout an agent's itinerary against various security attacks, including truncation attacks. A sound security protocol is described, which implements the various security techniques that would jointly prevent, or at least detect, any malicious act of intruders. We reason about the soundness of the protocol using the Symbolic Trace Analyzer (STA), a formal verification tool based on symbolic techniques. We analyze the protocol in key configurations and show that it is free of flaws. We also show that the protocol fulfils the various security requirements of exchanged information in multi-agent systems (MAS), including data integrity, data confidentiality, data authenticity, origin confidentiality and data non-repudiability.

  11. A Strassen-Newton algorithm for high-speed parallelizable matrix inversion

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Ferguson, Helaman R. P.

    1988-01-01

    Techniques are described for computing matrix inverses by algorithms that are highly suited to massively parallel computation. The techniques are based on an algorithm suggested by Strassen (1969). Variations of this scheme use matrix Newton iterations and other methods to improve the numerical stability while at the same time preserving a very high level of parallelism. One-processor Cray-2 implementations of these schemes range from one that is up to 55 percent faster than a conventional library routine to one that is slower than a library routine but achieves excellent numerical stability. The problem of computing the solution to a single set of linear equations is discussed, and it is shown that this problem can also be solved efficiently using these techniques.
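
    A sketch of the Newton-iteration ingredient: the Newton-Schulz update X_{k+1} = X_k(2I - A X_k) converges quadratically to the inverse and consists only of matrix multiplies, which is exactly what lets a Strassen-style fast multiply (stood in for here by plain NumPy multiplies) carry the whole computation; the starting guess below is a classical choice, not necessarily the paper's:

    import numpy as np

    def newton_inverse(A, iters=50):
        """Matrix inversion by the Newton (Newton-Schulz) iteration
        X <- X (2I - A X); every step is just matrix multiplication."""
        n = A.shape[0]
        # classical start that guarantees convergence
        X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
        I2 = 2.0 * np.eye(n)
        for _ in range(iters):
            X = X @ (I2 - A @ X)
        return X

    A = np.random.rand(4, 4) + 4 * np.eye(4)   # well-conditioned test matrix
    print(np.allclose(newton_inverse(A) @ A, np.eye(4), atol=1e-8))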

  12. Microfabrication of low-loss lumped-element Josephson circuits for non-reciprocal and parametric devices

    NASA Astrophysics Data System (ADS)

    Cicak, Katarina; Lecocq, Florent; Ranzani, Leonardo; Peterson, Gabriel A.; Kotler, Shlomi; Teufel, John D.; Simmonds, Raymond W.; Aumentado, Jose

    Recent developments in coupled mode theory have opened the doors to new nonreciprocal amplification techniques that can be directly leveraged to produce high quantum efficiency in current measurements in microwave quantum information. However, taking advantage of these techniques requires flexible multi-mode circuit designs comprised of low-loss materials that can be implemented using common fabrication techniques. In this talk we discuss the design and fabrication of a new class of multi-pole lumped-element superconducting parametric amplifiers based on Nb/Al-AlOx/Nb Josephson junctions on silicon or sapphire. To reduce intrinsic loss in these circuits we utilize PECVD amorphous silicon as a low-loss dielectric (tan δ ≈ 5 × 10^-4), resulting in nearly quantum-limited directional amplification.

  13. a Spatiotemporal Aggregation Query Method Using Multi-Thread Parallel Technique Based on Regional Division

    NASA Astrophysics Data System (ADS)

    Liao, S.; Chen, L.; Li, J.; Xiong, W.; Wu, Q.

    2015-07-01

    Existing spatiotemporal databases support spatiotemporal aggregation queries over massive moving-object datasets. Due to the large amount of data and single-threaded processing methods, the query speed cannot meet application requirements. On the other hand, query efficiency is more sensitive to spatial variation than to temporal variation. In this paper, we proposed a spatiotemporal aggregation query method using a multi-thread parallel technique based on regional division and implemented it on the server. Concretely, we divided the spatiotemporal domain into several spatiotemporal cubes, computed the spatiotemporal aggregation on all cubes using multi-thread parallel processing, and then integrated the query results. Testing and analysis on real datasets show that this method improves the query speed significantly.
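
    A minimal sketch of the divide-aggregate-merge pattern described above; the cube sizes, the count aggregate and the thread pool are illustrative assumptions, not the paper's server implementation:

    from concurrent.futures import ThreadPoolExecutor
    from collections import Counter

    def cube_key(x, y, t, dx=10.0, dy=10.0, dt=3600.0):
        """Map a point to its spatiotemporal cube."""
        return (int(x // dx), int(y // dy), int(t // dt))

    def aggregate(points):              # points: list of (x, y, t)
        return Counter(cube_key(*p) for p in points)

    def parallel_aggregate(points, workers=4):
        chunks = [points[i::workers] for i in range(workers)]
        total = Counter()
        with ThreadPoolExecutor(max_workers=workers) as pool:
            for partial in pool.map(aggregate, chunks):
                total.update(partial)   # integrate per-thread results
        return total

    pts = [(i * 3.7 % 100, i * 5.1 % 100, i * 60.0) for i in range(10000)]
    print(len(parallel_aggregate(pts)))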

  14. Deep Learning for Flow Sculpting: Insights into Efficient Learning using Scientific Simulation Data

    PubMed Central

    Stoecklein, Daniel; Lore, Kin Gwn; Davies, Michael; Sarkar, Soumik; Ganapathysubramanian, Baskar

    2017-01-01

    A new technique for shaping microfluid flow, known as flow sculpting, offers an unprecedented level of passive fluid flow control, with potential breakthrough applications in advancing manufacturing, biology, and chemistry research at the microscale. However, efficiently solving the inverse problem of designing a flow sculpting device for a desired fluid flow shape remains a challenge. Current approaches struggle with the many-to-one design space, requiring substantial user interaction and built-up intuition, all of which is time and resource intensive. Deep learning has emerged as an efficient function approximation technique for high-dimensional spaces, and presents a fast solution to the inverse problem, yet the science of its implementation in similarly defined problems remains largely unexplored. We propose that deep learning methods can completely outpace current approaches for scientific inverse problems while delivering comparable designs. To this end, we show how intelligent sampling of the design space inputs can make deep learning methods more competitive in accuracy, while illustrating their generalization capability to out-of-sample predictions. PMID:28402332

  15. Speeding Up Ecological and Evolutionary Computations in R; Essentials of High Performance Computing for Biologists

    PubMed Central

    Visser, Marco D.; McMahon, Sean M.; Merow, Cory; Dixon, Philip M.; Record, Sydne; Jongejans, Eelke

    2015-01-01

    Computation has become a critical component of research in biology. A risk has emerged that computational and programming challenges may limit research scope, depth, and quality. We review various solutions to common computational efficiency problems in ecological and evolutionary research. Our review pulls together material that is currently scattered across many sources and emphasizes those techniques that are especially effective for typical ecological and environmental problems. We demonstrate how straightforward it can be to write efficient code and implement techniques such as profiling or parallel computing. We supply a newly developed R package (aprof) that helps to identify computational bottlenecks in R code and determine whether optimization can be effective. Our review is complemented by a practical set of examples and detailed Supporting Information material (S1–S3 Texts) that demonstrate large improvements in computational speed (ranging from 10.5 times to 14,000 times faster). By improving computational efficiency, biologists can feasibly solve more complex tasks, ask more ambitious questions, and include more sophisticated analyses in their research. PMID:25811842
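
    The paper's workflow is R-based (the aprof package plus R's parallel tools); the same profile-first, parallelize-second discipline looks like this in a Python sketch, with a toy workload assumed:

    import cProfile, pstats
    from multiprocessing import Pool

    def simulate(seed):
        """Stand-in for an expensive per-replicate computation."""
        s = 0.0
        for i in range(200000):
            s += ((seed + i) * 2654435761 % 2**32) / 2**32
        return s

    def run_serial(seeds):
        return [simulate(s) for s in seeds]

    if __name__ == "__main__":
        # Step 1: profile to confirm where the time actually goes.
        pr = cProfile.Profile()
        pr.runcall(run_serial, range(8))
        pstats.Stats(pr).sort_stats("cumulative").print_stats(3)
        # Step 2: only then parallelise the verified bottleneck.
        with Pool(4) as pool:
            results = pool.map(simulate, range(8))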

  16. Scoping Study of Machine Learning Techniques for Visualization and Analysis of Multi-source Data in Nuclear Safeguards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Yonggang

    In the implementation of nuclear safeguards, many different techniques are used to monitor the operation of nuclear facilities and safeguard nuclear materials, ranging from radiation detectors, flow monitors, video surveillance, satellite imagers and digital seals to open-source searches and reports of onsite inspections/verifications. Each technique measures one or more unique properties related to nuclear materials or operation processes. Because these data sets have no or only loose correlations, it could be beneficial to analyze them together to improve the effectiveness and efficiency of safeguards processes. Advanced visualization techniques and machine-learning based multi-modality analysis could be effective tools in such integrated analysis. In this project, we will conduct a survey of existing visualization and analysis techniques for multi-source data and assess their potential value in nuclear safeguards.

  17. Parallel 3D Mortar Element Method for Adaptive Nonconforming Meshes

    NASA Technical Reports Server (NTRS)

    Feng, Huiyu; Mavriplis, Catherine; VanderWijngaart, Rob; Biswas, Rupak

    2004-01-01

    High order methods are frequently used in computational simulation for their high accuracy. An efficient way to avoid unnecessary computation in smooth regions of the solution is to use adaptive meshes which employ fine grids only in areas where they are needed. Nonconforming spectral elements allow the grid to be flexibly adjusted to satisfy the computational accuracy requirements. The method is suitable for computational simulations of unsteady problems with very disparate length scales or unsteady moving features, such as heat transfer, fluid dynamics or flame combustion. In this work, we select the Mortar Element Method (MEM) to handle the nonconforming interfaces between elements. A new technique is introduced to efficiently implement MEM on 3-D nonconforming meshes. By introducing an "intermediate mortar", the proposed method decomposes the projection between 3-D elements and mortars into two steps. In each step, projection matrices derived in 2-D are used. The two-step method avoids explicitly forming/deriving large projection matrices for 3-D meshes, and also helps to simplify the implementation. This new technique can be used for both h- and p-type adaptation. The method is applied to an unsteady 3-D moving heat source problem. With our new MEM implementation, mesh adaptation is able to efficiently refine the grid near the heat source and coarsen the grid once the heat source passes. The savings in computational work resulting from the dynamic mesh adaptation are demonstrated by the reduction in the number of elements used and the CPU time spent. MEM and mesh adaptation, respectively, bring irregularity and dynamics to the computer memory access pattern. Hence, they provide a good way to gauge the performance of computer systems when running scientific applications whose memory access patterns are irregular and unpredictable. We select a 3-D moving heat source problem as the Unstructured Adaptive (UA) grid benchmark, a new component of the NAS Parallel Benchmarks (NPB). In this paper, we present some interesting performance results of our OpenMP parallel implementation on different architectures such as the SGI Origin2000, SGI Altix, and Cray MTA-2.
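
    For tensor-product bases, the two-step idea can be seen in miniature: the face-to-mortar projection factorizes into two small 1-D operators, so the large Kronecker-product matrix never has to be formed. The matrices and sizes below are illustrative stand-ins, not the paper's operators:

    import numpy as np

    n, m = 8, 12                      # element-face and mortar resolutions
    P = np.random.rand(m, n)          # 1-D projection in xi (stand-in values)
    Q = np.random.rand(m, n)          # 1-D projection in eta
    F = np.random.rand(n, n)          # data on the element face

    # one-step: build the full (m*m, n*n) matrix -- memory-hungry
    G_full = (np.kron(Q, P) @ F.reshape(-1)).reshape(m, m)

    # two-step: apply the small 1-D matrices one direction at a time
    G_two = Q @ F @ P.T

    print(np.allclose(G_full, G_two))   # True: same projection, no big matrix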

  18. Non-orthogonal spin-adaptation of coupled cluster methods: A new implementation of methods including quadruple excitations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthews, Devin A., E-mail: dmatthews@utexas.edu; Stanton, John F.

    2015-02-14

    The theory of non-orthogonal spin-adaptation for closed-shell molecular systems is applied to coupled cluster methods with quadruple excitations (CCSDTQ). Calculations at this level of detail are of critical importance in describing the properties of molecular systems to an accuracy which can meet or exceed modern experimental techniques. Such calculations are of significant (and growing) importance in such fields as thermodynamics, kinetics, and atomic and molecular spectroscopies. With respect to the implementation of CCSDTQ and related methods, we show that there are significant advantages to non-orthogonal spin-adaptation with respect to simplification and factorization of the working equations and to creating an efficient implementation. The resulting algorithm is implemented in the CFOUR program suite for CCSDT, CCSDTQ, and various approximate methods (CCSD(T), CC3, CCSDT-n, and CCSDT(Q)).

  19. High-Density Liquid-State Machine Circuitry for Time-Series Forecasting.

    PubMed

    Rosselló, Josep L; Alomar, Miquel L; Morro, Antoni; Oliver, Antoni; Canals, Vincent

    2016-08-01

    Spiking neural networks (SNN) are the latest neural network generation, which tries to mimic the real behavior of biological neurons. Although most research in this area is done through software applications, it is in hardware implementations that the intrinsic parallelism of these computing systems is most efficiently exploited. Liquid state machines (LSM) have arisen as a strategic technique for implementing recurrent SNN designs with a simple learning methodology. In this work, we show a new low-cost methodology to implement high-density LSM by using Boolean gates. The proposed method is based on the use of probabilistic computing concepts to reduce hardware requirements, thus considerably increasing the neuron count per chip. The result is a highly functional system that is applied to high-speed time series forecasting.
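
    A sketch of the probabilistic computing idea the design rests on: encoding values as random bitstreams lets a single AND gate act as a multiplier. This is a generic stochastic computing illustration, not the paper's circuit:

    import numpy as np

    rng = np.random.default_rng(0)

    def to_stream(p, n_bits=100000):
        """Encode probability p as a random bitstream."""
        return rng.random(n_bits) < p

    a, b = 0.8, 0.5
    product_stream = to_stream(a) & to_stream(b)   # one AND gate per bit
    print(product_stream.mean())                   # approx a*b = 0.4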

  20. Improving the precision of a robotic arm for a deburring application [Amélioration de la précision d'un bras robotisé pour une application d'ébavurage]

    NASA Astrophysics Data System (ADS)

    Mailhot, David

    Process automation is an increasingly popular solution for tasks that are complex, tedious or even dangerous for humans. Flexibility, low cost and compactness make industrial robots very attractive for automation. Even though many developments have been made to enhance robot performance, robots still cannot meet some industries' requirements. For instance, the aerospace industry requires very tight tolerances on a large variety of parts, which is not what robots were originally designed for. When it comes to robotic deburring, robot imprecision is a major problem that needs to be addressed before it can be implemented in production. This master's thesis explores different calibration techniques for a robot's dimensions that could overcome the problem and make the robotic deburring application possible. Several calibration techniques that are easy to implement in a production environment are simulated and compared. A calibration technique for the tool's dimensions is simulated and implemented to evaluate its potential. The most efficient technique is used within the application. Finally, the production environment and requirements are explained. The remaining imprecision is compensated by the use of a force/torque sensor integrated with the robot's controller and by the use of a camera. Many tests are made to define the best parameters for deburring a specific feature on a chosen part. Concluding tests are shown and demonstrate the potential of robotic deburring. Keywords: robotic calibration, robotic arm, robotic precision, robotic deburring

  1. Parallelizing serial code for a distributed processing environment with an application to high frequency electromagnetic scattering

    NASA Astrophysics Data System (ADS)

    Work, Paul R.

    1991-12-01

    This thesis investigates the parallelization of existing serial programs in computational electromagnetics for use in a parallel environment. Existing algorithms for calculating the radar cross section of an object are covered, and a ray-tracing code is chosen for implementation on a parallel machine. Current parallel architectures are introduced and a suitable parallel machine is selected for the implementation of the chosen ray-tracing algorithm. The standard techniques for the parallelization of serial codes are discussed, including load balancing and decomposition considerations, and appropriate methods for the parallelization effort are selected. A load balancing algorithm is modified to increase the efficiency of the application, and a high level design of the structure of the serial program is presented. A detailed design of the modifications for the parallel implementation is also included, with both the high level and the detailed design specified in a high level design language called UNITY. The correctness of the design is proven using UNITY and standard logic operations. The theoretical and empirical results show that it is possible to achieve an efficient parallel application for a serial computational electromagnetic program where the characteristics of the algorithm and the target architecture critically influence the development of such an implementation.

  2. Nanooptics for highly efficient photon management

    NASA Astrophysics Data System (ADS)

    Wyrowski, Frank; Schimmel, Hagen

    2005-09-01

    Optical systems for photon management, that is, the generation of tailored electromagnetic fields, constitute one of the keys to innovation through photonics. An important subfield of photon management deals with the transformation of an incident light field into a field of specified intensity distribution. In this paper we consider some basic aspects of the nature of systems for those light transformations. It turns out that the transversal redistribution of energy (TRE) is of central concern for achieving systems with high transformation efficiency. Besides established techniques, nanostructured optical elements (NOE) are demanded to implement transversal energy redistribution. This builds a bridge between the needs of photon management, optical engineering, and nanooptics.

  3. Design of Efficient Mirror Adder in Quantum-Dot Cellular Automata

    NASA Astrophysics Data System (ADS)

    Mishra, Prashant Kumar; Chattopadhyay, Manju K.

    2018-03-01

    Low power consumption is an essential demand for portable multimedia systems using digital signal processing algorithms and architectures. Quantum-dot cellular automata (QCA) is an emerging nanotechnology for the development of high-performance, ultra-dense, low-power digital circuits. Several efficient QCA-based binary and decimal arithmetic circuits have been implemented; however, important improvements are still possible. This paper demonstrates mirror adder circuit design in QCA. We present a comparative study of mirror adder cells designed using the conventional CMOS technique and mirror adder cells designed using quantum-dot cellular automata. QCA-based mirror adders are better in terms of area by a factor of three.
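
    For reference, a full adder reduces to three-input majority gates, the native QCA primitive; the decomposition below is one well-known majority-logic formulation (not necessarily this paper's mirror-adder layout), verified exhaustively:

    from itertools import product

    def M(a, b, c):                 # three-input majority vote, the QCA primitive
        return (a & b) | (b & c) | (a & c)

    # Cout = M(A, B, Cin)
    # Sum  = M(not Cout, Cin, M(A, B, not Cin))
    for a, b, cin in product((0, 1), repeat=3):
        cout = M(a, b, cin)
        s = M(1 - cout, cin, M(a, b, 1 - cin))
        assert s == (a + b + cin) % 2 and cout == (a + b + cin) // 2
    print("majority-gate adder verified for all 8 input cases")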

  4. Connectivity-enhanced route selection and adaptive control for the Chevrolet Volt

    DOE PAGES

    Gonder, Jeffrey; Wood, Eric; Rajagopalan, Sai

    2016-01-01

    The National Renewable Energy Laboratory and General Motors evaluated connectivity-enabled efficiency enhancements for the Chevrolet Volt. A high-level model was developed to predict vehicle fuel and electricity consumption based on driving characteristics and vehicle state inputs. These techniques were leveraged to optimize energy efficiency via green routing and intelligent control mode scheduling, which were evaluated using prospective driving routes between tens of thousands of real-world origin/destination pairs. The overall energy savings potential of green routing and intelligent mode scheduling was estimated at 5% and 3%, respectively. Furthermore, these represent substantial opportunities considering that they only require software adjustments to implement.

  5. Simulation of a Novel Single-column Cryogenic Air Separation Process Using LNG Cold Energy

    NASA Astrophysics Data System (ADS)

    Jieyu, Zheng; Yanzhong, Li; Guangpeng, Li; Biao, Si

    In this paper, a novel single-column air separation process is proposed with the implementation of a heat pump technique and the introduction of LNG cold energy. The proposed process is verified and optimized through simulation on the Aspen Hysys® platform. Simulation results reveal that the power consumption per unit mass of liquid product is around 0.218 kWh/kg, and the total exergy efficiency of the system is 0.575. According to the latest literature, an energy saving of 39.1% is achieved compared with processes using conventional double-column air separation units. The introduction of LNG cold energy is an effective way to increase the system efficiency.

  6. Data-driven Applications for the Sun-Earth System

    NASA Astrophysics Data System (ADS)

    Kondrashov, D. A.

    2016-12-01

    Advances in observational and data mining techniques allow extracting information from the large volume of Sun-Earth observational data that can be assimilated into first principles physical models. However, equations governing Sun-Earth phenomena are typically nonlinear, complex, and high-dimensional. The high computational demand of solving the full governing equations over a large range of scales precludes the use of a variety of useful assimilative tools that rely on applied mathematical and statistical techniques for quantifying uncertainty and predictability. Effective use of such tools requires the development of computationally efficient methods to facilitate fusion of data with models. This presentation will provide an overview of various existing as well as newly developed data-driven techniques adopted from atmospheric and oceanic sciences that proved to be useful for space physics applications, such as computationally efficient implementation of Kalman Filter in radiation belts modeling, solar wind gap-filling by Singular Spectrum Analysis, and low-rank procedure for assimilation of low-altitude ionospheric magnetic perturbations into the Lyon-Fedder-Mobarry (LFM) global magnetospheric model. Reduced-order non-Markovian inverse modeling and novel data-adaptive decompositions of Sun-Earth datasets will be also demonstrated.

  7. Image processing using Gallium Arsenide (GaAs) technology

    NASA Technical Reports Server (NTRS)

    Miller, Warner H.

    1989-01-01

    The need to increase the information return from space-borne imaging systems has grown in the past decade. The use of multi-spectral data has resulted in the need for finer spatial resolution and greater spectral coverage. Onboard signal processing will be necessary in order to utilize the available Tracking and Data Relay Satellite System (TDRSS) communication channel at high efficiency. A generally recognized approach to increasing the efficiency of channel usage is through data compression techniques. The compression technique implemented is a differential pulse code modulation (DPCM) scheme with a non-uniform quantizer. The need to advance the state of the art of onboard processing was recognized, and a GaAs integrated circuit technology was chosen. An Adaptive Programmable Processor (APP) chip set was developed which is based on an 8-bit slice general processor. The reason for choosing the compression technique for the Multi-spectral Linear Array (MLA) instrument is described. Also given is a description of the GaAs integrated circuit chip set, which demonstrates that data compression can be performed onboard in real time at data rates on the order of 500 Mb/s.
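
    A minimal sketch of a DPCM coder with a non-uniform quantizer, assuming a previous-sample predictor and illustrative quantizer levels (the flight design's predictor and level set are not specified here):

    import numpy as np

    # coarser steps for large errors: a simple non-uniform quantizer
    LEVELS = np.array([-48, -20, -8, -2, 2, 8, 20, 48], dtype=float)

    def quantize(e):
        return LEVELS[np.argmin(np.abs(LEVELS - e))]

    def dpcm_encode_decode(row):
        codes, recon, pred = [], [], 0.0
        for x in row:
            q = quantize(x - pred)   # code only the prediction error
            codes.append(q)
            pred = pred + q          # decoder tracks the same predictor
            recon.append(pred)
        return np.array(codes), np.array(recon)

    row = np.array([100., 102., 110., 130., 131., 90.])
    codes, recon = dpcm_encode_decode(row)
    print(recon)   # reconstruction tracks the input within quantizer error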

  8. Signal-to-noise ratio enhancement on SEM images using a cubic spline interpolation with Savitzky-Golay filters and weighted least squares error.

    PubMed

    Kiani, M A; Sim, K S; Nia, M E; Tso, C P

    2015-05-01

    A new technique based on cubic spline interpolation with Savitzky-Golay smoothing using a weighted least squares error filter is developed for scanning electron microscope (SEM) images. A diversity of sample images was captured, and the performance is found to be better when compared with the moving average and the standard median filters with respect to eliminating noise. This technique can be implemented efficiently on real-time SEM images, with all mandatory data for processing obtained from a single image. Noise in images, and particularly in SEM images, is undesirable. We apply the combined technique to single-image signal-to-noise ratio estimation and noise reduction for an SEM imaging system. This autocorrelation-based technique requires image details to be correlated over a few pixels, whereas the noise is assumed to be uncorrelated from pixel to pixel. The noise component is derived from the difference between the image autocorrelation at zero offset and the estimation of the corresponding original autocorrelation. In the test cases involving different images, the efficiency of the developed noise reduction filter is proved to be significantly better than that of the other methods. Noise can be reduced efficiently with an appropriate choice of scan rate from real-time SEM images, without generating corruption or increasing scanning time.
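
    The two named ingredients are available off the shelf; a sketch on a synthetic 1-D scan line, with the window length and polynomial order as illustrative assumptions:

    import numpy as np
    from scipy.signal import savgol_filter
    from scipy.interpolate import CubicSpline

    x = np.arange(256, dtype=float)
    line = np.sin(x / 20.0) + np.random.default_rng(1).normal(0, 0.2, x.size)

    smoothed = savgol_filter(line, window_length=15, polyorder=3)
    fine_x = np.linspace(0, 255, 1024)               # 4x denser sampling
    resampled = CubicSpline(x, smoothed)(fine_x)     # spline interpolation
    print(resampled.shape)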

  9. Selected Systems Engineering Process Deficiencies and Their Consequences

    NASA Technical Reports Server (NTRS)

    Thomas, Lawrence Dale

    2006-01-01

    The systems engineering process is well established and well understood. While this statement could be argued in light of the many systems engineering guidelines that have been developed, a comparative review of these respective descriptions reveals that they differ primarily in the number of discrete steps or other nuances, and are at their core essentially common. Likewise, the systems engineering textbooks differ primarily in the context for application of systems engineering or in the utilization of evolved tools and techniques, not in the basic method. Thus, failures in systems engineering cannot credibly be attributed to implementation of the wrong systems engineering process among alternatives. However, numerous systems failures can be attributed to deficient implementation of the systems engineering process. What may clearly be perceived as a systems engineering deficiency in retrospect can appear to be a well considered systems engineering efficiency in real time - an efficiency taken to reduce cost or meet a schedule, or more often both. Typically these efficiencies are grounded on apparently solid rationale, such as reuse of heritage hardware or software. Over time, unintended consequences of a systems engineering process deficiency may begin to be realized, and unfortunately often the consequence is system failure. This paper describes several actual cases of system failures that resulted from deficiencies in their systems engineering process implementation, including the Ariane 5 and the Hubble Space Telescope.

  10. Selected systems engineering process deficiencies and their consequences

    NASA Astrophysics Data System (ADS)

    Thomas, L. Dale

    2007-06-01

    The systems engineering process is well established and well understood. While this statement could be argued in light of the many systems engineering guidelines that have been developed, a comparative review of these respective descriptions reveals that they differ primarily in the number of discrete steps or other nuances, and are at their core essentially common. Likewise, the systems engineering textbooks differ primarily in the context for application of systems engineering or in the utilization of evolved tools and techniques, not in the basic method. Thus, failures in systems engineering cannot credibly be attributed to implementation of the wrong systems engineering process among alternatives. However, numerous system failures can be attributed to deficient implementation of the systems engineering process. What may clearly be perceived as a systems engineering deficiency in retrospect can appear to be a well considered systems engineering efficiency in real time - an efficiency taken to reduce cost or meet a schedule, or more often both. Typically these efficiencies are grounded on apparently solid rationale, such as reuse of heritage hardware or software. Over time, unintended consequences of a systems engineering process deficiency may begin to be realized, and unfortunately often the consequence is systems failure. This paper describes several actual cases of system failures that resulted from deficiencies in their systems engineering process implementation, including the Ariane 5 and the Hubble Space Telescope.

  11. A novel pipeline based FPGA implementation of a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Thirer, Nonel

    2014-05-01

    To solve problems for which an analytical solution is not available, more and more bio-inspired computation techniques have been applied in recent years. One efficient algorithm is the Genetic Algorithm (GA), which imitates the biological evolution process, finding the solution through the mechanism of "natural selection", where the strong have higher chances to survive. A genetic algorithm is an iterative procedure which operates on a population of individuals called "chromosomes" or "possible solutions" (usually represented by a binary code). The GA performs several processes on the population individuals to produce a new population, as in biological evolution. To provide a high-speed solution, pipelined FPGA hardware implementations are used, with an n-stage pipeline for an n-phase genetic algorithm. FPGA pipeline implementations are constrained by the different execution times of each stage and by the FPGA chip resources. To minimize these difficulties, we propose a bio-inspired technique that modifies the crossover step by using non-identical twins: two of the chosen chromosomes (parents) build up two new chromosomes (children), not only one as in the classical GA. We analyze the contribution of this method to reducing the execution time in asynchronous and synchronous pipelines, and also the possibility of a cheaper FPGA implementation by using smaller populations. The full hardware architecture of an FPGA implementation for our target Altera development card is presented and analyzed.
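
    A sketch of the "non-identical twins" crossover described above, assuming a simple bit-list encoding and a uniform random cut point (both our assumptions):

    import random

    def crossover_twins(p1, p2):
        """One-point crossover returning two complementary children."""
        cut = random.randint(1, len(p1) - 1)
        return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

    random.seed(3)
    a = [1, 1, 1, 1, 1, 1, 1, 1]
    b = [0, 0, 0, 0, 0, 0, 0, 0]
    print(crossover_twins(a, b))   # two children per crossover step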

  12. A flexible and accurate digital volume correlation method applicable to high-resolution volumetric images

    NASA Astrophysics Data System (ADS)

    Pan, Bing; Wang, Bo

    2017-10-01

    Digital volume correlation (DVC) is a powerful technique for quantifying interior deformation within solid opaque materials and biological tissues. In the last two decades, great efforts have been made to improve the accuracy and efficiency of the DVC algorithm. However, there is still a lack of a flexible, robust and accurate version that can be efficiently implemented in personal computers with limited RAM. This paper proposes an advanced DVC method that can realize accurate full-field internal deformation measurement applicable to high-resolution volume images with up to billions of voxels. Specifically, a novel layer-wise reliability-guided displacement tracking strategy combined with dynamic data management is presented to guide the DVC computation from slice to slice. The displacements at specified calculation points in each layer are computed using the advanced 3D inverse-compositional Gauss-Newton algorithm with the complete initial guess of the deformation vector accurately predicted from the computed calculation points. Since only limited slices of interest in the reference and deformed volume images rather than the whole volume images are required, the DVC calculation can thus be efficiently implemented on personal computers. The flexibility, accuracy and efficiency of the presented DVC approach are demonstrated by analyzing computer-simulated and experimentally obtained high-resolution volume images.

  13. Energy-Efficient Wide Datapath Integer Arithmetic Logic Units Using Superconductor Logic

    NASA Astrophysics Data System (ADS)

    Ayala, Christopher Lawrence

    Complementary Metal-Oxide-Semiconductor (CMOS) technology is currently the most widely used integrated circuit technology. As CMOS approaches the physical limitations of scaling, it is unclear whether or not it can provide long-term support for niche areas such as high-performance computing and telecommunication infrastructure, particularly with the emergence of cloud computing. Alternatively, superconductor technologies based on Josephson junction (JJ) switching elements, such as Rapid Single Flux Quantum (RSFQ) logic and especially its new variant, Energy-Efficient Rapid Single Flux Quantum (ERSFQ) logic, have the capability to provide an ultra-high-speed, low power platform for digital systems. The objective of this research is to design and evaluate energy-efficient, high-speed 32-bit integer Arithmetic Logic Units (ALUs) implemented using RSFQ and ERSFQ logic as the first steps towards achieving practical Very-Large-Scale-Integration (VLSI) complexity in digital superconductor electronics. First, a tunable VHDL superconductor cell library is created to provide a mechanism to conduct design exploration and evaluation of superconductor digital circuits from the perspectives of functionality, complexity, performance, and energy-efficiency. Second, hybrid wave-pipelining techniques developed earlier for wide datapath RSFQ designs have been used for efficient arithmetic and logic circuit implementations. To develop the core foundation of the ALU, the ripple-carry adder and the Kogge-Stone parallel prefix carry look-ahead adder are studied as representative candidates on opposite ends of the design spectrum. By combining the high-performance features of the Kogge-Stone structure and the low complexity of the ripple-carry adder, a 32-bit asynchronous wave-pipelined hybrid sparse-tree ALU has been designed and evaluated using the VHDL cell library tuned to HYPRES' gate-level characteristics. The designs and techniques from this research have been implemented using RSFQ logic and prototype chips have been fabricated. As a joint work with HYPRES, a 20 GHz 8-bit Kogge-Stone ALU consisting of 7,950 JJs total has been fabricated using a 1.5 μm, 4.5 kA/cm² process and fully demonstrated. An 8-bit sparse-tree ALU (8,832 JJs total) and a 16-bit sparse-tree adder (12,785 JJs total) have also been fabricated using a 1.0 μm, 10 kA/cm² process and demonstrated under collaboration with Yokohama National University and Nagoya University (Japan).
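
    A word-level sketch of the Kogge-Stone parallel-prefix scheme at the heart of the design: carries emerge from log2(width) combine stages rather than a 32-stage ripple chain. The mapping to RSFQ/ERSFQ gates and wave-pipelining is beyond this illustration:

    import random

    WIDTH = 32
    MASK = (1 << WIDTH) - 1

    def kogge_stone_add(a, b):
        g, p = a & b, a ^ b              # bitwise generate / propagate
        gg, pp, d = g, p, 1
        while d < WIDTH:
            gg = gg | (pp & (gg << d))   # prefix combine, doubling span
            pp = pp & (pp << d)
            d <<= 1
        carries = (gg << 1) & MASK       # carry into bit i is prefix G of 0..i-1
        return ((a ^ b) ^ carries) & MASK

    random.seed(0)
    for _ in range(1000):
        x, y = random.getrandbits(WIDTH), random.getrandbits(WIDTH)
        assert kogge_stone_add(x, y) == (x + y) & MASK
    print("Kogge-Stone prefix adder matches modular addition")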

  14. Spectral interpolation - Zero fill or convolution. [image processing

    NASA Technical Reports Server (NTRS)

    Forman, M. L.

    1977-01-01

    Zero fill, or augmentation by zeros, is a method used in conjunction with fast Fourier transforms to obtain spectral spacing at intervals closer than obtainable from the original input data set. In the present paper, an interpolation technique (interpolation by repetitive convolution) is proposed which yields values accurate enough for plotting purposes and which lie within the limits of calibration accuracies. The technique is shown to operate faster than zero fill, since fewer operations are required. The major advantages of interpolation by repetitive convolution are that efficient use of memory is possible (thus avoiding the difficulties encountered in decimation-in-time FFTs) and that it is easy to implement.
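
    For contrast, zero fill itself is only a few lines: padding the spectrum and inverse-transforming evaluates the same band-limited signal on a denser grid, at the cost of a full-size inverse FFT (which the convolution method avoids). A sketch, assuming no energy at the original Nyquist bin:

    import numpy as np

    def zero_fill_interpolate(x, factor=4):
        n = len(x)
        X = np.fft.rfft(x)
        Xpad = np.zeros(n // 2 * factor + 1, dtype=complex)
        Xpad[: X.size] = X                     # spectrum augmented by zeros
        return np.fft.irfft(Xpad, n * factor) * factor

    t = np.arange(32)
    x = np.cos(2 * np.pi * 3 * t / 32)
    dense = zero_fill_interpolate(x)
    print(np.allclose(dense[::4], x, atol=1e-9))   # original samples preserved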

  15. On the Performance of an Algebraic Multigrid Solver on Multicore Clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, A H; Schulz, M; Yang, U M

    2010-04-29

    Algebraic multigrid (AMG) solvers have proven to be extremely efficient on distributed-memory architectures. However, when executed on modern multicore cluster architectures, we face new challenges that can significantly harm AMG's performance. We discuss our experiences on such an architecture and present a set of techniques that help users to overcome the associated problems, including thread and process pinning and correct memory associations. We have implemented most of the techniques in a MultiCore SUPport library (MCSup), which helps to map OpenMP applications to multicore machines. We present results using both an MPI-only and a hybrid MPI/OpenMP model.

  16. Large scale GW calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Govoni, Marco; Galli, Giulia

    We present GW calculations of molecules, ordered and disordered solids and interfaces, which employ an efficient contour deformation technique for frequency integration and do not require the explicit evaluation of virtual electronic states nor the inversion of dielectric matrices. We also present a parallel implementation of the algorithm, which takes advantage of separable expressions of both the single particle Green’s function and the screened Coulomb interaction. The method can be used starting from density functional theory calculations performed with semilocal or hybrid functionals. The newly developed technique was applied to GW calculations of systems of unprecedented size, including water/semiconductor interfaces with thousands of electrons.

  17. Large Scale GW Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Govoni, Marco; Galli, Giulia

    We present GW calculations of molecules, ordered and disordered solids and interfaces, which employ an efficient contour deformation technique for frequency integration and do not require the explicit evaluation of virtual electronic states nor the inversion of dielectric matrices. We also present a parallel implementation of the algorithm which takes advantage of separable expressions of both the single particle Green's function and the screened Coulomb interaction. The method can be used starting from density functional theory calculations performed with semilocal or hybrid functionals. We applied the newly developed technique to GW calculations of systems of unprecedented size, including water/semiconductor interfaces with thousands of electrons.

  18. Assume-Guarantee Abstraction Refinement Meets Hybrid Systems

    NASA Technical Reports Server (NTRS)

    Bogomolov, Sergiy; Frehse, Goran; Greitschus, Marius; Grosu, Radu; Pasareanu, Corina S.; Podelski, Andreas; Strump, Thomas

    2014-01-01

    Compositional verification techniques in the assume-guarantee style have been successfully applied to transition systems to efficiently reduce the search space by leveraging the compositional nature of the systems under consideration. We adapt these techniques to the domain of hybrid systems with affine dynamics. To build assumptions we introduce an abstraction based on location merging. We integrate the assume-guarantee style analysis with automatic abstraction refinement. We have implemented our approach in the symbolic hybrid model checker SpaceEx. The evaluation shows its practical potential. To the best of our knowledge, this is the first work combining assume-guarantee reasoning with automatic abstraction-refinement in the context of hybrid automata.

  19. Scalable nuclear density functional theory with Sky3D

    NASA Astrophysics Data System (ADS)

    Afibuzzaman, Md; Schuetrumpf, Bastian; Aktulga, Hasan Metin

    2018-02-01

    In nuclear astrophysics, quantum simulations of large inhomogeneous dense systems as they appear in the crusts of neutron stars present big challenges. The number of particles in a simulation with periodic boundary conditions is strongly limited due to the immense computational cost of the quantum methods. In this paper, we describe techniques for an efficient and scalable parallel implementation of Sky3D, a nuclear density functional theory solver that operates on an equidistant grid. Presented techniques allow Sky3D to achieve good scaling and high performance on a large number of cores, as demonstrated through detailed performance analysis on a Cray XC40 supercomputer.

  20. Large scale GW calculations

    DOE PAGES

    Govoni, Marco; Galli, Giulia

    2015-01-12

    We present GW calculations of molecules, ordered and disordered solids and interfaces, which employ an efficient contour deformation technique for frequency integration and do not require the explicit evaluation of virtual electronic states nor the inversion of dielectric matrices. We also present a parallel implementation of the algorithm, which takes advantage of separable expressions of both the single particle Green’s function and the screened Coulomb interaction. The method can be used starting from density functional theory calculations performed with semilocal or hybrid functionals. The newly developed technique was applied to GW calculations of systems of unprecedented size, including water/semiconductor interfaces with thousands of electrons.

  1. A hierarchical structure for automatic meshing and adaptive FEM analysis

    NASA Technical Reports Server (NTRS)

    Kela, Ajay; Saxena, Mukul; Perucchio, Renato

    1987-01-01

    A new algorithm for generating automatically, from solid models of mechanical parts, finite element meshes that are organized as spatially addressable quaternary trees (for 2-D work) or octal trees (for 3-D work) is discussed. Because such meshes are inherently hierarchical as well as spatially addressable, they permit efficient substructuring techniques to be used for both global analysis and incremental remeshing and reanalysis. The global and incremental techniques are summarized and some results from an experimental closed loop 2-D system in which meshing, analysis, error evaluation, and remeshing and reanalysis are done automatically and adaptively are presented. The implementation of 3-D work is briefly discussed.
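
    A minimal sketch of what "spatially addressable" means for such trees: a cell's address is its path of quadrant digits, so point location (and, by address arithmetic, neighbor and ancestor lookup) needs no search. The 2-D unit-square setting is an illustrative assumption:

    def locate(x, y, depth, xmin=0.0, ymin=0.0, size=1.0):
        """Return the quadrant-digit address of the leaf containing (x, y)."""
        address = []
        for _ in range(depth):
            size /= 2.0
            qx, qy = x >= xmin + size, y >= ymin + size
            address.append(2 * qy + qx)      # digits 0..3 per level
            xmin += size * qx
            ymin += size * qy
        return address

    print(locate(0.7, 0.2, depth=3))   # e.g. [1, 0, 3]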

  2. Using semantic data modeling techniques to organize an object-oriented database for extending the mass storage model

    NASA Technical Reports Server (NTRS)

    Campbell, William J.; Short, Nicholas M., Jr.; Roelofs, Larry H.; Dorfman, Erik

    1991-01-01

    A methodology for optimizing organization of data obtained by NASA earth and space missions is discussed. The methodology uses a concept based on semantic data modeling techniques implemented in a hierarchical storage model. The modeling is used to organize objects in mass storage devices, relational database systems, and object-oriented databases. The semantic data modeling at the metadata record level is examined, including the simulation of a knowledge base and semantic metadata storage issues. The semantic data model hierarchy and its application for efficient data storage is addressed, as is the mapping of the application structure to the mass storage.

  3. Projecting non-diffracting waves with intermediate-plane holography.

    PubMed

    Mondal, Argha; Yevick, Aaron; Blackburn, Lauren C; Kanellakopoulos, Nikitas; Grier, David G

    2018-02-19

    We introduce intermediate-plane holography, which substantially improves the ability of holographic trapping systems to project propagation-invariant modes of light using phase-only diffractive optical elements. Translating the mode-forming hologram to an intermediate plane in the optical train can reduce the need to encode amplitude variations in the field, and therefore complements well-established techniques for encoding complex-valued transfer functions into phase-only holograms. Compared to standard holographic trapping implementations, intermediate-plane holograms greatly improve diffraction efficiency and mode purity of propagation-invariant modes, and so increase their useful non-diffracting range. We demonstrate this technique through experimental realizations of accelerating modes and long-range tractor beams.

  4. Large-scale inverse model analyses employing fast randomized data reduction

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan

    2017-08-01

    When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10^7 or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
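
    The sketching step in miniature: compressing a tall least-squares problem with a random matrix S so the solve scales with the sketch size rather than the raw observation count. A generic Gaussian sketch is assumed here; RGA's actual sketching operator may differ:

    import numpy as np

    rng = np.random.default_rng(0)
    n_obs, n_par, n_sketch = 20000, 50, 200

    A = rng.normal(size=(n_obs, n_par))
    x_true = rng.normal(size=n_par)
    b = A @ x_true + 0.01 * rng.normal(size=n_obs)

    S = rng.normal(size=(n_sketch, n_obs)) / np.sqrt(n_sketch)  # sketching matrix
    x_hat, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)       # solve small problem
    print(np.linalg.norm(x_hat - x_true))   # small, despite 100x fewer rows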

  5. Image detection and compression for memory efficient system analysis

    NASA Astrophysics Data System (ADS)

    Bayraktar, Mustafa

    2015-02-01

    Advances in digital signal processing have progressed towards efficient use of memory and processing. Both of these factors can be exploited by using feasible image storage techniques that compute the minimum information of an image, which enhances later processing. The Scale Invariant Feature Transform (SIFT) can be utilized to estimate and retrieve an image. In computer vision, SIFT can be implemented to recognize an image by comparing its key features against saved SIFT keypoint descriptors. The main advantage of SIFT is that it not only removes redundant information from an image but also reduces the key points by matching their orientation and adding them together in different windows of the image [1]. Another key property of this approach is that it works more efficiently on highly contrasted images, because its design is based on collecting key points from the contrast shades of the image.
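
    A sketch of the keypoint-descriptor idea using OpenCV's SIFT implementation; the opencv-python package and the synthetic test image are assumptions:

    import cv2
    import numpy as np

    # synthetic textured image standing in for real input
    img = np.random.default_rng(0).integers(0, 255, (200, 200), dtype=np.uint8)
    cv2.circle(img, (100, 100), 40, 255, -1)
    cv2.rectangle(img, (20, 20), (70, 70), 180, -1)

    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(img, None)
    # the stored descriptors are the compact surrogate of the image
    print(len(keypoints), None if descriptors is None else descriptors.shape)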

  6. Parallel Domain Decomposition Formulation and Software for Large-Scale Sparse Symmetrical/Unsymmetrical Aeroacoustic Applications

    NASA Technical Reports Server (NTRS)

    Nguyen, D. T.; Watson, Willie R. (Technical Monitor)

    2005-01-01

    The overall objectives of this research work are to formulate and validate efficient parallel algorithms, and to efficiently design and implement computer software for solving large-scale acoustic problems arising from unified frameworks of finite element procedures. The adopted parallel Finite Element (FE) Domain Decomposition (DD) procedures should take full advantage of the multiple processing capabilities offered by most modern high-performance computing platforms for efficient parallel computation. To achieve this objective, the formulation needs to integrate efficient sparse (and dense) assembly techniques, hybrid (or mixed) direct and iterative equation solvers, proper preconditioning strategies, unrolling strategies, and effective processor communication schemes. Finally, the numerical performance of the developed parallel finite element procedures is evaluated by solving a series of structural and acoustic (symmetrical and unsymmetrical) problems on different computing platforms. Comparisons with existing "commercialized" and/or "public domain" software are also included, whenever possible.

  7. Eigenvalue routines in NASTRAN: A comparison with the Block Lanczos method

    NASA Technical Reports Server (NTRS)

    Tischler, V. A.; Venkayya, Vipperla B.

    1993-01-01

    The NASA STRuctural ANalysis (NASTRAN) program is one of the most extensively used engineering applications software packages in the world. It contains a wealth of matrix operations and numerical solution techniques, which were used to construct efficient eigenvalue routines. The purpose of this paper is to examine the current eigenvalue routines in NASTRAN and to make efficiency comparisons with a more recent implementation of the Block Lanczos algorithm by Boeing Computer Services (BCS). This eigenvalue routine is now available in the BCS mathematics library as well as in several commercial versions of NASTRAN. In addition, CRAY maintains a modified version of this routine on their network. Several example problems, with a varying number of degrees of freedom, were selected primarily for efficiency benchmarking. Accuracy is not an issue, because they all gave comparable results. The Block Lanczos algorithm was found to be extremely efficient, in particular for very large problems.
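
    The class of computation being benchmarked can be reproduced in miniature with a Lanczos-type sparse eigensolver; ARPACK's shift-invert mode below is a relative of, not identical to, the Block Lanczos routine discussed:

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import eigsh

    # a large sparse symmetric (stiffness-like) matrix
    n = 20000
    K = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

    # a few lowest eigenpairs via shift-invert Lanczos iteration
    vals, vecs = eigsh(K, k=6, sigma=0)
    print(np.sort(vals))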

  8. National plan to enhance aviation safety through human factors improvements

    NASA Technical Reports Server (NTRS)

    Foushee, Clay

    1990-01-01

    The purpose of this section of the plan is to establish a development and implementation strategy plan for improving safety and efficiency in the Air Traffic Control (ATC) system. These improvements will be achieved through the proper applications of human factors considerations to the present and future systems. The program will have four basic goals: (1) prepare for the future system through proper hiring and training; (2) develop a controller work station team concept (managing human errors); (3) understand and address the human factors implications of negative system results; and (4) define the proper division of responsibilities and interactions between the human and the machine in ATC systems. This plan addresses six program elements which together address the overall purpose. The six program elements are: (1) determine principles of human-centered automation that will enhance aviation safety and the efficiency of the air traffic controller; (2) provide new and/or enhanced methods and techniques to measure, assess, and improve human performance in the ATC environment; (3) determine system needs and methods for information transfer between and within controller teams and between controller teams and the cockpit; (4) determine how new controller work station technology can optimally be applied and integrated to enhance safety and efficiency; (5) assess training needs and develop improved techniques and strategies for selection, training, and evaluation of controllers; and (6) develop standards, methods, and procedures for the certification and validation of human engineering in the design, testing, and implementation of any hardware or software system element which affects information flow to or from the human.

  9. EM Propagation & Atmospheric Effects Assessment

    DTIC Science & Technology

    2008-09-30

    The split-step Fourier parabolic equation (SSPE) algorithm provides the complex amplitude and phase (group delay) of the continuous wave (CW) signal...the APM is based on the SSPE, we are implementing the more efficient Fourier synthesis technique to determine the transfer function. To this end a...needed in order to sample H(f) via the SSPE, and indeed with the proper parameters chosen, the two pulses can be resolved in the time window shown in
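
    The split-step idea itself is standard: alternate a diffraction step in the spectral domain with a refraction step in the spatial domain. The sketch below is a textbook SSPE range step with invented parameters; it is not drawn from the APM code.

```python
# A generic split-step Fourier step for a parabolic wave equation; the
# wavenumber k0, step sizes, and refractive index profile are illustrative.
import numpy as np

def sspe_step(u, n_profile, k0, dx, dz):
    """Advance the vertical field profile u(z) one range step dx."""
    kz = 2 * np.pi * np.fft.fftfreq(u.size, d=dz)
    # Diffraction half of the operator, applied in the spectral domain.
    u = np.fft.ifft(np.fft.fft(u) * np.exp(-1j * kz**2 * dx / (2 * k0)))
    # Refraction half, applied in the spatial domain.
    return u * np.exp(1j * k0 * (n_profile**2 - 1) * dx / 2)

z = np.linspace(0.0, 200.0, 1024)
u = np.exp(-((z - 100.0) / 10.0) ** 2)  # Gaussian source field
u = sspe_step(u, np.ones_like(z), k0=2 * np.pi / 0.1, dx=1.0, dz=z[1] - z[0])
```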

  10. Fast and Adaptive Lossless Onboard Hyperspectral Data Compression System

    NASA Technical Reports Server (NTRS)

    Aranki, Nazeeh I.; Keymeulen, Didier; Kimesh, Matthew A.

    2012-01-01

    Modern hyperspectral imaging systems are able to acquire far more data than can be downlinked from a spacecraft. Onboard data compression helps to alleviate this problem, but requires a system capable of power efficiency and high throughput. Software solutions have limited throughput performance and are power-hungry. Dedicated hardware solutions can provide both high throughput and power efficiency, while taking the load off of the main processor. Thus a hardware compression system was developed. The implementation uses a field-programmable gate array (FPGA). The implementation is based on the fast lossless (FL) compression algorithm reported in Fast Lossless Compression of Multispectral-Image Data (NPO-42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26, which achieves excellent compression performance and has low complexity. This algorithm performs predictive compression using an adaptive filtering method, and uses adaptive Golomb coding. The implementation also packetizes the coded data. The FL algorithm is well suited for implementation in hardware. In the FPGA implementation, one sample is compressed every clock cycle, which makes for a fast and practical realtime solution for space applications. Benefits of this implementation are: 1) The underlying algorithm achieves a combination of low complexity and compression effectiveness that exceeds that of techniques currently in use. 2) The algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. 3) Hardware acceleration provides a throughput improvement of 10 to 100 times vs. the software implementation. A prototype of the compressor is available in software, but it runs at a speed that does not meet spacecraft requirements. The hardware implementation targets the Xilinx Virtex IV FPGAs, and makes the use of this compressor practical for Earth satellites as well as beyond-Earth missions with hyperspectral instruments.
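
    The abstract names the two stages the FL algorithm combines: predictive decorrelation and adaptive Golomb coding. The sketch below shows both stages in their simplest form, with a fixed previous-sample predictor and a fixed Rice parameter where the real algorithm adapts both.

```python
# Sketch of the two stages named in the abstract: prediction followed by
# Golomb(-Rice) coding. The previous-sample predictor and the fixed Rice
# parameter k are simplifications; the FL algorithm adapts both.
import numpy as np

def rice_encode(value, k):
    """Golomb-Rice code of a non-negative int: unary quotient + k-bit remainder."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

def zigzag(e):
    """Map a signed residual to a non-negative integer."""
    return 2 * e if e >= 0 else -2 * e - 1

samples = np.array([100, 102, 101, 105, 104], dtype=int)
residuals = np.diff(samples, prepend=samples[0])  # previous-sample prediction
bitstream = "".join(rice_encode(zigzag(int(e)), k=2) for e in residuals)
print(bitstream)
```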

  11. Design and implementation of visualization methods for the CHANGES Spatial Decision Support System

    NASA Astrophysics Data System (ADS)

    Cristal, Irina; van Westen, Cees; Bakker, Wim; Greiving, Stefan

    2014-05-01

    The CHANGES Spatial Decision Support System (SDSS) is a web-based system aimed at risk assessment and the evaluation of optimal risk reduction alternatives at the local level, serving as a decision support tool in long-term natural risk management. The SDSS uses multidimensional information, integrating thematic, spatial, temporal and documentary data. The role of visualization in this context becomes of vital importance for efficiently representing each dimension. This multidimensional aspect of the risk information required by the system, combined with the diversity of the end-users, imposes the use of sophisticated visualization methods and tools. The key goal of the present work is to exploit efficiently the large amount of data in relation to the needs of the end-user, utilizing proper visualization techniques. Three main tasks have been accomplished for this purpose: categorization of the end-users, definition of the system's modules, and data definition. The graphical representation of the data and the visualization tools were designed to be relevant to the data type and the purpose of the analysis. Depending on the end-user category, each user should have access to different modules of the system and thus to the proper visualization environment. The technologies used for the development of the visualization component combine the latest and most innovative open source JavaScript frameworks, such as OpenLayers 2.13.1, ExtJS 4 and GeoExt 2. Moreover, the model-view-controller (MVC) pattern is used in order to ensure flexibility of the system at the implementation level. Using the above technologies, the visualization techniques implemented so far offer interactive map navigation, querying and comparison tools. The map comparison tools are of great importance within the SDSS and include the following: a swiping tool for comparison of different data of the same location; raster subtraction for comparison of the same phenomena varying in time; linked views for comparison of data from different locations; and a time slider tool for monitoring changes in spatio-temporal data. All these techniques are part of the interactive interface of the system and make use of spatial and spatio-temporal data. Further significant aspects of the visualization component include conventional cartographic techniques and visualization of non-spatial data. The main expectation from the present work is to offer efficient visualization of risk-related data in order to facilitate the decision-making process, which is the final purpose of the CHANGES SDSS. This work is part of the "CHANGES" project, funded by the European Community's 7th Framework Programme.

  12. Parallel Implementation of Triangular Cellular Automata for Computing Two-Dimensional Elastodynamic Response on Arbitrary Domains

    NASA Astrophysics Data System (ADS)

    Leamy, Michael J.; Springer, Adam C.

    In this research we report the parallel implementation of a Cellular Automata-based simulation tool for computing elastodynamic response on complex, two-dimensional domains. Elastodynamic simulation using Cellular Automata (CA) has recently been presented as an alternative, inherently object-oriented technique for accurately and efficiently computing linear and nonlinear wave propagation in arbitrarily-shaped geometries. The local, autonomous nature of the method should lead to straightforward and efficient parallelization. We address this notion on symmetric multiprocessor (SMP) hardware using a Java-based object-oriented CA code implementing triangular state machines (i.e., automata) and the MPI bindings written in Java (MPJ Express). We use MPJ Express to reconfigure our existing CA code to distribute a domain's automata to the cores present on a dual quad-core shared-memory system (eight total processors). We note that this message-passing parallelization strategy is directly applicable to cluster computing, which will be the focus of follow-on research. Results on the shared-memory platform indicate nearly-ideal, linear speed-up. We conclude that the CA-based elastodynamic simulator is easily configured to run in parallel, and yields excellent speed-up on SMP hardware.
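
    The paper's parallelization uses Java with MPJ Express; under that caveat, the mpi4py sketch below mirrors the same message-passing pattern: a 1D strip of automata per rank with ghost-cell exchange each step, and a toy update rule standing in for the elastodynamic CA.

```python
# Mirrors the described pattern (strip decomposition + boundary exchange)
# in mpi4py rather than MPJ Express. Run with: mpirun -n 8 python ca.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

local = np.zeros(100 + 2)          # local automata states plus two ghost cells
left = (rank - 1) % size
right = (rank + 1) % size

for step in range(10):
    # Exchange ghost cells with neighbours (periodic strip decomposition).
    comm.Sendrecv(local[1:2], dest=left, recvbuf=local[-1:], source=right)
    comm.Sendrecv(local[-2:-1], dest=right, recvbuf=local[:1], source=left)
    # Toy averaging update standing in for the elastodynamic CA rule.
    local[1:-1] = 0.5 * (local[:-2] + local[2:])
```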

  13. Conceptual design of the early implementation of the NEutron Detector Array (NEDA) with AGATA

    NASA Astrophysics Data System (ADS)

    Hüyük, Tayfun; Di Nitto, Antonio; Jaworski, Grzegorz; Gadea, Andrés; Javier Valiente-Dobón, José; Nyberg, Johan; Palacz, Marcin; Söderström, Pär-Anders; Jose Aliaga-Varea, Ramon; de Angelis, Giacomo; Ataç, Ayşe; Collado, Javier; Domingo-Pardo, Cesar; Egea, Francisco Javier; Erduran, Nizamettin; Ertürk, Sefa; de France, Gilles; Gadea, Rafael; González, Vicente; Herrero-Bosch, Vicente; Kaşkaş, Ayşe; Modamio, Victor; Moszynski, Marek; Sanchis, Enrique; Triossi, Andrea; Wadsworth, Robert

    2016-03-01

    The NEutron Detector Array (NEDA) project aims at the construction of a new high-efficiency compact neutron detector array to be coupled with large γ-ray arrays such as AGATA. The application of NEDA ranges from its use as a selective neutron multiplicity filter for fusion-evaporation reactions to a large solid angle neutron tagging device. In the present work, possible configurations of NEDA coupled with the Neutron Wall for the early implementation with AGATA have been simulated, using Monte Carlo techniques, in order to evaluate their performance figures. The goal of this early NEDA implementation is to improve, with respect to previous instruments, the efficiency and the capability to select multiplicity for fusion-evaporation reaction channels in which 1, 2 or 3 neutrons are emitted. Each NEDA detector unit has the shape of a regular hexagonal prism with a volume of about 3.23 l, and it is filled with the EJ301 liquid scintillator, which presents good neutron-γ discrimination properties. The simulations have been performed using a fusion-evaporation event generator that has been validated with a set of experimental data obtained in the 58Ni + 56Fe reaction measured with the Neutron Wall detector array.

  14. Efficient evaluation of three-center Coulomb integrals

    PubMed Central

    Samu, Gyula

    2017-01-01

    In this study we pursue the most efficient paths for the evaluation of three-center electron repulsion integrals (ERIs) over solid harmonic Gaussian functions of various angular momenta. First, the adaptation of the well-established techniques developed for four-center ERIs, such as the Obara–Saika, McMurchie–Davidson, Gill–Head-Gordon–Pople, and Rys quadrature schemes, and the combinations thereof for three-center ERIs is discussed. Several algorithmic aspects, such as the order of the various operations and primitive loops as well as prescreening strategies, are analyzed. Second, the number of floating point operations (FLOPs) is estimated for the various algorithms derived, and based on these results the most promising ones are selected. We report the efficient implementation of the latter algorithms invoking automated programming techniques and also evaluate their practical performance. We conclude that the simplified Obara–Saika scheme of Ahlrichs is the most cost-effective one in the majority of cases, but the modified Gill–Head-Gordon–Pople and Rys algorithms proposed herein are preferred for particular shell triplets. Our numerical experiments also show that even though the solid harmonic transformation and the horizontal recurrence require significantly fewer FLOPs if performed at the contracted level, this approach does not improve the efficiency in practical cases. Instead, it is more advantageous to carry out these operations at the primitive level, which allows for more efficient integral prescreening and memory layout. PMID:28571354

  15. Efficient evaluation of three-center Coulomb integrals.

    PubMed

    Samu, Gyula; Kállay, Mihály

    2017-05-28

    In this study we pursue the most efficient paths for the evaluation of three-center electron repulsion integrals (ERIs) over solid harmonic Gaussian functions of various angular momenta. First, the adaptation of the well-established techniques developed for four-center ERIs, such as the Obara-Saika, McMurchie-Davidson, Gill-Head-Gordon-Pople, and Rys quadrature schemes, and the combinations thereof for three-center ERIs is discussed. Several algorithmic aspects, such as the order of the various operations and primitive loops as well as prescreening strategies, are analyzed. Second, the number of floating point operations (FLOPs) is estimated for the various algorithms derived, and based on these results the most promising ones are selected. We report the efficient implementation of the latter algorithms invoking automated programming techniques and also evaluate their practical performance. We conclude that the simplified Obara-Saika scheme of Ahlrichs is the most cost-effective one in the majority of cases, but the modified Gill-Head-Gordon-Pople and Rys algorithms proposed herein are preferred for particular shell triplets. Our numerical experiments also show that even though the solid harmonic transformation and the horizontal recurrence require significantly fewer FLOPs if performed at the contracted level, this approach does not improve the efficiency in practical cases. Instead, it is more advantageous to carry out these operations at the primitive level, which allows for more efficient integral prescreening and memory layout.

  16. A new simple technique for improving the random properties of chaos-based cryptosystems

    NASA Astrophysics Data System (ADS)

    Garcia-Bosque, M.; Pérez-Resa, A.; Sánchez-Azqueta, C.; Celma, S.

    2018-03-01

    A new technique for improving the security of chaos-based stream ciphers has been proposed and tested experimentally. This technique manages to improve the randomness properties of the generated keystream by preventing the system from falling into short-period cycles due to digitization. In order to test this technique, a stream cipher based on a Skew Tent Map algorithm has been implemented on a Virtex 7 FPGA. The randomness of the keystream generated by this system has been compared to the randomness of the keystream generated by the same system with the proposed randomness-enhancement technique. By subjecting both keystreams to the National Institute of Standards and Technology (NIST) tests, we have proved that our method can considerably improve the randomness of the generated keystreams. In order to incorporate our randomness-enhancement technique, only 41 extra slices have been needed, proving that, besides being effective, this method is also efficient in terms of area and hardware resources.
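
    A minimal software model of the setup, under stated assumptions: a fixed-point skew tent map generates the keystream, and a periodic XOR perturbation from a small LFSR stands in for the paper's randomness-enhancement step, whose exact form is not reproduced here.

```python
# Illustrative fixed-point skew tent map keystream with a stand-in
# perturbation step; word width, parameters and LFSR taps are assumptions.
W = 32                     # assumed word width of the FPGA datapath
MASK = (1 << W) - 1

def skew_tent(x, p):
    """Fixed-point skew tent map on [0, 2^W)."""
    if x < p:
        return (x << W) // p & MASK
    return ((MASK - x) << W) // (MASK - p) & MASK

def keystream(x, p, n, lfsr=0xACE1):
    out = []
    for i in range(n):
        x = skew_tent(x, p)
        if i % 64 == 0:                      # perturb to break short cycles
            lfsr = (lfsr >> 1) ^ (-(lfsr & 1) & 0xB400)  # 16-bit Galois LFSR
            x ^= lfsr
        out.append(x & 0xFF)                 # emit low byte as keystream
    return out

print(keystream(x=123456789, p=0x60000000, n=8))
```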

  17. Complete denture tooth arrangement technology driven by a reconfigurable rule.

    PubMed

    Dai, Ning; Yu, Xiaoling; Fan, Qilei; Yuan, Fulai; Liu, Lele; Sun, Yuchun

    2018-01-01

    The conventional technique for the fabrication of complete dentures is complex, with a long fabrication process and difficult-to-control restoration quality. In recent years, digital complete denture design has become a research focus. Digital complete denture tooth arrangement is a challenging issue that is difficult to efficiently implement under the constraints of complex tooth arrangement rules and the patient's individualized functional aesthetics. The present study proposes a complete denture automatic tooth arrangement method driven by a reconfigurable rule; it uses four typical operators, including a position operator, a scaling operator, a posture operator, and a contact operator, to establish the constraint mapping association between the teeth and the constraint set of the individual patient. By using the process reorganization of different constraint operators, this method can flexibly implement different clinical tooth arrangement rules. When combined with a virtual occlusion algorithm based on progressive iterative Laplacian deformation, the proposed method can achieve automatic and individual tooth arrangement. Finally, the experimental results verify that the proposed method is flexible and efficient.

  18. Numerically solving the relativistic Grad-Shafranov equation in Kerr spacetimes: numerical techniques

    NASA Astrophysics Data System (ADS)

    Mahlmann, J. F.; Cerdá-Durán, P.; Aloy, M. A.

    2018-07-01

    The study of the electrodynamics of static, axisymmetric, and force-free Kerr magnetospheres relies vastly on solutions of the so-called relativistic Grad-Shafranov equation (GSE). Different numerical approaches to the solution of the GSE have been introduced in the literature, but none of them has been fully assessed from the numerical point of view in terms of efficiency and quality of the solutions found. We present a generalization of these algorithms and give a detailed background on the algorithmic implementation. We assess the numerical stability of the implemented algorithms and quantify the convergence of the presented methodology for the most established set-ups (split-monopole, paraboloidal, BH disc, uniform).

  19. Cosmic-ray discrimination capabilities of ΔE-E silicon nuclear telescopes using neural networks

    NASA Astrophysics Data System (ADS)

    Ambriola, M.; Bellotti, R.; Cafagna, F.; Castellano, M.; Ciacio, F.; Circella, M.; Marzo, C. N. D.; Montaruli, T.

    2000-02-01

    An isotope classifier of cosmic-ray events collected by space detectors has been implemented using a multi-layer perceptron neural architecture. In order to handle a great number of different isotopes, a modular architecture of the "mixture of experts" type is proposed. The performance of this classifier has been tested on simulated data and has been compared with a "classical" classifying procedure. The quantitative comparison with traditional techniques shows that the neural approach has classification performances comparable, within 1%, with that of the classical one, with efficiency of the order of 98%. A possible hardware implementation of such a kind of neural architecture in future space missions is considered.

  20. Numerically solving the relativistic Grad-Shafranov equation in Kerr spacetimes: Numerical techniques

    NASA Astrophysics Data System (ADS)

    Mahlmann, J. F.; Cerdá-Durán, P.; Aloy, M. A.

    2018-04-01

    The study of the electrodynamics of static, axisymmetric and force-free Kerr magnetospheres relies vastly on solutions of the so called relativistic Grad-Shafranov equation (GSE). Different numerical approaches to the solution of the GSE have been introduced in the literature, but none of them has been fully assessed from the numerical point of view in terms of efficiency and quality of the solutions found. We present a generalization of these algorithms and give detailed background on the algorithmic implementation. We assess the numerical stability of the implemented algorithms and quantify the convergence of the presented methodology for the most established setups (split-monopole, paraboloidal, BH-disk, uniform).

  1. Benchmarking of Computational Models for NDE and SHM of Composites

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin; Leckey, Cara; Hafiychuk, Vasyl; Juarez, Peter; Timucin, Dogan; Schuet, Stefan; Hafiychuk, Halyna

    2016-01-01

    Ultrasonic wave phenomena constitute the leading physical mechanism for nondestructive evaluation (NDE) and structural health monitoring (SHM) of solid composite materials such as carbon-fiber-reinforced polymer (CFRP) laminates. Computational models of ultrasonic guided-wave excitation, propagation, scattering, and detection in quasi-isotropic laminates can be extremely valuable in designing practically realizable NDE and SHM hardware and software with desired accuracy, reliability, efficiency, and coverage. This paper presents comparisons of guided-wave simulations for CFRP composites implemented using three different simulation codes: two commercial finite-element analysis packages, COMSOL and ABAQUS, and a custom code implementing the Elastodynamic Finite Integration Technique (EFIT). Comparisons are also made to experimental laser Doppler vibrometry data and theoretical dispersion curves.

  2. Atomdroid: a computational chemistry tool for mobile platforms.

    PubMed

    Feldt, Jonas; Mata, Ricardo A; Dieterich, Johannes M

    2012-04-23

    We present the implementation of a new molecular mechanics program designed for use in mobile platforms, the first specifically built for these devices. The software is designed to run on Android operating systems and is compatible with several modern tablet-PCs and smartphones available in the market. It includes molecular viewer/builder capabilities with integrated routines for geometry optimizations and Monte Carlo simulations. These functionalities allow it to work as a stand-alone tool. We discuss some particular development aspects, as well as the overall feasibility of using computational chemistry software packages in mobile platforms. Benchmark calculations show that through efficient implementation techniques even hand-held devices can be used to simulate midsized systems using force fields.

  3. Enabling Exploration Missions Now: Applications of On-orbit Staging

    NASA Technical Reports Server (NTRS)

    Folta, David C.; Vaughn, Frank; Westmeyer, Paul; Rawitscher, Gary; Bordi, Francesco

    2005-01-01

    Future NASA Exploration goals are difficult to meet using current launch vehicle implementations and techniques. We introduce a concept of On-Orbit Staging (OOS) using multiple launches into a Low Earth orbit (LEO) staging area to increase payload mass and reduce overall cost for exploration initiative missions. This concept is a forward-looking implementation of ideas put forth by Oberth and Von Braun to address the total mission design. Applying staging throughout the mission and utilizing technological advances in propulsion efficiency and architecture enable us to show that exploration goals can be met in the next decade. As part of this architecture, we assume the readiness of automated rendezvous, docking, and assembly technology.

  4. Model Checking with Edge-Valued Decision Diagrams

    NASA Technical Reports Server (NTRS)

    Roux, Pierre; Siminiceanu, Radu I.

    2010-01-01

    We describe an algebra of Edge-Valued Decision Diagrams (EVMDDs) to encode arithmetic functions and its implementation in a model checking library. We provide efficient algorithms for manipulating EVMDDs and review the theoretical time complexity of these algorithms for all basic arithmetic and relational operators. We also demonstrate that the time complexity of the generic recursive algorithm for applying a binary operator on EVMDDs is no worse than that of Multi-Terminal Decision Diagrams. We have implemented a new symbolic model checker with the intention to represent in one formalism the best techniques available at the moment across a spectrum of existing tools. Compared to the CUDD package, our tool is several orders of magnitude faster.

  5. Generation of structural topologies using efficient technique based on sorted compliances

    NASA Astrophysics Data System (ADS)

    Mazur, Monika; Tajs-Zielińska, Katarzyna; Bochenek, Bogdan

    2018-01-01

    Topology optimization, although well recognized, is still being widely developed. It has recently gained more attention, since large computational capability has become available to designers. This process is stimulated simultaneously by a variety of emerging, innovative optimization methods. It is observed that traditional gradient-based mathematical programming algorithms are, in many cases, replaced by novel and efficient heuristic methods inspired by biological, chemical or physical phenomena. These methods have become useful tools for structural optimization because of their versatility and easy numerical implementation. In this paper the engineering implementation of a novel heuristic algorithm for minimum compliance topology optimization is discussed. The performance of the topology generator is based on the implementation of a special function utilizing information on the compliance distribution within the design space. With a view to coping with engineering problems, the algorithm has been combined with the structural analysis system Ansys.

  6. Analytical gradients for tensor hyper-contracted MP2 and SOS-MP2 on graphical processing units

    NASA Astrophysics Data System (ADS)

    Song, Chenchen; Martínez, Todd J.

    2017-10-01

    Analytic energy gradients for tensor hyper-contraction (THC) are derived and implemented for second-order Møller-Plesset perturbation theory (MP2), with and without the scaled-opposite-spin (SOS)-MP2 approximation. By exploiting the THC factorization, the formal scaling of MP2 and SOS-MP2 gradient calculations with respect to system size is reduced to quartic and cubic, respectively. An efficient implementation has been developed that utilizes both graphics processing units and sparse tensor techniques exploiting spatial sparsity of the atomic orbitals. THC-MP2 has been applied to both geometry optimization and ab initio molecular dynamics (AIMD) simulations. The resulting energy conservation in micro-canonical AIMD demonstrates that the implementation provides accurate nuclear gradients with respect to the THC-MP2 potential energy surfaces.

  7. Solving large sparse eigenvalue problems on supercomputers

    NASA Technical Reports Server (NTRS)

    Philippe, Bernard; Saad, Youcef

    1988-01-01

    An important problem in scientific computing consists in finding a few eigenvalues and corresponding eigenvectors of a very large and sparse matrix. The most popular methods to solve these problems are based on projection techniques on appropriate subspaces. The main attraction of these methods is that they only require the use of the matrix in the form of matrix-vector multiplications. The implementations on supercomputers of two such methods for symmetric matrices, namely Lanczos' method and Davidson's method, are compared. Since one of the most important operations in these two methods is the multiplication of vectors by the sparse matrix, methods of performing this operation efficiently are discussed. The advantages and disadvantages of each method are compared and implementation aspects are discussed. Numerical experiments on a one-processor CRAY 2 and CRAY X-MP are reported. Possible parallel implementations are also discussed.
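
    Since the sparse matrix-vector product is singled out as the dominant operation, the sketch below spells it out for the compressed sparse row (CSR) layout; vectorizing or parallelizing the outer loop is where the machine-specific work described in the paper goes.

```python
# Minimal CSR sparse matrix-vector product, the operation identified as
# dominant in both Lanczos' and Davidson's methods.
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """y = A @ x for A stored in compressed sparse row format."""
    y = np.zeros(len(indptr) - 1)
    for i in range(len(y)):                    # one row per iteration
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

# 2x2 example: A = [[4, 1], [0, 3]]
data, indices, indptr = [4.0, 1.0, 3.0], [0, 1, 1], [0, 2, 3]
print(csr_matvec(data, indices, indptr, np.array([1.0, 2.0])))  # [6. 6.]
```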

  8. Iterative methods for 3D implicit finite-difference migration using the complex Padé approximation

    NASA Astrophysics Data System (ADS)

    Costa, Carlos A. N.; Campos, Itamara S.; Costa, Jessé C.; Neto, Francisco A.; Schleicher, Jörg; Novais, Amélia

    2013-08-01

    Conventional implementations of 3D finite-difference (FD) migration use splitting techniques to accelerate performance and save computational cost. However, such techniques are plagued with numerical anisotropy that jeopardises the correct positioning of dipping reflectors in the directions not used for the operator splitting. We implement 3D downward continuation FD migration without splitting using a complex Padé approximation. In this way, the numerical anisotropy is eliminated at the expense of a computationally more intensive solution of a large-band linear system. We compare the performance of the iterative stabilized biconjugate gradient (BICGSTAB) and that of the multifrontal massively parallel direct solver (MUMPS). It turns out that the use of the complex Padé approximation not only stabilizes the solution, but also acts as an effective preconditioner for the BICGSTAB algorithm, reducing the number of iterations as compared to the implementation using the real Padé expansion. As a consequence, the iterative BICGSTAB method is more efficient than the direct MUMPS method when solving a single term in the Padé expansion. The results of both algorithms, here evaluated by computing the migration impulse response in the SEG/EAGE salt model, are of comparable quality.
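
    The sketch below illustrates the kind of solve each Padé term requires: a complex-valued banded system handed to BiCGSTAB with a preconditioner. The matrix is synthetic, and the incomplete-LU preconditioner is a stand-in for the preconditioning role the complex Padé approximation plays in the paper.

```python
# Illustrative BiCGSTAB solve of a synthetic complex banded system; not the
# migration operator, and ILU stands in for the paper's Pade preconditioning.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import bicgstab, spilu, LinearOperator

n = 500
A = sp.diags([-1.0, 2.0 + 0.5j, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n, dtype=complex)

ilu = spilu(A)                                  # incomplete LU factorization
M = LinearOperator((n, n), matvec=ilu.solve, dtype=complex)
x, info = bicgstab(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))          # info == 0 means converged
```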

  9. INDDGO: Integrated Network Decomposition & Dynamic programming for Graph Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Groer, Christopher S; Sullivan, Blair D; Weerapurage, Dinesh P

    2012-10-01

    It is well-known that dynamic programming algorithms can utilize tree decompositions to provide a way to solve some NP-hard problems on graphs where the complexity is polynomial in the number of nodes and edges in the graph, but exponential in the width of the underlying tree decomposition. However, there has been relatively little computational work done to determine the practical utility of such dynamic programming algorithms. We have developed software to construct tree decompositions using various heuristics and have created a fast, memory-efficient dynamic programming implementation for solving maximum weighted independent set. We describe our software and the algorithms we have implemented, focusing on memory saving techniques for the dynamic programming. We compare the running time and memory usage of our implementation with other techniques for solving maximum weighted independent set, including a commercial integer programming solver and a semi-definite programming solver. Our results indicate that it is possible to solve some instances where the underlying decomposition has width much larger than suggested by the literature. For certain types of problems, our dynamic programming code runs several times faster than these other methods.
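
    The flavor of the dynamic programme is easiest to see on the simplest case, a tree (treewidth 1), where each vertex carries two states, in or out of the independent set; the INDDGO code generalizes this recursion to the bags of an arbitrary tree decomposition.

```python
# Maximum weighted independent set on a tree: the width-1 special case of
# the tree-decomposition DP described in the abstract.
import sys
sys.setrecursionlimit(10_000)

def mwis_tree(adj, w, root=0):
    """Two DP states per vertex: best value with it inside / outside the set."""
    def dfs(u, parent):
        take, skip = w[u], 0
        for v in adj[u]:
            if v == parent:
                continue
            t, s = dfs(v, u)
            take += s                  # u taken: children must stay out
            skip += max(t, s)          # u not taken: children choose freely
        return take, skip
    return max(dfs(root, -1))

adj = {0: [1, 2], 1: [0, 3, 4], 2: [0], 3: [1], 4: [1]}
w = [1, 10, 3, 4, 5]
print(mwis_tree(adj, w))               # 13: vertices 1 and 2, weight 10 + 3
```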

  10. High Performance GPU-Based Fourier Volume Rendering.

    PubMed

    Abdellah, Marwan; Eldeib, Ayman; Sharawi, Amr

    2015-01-01

    Fourier volume rendering (FVR) is a significant visualization technique that has been used widely in digital radiography. As a result of its O(N² log N) time complexity, it provides a faster alternative to spatial domain volume rendering algorithms, which are O(N³) computationally complex. Relying on the Fourier projection-slice theorem, this technique operates on the spectral representation of a 3D volume instead of processing its spatial representation to generate attenuation-only projections that look like X-ray radiographs. Due to the rapid evolution of its underlying architecture, the graphics processing unit (GPU) became an attractive platform that can deliver enormous raw computational power compared to the central processing unit (CPU) on a per-dollar basis. The introduction of the compute unified device architecture (CUDA) technology enables embarrassingly-parallel algorithms to run efficiently on CUDA-capable GPU architectures. In this work, a high performance GPU-accelerated implementation of the FVR pipeline on CUDA-enabled GPUs is presented. This proposed implementation can achieve a speed-up of 117x compared to a single-threaded hybrid implementation that uses the CPU and GPU together by taking advantage of executing the rendering pipeline entirely on recent GPU architectures.
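
    The core of FVR is the projection-slice theorem: the central slice of the volume's 3D spectrum equals the 2D spectrum of its projection. The NumPy sketch below demonstrates exactly that identity on a toy volume; the GPU implementation parallelizes the FFTs and the slice extraction.

```python
# Projection-slice theorem behind FVR, demonstrated on a toy volume.
import numpy as np

vol = np.random.rand(64, 64, 64)        # toy 3D volume

F = np.fft.fftn(vol)                    # one-time spectral preprocessing
central_slice = F[:, :, 0]              # slice through the spectrum's origin
projection = np.fft.ifft2(central_slice).real

# Agrees with direct spatial-domain integration along the viewing axis.
assert np.allclose(projection, vol.sum(axis=2))
```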

  11. Electro-optic Mach-Zehnder Interferometer based Optical Digital Magnitude Comparator and 1's Complement Calculator

    NASA Astrophysics Data System (ADS)

    Kumar, Ajay; Raghuwanshi, Sanjeev Kumar

    2016-06-01

    The optical switching activity is one of the most essential phenomena in the optical domain. The electro-optic effect-based switching phenomena are applicable for generating effective combinational and sequential logic circuits. Digital computation in the optical domain brings considerable advantages of optical communication technology, e.g. immunity to electro-magnetic interference, compact size, signal security, parallel computing and larger bandwidth. The paper describes an efficient technique to implement a single-bit magnitude comparator and 1's complement calculator using the concepts of the electro-optic effect. The proposed techniques are simulated in the MATLAB software, and their suitability is verified using the highly reliable Opti-BPM software. The circuits are analyzed in order to specify optimized device parameters with respect to performance-affecting characteristics, e.g. crosstalk, extinction ratio, and signal losses through the curved and straight waveguide sections.

  12. Efficient 3D inversions using the Richards equation

    NASA Astrophysics Data System (ADS)

    Cockett, Rowan; Heagy, Lindsey J.; Haber, Eldad

    2018-07-01

    Fluid flow in the vadose zone is governed by the Richards equation; it is parameterized by hydraulic conductivity, which is a nonlinear function of pressure head. Investigations in the vadose zone typically require characterizing distributed hydraulic properties. Water content or pressure head data may include direct measurements made from boreholes. Increasingly, proxy measurements from hydrogeophysics are being used to supply more spatially and temporally dense data sets. Inferring hydraulic parameters from such data sets requires the ability to efficiently solve and optimize the nonlinear time domain Richards equation. This is particularly important as the number of parameters to be estimated in a vadose zone inversion continues to grow. In this paper, we describe an efficient technique to invert for distributed hydraulic properties in 1D, 2D, and 3D. Our technique does not store the Jacobian matrix, but rather computes its product with a vector. Existing literature for Richards equation inversion explicitly calculates the sensitivity matrix using finite differences or automatic differentiation; however, for large-scale problems these methods are constrained by computation and/or memory. Using an implicit sensitivity algorithm enables large-scale inversion problems for any distributed hydraulic parameters in the Richards equation to become tractable on modest computational resources. We provide an open source implementation of our technique based on the SimPEG framework, and show it in practice for a 3D inversion of saturated hydraulic conductivity using water content data through time.
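
    A minimal sketch of the matrix-free idea, with a toy forward model in place of the Richards equation solver: the Jacobian is exposed only through products J v and J^T w via a LinearOperator, so the sensitivity matrix is never formed or stored.

```python
# Matrix-free sensitivities: the Jacobian appears only as matvec/rmatvec.
# The cumulative-sum "forward model" is a toy stand-in, not Richards flow.
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

m = np.linspace(1.0, 2.0, 50)            # hydraulic parameter vector

def forward(m):
    return np.cumsum(m)                   # toy forward model

def jacvec(v):                            # J @ v via a directional difference
    eps = 1e-7
    return (forward(m + eps * v) - forward(m)) / eps

def rjacvec(w):                           # J^T @ w (J is lower-triangular ones)
    return np.cumsum(w[::-1])[::-1]

J = LinearOperator((50, 50), matvec=jacvec, rmatvec=rjacvec)
d_obs = forward(m) + 0.01
step = lsqr(J, d_obs - forward(m))[0]     # Gauss-Newton style model update
```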

  13. Finding a roadmap to achieve large neuromorphic hardware systems

    PubMed Central

    Hasler, Jennifer; Marr, Bo

    2013-01-01

    Neuromorphic systems are gaining increasing importance in an era where CMOS digital computing techniques are reaching physical limits. These silicon systems mimic extremely energy efficient neural computing structures, potentially both for solving engineering applications as well as understanding neural computation. Toward this end, the authors provide a glimpse at what the technology evolution roadmap looks like for these systems so that Neuromorphic engineers may gain the same benefit of anticipation and foresight that IC designers gained from Moore's law many years ago. Scaling of energy efficiency, performance, and size will be discussed as well as how the implementation and application space of Neuromorphic systems are expected to evolve over time. PMID:24058330

  14. Third International Symposium on Space Mission Operations and Ground Data Systems, part 1

    NASA Technical Reports Server (NTRS)

    Rash, James L. (Editor)

    1994-01-01

    Under the theme of 'Opportunities in Ground Data Systems for High Efficiency Operations of Space Missions,' the SpaceOps '94 symposium included presentations of more than 150 technical papers spanning five topic areas: Mission Management, Operations, Data Management, System Development, and Systems Engineering. The papers focus on improvements in the efficiency, effectiveness, productivity, and quality of data acquisition, ground systems, and mission operations. New technology, techniques, methods, and human systems are discussed. Accomplishments are also reported in the application of information systems to improve data retrieval, reporting, and archiving; the management of human factors; the use of telescience and teleoperations; and the design and implementation of logistics support for mission operations.

  15. High efficiency Raman memory by suppressing radiation trapping

    NASA Astrophysics Data System (ADS)

    Thomas, S. E.; Munns, J. H. D.; Kaczmarek, K. T.; Qiu, C.; Brecht, B.; Feizpour, A.; Ledingham, P. M.; Walmsley, I. A.; Nunn, J.; Saunders, D. J.

    2017-06-01

    Raman interactions in alkali vapours are used in applications such as atomic clocks, optical signal processing, generation of squeezed light and Raman quantum memories for temporal multiplexing. To achieve a strong interaction the alkali ensemble needs both a large optical depth and a high level of spin-polarisation. We implement a technique known as quenching using a molecular buffer gas which allows near-perfect spin-polarisation of over 99.5% in caesium vapour at high optical depths of up to ~2×10^5, a factor of 4 higher than can be achieved without quenching. We use this system to explore efficient light storage with high gain in a GHz-bandwidth Raman memory.

  16. Quantum rendering

    NASA Astrophysics Data System (ADS)

    Lanzagorta, Marco O.; Gomez, Richard B.; Uhlmann, Jeffrey K.

    2003-08-01

    In recent years, computer graphics has emerged as a critical component of the scientific and engineering process, and it is recognized as an important computer science research area. Computer graphics are extensively used for a variety of aerospace and defense training systems and by Hollywood's special effects companies. All these applications require the computer graphics systems to produce high quality renderings of extremely large data sets in short periods of time. Much research has been done in "classical computing" toward the development of efficient methods and techniques to reduce the rendering time required for large datasets. Quantum Computing's unique algorithmic features offer the possibility of speeding up some of the known rendering algorithms currently used in computer graphics. In this paper we discuss possible implementations of quantum rendering algorithms. In particular, we concentrate on the implementation of Grover's quantum search algorithm for Z-buffering, ray-tracing, radiosity, and scene management techniques. We also compare the theoretical performance between the classical and quantum versions of the algorithms.
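
    To make the speed-up concrete, the sketch below classically simulates Grover's iteration over N = 2^8 candidates, e.g. searching for the matching primitive in a Z-buffer style query; it is a plain NumPy state-vector simulation, not a quantum implementation.

```python
# Classical state-vector simulation of Grover's search; illustrative of the
# quadratic-speedup iteration only, no quantum hardware or SDK involved.
import numpy as np

n, marked = 8, 42                 # 256 items, index of the "hit" primitive
N = 2 ** n
psi = np.full(N, 1 / np.sqrt(N))  # uniform superposition over candidates

iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
for _ in range(iterations):
    psi[marked] *= -1                         # oracle: flip marked amplitude
    psi = 2 * psi.mean() - psi                # diffusion: inversion about mean

print(np.argmax(psi**2), round(psi[marked]**2, 3))  # ~42, probability near 1
```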

  17. Hardware Implementation of a MIMO Decoder Using Matrix Factorization Based Channel Estimation

    NASA Astrophysics Data System (ADS)

    Islam, Mohammad Tariqul; Numan, Mostafa Wasiuddin; Misran, Norbahiah; Ali, Mohd Alauddin Mohd; Singh, Mandeep

    2011-05-01

    This paper presents an efficient hardware realization of a multiple-input multiple-output (MIMO) wireless communication decoder that utilizes the available resources by adopting the technique of parallelism. The hardware is designed and implemented on a Xilinx Virtex™-4 XC4VLX60 field programmable gate array (FPGA) device in a modular approach which simplifies and eases hardware updates, and facilitates testing of the various modules independently. The decoder involves a proficient channel estimation module that employs matrix factorization on least squares (LS) estimation to reduce a full rank matrix into a simpler form in order to eliminate matrix inversion. This results in performance improvement and complexity reduction of the MIMO system. Performance evaluation of the proposed method is validated through MATLAB simulations which indicate a 2 dB improvement in terms of SNR compared to LS estimation. Moreover, complexity comparison is performed in terms of mathematical operations, which shows that the proposed approach appreciably outperforms LS estimation at a lower complexity and represents a good solution for channel estimation.
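
    The essence of the channel-estimation idea, as described, is to replace the explicit inverse in the least-squares estimate with a factorization. The sketch below does this with a QR factorization and back-substitution; the dimensions and pilot matrix are invented for illustration.

```python
# LS channel estimate H = (X^H X)^{-1} X^H Y computed without the explicit
# inverse, via QR factorization; a generic stand-in for the paper's scheme.
import numpy as np
from scipy.linalg import solve_triangular

nt, nr, npilot = 4, 4, 16
X = np.random.randn(npilot, nt) + 1j * np.random.randn(npilot, nt)  # pilots
H_true = (np.random.randn(nt, nr) + 1j * np.random.randn(nt, nr)) / np.sqrt(2)
Y = X @ H_true + 0.01 * np.random.randn(npilot, nr)

Q, R = np.linalg.qr(X)                        # X = Q R, R upper triangular
H_est = solve_triangular(R, Q.conj().T @ Y)   # back-substitution only
print(np.linalg.norm(H_est - H_true))
```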

  18. Holographic implementation of a binary associative memory for improved recognition

    NASA Astrophysics Data System (ADS)

    Bandyopadhyay, Somnath; Ghosh, Ajay; Datta, Asit K.

    1998-03-01

    Neural network associative memory has found wide applications in pattern recognition techniques. We propose an associative memory model for binary character recognition. The interconnection strengths of the memory are binary valued. The concept of sparse coding is used to enhance the storage efficiency of the model. The question of imposed preconditioning of pattern vectors, which is inherent in a sparsely coded conventional memory, is eliminated by using a multistep correlation technique, and the ability of correct association is enhanced in real-time applications. A potential optoelectronic implementation of the proposed associative memory is also described. Learning and recall are possible by using digital optical matrix-vector multiplication, where full use of the parallelism and connectivity of optics is made. A hologram is used in the experiment as a long-term memory (LTM) for storing all input information. The short-term memory, or the interconnection weight matrix required during the recall process, is configured by retrieving the necessary information from the holographic LTM.
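
    In the spirit of the described model (binary weights, sparse coding, thresholded recall), the sketch below implements a Willshaw-style associative memory; the multistep correlation and the optical/holographic stages are not modelled.

```python
# Willshaw-style binary associative memory: clipped Hebbian outer products
# for storage, thresholded matrix-vector product for recall.
import numpy as np

def store(pairs, nx, ny):
    W = np.zeros((ny, nx), dtype=np.uint8)
    for x, y in pairs:                          # binary Hebbian learning,
        W |= np.outer(y, x).astype(np.uint8)    # weights clipped to {0, 1}
    return W

def recall(W, x):
    s = W @ x                                   # (optical) matrix-vector product
    return (s >= x.sum()).astype(np.uint8)      # threshold at input activity

x1 = np.array([1, 0, 1, 0, 0, 0], dtype=np.uint8)  # sparsely coded cue
y1 = np.array([0, 1, 0, 1], dtype=np.uint8)
W = store([(x1, y1)], nx=6, ny=4)
print(recall(W, x1))                            # recovers y1
```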

  19. Qualitative and quantitative comparison of geostatistical techniques of porosity prediction from the seismic and logging data: a case study from the Blackfoot Field, Alberta, Canada

    NASA Astrophysics Data System (ADS)

    Maurya, S. P.; Singh, K. H.; Singh, N. P.

    2018-05-01

    In the present study, three recently developed geostatistical methods, single attribute analysis, multi-attribute analysis and the probabilistic neural network algorithm, have been used to predict porosity in the inter-well region of the Blackfoot field, Alberta, Canada. These techniques make use of seismic attributes generated by model-based inversion and colored inversion techniques. The principal objective of the study is to find the suitable combination of seismic inversion and geostatistical techniques to predict porosity and to identify prospective zones in the 3D seismic volume. The porosity estimated from these geostatistical approaches is corroborated with the well log porosity. The results suggest that all three implemented geostatistical methods are efficient and reliable for predicting porosity, but the multi-attribute and probabilistic neural network analyses provide more accurate and higher resolution porosity sections. A low impedance (6000-8000 m/s g/cc) and high porosity (> 15%) zone is interpreted from the inverted impedance and porosity sections, respectively, in the 1060-1075 ms time interval and is characterized as a reservoir. The qualitative and quantitative results demonstrate that, of all the employed geostatistical methods, the probabilistic neural network along with model-based inversion is the most efficient method for predicting porosity in the inter-well region.

  20. Scan line graphics generation on the massively parallel processor

    NASA Technical Reports Server (NTRS)

    Dorband, John E.

    1988-01-01

    Described here is how researchers implemented a scan line graphics generation algorithm on the Massively Parallel Processor (MPP). Pixels are computed in parallel and their results are applied to the Z buffer in large groups. To perform pixel value calculations, facilitate load balancing across the processors and apply the results to the Z buffer efficiently in parallel requires special virtual routing (sort computation) techniques developed by the author especially for use on single-instruction multiple-data (SIMD) architectures.

  1. Ultrafine-Grained Pure Ti Processed by New SPD Scheme Combining Drawing with Shear

    NASA Astrophysics Data System (ADS)

    Raab, A. G.; Bobruk, E. V.; Raab, G. I.

    2018-05-01

    The paper presents the results of studies and analysis of a promising severe plastic deformation scheme that implements the conditions of non-monotonous impact during shear drawing of long-length bulk metal materials. The paper demonstrates the efficiency of the proposed severe plastic deformation technique in forming a gradient ultrafine-grained state in rod-shaped billets, using commercially pure Ti as an example, and discusses its further development for future industrial applications.

  2. A Multiscale Software Tool for Field/Circuit Co-Simulation

    DTIC Science & Technology

    2011-12-15

    ...times more efficient than FDTD for such a problem in 3D. The techniques in class (c) above include the discontinuous Galerkin method and multidomain...implements a finite-differential-time-domain method on single field propagation in a 3D space. We consider a cavity model which includes two electric

  3. Energy conservation in housing design using solar energy, mechanical system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bakir, N.M.W.

    1985-01-01

    This paper presents the first experimental full-scale house built by the Solar Energy Research Center of Baghdad to be heated and cooled by solar energy. The various architectural and environmental considerations which entered into the design process are discussed, as well as the range of passive techniques examined for their compatibility with the local climate and their ability to optimize the energy efficiency of the house. The mechanical systems which were ultimately implemented are described.

  4. Towards the clinical implementation of iterative low-dose cone-beam CT reconstruction in image-guided radiation therapy: Cone/ring artifact correction and multiple GPU implementation

    PubMed Central

    Yan, Hao; Wang, Xiaoyu; Shi, Feng; Bai, Ti; Folkerts, Michael; Cervino, Laura; Jiang, Steve B.; Jia, Xun

    2014-01-01

    Purpose: Compressed sensing (CS)-based iterative reconstruction (IR) techniques are able to reconstruct cone-beam CT (CBCT) images from undersampled noisy data, allowing for imaging dose reduction. However, there are a few practical concerns preventing the clinical implementation of these techniques. On the image quality side, data truncation along the superior–inferior direction under the cone-beam geometry produces severe cone artifacts in the reconstructed images. Ring artifacts are also seen in the half-fan scan mode. On the reconstruction efficiency side, the long computation time hinders clinical use in image-guided radiation therapy (IGRT). Methods: Image quality improvement methods are proposed to mitigate the cone and ring image artifacts in IR. The basic idea is to use weighting factors in the IR data fidelity term to improve projection data consistency with the reconstructed volume. In order to improve the computational efficiency, a multiple graphics processing units (GPUs)-based CS-IR system was developed. The parallelization scheme, detailed analyses of computation time at each step, their relationship with image resolution, and the acceleration factors were studied. The whole system was evaluated in various phantom and patient cases. Results: Ring artifacts can be mitigated by properly designing a weighting factor as a function of the spatial location on the detector. As for the cone artifact, without applying a correction method, it contaminated 13 out of 80 slices in a head-neck case (full-fan). Contamination was even more severe in a pelvis case under half-fan mode, where 36 out of 80 slices were affected, leading to poorer soft tissue delineation and reduced superior–inferior coverage. The proposed method effectively corrects those contaminated slices with mean intensity differences compared to FDK results decreasing from ∼497 and ∼293 HU to ∼39 and ∼27 HU for the full-fan and half-fan cases, respectively. In terms of efficiency boost, an overall 3.1 × speedup factor has been achieved with four GPU cards compared to a single GPU-based reconstruction. The total computation time is ∼30 s for typical clinical cases. Conclusions: The authors have developed a low-dose CBCT IR system for IGRT. By incorporating data consistency-based weighting factors in the IR model, cone/ring artifacts can be mitigated. A boost in computational efficiency is achieved by multi-GPU implementation. PMID:25370645

  5. Towards the clinical implementation of iterative low-dose cone-beam CT reconstruction in image-guided radiation therapy: Cone/ring artifact correction and multiple GPU implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, Hao, E-mail: steve.jiang@utsouthwestern.edu, E-mail: xun.jia@utsouthwestern.edu; Shi, Feng; Jiang, Steve B.

    Purpose: Compressed sensing (CS)-based iterative reconstruction (IR) techniques are able to reconstruct cone-beam CT (CBCT) images from undersampled noisy data, allowing for imaging dose reduction. However, there are a few practical concerns preventing the clinical implementation of these techniques. On the image quality side, data truncation along the superior–inferior direction under the cone-beam geometry produces severe cone artifacts in the reconstructed images. Ring artifacts are also seen in the half-fan scan mode. On the reconstruction efficiency side, the long computation time hinders clinical use in image-guided radiation therapy (IGRT). Methods: Image quality improvement methods are proposed to mitigate the cone and ring image artifacts in IR. The basic idea is to use weighting factors in the IR data fidelity term to improve projection data consistency with the reconstructed volume. In order to improve the computational efficiency, a multiple graphics processing units (GPUs)-based CS-IR system was developed. The parallelization scheme, detailed analyses of computation time at each step, their relationship with image resolution, and the acceleration factors were studied. The whole system was evaluated in various phantom and patient cases. Results: Ring artifacts can be mitigated by properly designing a weighting factor as a function of the spatial location on the detector. As for the cone artifact, without applying a correction method, it contaminated 13 out of 80 slices in a head-neck case (full-fan). Contamination was even more severe in a pelvis case under half-fan mode, where 36 out of 80 slices were affected, leading to poorer soft tissue delineation and reduced superior–inferior coverage. The proposed method effectively corrects those contaminated slices with mean intensity differences compared to FDK results decreasing from ∼497 and ∼293 HU to ∼39 and ∼27 HU for the full-fan and half-fan cases, respectively. In terms of efficiency boost, an overall 3.1 × speedup factor has been achieved with four GPU cards compared to a single GPU-based reconstruction. The total computation time is ∼30 s for typical clinical cases. Conclusions: The authors have developed a low-dose CBCT IR system for IGRT. By incorporating data consistency-based weighting factors in the IR model, cone/ring artifacts can be mitigated. A boost in computational efficiency is achieved by multi-GPU implementation.

  6. Evaluation of Extraction and Degradation Methods to Obtain Chickpeasaponin B1 from Chickpea (Cicer arietinum L.).

    PubMed

    Cheng, Kun; Gao, Hua; Wang, Rong-Rong; Liu, Yang; Hou, Yu-Xue; Liu, Xiao-Hong; Liu, Kun; Wang, Wei

    2017-02-21

    The objective of this research is to implement extraction and degradation methods for obtaining 3-O-[α-L-rhamnopyranosyl-(1→2)-β-D-galactopyranosyl] soyasapogenol B (chickpeasaponin B1) from chickpea. The effects of microwave-assisted extraction (MAE) processing parameters, such as ethanol concentration, solvent/solid ratio, extraction temperature, microwave irradiation power, and irradiation time, were evaluated. Using 1 g of material with 8 mL of 70% aqueous ethanol and an extraction time of 10 min at 70 °C under an irradiation power of 400 W provided optimal extraction conditions. Compared with the conventional extraction techniques, including heat reflux extraction (HRE), Soxhlet extraction (SE), and ultrasonic extraction (UE), MAE produced higher extraction efficiency in a shorter extraction time. DDMP (2,3-dihydro-2,5-dihydroxy-6-methyl-4H-pyran-4-one) saponins can be degraded to the structurally stable saponin B by the loss of the DDMP group. The influence of pH and the concentration of potassium hydroxide on the transformation efficiency of the target compound was investigated. A solution of 0.25 M potassium hydroxide in 75% aqueous ethanol was suitable for converting the corresponding DDMP saponin of chickpeasaponin B1. The combined implementation of the MAE technique and the alkaline hydrolysis method for preparing chickpeasaponin B1 provides a convenient technology for future applications.

  7. Generation of Transgenic Pigs by Cytoplasmic Injection of piggyBac Transposase-Based pmGENIE-3 Plasmids

    PubMed Central

    Li, Zicong; Zeng, Fang; Meng, Fanming; Xu, Zhiqian; Zhang, Xianwei; Huang, Xiaoling; Tang, Fei; Gao, Wenchao; Shi, Junsong; He, Xiaoyan; Liu, Dewu; Wang, Chong; Urschitz, Johann; Moisyadi, Stefan; Wu, Zhenfang

    2014-01-01

    The process of transgenesis involves the introduction of a foreign gene, the transgene, into the genome of an animal. Gene transfer by pronuclear microinjection (PNI) is the predominant method used to produce transgenic animals. However, this technique does not always result in germline transgenic offspring and has a low success rate for livestock. Alternate approaches, such as somatic cell nuclear transfer using transgenic fibroblasts, do not show an increase in efficiency compared to PNI, while viral-based transgenesis is hampered by issues regarding transgene size and biosafety considerations. We have recently described highly successful transgenesis experiments with mice using a piggyBac transposase-based vector, pmhyGENIE-3. This construct, a single and self-inactivating plasmid, contains all the transpositional elements necessary for successful gene transfer. In this series of experiments, our laboratories have implemented cytoplasmic injection (CTI) of pmGENIE-3 for transgene delivery into in vivo-fertilized pig zygotes. More than 8.00% of the injected embryos developed into transgenic animals containing monogenic and often single transgenes in their genome. However, the CTI technique was unsuccessful during the injection of in vitro-fertilized pig zygotes. In summary, here we have described a method that is not only easy to implement, but also demonstrated the highest efficiency rate for nonviral livestock transgenesis. PMID:24671876

  8. A Fixed Point VHDL Component Library for a High Efficiency Reconfigurable Radio Design Methodology

    NASA Technical Reports Server (NTRS)

    Hoy, Scott D.; Figueiredo, Marco A.

    2006-01-01

    Advances in Field Programmable Gate Array (FPGA) technologies enable the implementation of reconfigurable radio systems for both ground and space applications. The development of such systems challenges the current design paradigms and requires more robust design techniques to meet the increased system complexity. Among these techniques is the development of component libraries to reduce design cycle time and to improve design verification, consequently increasing the overall efficiency of the project development process while increasing design success rates and reducing engineering costs. This paper describes the reconfigurable radio component library developed at the Software Defined Radio Applications Research Center (SARC) at Goddard Space Flight Center (GSFC) Microwave and Communications Branch (Code 567). The library is a set of fixed-point VHDL components that link the Digital Signal Processing (DSP) simulation environment with the FPGA design tools. This provides a direct synthesis path based on the latest developments of the VHDL tools as proposed by the IEEE VHDL 2004 effort, which allows for the simulation and synthesis of fixed-point math operations while maintaining bit and cycle accuracy. The VHDL Fixed Point Reconfigurable Radio Component library does not require the use of the FPGA vendor-specific automatic component generators and provides a generic path from high-level DSP simulations implemented in Mathworks Simulink to any FPGA device. Access to the components' synthesizable source code provides full design verification capability.

  9. A look-up-table digital predistortion technique for high-voltage power amplifiers in ultrasonic applications.

    PubMed

    Gao, Zheng; Gui, Ping

    2012-07-01

    In this paper, we present a digital predistortion technique to improve the linearity and power efficiency of a high-voltage class-AB power amplifier (PA) for ultrasound transmitters. The system is composed of a digital-to-analog converter (DAC), an analog-to-digital converter (ADC), and a field-programmable gate array (FPGA) in which the digital predistortion (DPD) algorithm is implemented. The DPD algorithm updates the error, which is the difference between the ideal signal and the attenuated distorted output signal, in the look-up table (LUT) memory during each cycle of a sinusoidal signal using the least-mean-square (LMS) algorithm. On the next signal cycle, the error data are used to equalize the signal with negative harmonic components to cancel the amplifier's nonlinear response. The algorithm also includes a linear interpolation method applied to the windowed sinusoidal signals for the B-mode and Doppler modes. The measurement test bench uses an arbitrary function generator as the DAC to generate the input signal, an oscilloscope as the ADC to capture the output waveform, and software to implement the DPD algorithm. The measurement results show that the proposed system is able to reduce the second-order harmonic distortion (HD2) by 20 dB and the third-order harmonic distortion (HD3) by 14.5 dB, while at the same time improving the power efficiency by 18%.
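
    The control loop can be sketched in a few lines: index the LUT by the ideal signal level, drive a model PA with the corrected signal, and push the per-sample error back into the addressed entries with an LMS step. The PA model, gain, and step size below are invented for the sketch.

```python
# Conceptual LUT predistortion loop with an LMS update, mirroring the
# described system at block-diagram level only.
import numpy as np

LUT_SIZE, mu, gain = 64, 0.3, 1.0
lut = np.zeros(LUT_SIZE)                      # additive correction per level

def pa(x):                                    # toy class-AB nonlinearity (assumed)
    return x - 0.15 * x**3

t = np.linspace(0, 1, 512, endpoint=False)
ideal = np.sin(2 * np.pi * 5 * t)             # one cycle of the test signal

idx = np.minimum(((ideal + 1) / 2 * LUT_SIZE).astype(int), LUT_SIZE - 1)
for cycle in range(50):                       # one LUT refresh per signal cycle
    out = pa(ideal + lut[idx])                # predistorted drive into the PA
    err = ideal - out / gain                  # ideal minus attenuated PA output
    for i, e in zip(idx, err):                # LMS update of addressed entries
        lut[i] += mu * e

print(np.abs(ideal - pa(ideal)).max())            # distortion before DPD
print(np.abs(ideal - pa(ideal + lut[idx])).max()) # distortion after DPD
```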

  10. A New Approach for Progressive Dense Reconstruction from Consecutive Images Based on Prior Low-Density 3D Point Clouds

    NASA Astrophysics Data System (ADS)

    Lari, Z.; El-Sheimy, N.

    2017-09-01

    In recent years, the increasing incidence of climate-related disasters has tremendously affected our environment. In order to effectively manage and reduce the dramatic impacts of such events, the development of timely disaster management plans is essential. Since these disasters are spatial phenomena, timely provision of geospatial information is crucial for effective development of response and management plans. Due to inaccessibility of the affected areas and the limited budget of first responders, timely acquisition of the required geospatial data for these applications is usually possible only using low-cost imaging and georeferencing sensors mounted on unmanned platforms. Despite rapid collection of the required data using these systems, available processing techniques are not yet capable of delivering geospatial information to responders and decision makers in a timely manner. To address this issue, this paper introduces a new technique for dense 3D reconstruction of the affected scenes which can deliver and improve the needed geospatial information incrementally. This approach is implemented based on prior 3D knowledge of the scene and employs computationally efficient 2D triangulation, feature descriptor, feature matching and point verification techniques to optimize and speed up the 3D dense scene reconstruction procedure. To verify the feasibility and computational efficiency of the proposed approach, an experiment using a set of consecutive images collected onboard a UAV platform and prior low-density airborne laser scanning data over the same area is conducted and step-by-step results are provided. A comparative analysis of the proposed approach and an available image-based dense reconstruction technique is also conducted to demonstrate the computational efficiency and competency of this technique in delivering geospatial information with pre-specified accuracy.
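
    Since the abstract names the fast descriptor, matching and verification stages without detail, the sketch below shows one conventional realization using OpenCV's ORB binary descriptor and a cross-checked Hamming matcher; the paper's actual descriptor, matcher settings, and the image file names are assumptions.

```python
import cv2

# Feature extraction and matching between two consecutive UAV frames.
img1 = cv2.imread("frame_000.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_001.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming-distance brute-force matcher; cross-checking acts as a cheap
# point-verification step before any geometric filtering.
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} verified correspondences between consecutive frames")
```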

  11. Efficient computational model for classification of protein localization images using Extended Threshold Adjacency Statistics and Support Vector Machines.

    PubMed

    Tahir, Muhammad; Jan, Bismillah; Hayat, Maqsood; Shah, Shakir Ullah; Amin, Muhammad

    2018-04-01

    Discriminative and informative feature extraction is the core requirement for accurate and efficient classification of protein subcellular localization images so that drug development could be more effective. The objective of this paper is to propose a novel modification in the Threshold Adjacency Statistics technique and enhance its discriminative power. In this work, we utilized Threshold Adjacency Statistics from a novel perspective to enhance its discrimination power and efficiency. In this connection, we utilized seven threshold ranges to produce seven distinct feature spaces, which are then used to train seven SVMs. The final prediction is obtained through a majority voting scheme. The proposed ETAS-SubLoc system is tested on two benchmark datasets using the 5-fold cross-validation technique. We observed that our novel utilization of the TAS technique improved the discriminative power of the classifier. The ETAS-SubLoc system achieved 99.2% accuracy, 99.3% sensitivity and 99.1% specificity for the Endogenous dataset, outperforming the classical Threshold Adjacency Statistics technique. Similarly, 91.8% accuracy, 96.3% sensitivity and 91.6% specificity values were achieved for the Transfected dataset. Simulation results validated the effectiveness of ETAS-SubLoc, which provides superior prediction performance compared to the existing technique. The proposed methodology aims at providing support to the pharmaceutical industry as well as the research community towards better drug design and innovation in the fields of bioinformatics and computational biology. The implementation code for replicating the experiments presented in this paper is available at: https://drive.google.com/file/d/0B7IyGPObWbSqRTRMcXI2bG5CZWs/view?usp=sharing. Copyright © 2018 Elsevier B.V. All rights reserved.
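
    The ensemble logic (seven threshold ranges, seven SVMs, majority vote) can be sketched directly. The toy feature extractor, threshold ranges and random data below are placeholders, not the published ETAS-SubLoc pipeline.

```python
import numpy as np
from sklearn.svm import SVC

def tas_features(images, lo, hi):
    """Toy threshold-adjacency-style features for one threshold range."""
    feats = []
    for img in images:
        mask = (img >= lo) & (img <= hi)
        feats.append([mask.mean(), img[mask].mean() if mask.any() else 0.0])
    return np.array(feats)

rng = np.random.default_rng(0)
X_img = rng.random((60, 32, 32))           # stand-in localization images
y = rng.integers(0, 2, 60)                 # stand-in class labels

ranges = [(0.1 * k, 0.1 * k + 0.4) for k in range(7)]   # seven assumed ranges
clfs = [SVC().fit(tas_features(X_img[:40], lo, hi), y[:40]) for lo, hi in ranges]

# Each SVM predicts in its own feature space; the final label is the majority.
votes = np.array([clf.predict(tas_features(X_img[40:], lo, hi))
                  for clf, (lo, hi) in zip(clfs, ranges)])
pred = (votes.sum(axis=0) > len(clfs) / 2).astype(int)
print("ensemble accuracy on random data:", (pred == y[40:]).mean())
```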

  12. Sawja: Static Analysis Workshop for Java

    NASA Astrophysics Data System (ADS)

    Hubert, Laurent; Barré, Nicolas; Besson, Frédéric; Demange, Delphine; Jensen, Thomas; Monfort, Vincent; Pichardie, David; Turpin, Tiphaine

    Static analysis is a powerful technique for automatic verification of programs but raises major engineering challenges when developing a full-fledged analyzer for a realistic language such as Java. Efficiency and precision of such a tool rely partly on low level components which only depend on the syntactic structure of the language and therefore should not be redesigned for each implementation of a new static analysis. This paper describes the Sawja library: a static analysis workshop fully compliant with Java 6 which provides OCaml modules for efficiently manipulating Java bytecode programs. We present the main features of the library, including i) efficient functional data-structures for representing a program with implicit sharing and lazy parsing, ii) an intermediate stack-less representation, and iii) fast computation and manipulation of complete programs. We provide experimental evaluations of the different features with respect to time, memory and precision.

  13. On Improving Efficiency of Differential Evolution for Aerodynamic Shape Optimization Applications

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.

    2004-01-01

    Differential Evolution (DE) is a simple and robust evolutionary strategy that has been proven effective in determining the global optimum for several difficult optimization problems. Although DE offers several advantages over traditional optimization approaches, its use in applications such as aerodynamic shape optimization where the objective function evaluations are computationally expensive is limited by the large number of function evaluations often required. In this paper various approaches for improving the efficiency of DE are reviewed and discussed. These approaches are implemented in a DE-based aerodynamic shape optimization method that uses a Navier-Stokes solver for the objective function evaluations. Parallelization techniques on distributed computers are used to reduce turnaround times. Results are presented for the inverse design of a turbine airfoil. The efficiency improvements achieved by the different approaches are evaluated and compared.
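
    For readers unfamiliar with the baseline being improved, a bare-bones DE/rand/1/bin iteration looks roughly as follows; a cheap analytic objective stands in for the expensive Navier-Stokes evaluation, and the population size and the control parameters F and CR are illustrative choices.

```python
import numpy as np

def de(obj, bounds, np_=20, F=0.8, CR=0.9, gens=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    pop = lo + rng.random((np_, len(lo))) * (hi - lo)
    cost = np.array([obj(x) for x in pop])
    for _ in range(gens):
        for i in range(np_):
            # Pick three distinct population members other than i.
            a, b, c = pop[rng.choice([j for j in range(np_) if j != i],
                                     3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)        # mutation
            cross = rng.random(len(lo)) < CR                 # binomial crossover
            cross[rng.integers(len(lo))] = True              # force one gene
            trial = np.where(cross, mutant, pop[i])
            f = obj(trial)
            if f < cost[i]:                                  # greedy selection
                pop[i], cost[i] = trial, f
    return pop[cost.argmin()], cost.min()

best, fbest = de(lambda x: np.sum(x**2), np.array([[-5.0, 5.0]] * 4))
print(best, fbest)
```

    In the paper's setting the inner objective evaluations are the expensive part, which is why they are farmed out to distributed computers.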

  14. Hydration Free Energy from Orthogonal Space Random Walk and Polarizable Force Field.

    PubMed

    Abella, Jayvee R; Cheng, Sara Y; Wang, Qiantao; Yang, Wei; Ren, Pengyu

    2014-07-08

    The orthogonal space random walk (OSRW) method has shown enhanced sampling efficiency in free energy calculations in previous studies. In this study, the implementation of OSRW with the polarizable AMOEBA force field in the TINKER molecular modeling software package is discussed and subsequently applied to the hydration free energy calculation of 20 small organic molecules, of which 15 are positively charged and 5 are neutral. The calculated hydration free energies of these molecules are compared with the results obtained from the Bennett acceptance ratio (BAR) method using the same force field, and overall an excellent agreement is obtained. The convergence and the efficiency of OSRW are also discussed and compared with BAR. Combining enhanced sampling techniques such as OSRW with polarizable force fields is very promising for achieving both accuracy and efficiency in general free energy calculations.

  15. Impedance matching wireless power transmission system for biomedical devices.

    PubMed

    Lum, Kin Yun; Lindén, Maria; Tan, Tian Swee

    2015-01-01

    For medical applications, the efficiency and transmission distance of wireless power transfer (WPT) are always the main concerns. Research has shown that impedance matching is one of the critical factors in dealing with this problem. However, little prior work has taken both the source and load sides into consideration. Matching on both sides is crucial for achieving optimum overall performance, and the present work proposes a circuit model analysis for design and implementation. The proposed technique was validated against experiment and software simulation. Results showed an improvement in transmission distance of up to 6 times, and efficiency at this transmission distance improved by up to 7 times compared to the impedance-mismatched system. The system demonstrated a near-constant transfer efficiency over an operating range of 2 cm-12 cm.

  16. Efficient statistical mapping of avian count data

    USGS Publications Warehouse

    Royle, J. Andrew; Wikle, C.K.

    2005-01-01

    We develop a spatial modeling framework for count data that is efficient to implement in high-dimensional prediction problems. We consider spectral parameterizations for the spatially varying mean of a Poisson model. The spectral parameterization of the spatial process is very computationally efficient, enabling effective estimation and prediction in large problems using Markov chain Monte Carlo techniques. We apply this model to creating avian relative abundance maps from North American Breeding Bird Survey (BBS) data. Variation in the ability of observers to count birds is modeled as spatially independent noise, resulting in over-dispersion relative to the Poisson assumption. This approach represents an improvement over existing approaches used for spatial modeling of BBS data, which are either inefficient for continental-scale modeling and prediction or fail to accommodate important distributional features of count data, thus leading to inaccurate accounting of prediction uncertainty.
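
    The essence of the spectral parameterization is that the log of the Poisson mean is expanded in a small Fourier basis, so estimation works in a low-dimensional coefficient space. The sketch below fits such a model by maximum likelihood on synthetic data; the paper instead uses MCMC and adds an observer-noise term, so this is only the skeleton of the idea.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
s = rng.random((400, 2))                         # site coordinates in [0, 1]^2

# Low-dimensional Fourier (spectral) basis for the log-mean surface.
freqs = [(i, j) for i in range(3) for j in range(3) if (i, j) != (0, 0)]
cols = [np.ones(len(s))]                         # intercept
for i, j in freqs:
    phase = 2 * np.pi * (i * s[:, 0] + j * s[:, 1])
    cols += [np.cos(phase), np.sin(phase)]
X = np.column_stack(cols)

beta_true = rng.normal(0.0, 0.3, X.shape[1])
y = rng.poisson(np.exp(X @ beta_true))           # synthetic counts

def nll(beta):                                   # Poisson negative log-likelihood
    eta = X @ beta
    return np.sum(np.exp(eta) - y * eta)

def grad(beta):
    return X.T @ (np.exp(X @ beta) - y)

fit = minimize(nll, np.zeros(X.shape[1]), jac=grad, method="L-BFGS-B")
print("max abs coefficient error:", np.abs(fit.x - beta_true).max())
```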

  17. Directed Incremental Symbolic Execution

    NASA Technical Reports Server (NTRS)

    Person, Suzette; Yang, Guowei; Rungta, Neha; Khurshid, Sarfraz

    2011-01-01

    The last few years have seen a resurgence of interest in the use of symbolic execution -- a program analysis technique developed more than three decades ago to analyze program execution paths. Scaling symbolic execution and other path-sensitive analysis techniques to large systems remains challenging despite recent algorithmic and technological advances. An alternative to solving the problem of scalability is to reduce the scope of the analysis. One approach that is widely studied in the context of regression analysis is to analyze the differences between two related program versions. While such an approach is intuitive in theory, finding efficient and precise ways to identify program differences, and characterize their effects on how the program executes has proved challenging in practice. In this paper, we present Directed Incremental Symbolic Execution (DiSE), a novel technique for detecting and characterizing the effects of program changes. The novelty of DiSE is to combine the efficiencies of static analysis techniques to compute program difference information with the precision of symbolic execution to explore program execution paths and generate path conditions affected by the differences. DiSE is a complementary technique to other reduction or bounding techniques developed to improve symbolic execution. Furthermore, DiSE does not require analysis results to be carried forward as the software evolves -- only the source code for two related program versions is required. A case-study of our implementation of DiSE illustrates its effectiveness at detecting and characterizing the effects of program changes.

  18. Nektar++: An open-source spectral/hp element framework

    NASA Astrophysics Data System (ADS)

    Cantwell, C. D.; Moxey, D.; Comerford, A.; Bolis, A.; Rocco, G.; Mengaldo, G.; De Grazia, D.; Yakovlev, S.; Lombard, J.-E.; Ekelschot, D.; Jordi, B.; Xu, H.; Mohamied, Y.; Eskilsson, C.; Nelson, B.; Vos, P.; Biotto, C.; Kirby, R. M.; Sherwin, S. J.

    2015-07-01

    Nektar++ is an open-source software framework designed to support the development of high-performance scalable solvers for partial differential equations using the spectral/hp element method. High-order methods are gaining prominence in several engineering and biomedical applications due to their improved accuracy over low-order techniques at reduced computational cost for a given number of degrees of freedom. However, their proliferation is often limited by their complexity, which makes these methods challenging to implement and use. Nektar++ is an initiative to overcome this limitation by encapsulating the mathematical complexities of the underlying method within an efficient C++ framework, making the techniques more accessible to the broader scientific and industrial communities. The software supports a variety of discretisation techniques and implementation strategies, supporting methods research as well as application-focused computation, and the multi-layered structure of the framework allows the user to embrace as much or as little of the complexity as they need. The libraries capture the mathematical constructs of spectral/hp element methods, while the associated collection of pre-written PDE solvers provides out-of-the-box application-level functionality and a template for users who wish to develop solutions for addressing questions in their own scientific domains.

  19. Unconventional methods of imaging: computational microscopy and compact implementations

    NASA Astrophysics Data System (ADS)

    McLeod, Euan; Ozcan, Aydogan

    2016-07-01

    In the past two decades or so, there has been a renaissance of optical microscopy research and development. Much work has been done in an effort to improve the resolution and sensitivity of microscopes, while at the same time to introduce new imaging modalities, and make existing imaging systems more efficient and more accessible. In this review, we look at two particular aspects of this renaissance: computational imaging techniques and compact imaging platforms. In many cases, these aspects go hand-in-hand because the use of computational techniques can simplify the demands placed on optical hardware in obtaining a desired imaging performance. In the first main section, we cover lens-based computational imaging, in particular, light-field microscopy, structured illumination, synthetic aperture, Fourier ptychography, and compressive imaging. In the second main section, we review lensfree holographic on-chip imaging, including how images are reconstructed, phase recovery techniques, and integration with smart substrates for more advanced imaging tasks. In the third main section we describe how these and other microscopy modalities have been implemented in compact and field-portable devices, often based around smartphones. Finally, we conclude with some comments about opportunities and demand for better results, and where we believe the field is heading.

  20. Development of a CFD Code for Analysis of Fluid Dynamic Forces in Seals

    NASA Technical Reports Server (NTRS)

    Athavale, Mahesh M.; Przekwas, Andrzej J.; Singhal, Ashok K.

    1991-01-01

    The aim is to develop a 3-D computational fluid dynamics (CFD) code for the analysis of fluid flow in cylindrical seals and evaluation of the dynamic forces on the seals. This code is expected to serve as a scientific tool for detailed flow analysis as well as a check for the accuracy of the 2D industrial codes. The features necessary in the CFD code are outlined. The initial focus was to develop or modify and implement new techniques and physical models. These include collocated grid formulation, rotating coordinate frames and moving grid formulation. Other advanced numerical techniques include higher order spatial and temporal differencing and an efficient linear equation solver. These techniques were implemented in a 2D flow solver for initial testing. Several benchmark test cases were computed using the 2D code, and the results of these were compared to analytical solutions or experimental data to check the accuracy. Tests presented here include planar wedge flow, flow due to an enclosed rotor, and flow in a 2D seal with a whirling rotor. Comparisons between numerical and experimental results for an annular seal and a 7-cavity labyrinth seal are also included.

  1. Investigation of an intelligent system for fiber optic-based epidural anesthesia.

    PubMed

    Gong, Cihun-Siyong Alex; Ting, Chien-Kun

    2014-01-01

    Although there have been many approaches to assist anesthesiologists in performing regional anesthesia, none of the prior approaches can be considered an unrestricted technique. The lack of a design with sufficient sensitivity to the targets of interest and automatic indication of needle placement has hindered all-around, objective field use. In addition, a lightweight, easy-to-use realization is the key to portability. This paper reports on an intelligent system for epidural space identification using an optical technique, with particular emphasis on efficiency-enhancing aspects. Statistical algorithms, implemented in a dedicated field-programmable hardware platform along with an on-platform application-specific integrated chip and used to support real-time decision making during needle advancement, are discussed together with the feedback results. The clinicians' perspective on improving the success rate of the technique is explained in detail. Our study demonstrates not only that the improved system is able to behave as if it were a skillful anesthesiologist, but also that it has the potential to bring promising assistance into clinical use under varied conditions and with small sample sizes, provided that several concerns are addressed.

  2. Spike: Artificial intelligence scheduling for Hubble space telescope

    NASA Technical Reports Server (NTRS)

    Johnston, Mark; Miller, Glenn; Sponsler, Jeff; Vick, Shon; Jackson, Robert

    1990-01-01

    Efficient utilization of spacecraft resources is essential, but the accompanying scheduling problems are often computationally intractable and are difficult to approximate because of the presence of numerous interacting constraints. Artificial intelligence techniques were applied to the scheduling of the NASA/ESA Hubble Space Telescope (HST). This presents a particularly challenging problem since a yearlong observing program can contain some tens of thousands of exposures which are subject to a large number of scientific, operational, spacecraft, and environmental constraints. New techniques were developed for machine reasoning about scheduling constraints and goals, especially in cases where uncertainty is an important scheduling consideration and where resolving conflicts among conflicting preferences is essential. These techniques were utilized in a set of workstation-based scheduling tools (Spike) for HST. Graphical displays of activities, constraints, and schedules are an important feature of the system. High-level scheduling strategies using both rule-based and neural network approaches were developed. While the specific constraints implemented are those most relevant to HST, the framework developed is far more general and could easily handle other kinds of scheduling problems. The concept and implementation of the Spike system are described along with some experiments in adapting Spike to other spacecraft scheduling domains.

  3. Exploiting the cannibalistic traits of Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Collins, O.

    1993-01-01

    In Reed-Solomon codes and all other maximum distance separable codes, there is an intrinsic relationship between the size of the symbols in a codeword and the length of the codeword. Increasing the number of symbols in a codeword to improve the efficiency of the coding system thus requires using a larger set of symbols. However, long Reed-Solomon codes are difficult to implement and many communications or storage systems cannot easily accommodate an increased symbol size, e.g., M-ary frequency shift keying (FSK) and photon-counting pulse-position modulation demand a fixed symbol size. A technique for sharing redundancy among many different Reed-Solomon codewords to achieve the efficiency attainable in long Reed-Solomon codes without increasing the symbol size is described. Techniques both for calculating the performance of these new codes and for determining their encoder and decoder complexities are presented. These complexities are usually found to be substantially lower than those of conventional Reed-Solomon codes of similar performance.

  4. Applying the metro map to software development management

    NASA Astrophysics Data System (ADS)

    Aguirregoitia, Amaia; Dolado, J. Javier; Presedo, Concepción

    2010-01-01

    This paper presents MetroMap, a new graphical representation model for controlling and managing the software development process. MetroMap uses metaphors and visual representation techniques to explore several key indicators in order to support problem detection and resolution. The resulting visualization addresses diverse management tasks, such as tracking of deviations from the plan, analysis of patterns of failure detection and correction, overall assessment of change management policies, and estimation of product quality. The proposed visualization uses a metro map metaphor along with various interactive techniques to represent information concerning the software development process and to deal efficiently with multivariate visual queries. Finally, the paper shows the implementation of the tool in JavaFX with data from a real project, and the results of testing the tool with this data and users attempting several information retrieval tasks. The conclusion presents the results of analyzing user response time and efficiency using the MetroMap visualization system. The utility of the tool was positively evaluated.

  5. Nonlinear model-order reduction for compressible flow solvers using the Discrete Empirical Interpolation Method

    NASA Astrophysics Data System (ADS)

    Fosas de Pando, Miguel; Schmid, Peter J.; Sipp, Denis

    2016-11-01

    Nonlinear model reduction for large-scale flows is an essential component in many fluid applications such as flow control, optimization, parameter space exploration and statistical analysis. In this article, we generalize the POD-DEIM method, introduced by Chaturantabut & Sorensen [1], to address nonlocal nonlinearities in the equations without loss of performance or efficiency. The nonlinear terms are represented by nested DEIM-approximations using multiple expansion bases based on the Proper Orthogonal Decomposition. These extensions are imperative, for example, for applications of the POD-DEIM method to large-scale compressible flows. The efficient implementation of the presented model-reduction technique follows our earlier work [2] on linearized and adjoint analyses and takes advantage of the modular structure of our compressible flow solver. The efficacy of the nonlinear model-reduction technique is demonstrated on the flow around an airfoil and its acoustic footprint. We obtain an accurate and robust low-dimensional model that captures the main features of the full flow.
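
    The DEIM building block referenced above (Chaturantabut & Sorensen) admits a compact sketch: greedily select interpolation rows from the POD basis of the nonlinear term, so the term need only ever be evaluated at those rows. The random snapshot matrix below is an illustrative stand-in for flow data.

```python
import numpy as np

def deim_indices(U):
    """Greedy DEIM index selection over the columns of a POD basis U."""
    n, m = U.shape
    p = [int(np.argmax(np.abs(U[:, 0])))]
    for l in range(1, m):
        # Interpolate the next basis vector on the current indices ...
        c = np.linalg.solve(U[p, :l], U[p, l])
        r = U[:, l] - U[:, :l] @ c        # ... and take the largest residual
        p.append(int(np.argmax(np.abs(r))))
    return np.array(p)

rng = np.random.default_rng(0)
snaps = rng.standard_normal((500, 40))            # snapshots of a nonlinear term
U, _, _ = np.linalg.svd(snaps, full_matrices=False)
Um = U[:, :10]                                    # POD basis of the nonlinearity
p = deim_indices(Um)

# The nonlinear term is then approximated as f ~ Um @ solve(Um[p], f[p]),
# so f only needs to be evaluated at the len(p) selected entries.
f = snaps[:, 0]
f_deim = Um @ np.linalg.solve(Um[p, :], f[p])
print("indices:", p[:5], " rel. error:",
      np.linalg.norm(f - f_deim) / np.linalg.norm(f))
```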

  6. Scalable Evaluation of Polarization Energy and Associated Forces in Polarizable Molecular Dynamics: II. Towards Massively Parallel Computations using Smooth Particle Mesh Ewald.

    PubMed

    Lagardère, Louis; Lipparini, Filippo; Polack, Étienne; Stamm, Benjamin; Cancès, Éric; Schnieders, Michael; Ren, Pengyu; Maday, Yvon; Piquemal, Jean-Philip

    2014-02-28

    In this paper, we present a scalable and efficient implementation of point dipole-based polarizable force fields for molecular dynamics (MD) simulations with periodic boundary conditions (PBC). The Smooth Particle-Mesh Ewald technique is combined with two optimal iterative strategies, namely, a preconditioned conjugate gradient solver and a Jacobi solver in conjunction with the Direct Inversion in the Iterative Subspace for convergence acceleration, to solve the polarization equations. We show that both solvers exhibit very good parallel performances and overall very competitive timings in an energy-force computation needed to perform a MD step. Various tests on large systems are provided in the context of the polarizable AMOEBA force field as implemented in the newly developed Tinker-HP package, the first implementation to make large-scale experiments with massively parallel PBC point dipole polarizable models possible. We show that using a large number of cores offers a significant acceleration of the overall process involving the iterative methods within the context of SPME and a noticeable improvement of the memory management, giving access to very large systems (hundreds of thousands of atoms) as the algorithm naturally distributes the data on different cores. Coupled with advanced MD techniques, gains ranging from 2 to 3 orders of magnitude in time are now possible compared to non-optimized, sequential implementations, giving new directions for polarizable molecular dynamics in periodic boundary conditions using massively parallel implementations.
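
    Of the two iterative strategies named, the preconditioned conjugate gradient admits a short sketch. Below, a random symmetric positive-definite matrix stands in for the dipole interaction matrix and a diagonal (Jacobi-like) preconditioner is used; the real solver applies the matrix-vector product via SPME rather than forming the matrix.

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-8, maxit=200):
    """Preconditioned conjugate gradient for SPD A with a diagonal preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r                       # apply the diagonal preconditioner
    p = z.copy()
    rz = r @ z
    for it in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, it + 1
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

rng = np.random.default_rng(0)
B = rng.standard_normal((300, 300))
T = B @ B.T + 300 * np.eye(300)              # SPD stand-in for the dipole matrix
E = rng.standard_normal(300)                 # stand-in field right-hand side
mu, iters = pcg(T, E, 1.0 / np.diag(T))
print("converged in", iters, "iterations; residual",
      np.linalg.norm(E - T @ mu))
```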

  7. Scalable Evaluation of Polarization Energy and Associated Forces in Polarizable Molecular Dynamics: II. Towards Massively Parallel Computations using Smooth Particle Mesh Ewald

    PubMed Central

    Lagardère, Louis; Lipparini, Filippo; Polack, Étienne; Stamm, Benjamin; Cancès, Éric; Schnieders, Michael; Ren, Pengyu; Maday, Yvon; Piquemal, Jean-Philip

    2015-01-01

    In this paper, we present a scalable and efficient implementation of point dipole-based polarizable force fields for molecular dynamics (MD) simulations with periodic boundary conditions (PBC). The Smooth Particle-Mesh Ewald technique is combined with two optimal iterative strategies, namely, a preconditioned conjugate gradient solver and a Jacobi solver in conjunction with the Direct Inversion in the Iterative Subspace for convergence acceleration, to solve the polarization equations. We show that both solvers exhibit very good parallel performances and overall very competitive timings in an energy-force computation needed to perform a MD step. Various tests on large systems are provided in the context of the polarizable AMOEBA force field as implemented in the newly developed Tinker-HP package, the first implementation to make large-scale experiments with massively parallel PBC point dipole polarizable models possible. We show that using a large number of cores offers a significant acceleration of the overall process involving the iterative methods within the context of SPME and a noticeable improvement of the memory management, giving access to very large systems (hundreds of thousands of atoms) as the algorithm naturally distributes the data on different cores. Coupled with advanced MD techniques, gains ranging from 2 to 3 orders of magnitude in time are now possible compared to non-optimized, sequential implementations, giving new directions for polarizable molecular dynamics in periodic boundary conditions using massively parallel implementations. PMID:26512230

  8. An eco-efficient and economical optimum evaluation technique for the forest road networks: the case of the mountainous forest of Metsovo, Greece.

    PubMed

    Tampekis, Stergios; Samara, Fani; Sakellariou, Stavros; Sfougaris, Athanassios; Christopoulou, Olga

    2018-02-12

    Sustainable forest management can be achieved only through environmentally sound, economically efficient and feasible forest road networks and transportation systems that can potentially improve the multi-functional use of forest resources. However, road network planning and construction imply long-term financing that requires a capital investment (cash outflow) equal to the value of the total revenue flow (cash inflow) over the whole project lifecycle. This paper presents an eco-efficient and economically optimal evaluation method for forest road networks in the mountainous forest of Metsovo, Greece. More specifically, with the use of this technique, we evaluated the forest roads' (a) total construction costs, (b) annual maintenance cost, and (c) log skidding cost. In addition, we estimated the total economic value of forest goods and services that are lost through forest road construction. Finally, we assessed the optimum eco-efficient and economical forest road densities based on linear equations that stem from the internal rate of return (IRR) method and are presented graphically. Data analysis and presentation are carried out with the aid of geographic information systems (GIS). The technique described in this study can be an attractive and useful tool for decision makers in selecting the most eco-friendly and economically optimal solution when planning a forest road network or evaluating existing forest transportation systems. Hence, with the use of this method, the multi-objective utilization of natural resources can be combined with the environmental protection of forest ecosystems.
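
    The IRR step at the heart of the method can be illustrated in a few lines: find the discount rate at which the net present value of the road's cash flows is zero. The cash-flow figures below are invented purely for illustration.

```python
def npv(rate, cashflows):
    """Net present value of cash flows indexed by year."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-8):
    """Bisection on NPV(rate); assumes exactly one sign change in [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(lo, cashflows) * npv(mid, cashflows) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Year 0: road construction cost; years 1-20: net inflow minus maintenance.
flows = [-120_000] + [14_000] * 20
print(f"IRR = {irr(flows):.2%}")
```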

  9. Pharmacokinetic analysis and drug delivery efficiency of the focused ultrasound-induced blood-brain barrier opening in non-human primates

    PubMed Central

    Samiotaki, Gesthimani; Karakatsani, Maria Eleni; Buch, Amanda; Papadopoulos, Stephanos; Wu, Shih Ying; Jambawalikar, Sachin; Konofagou, Elisa E.

    2016-01-01

    Purpose Focused Ultrasound (FUS) in conjunction with systemically administered microbubbles has been shown to open the Blood-Brain Barrier (BBB) locally, non-invasively and reversibly in rodents and non-human primates (NHPs), suggesting the immense potential of this technique. The objective of this study was to investigate the physiologic changes in the brain following FUS-induced BBB opening and their relationship with the underlying anatomy. Materials and Methods Pharmacokinetic analysis was implemented in NHPs that received FUS at various acoustic pressures. Relaxivity mapping enabled the robust quantitative detection of the BBB opening as well as gray and white matter segmentation. Drug delivery efficiency was measured for pre-clinical validation of the technique. Results Based on our results, the opening volume and the amount of gadolinium delivered were found to be mostly contained in the gray matter, while FUS-induced permeability and drug concentration varied depending upon the underlying brain inhomogeneity and increased with the acoustic pressure. Conclusions Overall, apart from the in vivo protocols for BBB analysis developed here, this study also suggests the important role that FUS can have in efficient drug delivery via localized and transient BBB opening. PMID:27916657

  10. vitisFlower®: Development and Testing of a Novel Android-Smartphone Application for Assessing the Number of Grapevine Flowers per Inflorescence Using Artificial Vision Techniques.

    PubMed

    Aquino, Arturo; Millan, Borja; Gaston, Daniel; Diago, María-Paz; Tardaguila, Javier

    2015-08-28

    Grapevine flowering and fruit set greatly determine crop yield. This paper presents a new smartphone application for automatically counting, non-invasively and directly in the vineyard, the number of flowers in grapevine inflorescence photos by implementing artificial vision techniques. The application, called vitisFlower®, first guides the user to appropriately take an inflorescence photo using the smartphone's camera. Then, by means of image analysis, the flowers in the image are detected and counted. vitisFlower® has been developed for Android devices and uses the OpenCV libraries to maximize computational efficiency. The application was tested on 140 inflorescence images of 11 grapevine varieties taken with two different devices. On average, more than 84% of the flowers in the captures were found, with a precision exceeding 94%. Additionally, the application's efficiency on four different devices covering a wide range of the market spectrum was also studied. The results of this benchmarking study showed significant differences among devices, while indicating that the application is efficiently usable even with low-range devices. vitisFlower® is one of the first applications for viticulture and is currently freely available on Google Play.
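
    As a rough illustration of the counting stage only: flowers appear as small bright, roughly circular blobs, so a blob detector over the grayscale photo yields an approximate count. The application's real pipeline and parameter values are not reproduced here; everything below, including the file name and thresholds, is an assumption.

```python
import cv2

img = cv2.imread("inflorescence.jpg", cv2.IMREAD_GRAYSCALE)
img = cv2.GaussianBlur(img, (5, 5), 0)           # suppress sensor noise

params = cv2.SimpleBlobDetector_Params()
params.filterByColor = True
params.blobColor = 255                           # look for bright blobs
params.filterByArea = True
params.minArea, params.maxArea = 20, 400         # plausible flower sizes (px)
params.filterByCircularity = True
params.minCircularity = 0.5

detector = cv2.SimpleBlobDetector_create(params)
keypoints = detector.detect(img)
print(f"estimated flower count: {len(keypoints)}")
```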

  11. Acceleration of FDTD mode solver by high-performance computing techniques.

    PubMed

    Han, Lin; Xi, Yanping; Huang, Wei-Ping

    2010-06-21

    A two-dimensional (2D) compact finite-difference time-domain (FDTD) mode solver is developed based on wave equation formalism in combination with the matrix pencil method (MPM). The method is validated for calculation of both real guided and complex leaky modes of typical optical waveguides against the benchmark finite-difference (FD) eigenmode solver. By taking advantage of the inherent parallel nature of the FDTD algorithm, the mode solver is implemented on graphics processing units (GPUs) using the compute unified device architecture (CUDA). It is demonstrated that the high-performance computing technique leads to significant acceleration of the FDTD mode solver, with more than 30 times improvement in computational efficiency in comparison with the conventional FDTD mode solver running on the CPU of a standard desktop computer. The computational efficiency of the accelerated FDTD method is of the same order of magnitude as that of the standard finite-difference eigenmode solver and yet requires much less memory (e.g., less than 10%). Therefore, the new method may serve as an efficient, accurate and robust tool for mode calculation of optical waveguides even when the conventional eigenvalue mode solvers are no longer applicable due to memory limitations.

  12. Accelerating Subsurface Transport Simulation on Heterogeneous Clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Villa, Oreste; Gawande, Nitin A.; Tumeo, Antonino

    Reactive transport numerical models simulate chemical and microbiological reactions that occur along a flowpath. These models have to compute reactions for a large number of locations. They solve the set of ordinary differential equations (ODEs) that describes the reaction for each location through the Newton-Raphson technique. This technique involves computing a Jacobian matrix and a residual vector for each set of equations, and then solving the linearized system iteratively by performing Gaussian elimination and LU decomposition until convergence. STOMP, a well-known subsurface flow simulation tool, employs matrices with sizes on the order of 100x100 elements and, for numerical accuracy, LU factorization with full pivoting instead of the faster partial pivoting. Modern high-performance computing systems are heterogeneous machines whose nodes integrate both CPUs and GPUs, exposing unprecedented amounts of parallelism. To exploit all their computational power, applications must use both types of processing elements. For the case of subsurface flow simulation, this mainly requires implementing efficient batched LU-based solvers and identifying efficient solutions for enabling load balancing among the different processors of the system. In this paper we discuss two approaches that allow scaling STOMP's performance on heterogeneous clusters. We initially identify the challenges in implementing batched LU-based solvers for small matrices on GPUs, and propose an implementation that fulfills STOMP's requirements. We compare this implementation to other existing solutions. Then, we combine the batched GPU solver with an OpenMP-based CPU solver, and present an adaptive load balancer that dynamically distributes the linear systems to solve between the two components inside a node. We show how these approaches, integrated into the full application, provide speedups from 6 to 7 times on large problems, executed on up to 16 nodes of a cluster with two AMD Opteron 6272 and a Tesla M2090 per node.

  13. Hi-Corrector: a fast, scalable and memory-efficient package for normalizing large-scale Hi-C data.

    PubMed

    Li, Wenyuan; Gong, Ke; Li, Qingjiao; Alber, Frank; Zhou, Xianghong Jasmine

    2015-03-15

    Genome-wide proximity ligation assays, e.g. Hi-C and its variant TCC, have recently become important tools to study spatial genome organization. Removing biases from chromatin contact matrices generated by such techniques is a critical preprocessing step of subsequent analyses. The continuing decline of sequencing costs has led to an ever-improving resolution of Hi-C data, resulting in very large matrices of chromatin contacts. Such large matrices, however, pose a great challenge to the memory usage and speed of normalization. Therefore, there is an urgent need for fast and memory-efficient methods for normalization of Hi-C data. We developed Hi-Corrector, an easy-to-use, open-source implementation of the Hi-C data normalization algorithm. Its salient features are (i) scalability: the software is capable of normalizing Hi-C data of any size in reasonable time; (ii) memory efficiency: the sequential version can run on any single computer with very limited memory, no matter how little; (iii) speed: the parallel version can run very fast on multiple computing nodes with limited local memory. The sequential version is implemented in ANSI C and can be easily compiled on any system; the parallel version is implemented in ANSI C with the MPI library (a standardized and portable parallel environment designed for solving large-scale scientific problems). The package is freely available at http://zhoulab.usc.edu/Hi-Corrector/. © The Author 2014. Published by Oxford University Press.
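
    The underlying normalization is an iterative matrix-balancing scheme: alternately divide rows and columns of the symmetric contact matrix by their (scaled) sums until every row carries equal weight. The sketch below shows the serial idea on a small random matrix; Hi-Corrector's contribution is doing this out-of-core and in parallel for matrices that do not fit in memory.

```python
import numpy as np

def balance(M, iters=50):
    """Iterative-correction style balancing of a symmetric contact matrix."""
    M = M.astype(float).copy()
    bias = np.ones(M.shape[0])
    for _ in range(iters):
        s = M.sum(axis=1)
        s /= s[s > 0].mean()                 # normalize scale of the correction
        s[s == 0] = 1.0                      # leave empty rows untouched
        bias *= s
        M /= np.outer(s, s)                  # symmetric row/column correction
    return M, bias

rng = np.random.default_rng(0)
A = rng.poisson(5, (200, 200)).astype(float)
A = (A + A.T) / 2                            # contact matrices are symmetric
balanced, bias = balance(A)
print("row-sum spread after balancing:",
      balanced.sum(axis=1).std() / balanced.sum(axis=1).mean())
```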

  14. Three-Dimensional Microwave Hyperthermia for Breast Cancer Treatment in a Realistic Environment Using Particle Swarm Optimization.

    PubMed

    Nguyen, Phong Thanh; Abbosh, Amin; Crozier, Stuart

    2017-06-01

    In this paper, a technique for noninvasive microwave hyperthermia treatment for breast cancer is presented. In the proposed technique, microwave hyperthermia of patient-specific breast models is implemented using a three-dimensional (3-D) antenna array based on differential beam-steering subarrays to locally raise the temperature of the tumor to therapeutic values while keeping healthy tissue at normal body temperature. This approach is realized by optimizing the excitations (phases and amplitudes) of the antenna elements using the global optimization method particle swarm optimization. The antenna excitation phases are optimized to maximize the power at the tumor, whereas the amplitudes are optimized to accomplish the required temperature at the tumor. During the optimization, the technique ensures that no hotspots exist in healthy tissue. To implement the technique, a combination of linked electromagnetic and thermal analyses using MATLAB and a full-wave electromagnetic simulator is conducted. The technique is tested at 4.2 GHz, which is a compromise between the required power penetration and focusing, in a realistic simulation environment, which is built using a 3-D antenna array of 4 × 6 unidirectional antenna elements. The presented results on very dense 3-D breast models, which have realistic dielectric and thermal properties, validate the capability of the proposed technique in focusing power at the exact location and volume of the tumor even in the challenging cases where tumors are embedded in glands. Moreover, the models indicate the capability of the technique in dealing with tumors at different on- and off-axis locations within the breast with high efficiency in using the microwave power.
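
    The optimization loop itself is standard particle swarm optimization over the per-element excitations. The sketch below optimizes phases only, against a toy free-space array-factor objective; the element positions, focal point, and PSO constants are assumptions, and the paper's actual objective is the full linked EM/thermal simulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_el = 24                                        # 4 x 6 array, flattened
pos = rng.random((n_el, 3))                      # element positions (toy)
target = np.array([0.5, 0.5, 0.5])               # assumed focal point
k = 2 * np.pi / 0.0714                           # wavenumber at 4.2 GHz in air

def neg_focus_power(phases):
    """Negative field magnitude at the focus for given element phases."""
    d = np.linalg.norm(pos - target, axis=1)
    field = np.sum(np.exp(1j * (phases - k * d)) / d)
    return -np.abs(field)

n_p, w, c1, c2 = 30, 0.7, 1.5, 1.5               # swarm size and PSO weights
x = rng.uniform(0, 2 * np.pi, (n_p, n_el))
v = np.zeros_like(x)
pbest, pcost = x.copy(), np.array([neg_focus_power(p) for p in x])
g = pbest[pcost.argmin()].copy()

for _ in range(100):
    r1, r2 = rng.random((2, n_p, n_el))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
    x = (x + v) % (2 * np.pi)                    # phases wrap around
    cost = np.array([neg_focus_power(p) for p in x])
    better = cost < pcost
    pbest[better], pcost[better] = x[better], cost[better]
    g = pbest[pcost.argmin()].copy()

print("focal field magnitude:", -neg_focus_power(g))
```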

  15. Study of fault-tolerant software technology

    NASA Technical Reports Server (NTRS)

    Slivinski, T.; Broglio, C.; Wild, C.; Goldberg, J.; Levitt, K.; Hitt, E.; Webb, J.

    1984-01-01

    Presented is an overview of the current state of the art of fault-tolerant software and an analysis of quantitative techniques and models developed to assess its impact. It examines research efforts as well as experience gained from commercial application of these techniques. The paper also addresses the computer architecture and design implications on hardware, operating systems and programming languages (including Ada) of using fault-tolerant software in real-time aerospace applications. It concludes that fault-tolerant software has progressed beyond the pure research state. The paper also finds that, although not perfectly matched, newer architectural and language capabilities provide many of the notations and functions needed to effectively and efficiently implement software fault-tolerance.

  16. A screen-printed flexible flow sensor

    NASA Astrophysics Data System (ADS)

    Moschos, A.; Syrovy, T.; Syrova, L.; Kaltsas, G.

    2017-04-01

    A thermal flow sensor was printed on a flexible plastic substrate using exclusively screen-printing techniques. The presented device was implemented with custom-made screen-printed thermistors, which allow simple, cost-efficient production on a variety of flexible substrates while maintaining the typical advantages of thermal flow sensors. Evaluation was performed under both static (zero flow) and dynamic conditions using a combination of electrical measurements and IR imaging techniques in order to determine important characteristics, such as temperature response, output repeatability, etc. The flow sensor was characterized utilizing the hot-wire and calorimetric principles of operation, and the preliminary results appear very promising, since the sensor was successfully evaluated and displayed adequate sensitivity over a relatively wide flow range.

  17. Chaotic coordinates for the Large Helical Device

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hudson, S. R., E-mail: shudson@pppl.gov; Suzuki, Y.

    The theory of quadratic-flux-minimizing (QFM) surfaces is reviewed, and numerical techniques that allow high-order QFM surfaces to be efficiently constructed for experimentally relevant, non-integrable magnetic fields are described. As a practical example, the chaotic edge of the magnetic field in the Large Helical Device (LHD) is examined. A precise technique for finding the boundary surface is implemented, the hierarchy of partial barriers associated with the near-critical cantori is constructed, and a coordinate system, which we call chaotic coordinates, based on a selection of QFM surfaces is constructed that simplifies the description of the magnetic field, so that flux surfaces become “straight” and islands become “square.”

  18. A Closed-tube Loop-Mediated Isothermal Amplification Assay for the Visual Endpoint Detection of Brucella spp. and Mycobacterium avium subsp. paratuberculosis.

    PubMed

    Trangoni, Marcos D; Gioffré, Andrea K; Cravero, Silvio L

    2017-01-01

    LAMP (loop-mediated isothermal amplification) is an isothermal nucleic acid amplification technique characterized by its efficiency, rapidity, high yield of final product, robustness, sensitivity, and specificity, with the advantage that it can be implemented in laboratories of low technological complexity. Despite the conceptual complexity underlying the mechanistic basis of the nucleic acid amplification, the technique is simple to use, and amplification and detection can be carried out in just one step. In this chapter, we present a protocol based on LAMP for the rapid identification of isolates of Brucella spp. and Mycobacterium avium subsp. paratuberculosis, two major bacterial pathogens in veterinary medicine.

  19. Gulf Coast Clean Energy Application Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dillingham, Gavin

    The Gulf Coast Clean Energy Application Center was initiated to significantly improve market and regulatory conditions for the implementation of combined heat and power technologies. The GC CEAC was responsible for the development of CHP in Texas, Louisiana and Oklahoma. Through this program we employed a variety of outreach and education techniques, developed and deployed assessment tools and conducted market assessments. These efforts resulted in the growth of the combined heat and power market in the Gulf Coast region with a realization of more efficient energy generation, reduced emissions and a more resilient infrastructure. Specific to research, we did not formally investigate any techniques with a formal research design or methodology.

  20. Noiseless coding for the magnetometer

    NASA Technical Reports Server (NTRS)

    Rice, Robert F.; Lee, Jun-Ji

    1987-01-01

    Future unmanned space missions will continue to seek a full understanding of magnetic fields throughout the solar system. Severely constrained data rates during certain portions of these missions could limit the possible science return. This publication investigates the application of universal noiseless coding techniques to more efficiently represent magnetometer data without any loss in data integrity. Performance results indicated that compression factors of 2:1 to 6:1 can be expected. Feasibility for general deep space application was demonstrated by implementing a microprocessor breadboard coder/decoder using the Intel 8086 processor. The Comet Rendezvous Asteroid Flyby mission will incorporate these techniques in a buffer feedback, rate-controlled configuration. The characteristics of this system are discussed.
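
    Universal noiseless coders of the kind investigated here (Rice's own family of codes) split each predicted residual into a unary quotient and k low-order bits. The sketch below shows the core encoding step on made-up samples with a fixed k; an adaptive coder would select k per block.

```python
def zigzag(d):
    """Map signed difference to non-negative int: 0,-1,1,-2,2 -> 0,1,2,3,4."""
    return 2 * d if d >= 0 else -2 * d - 1

def rice_encode(samples, k=3):
    """Rice-code first differences: unary quotient, '0' stop bit, k remainder bits."""
    bits = []
    prev = 0
    for s in samples:
        n = zigzag(s - prev)       # predict each sample from the previous one
        prev = s
        q, r = n >> k, n & ((1 << k) - 1)
        bits.append("1" * q + "0" + format(r, f"0{k}b"))
    return "".join(bits)

data = [100, 101, 103, 102, 102, 99, 97, 98]    # made-up magnetometer samples
code = rice_encode(data)
print(len(code), "coded bits vs", 16 * len(data), "raw bits:", code)
```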

  1. Efficient and robust analysis of complex scattering data under noise in microwave resonators.

    PubMed

    Probst, S; Song, F B; Bushev, P A; Ustinov, A V; Weides, M

    2015-02-01

    Superconducting microwave resonators are reliable circuits widely used for detection and as test devices for material research. A reliable determination of their external and internal quality factors is crucial for many modern applications, which either require fast measurements or operate in the single-photon regime with small signal-to-noise ratios. Here, we use the circle fit technique with diameter correction and provide a step-by-step guide for implementing an algorithm for robust fitting and calibration of complex resonator scattering data in the presence of noise. The speedup and robustness of the analysis are achieved by employing an algebraic rather than an iterative fit technique for the resonance circle.
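
    The algebraic core of the approach can be sketched in a few lines: a linear least-squares (Kasa-style) fit of the circle that S21 traces in the complex plane, with no iteration. The synthetic notch resonance and noise level below are assumptions; the published method adds diameter correction and calibration steps omitted here.

```python
import numpy as np

def fit_circle(z):
    """Least-squares circle through complex points z; returns center, radius."""
    x, y = z.real, z.imag
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (xc, yc, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return complex(xc, yc), np.sqrt(c + xc**2 + yc**2)

# Synthetic notch-type resonator response with additive noise (assumption).
f = np.linspace(4.99e9, 5.01e9, 401)
f0, Ql, Qc = 5e9, 8e3, 1.2e4
s21 = 1 - (Ql / Qc) / (1 + 2j * Ql * (f - f0) / f0)
rng = np.random.default_rng(1)
s21 += 0.002 * (rng.standard_normal(f.size) + 1j * rng.standard_normal(f.size))

center, radius = fit_circle(s21)
print("fitted circle diameter:", 2 * radius, " expected Ql/Qc:", Ql / Qc)
```

    The fitted diameter directly encodes the ratio of loaded to coupling quality factor, which is why a robust circle fit yields robust Q estimates even at low signal-to-noise ratios.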

  2. Multigrid techniques for the solution of the passive scalar advection-diffusion equation

    NASA Technical Reports Server (NTRS)

    Phillips, R. E.; Schmidt, F. W.

    1985-01-01

    The solution of elliptic passive scalar advection-diffusion equations is required in the analysis of many turbulent flow and convective heat transfer problems. The accuracy of the solution may be affected by the presence of regions containing large gradients of the dependent variables. The multigrid concept of local grid refinement is a method for improving the accuracy of the calculations in these problems. In combination with the multilevel acceleration techniques, an accurate and efficient computational procedure is developed. In addition, a robust implementation of the QUICK finite-difference scheme is described. Calculations of a test problem are presented to quantitatively demonstrate the advantages of the multilevel-multigrid method.
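
    For orientation, the sketch below shows a two-grid cycle, the basic unit of the multigrid idea, on a 1D model diffusion problem: smooth, restrict the residual, solve the coarse problem, prolong and correct, smooth again. The paper's setting is 2D advection-diffusion with local refinement and QUICK differencing, none of which is reproduced here.

```python
import numpy as np

def jacobi(u, f, h, sweeps, w=2/3):
    """Weighted-Jacobi smoothing for -u'' = f on interior points."""
    for _ in range(sweeps):
        u[1:-1] += w * (h * h * f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:])) / 2
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def coarse_solve(rc, H):
    """Direct solve of the coarse-grid correction equation."""
    m = len(rc) - 2
    A = (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / H**2
    ec = np.zeros_like(rc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    return ec

def two_grid(u, f, h):
    u = jacobi(u, f, h, 3)                    # pre-smooth high frequencies
    r = residual(u, f, h)
    ec = coarse_solve(r[::2].copy(), 2 * h)   # restrict (injection) and solve
    e = np.zeros_like(u)
    e[::2] = ec                               # prolong: copy coarse values ...
    e[1:-1:2] = (ec[:-1] + ec[1:]) / 2        # ... and linearly interpolate
    return jacobi(u + e, f, h, 3)             # post-smooth

n = 129
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)              # exact solution u = sin(pi x)
u = np.zeros(n)
for cycle in range(8):
    u = two_grid(u, f, h)
    print(cycle, np.linalg.norm(residual(u, f, h)) * np.sqrt(h))
```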

  3. Impact of Lean on patient cycle and waiting times at a rural district hospital in KwaZulu-Natal

    PubMed Central

    Naidoo, Logandran

    2016-01-01

    Background Prolonged waiting time is a source of patient dissatisfaction with health care and is negatively associated with patient satisfaction. Prolonged waiting times in many district hospitals result in many dissatisfied patients, overworked and frustrated staff, and poor quality of care because of the perceived increased workload. Aim The aim of the study was to determine the impact of Lean principles techniques, and tools on the operational efficiency in the outpatient department (OPD) of a rural district hospital. Setting The study was conducted at the Catherine Booth Hospital (CBH) – a rural district hospital in KwaZulu-Natal, South Africa. Methods This was an action research study with pre-, intermediate-, and post-implementation assessments. Cycle and waiting times were measured by direct observation on two occasions before, approximately two-weekly during, and on two occasions after Lean implementation. A standardised data collection tool was completed by the researcher at each of the six key service nodes in the OPD to capture the waiting times and cycle times. Results All six service nodes showed a reduction in cycle times and waiting times between the baseline assessment and post-Lean implementation measurement. Significant reduction was achieved in cycle times (27%; p < 0.05) and waiting times (from 11.93 to 10 min; p = 0.03) at the Investigations node. Although the target reduction was not achieved for the Consulting Room node, there was a significant reduction in waiting times from 80.95 to 74.43 min, (p < 0.001). The average efficiency increased from 16.35% (baseline) to 20.13% (post-intervention). Conclusion The application of Lean principles, tools and techniques provides hospital managers with an evidence-based management approach to resolving problems and improving quality indicators. PMID:27543283

  4. Efficiency of modified chemical remediation techniques for soil contaminated by organochlorine pesticides

    NASA Astrophysics Data System (ADS)

    Correa-Torres, S. N.; Kopytko, M.; Avila, S.

    2016-07-01

    This study reports the optimization of innovative chemical techniques to improve the remediation of soils contaminated with organochlorine pesticides. The techniques used for remediation were dehalogenation and chemical oxidation of pesticide-contaminated soil. These techniques were applied sequentially and in combination, with the experimental design optimizing the concentration and contact time variables. The soil in this study was collected from a cotton crop zone in the Agustin Codazzi municipality, Colombia, and its physical properties were measured. The modified EPA dehalogenation technique was applied to the contaminated soil by adding sodium bicarbonate solution at different concentrations and rates for 4, 7 and 14 days; subsequently, the oxidation technique was implemented by applying a KMnO4 solution at different concentrations and reaction times. Organochlorine compounds were detected by gas chromatography coupled with mass spectrometry, and removals were between 85.4% and 90.0% for compounds such as 4,4'-DDT, 4,4'-DDD, 4,4'-DDE, trans-chlordane and endrin. These results demonstrate that the dehalogenation technique combined with chemical oxidation can be used for the remediation of soils contaminated by organochlorine pesticides.

  5. Resonant fiber optic gyro based on a sinusoidal wave modulation and square wave demodulation technique.

    PubMed

    Wang, Linglan; Yan, Yuchao; Ma, Huilian; Jin, Zhonghe

    2016-04-20

    New developments are made in the resonant fiber optic gyro (RFOG), which is an optical sensor for the measurement of rotation rate. The digital signal processing system based on the phase modulation technique is capable of detecting the weak frequency difference induced by the Sagnac effect and suppressing the reciprocal noise in the circuit, which determines the detection sensitivity of the RFOG. A new technique based on the sinusoidal wave modulation and square wave demodulation is implemented, and the demodulation curve of the system is simulated and measured. Compared with the past technique using sinusoidal modulation and demodulation, it increases the slope of the demodulation curve by a factor of 1.56, improves the spectrum efficiency of the modulated signal, and reduces the occupancy of the field-programmable gate array resource. On the basis of this new phase modulation technique, the loop is successfully locked and achieves a short-term bias stability of 1.08°/h, which is improved by a factor of 1.47.
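
    The mixer/low-pass operation at the heart of square-wave demodulation is easy to show: multiply the detected signal by a square-wave reference at the modulation frequency and average. The signal model and numbers below are illustrative, not the RFOG's actual detection chain.

```python
import numpy as np

fs, fm, T = 1.0e6, 10e3, 0.01                  # sample rate, mod freq, window
t = np.arange(0, T, 1 / fs)
phase_error = 0.05                             # the quantity the loop senses
sig = np.cos(2 * np.pi * fm * t + phase_error) # toy detector output

ref = np.sign(np.sin(2 * np.pi * fm * t))      # square-wave reference
demod = np.mean(sig * ref)                     # mixer followed by low-pass (mean)
print("demodulated output:", demod)            # ~ -(2/pi) * sin(phase_error)
```

    Because the square wave contains the fundamental with amplitude 4/pi, the averaged product is proportional to sin(phase_error), which is the slope-bearing discriminant the digital loop locks on.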

  6. Recent Advances in Techniques for Hyperspectral Image Processing

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; Benediktsson, Jon Atli; Boardman, Joseph W.; Brazile, Jason; Bruzzone, Lorenzo; Camps-Valls, Gustavo; Chanussot, Jocelyn; Fauvel, Mathieu; Gamba, Paolo; Gualtieri, Anthony; hide

    2009-01-01

    Imaging spectroscopy, also known as hyperspectral imaging, has been transformed in less than 30 years from being a sparse research tool into a commodity product available to a broad user community. Currently, there is a need for standardized data processing techniques able to take into account the special properties of hyperspectral data. In this paper, we provide a seminal view on recent advances in techniques for hyperspectral image processing. Our main focus is on the design of techniques able to deal with the high-dimensional nature of the data, and to integrate the spatial and spectral information. Performance of the discussed techniques is evaluated in different analysis scenarios. To satisfy time-critical constraints in specific applications, we also develop efficient parallel implementations of some of the discussed algorithms. Combined, these parts provide an excellent snapshot of the state-of-the-art in those areas, and offer a thoughtful perspective on future potentials and emerging challenges in the design of robust hyperspectral imaging algorithms.

  7. The design and implementation of hydrographical information management system (HIMS)

    NASA Astrophysics Data System (ADS)

    Sui, Haigang; Hua, Li; Wang, Qi; Zhang, Anming

    2005-10-01

    With the development of hydrographical work and information techniques, a large variety of hydrographical information, including electronic charts, documents and other materials, is widely used, and the traditional management mode and techniques are unsuited to the development of the Chinese Marine Safety Administration Bureau (CMSAB). How to manage all kinds of hydrographical information has become an important and urgent problem. A number of advanced techniques, including GIS, RS, spatial database management and VR techniques, are introduced to solve these problems. Some design principles and key techniques of the HIMS, including a mixed mode based on B/S, C/S and stand-alone computer modes, multi-source and multi-scale data organization and management, multi-source data integration, diverse visualization of digital charts, and efficient security control strategies, are illustrated in detail. Based on the above ideas and strategies, an integrated system named the Hydrographical Information Management System (HIMS) was developed. The HIMS has been applied in the Shanghai Marine Safety Administration Bureau and has received good evaluations.

  8. Second principle approach to the analysis of unsteady flow and heat transfer in a tube with arc-shaped corrugation

    NASA Astrophysics Data System (ADS)

    Pagliarini, G.; Vocale, P.; Mocerino, A.; Rainieri, S.

    2017-01-01

    Passive convective heat transfer enhancement techniques are well-known and widespread tools for increasing the efficiency of heat transfer equipment. In spite of the ability of the first-principle approach to forecast the macroscopic effects of passive techniques for heat transfer enhancement, namely the increase of both the overall heat exchanged and the head losses, a first-principle analysis based on the local conservation equations for energy, momentum and mass is hardly able to give a comprehensive explanation of how local modifications in the boundary layers contribute to the overall effect. A deeper insight into the heat transfer enhancement mechanisms can instead be obtained within a second-principle approach, through the analysis of the local exergy dissipation phenomena related to heat transfer and fluid flow. To this aim, the analysis based on the second-principle approach, implemented through a careful consideration of the local entropy generation rate, seems the most suitable, since it allows one to identify more precisely the causes of the loss of efficiency in the heat transfer process, thus providing a useful guide in the choice of the most suitable heat transfer enhancement techniques.
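
    The quantity mapped in such an analysis is the local volumetric entropy generation rate, which in its standard form (e.g., Bejan's) splits into a thermal and a viscous contribution:

```latex
\[
  \dot{S}'''_{\mathrm{gen}}
    = \underbrace{\frac{k}{T^{2}}\,\nabla T \cdot \nabla T}_{\text{heat-transfer part}}
    + \underbrace{\frac{\mu}{T}\,\Phi}_{\text{fluid-friction part}},
\]
```

    where k is the thermal conductivity, mu the dynamic viscosity, T the local absolute temperature, and Phi the viscous dissipation function. Mapping the two terms separately is what allows the loss of efficiency to be attributed to either finite-temperature-difference heat transfer or friction, point by point in the corrugated tube.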

  9. Development testing of large volume water sprays for warm fog dispersal

    NASA Technical Reports Server (NTRS)

    Keller, V. W.; Anderson, B. J.; Burns, R. A.; Lala, G. G.; Meyer, M. B.; Beard, K. V.

    1986-01-01

    A new brute-force method of warm fog dispersal is described. The method uses large volume recycled water sprays to create curtains of falling drops through which the fog is processed by the ambient wind and spray induced air flow. Fog droplets are removed by coalescence/rainout. The efficiency of the technique depends upon the drop size spectra in the spray, the height to which the spray can be projected, the efficiency with which fog laden air is processed through the curtain of spray, and the rate at which new fog may be formed due to temperature differences between the air and spray water. Results of a field test program, implemented to develop the data base necessary to assess the proposed method, are presented. Analytical calculations based upon the field test results indicate that this proposed method of warm fog dispersal is feasible. Even more convincingly, the technique was successfully demonstrated in the one natural fog event which occurred during the test program. Energy requirements for this technique are an order of magnitude less than those to operate a thermokinetic system. An important side benefit is the considerable emergency fire extinguishing capability it provides along the runway.

  10. Implementation of QoSS (Quality-of-Security Service) for NoC-Based SoC Protection

    NASA Astrophysics Data System (ADS)

    Sepúlveda, Johanna; Pires, Ricardo; Strum, Marius; Chau, Wang Jiang

    Many of the current electronic systems embedded in an SoC (System-on-Chip) are used to capture, store, manipulate and access critical data, as well as to perform other key functions. In such a scenario, security is considered an important issue. The Network-on-Chip (NoC), as the foreseen communication structure of next-generation SoC devices, can be used to efficiently incorporate security. Our work proposes the implementation of QoSS (Quality of Security Service) to overcome present SoC vulnerabilities. QoSS is a novel concept for data protection that introduces security as a dimension of QoS. In this paper, we present the implementation of two security services (access control and authentication) that may be configured to assume one of several possible levels, together with the implementation of a technique to avoid denial-of-service (DoS) attacks; we evaluate their effectiveness and estimate their impact on NoC performance.

  11. Development of evaluation technique of GMAW welding quality based on statistical analysis

    NASA Astrophysics Data System (ADS)

    Feng, Shengqiang; Terasaki, Hidenri; Komizo, Yuichi; Hu, Shengsun; Chen, Donggao; Ma, Zhihua

    2014-11-01

    Nondestructive techniques for appraising gas metal arc welding (GMAW) faults play a very important role in on-line quality control and prediction of the GMAW process. Existing approaches to on-line welding quality control and prediction have several disadvantages, such as high cost, low efficiency, complexity, and strong sensitivity to the environment. An enhanced, efficient evaluation technique for welding faults based on the Mahalanobis distance (MD) and the normal distribution is presented. In addition, a new piece of equipment, designated the weld quality tester (WQT), is developed based on the proposed evaluation technique. MD is superior to other multidimensional distances, such as the Euclidean distance, because the covariance matrix used for calculating MD takes into account correlations and scaling in the data. The values of MD obtained from welding current and arc voltage are assumed to follow a normal distribution with mean µ and standard deviation σ. In the proposed evaluation technique used by the WQT, values of MD located in the range from zero to µ+3σ are regarded as "good". Two experiments, involving changing the flow of shielding gas and smearing paint on the surface of the substrate, were conducted in order to verify the sensitivity of the proposed evaluation technique and the feasibility of using the WQT. The experimental results demonstrate the usefulness of the WQT for evaluating welding quality. The proposed technique can be applied to on-line welding quality control and prediction, which is of great importance for designing novel weld quality detection equipment.
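
    A minimal sketch of the MD-plus-threshold idea described above, in Python with synthetic current/voltage data; the two-variable feature set and the µ+3σ cut come from the abstract, while the numbers and helper names are invented for illustration:

        import numpy as np

        def mahalanobis_distances(X, ref):
            """Mahalanobis distance of each row of X from the reference sample.

            X, ref: arrays of shape (n, 2) holding [welding current, arc voltage]."""
            mu = ref.mean(axis=0)
            cov = np.cov(ref, rowvar=False)      # accounts for correlation and scaling
            cov_inv = np.linalg.inv(cov)
            d = X - mu
            return np.sqrt(np.einsum('ij,jk,ik->i', d, cov_inv, d))

        # Fit the "good" MD distribution on known-good welds, then flag samples
        # whose MD exceeds mu_md + 3 * sigma_md.
        rng = np.random.default_rng(0)
        good = rng.multivariate_normal([220.0, 24.0], [[4.0, 1.2], [1.2, 0.5]], size=500)
        md_good = mahalanobis_distances(good, good)
        mu_md, sigma_md = md_good.mean(), md_good.std()

        test = np.array([[221.0, 24.1], [240.0, 21.0]])   # second sample is anomalous
        flags = mahalanobis_distances(test, good) <= mu_md + 3 * sigma_md
        print(flags)   # -> [ True False]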

  12. Efficient morse decompositions of vector fields.

    PubMed

    Chen, Guoning; Mischaikow, Konstantin; Laramee, Robert S; Zhang, Eugene

    2008-01-01

    Existing topology-based vector field analysis techniques rely on the ability to extract individual trajectories such as fixed points, periodic orbits, and separatrices, which are sensitive to noise and to errors introduced by simulation and interpolation. This can make such vector field analysis unsuitable for rigorous interpretation. We advocate the use of Morse decompositions, which are robust with respect to perturbations, to encode the topological structure of a vector field in the form of a directed graph, called a Morse connection graph (MCG). While an MCG exists for every vector field, it need not be unique. Previous techniques for computing MCGs, while fast, are overly conservative and usually result in MCGs that are too coarse to be useful in applications. To address this issue, we present a new technique for performing Morse decomposition based on the concept of tau-maps, which typically provides finer MCGs than existing techniques. Furthermore, the choice of tau provides a natural tradeoff between the fineness of the MCGs and the computational cost. We provide efficient implementations of Morse decomposition based on tau-maps, which include the use of forward and backward mapping techniques and an adaptive approach to constructing better approximations of the images of the triangles in the meshes used for simulation. Furthermore, we propose the use of spatial tau-maps in addition to the original temporal tau-maps. These techniques provide additional trade-offs between the quality of the MCGs and the speed of computation. We demonstrate the utility of our technique with various examples in the plane and on surfaces, including engine simulation data sets.
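
    For readers unfamiliar with tau-map-based Morse decomposition, a toy sketch follows. It discretizes the domain into grid cells, adds a directed edge from each cell to every cell hit by its mapped sample points (a crude outer approximation of the tau-map), and takes recurrent strongly connected components as candidate Morse sets. networkx is assumed for the graph work, and the point sampling is far coarser than the triangle-image approximations used in the paper:

        import numpy as np
        import networkx as nx

        def morse_sets(flow_map, grid_n=64, domain=(-2.0, 2.0)):
            """Outer approximation of a tau-map on a uniform grid: each cell gets
            directed edges to every cell hit by the images of its sample points;
            Morse sets are then the recurrent strongly connected components."""
            lo, hi = domain
            h = (hi - lo) / grid_n
            G = nx.DiGraph()
            for i in range(grid_n):
                for j in range(grid_n):
                    xs = lo + h * (i + np.array([0.1, 0.5, 0.9]))
                    ys = lo + h * (j + np.array([0.1, 0.5, 0.9]))
                    for x in xs:
                        for y in ys:
                            px, py = flow_map(x, y)
                            ii = int(np.clip((px - lo) / h, 0, grid_n - 1))
                            jj = int(np.clip((py - lo) / h, 0, grid_n - 1))
                            G.add_edge((i, j), (ii, jj))
            # recurrent SCCs: more than one cell, or a cell mapping into itself
            return [c for c in nx.strongly_connected_components(G)
                    if len(c) > 1 or G.has_edge(*(list(c) * 2))]

        # Example: a linear spiral sink at the origin; the tau-map is one step
        # of the discretized flow, so the only recurrence is around the sink.
        sets_found = morse_sets(lambda x, y: (0.9 * x - 0.2 * y, 0.2 * x + 0.9 * y))
        print(len(sets_found))   # typically one Morse set of cells around the sink

    The MCG itself would be the condensation of G onto these components (nx.condensation), with the choice of tau controlling how tightly the recurrent sets are resolved.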

  13. Gain in computational efficiency by vectorization in the dynamic simulation of multi-body systems

    NASA Technical Reports Server (NTRS)

    Amirouche, F. M. L.; Shareef, N. H.

    1991-01-01

    An improved technique for the identification and extraction of the exact quantities associated with the degrees of freedom at the element level as well as the flexible-body level is presented. It is implemented in the dynamic equations of motion based on the recursive formulation of Kane et al. (1987) and presented in matrix form, integrating the concepts of strain energy, the finite-element approach, modal analysis, and reduction of equations. This technique eliminates the CPU-intensive matrix multiplication operations in the code's hot spots for the dynamic simulation of interconnected rigid and flexible bodies. A study of a simple robot with flexible links is presented, comparing execution times on a scalar machine and on a vector processor with and without vector options. Performance figures demonstrating the substantial gains achieved by the technique are plotted.

  14. Dynamic test input generation for multiple-fault isolation

    NASA Technical Reports Server (NTRS)

    Schaefer, Phil

    1990-01-01

    Recent work in Causal Reasoning has provided practical techniques for multiple-fault diagnosis. These techniques provide a hypothesis/measurement diagnosis cycle: using probabilistic methods, they choose the best measurements to make, then update fault hypotheses in response. For many applications, such as computers and spacecraft, few measurement points may be accessible, or values may change quickly as the system under diagnosis operates. In these cases, a hypothesis/measurement cycle is insufficient. A technique is presented for a hypothesis/test-input/measurement diagnosis cycle. In contrast to generating tests a priori for determining device functionality, it dynamically generates tests in response to current knowledge about fault probabilities. It is shown how the mathematics previously used for measurement specification can be applied to the test-input generation process. An example from an efficient implementation called Multi-Purpose Causal (MPC) is presented.
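
    The abstract's loop of choosing the most informative test and then updating fault hypotheses is, in its simplest probabilistic form, an expected-information-gain computation over candidate tests. The sketch below is a generic illustration of that idea, not MPC's actual formulation; the hypotheses, tests and probabilities are invented for the example:

        import numpy as np

        def entropy(p):
            p = p[p > 0]
            return -(p * np.log2(p)).sum()

        def expected_info_gain(prior, likelihood):
            """likelihood[k, o] = P(test outcome o | fault hypothesis k).
            Returns the expected reduction in hypothesis entropy from the test."""
            p_obs = prior @ likelihood                     # marginal P(outcome)
            gain = entropy(prior)
            for o, po in enumerate(p_obs):
                if po > 0:
                    post = prior * likelihood[:, o] / po   # Bayes update
                    gain -= po * entropy(post)
            return gain

        prior = np.array([0.5, 0.3, 0.2])                        # three fault hypotheses
        testA = np.array([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]])   # discriminates h0 vs h1
        testB = np.array([[0.5, 0.5], [0.5, 0.5], [0.5, 0.5]])   # uninformative
        print(expected_info_gain(prior, testA), expected_info_gain(prior, testB))

    Dynamically generating a test input then amounts to searching the input space for the candidate with the highest expected gain given the current fault probabilities.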

  15. Testing radon mitigation techniques in a pilot house from Băiţa-Ştei radon prone area (Romania).

    PubMed

    Cosma, Constantin; Papp, Botond; Cucoş Dinu, Alexandra; Sainz, Carlos

    2015-02-01

    This work presents the implementation and testing of several radon mitigation techniques in a pilot house in the radon-prone area of Băiţa-Ştei, in the NW part of Romania. Radon diagnostic investigations in the pilot house showed that the main sources of radon were the building sub-soil and the soil near the house. The applied techniques were based on depressurization and pressurization of the building sub-soil, and on combining the soil depressurization system with electric and wind-driven (eolian) fans. A radon barrier membrane was also applied and then tested in combination with the soil depressurization system. The best remediation efficiency obtained was about 85%. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. A tire contact solution technique

    NASA Technical Reports Server (NTRS)

    Tielking, J. T.

    1983-01-01

    An efficient method for calculating the contact boundary and interfacial pressure distribution was developed. This solution technique utilizes the discrete Fourier transform to establish an influence coefficient matrix for the portion of the pressurized tire surface that may be in the contact region. This matrix is used in a linear algebra algorithm to determine the contact boundary and the array of forces within the boundary that are necessary to hold the tire in equilibrium against a specified contact surface. The algorithm also determines the normal and tangential displacements of those points on the tire surface that are included in the influence coefficient matrix. Displacements within and outside the contact region are calculated. The solution technique is implemented with a finite-element tire model that is based on orthotropic, nonlinear shell of revolution elements which can respond to nonaxisymmetric loads. A sample contact solution is presented.

  17. Quantum optimization for training support vector machines.

    PubMed

    Anguita, Davide; Ridella, Sandro; Rivieccio, Fabio; Zunino, Rodolfo

    2003-01-01

    Refined concepts, such as Rademacher estimates of model complexity and nonlinear criteria for weighting empirical classification errors, represent recent and promising approaches to characterizing the generalization ability of Support Vector Machines (SVMs). The advantages of those techniques lie both in improving the SVM representation ability and in yielding tighter generalization bounds. On the other hand, they often make Quadratic-Programming algorithms no longer applicable, so SVM training cannot benefit from efficient, specialized optimization techniques. The paper considers the application of Quantum Computing to solve the problem of effective SVM training, especially in the case of digital implementations. The presented research compares the behavioral aspects of conventional and enhanced SVMs; experiments on both synthetic and real-world problems support the theoretical analysis. At the same time, the related differences between Quadratic-Programming and Quantum-based optimization techniques are considered.

  18. Segmentation of remotely sensed data using parallel region growing

    NASA Technical Reports Server (NTRS)

    Tilton, J. C.; Cox, S. C.

    1983-01-01

    The improved spatial resolution of the new earth resources satellites will increase the need for effective utilization of spatial information in machine processing of remotely sensed data. One promising technique is scene segmentation by region growing. Region growing can use spatial information in two ways: only spatially adjacent regions merge together, and merging criteria can be based on region-wide spatial features. A simple region growing approach is described in which the similarity criterion is based on region mean and variance (a simple spatial feature). An effective way to implement region growing for remote sensing is as an iterative parallel process on a large parallel processor. A straightforward parallel pixel-based implementation of the algorithm is explored and its efficiency is compared with sequential pixel-based, sequential region-based, and parallel region-based implementations. Experimental results from an aircraft scanner data set are presented, as is a discussion of proposed improvements to the segmentation algorithm.
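
    A toy sequential pixel-based version of the region-growing step, for orientation; it uses only the region mean as the similarity criterion (the paper's criterion also involves region variance), and the image and threshold are invented:

        import numpy as np
        from collections import deque

        def grow_region(img, seed, t_mean=10.0):
            """Sequential pixel-based region growing: a pixel joins the region
            when its value is close to the current region mean."""
            H, W = img.shape
            in_region = np.zeros((H, W), bool)
            in_region[seed] = True
            total, count = float(img[seed]), 1
            frontier = deque([seed])
            while frontier:
                i, j = frontier.popleft()
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # spatial adjacency
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W and not in_region[ni, nj]:
                        if abs(img[ni, nj] - total / count) < t_mean:
                            in_region[ni, nj] = True
                            total += img[ni, nj]
                            count += 1
                            frontier.append((ni, nj))
            return in_region

        band = np.zeros((64, 64)); band[20:40, :] = 100.0    # toy "field" in a scene
        noisy = band + np.random.default_rng(1).normal(0, 2, band.shape)
        print(grow_region(noisy, (30, 30)).sum())            # roughly the 20*64 band pixels

    The parallel region-based variants the abstract compares replace this one-pixel-at-a-time frontier with simultaneous merge decisions across all region boundaries per iteration.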

  19. DNA-Cryptography-Based Obfuscated Systolic Finite Field Multiplier for Secure Cryptosystem in Smart Grid

    NASA Astrophysics Data System (ADS)

    Chen, Shaobo; Chen, Pingxiuqi; Shao, Qiliang; Basha Shaik, Nazeem; Xie, Jiafeng

    2017-05-01

    Elliptic curve cryptography (ECC) provides much stronger security per bit compared to traditional cryptosystems, and hence is ideally suited to secure communication in the smart grid. On the other hand, secure implementation of finite field multiplication over GF(2^m) is considered the bottleneck of ECC. In this paper, we present a novel obfuscation strategy for the secure implementation of a systolic field multiplier for ECC in the smart grid. First, we propose, for the first time, a novel obfuscation technique to derive an obfuscated systolic finite field multiplier for ECC implementation. Then, we employ a DNA cryptography coding strategy to obfuscate the field multiplier further. Finally, we obtain the area-time-power complexity of the proposed field multiplier to confirm the efficiency of the proposed design. The proposed design is highly obfuscated with low overhead, suitable for secure cryptosystems in the smart grid.

  20. Image segmentation based upon topological operators: real-time implementation case study

    NASA Astrophysics Data System (ADS)

    Mahmoudi, R.; Akil, M.

    2009-02-01

    Thinning and crest restoration are of considerable interest in many image-processing applications. The recommended algorithms for these procedures are those able to act directly on grayscale images while preserving topology, but their high computational cost remains the major obstacle to their adoption. In this paper we present an efficient hardware implementation, on a RISC processor, of two powerful thinning and crest-restoring algorithms developed by our team. The proposed implementation improves execution time. A segmentation chain applied to medical imaging serves as a concrete example to illustrate the improvements brought by optimization techniques at both the algorithmic and architectural levels. In particular, the use of the SSE instruction set of x86-32 processors (PIV, 3.06 GHz) enables real-time processing: a throughput of 33 images (512*512) per second is achieved.

  1. Potential implementation of reservoir computing models based on magnetic skyrmions

    NASA Astrophysics Data System (ADS)

    Bourianoff, George; Pinna, Daniele; Sitte, Matthias; Everschor-Sitte, Karin

    2018-05-01

    Reservoir Computing is a type of recurrent neural network commonly used for recognizing and predicting spatio-temporal events, relying on a complex hierarchy of nested feedback loops to provide a memory functionality. The Reservoir Computing paradigm does not require any knowledge of the reservoir topology or node weights for training purposes and can therefore utilize naturally existing networks formed by a wide variety of physical processes. Most efforts to implement reservoir computing prior to this have focused on utilizing memristor techniques to implement recurrent neural networks. This paper examines the potential of magnetic skyrmion fabrics, and the complex current patterns which form in them, as an attractive physical instantiation for Reservoir Computing. We argue that their nonlinear dynamical interplay resulting from anisotropic magnetoresistance and spin-torque effects allows for effective and energy-efficient nonlinear processing of spatio-temporal events, with the aim of event recognition and prediction.
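
    The training-free property mentioned above (only a linear readout is fitted, never the reservoir itself) is easiest to see in a conventional echo-state-network sketch like the one below. In the skyrmion proposal the random recurrent matrix would be replaced by the physical fabric's response, so everything here is a software stand-in with invented sizes:

        import numpy as np

        rng = np.random.default_rng(42)
        N, T = 200, 2000                     # reservoir size, training length

        # Fixed random reservoir: its topology and weights are never trained,
        # which is what lets an unknown physical medium play the same role.
        W_in = rng.uniform(-0.5, 0.5, N)
        W = rng.normal(0, 1, (N, N))
        W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()    # spectral radius < 1

        u = np.sin(np.arange(T + 1) * 0.2) + 0.5 * np.sin(np.arange(T + 1) * 0.0311)
        x = np.zeros(N)
        states = np.empty((T, N))
        for t in range(T):                                # nonlinear state update
            x = np.tanh(W @ x + W_in * u[t])
            states[t] = x

        # Ridge-regression readout trained to predict u one step ahead
        target = u[1:T + 1]
        ridge = 1e-6
        W_out = np.linalg.solve(states.T @ states + ridge * np.eye(N), states.T @ target)
        pred = states @ W_out
        print(float(np.mean((pred - target) ** 2)))       # small training error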

  2. Implementation of real-time digital signal processing systems

    NASA Technical Reports Server (NTRS)

    Narasimha, M.; Peterson, A.; Narayan, S.

    1978-01-01

    Special-purpose hardware implementation of DFT computers and digital filters is considered in light of newly introduced algorithms and IC devices. Recent work by Winograd on high-speed convolution techniques for computing short-length DFTs has motivated the development of algorithms that are more efficient than the FFT for evaluating the transform of longer sequences. Among these, prime factor algorithms appear suitable for special-purpose hardware implementations. Architectural considerations in designing DFT computers based on these algorithms are discussed. With the availability of monolithic multiplier-accumulators, a direct implementation of IIR and FIR filters, using random access memories in place of shift registers, appears attractive. The memory addressing scheme involved in such implementations is discussed. A simple counter set-up to address the data memory in the realization of FIR filters is also described. The combination of a set of simple filters (a weighting network) and a DFT computer is shown to realize a bank of uniform bandpass filters. The usefulness of this concept in arriving at a modular, microprocessor-based design for a million-channel spectrum analyzer is discussed.
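
    The counter-based data-memory addressing mentioned for FIR filters amounts to a circular buffer: a modulo-N write counter replaces the shift register, so only one memory write happens per sample instead of N shifts. A small Python model of that addressing scheme (illustrative only; the paper targets dedicated hardware with multiplier-accumulators):

        import numpy as np

        def fir_stream(samples, taps):
            """FIR filter over a sample stream using RAM-style circular addressing."""
            N = len(taps)
            delay = np.zeros(N)                # the RAM holding the delay line
            write = 0                          # the address counter
            out = []
            for s in samples:
                delay[write] = s               # single write per sample
                acc = 0.0
                for k in range(N):             # multiply-accumulate loop
                    acc += taps[k] * delay[(write - k) % N]
                out.append(acc)
                write = (write + 1) % N        # counter increments mod N
            return np.array(out)

        taps = np.ones(4) / 4                  # 4-tap moving average
        print(fir_stream([1, 2, 3, 4, 5, 6], taps))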

  3. Green Building Implementation at Schools in North Sulawesi, Indonesia

    NASA Astrophysics Data System (ADS)

    Harimu, D. A. J.; Tumanduk, M. S. S. S.

    2018-02-01

    This research investigates green building implementation at schools in North Sulawesi, Indonesia, and analyzes the relationship between implementation of the green building concept at school and students' green behaviour. It is survey research using a quantitative descriptive method. The analysis units were taken purposively, namely schools that have implemented the green building concept: Manado's 3rd Public Vocational High School, Lokon High School at Tomohon, Manado Independent School at North Minahasa, and Tondano's 3rd Public Vocational High School. Data were collected by observation and questionnaire. The green building assessment criteria for the analysis units are taken from Greenship Existing Building ver 1. Four main points are assessed: Energy Conservation and Efficiency; Water Conservation; Indoor Health and Comfort; and Waste Management. The analysis technique used in this research is simple regression analysis. The results show that there is a significant relation between green building implementation at school and students' green behaviour. This result accords with Gestalt psychology theories, in that architecture can change user behaviour.

  4. Quantum optimal control with automatic differentiation using graphics processors

    NASA Astrophysics Data System (ADS)

    Leung, Nelson; Abdelhafez, Mohamed; Chakram, Srivatsan; Naik, Ravi; Groszkowski, Peter; Koch, Jens; Schuster, David

    We implement quantum optimal control based on automatic differentiation and harness the acceleration afforded by graphics processing units (GPUs). Automatic differentiation allows us to specify advanced optimization criteria and incorporate them into the optimization process with ease. We will describe efficient techniques to optimally control weakly anharmonic systems that are commonly encountered in circuit QED, including coupled superconducting transmon qubits and multi-cavity circuit QED systems. These systems allow for a rich variety of control schemes that quantum optimal control is well suited to explore.
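
    As orientation for the optimal-control loop being described: a piecewise-constant control sequence is iteratively adjusted to maximize gate fidelity. The toy sketch below optimizes a single-qubit gate and, for self-containedness, uses a finite-difference gradient in place of the automatic differentiation (and GPU acceleration) that the work itself employs; the Hamiltonians, segment count and step sizes are illustrative:

        import numpy as np
        from scipy.linalg import expm

        sx = np.array([[0, 1], [1, 0]], complex)
        sz = np.array([[1, 0], [0, -1]], complex)
        H0, Hc = 0.5 * sz, sx                  # drift and control Hamiltonians
        U_target = sx.copy()                   # target: an X gate
        K, dt = 10, 0.2                        # piecewise-constant control segments

        def infidelity(c):
            """1 - |tr(U_target^dag U(c))| / 2 for the time-ordered propagator."""
            U = np.eye(2, dtype=complex)
            for ck in c:
                U = expm(-1j * dt * (H0 + ck * Hc)) @ U
            return 1.0 - abs(np.trace(U_target.conj().T @ U)) / 2.0

        c = np.zeros(K)
        eps, lr = 1e-6, 1.0
        for _ in range(400):                   # plain gradient descent
            f0 = infidelity(c)
            grad = np.array([(infidelity(c + eps * np.eye(K)[k]) - f0) / eps
                             for k in range(K)])
            c -= lr * grad
        print(infidelity(c))                   # small residual infidelity

    Automatic differentiation removes the K extra propagations per step that the finite differences cost here, which is one reason it pairs well with GPU acceleration for the larger transmon and multi-cavity systems mentioned above.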

  5. Methodology to identify risk-significant components for inservice inspection and testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, M.T.; Hartley, R.S.; Jones, J.L. Jr.

    1992-08-01

    Periodic inspection and testing of vital system components should be performed to ensure the safe and reliable operation of Department of Energy (DOE) nuclear processing facilities. Probabilistic techniques may be used to help identify and rank components by their relative risk. A risk-based ranking would allow varied DOE sites to implement inspection and testing programs in an effective and cost-efficient manner. This report describes a methodology that can be used to rank components, while addressing multiple risk issues.

  6. Iterative spectral methods and spectral solutions to compressible flows

    NASA Technical Reports Server (NTRS)

    Hussaini, M. Y.; Zang, T. A.

    1982-01-01

    A spectral multigrid scheme is described which can solve pseudospectral discretizations of self-adjoint elliptic problems in O(N log N) operations. An iterative technique for efficiently implementing semi-implicit time-stepping for pseudospectral discretizations of Navier-Stokes equations is discussed. This approach can handle variable coefficient terms in an effective manner. Pseudospectral solutions of compressible flow problems are presented. These include one dimensional problems and two dimensional Euler solutions. Results are given both for shock-capturing approaches and for shock-fitting ones.

  7. Event reweighting with the NuWro neutrino interaction generator

    NASA Astrophysics Data System (ADS)

    Pickering, Luke; Stowell, Patrick; Sobczyk, Jan

    2017-09-01

    Event reweighting has been implemented in the NuWro neutrino event generator for a number of free theory parameters in the interaction model. Event reweighting is a key analysis technique, used to efficiently study the effect of neutrino interaction model uncertainties. This opens up the possibility for NuWro to be used as a primary event generator by experimental analysis groups. A preliminary model tuning to ANL and BNL data of quasi-elastic and single pion production events was performed to validate the reweighting engine.
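
    Event reweighting itself reduces to per-event probability ratios between parameter points, so a generated sample can be morphed to a new model parameter without rerunning the generator. A toy illustration with an invented one-parameter spectrum (NuWro's actual weights come from its interaction model, not from this toy pdf):

        import numpy as np

        def reweight(events, pdf, theta_nominal, theta_new):
            """Per-event weights that morph a sample generated at theta_nominal
            into the prediction at theta_new."""
            return pdf(events, theta_new) / pdf(events, theta_nominal)

        # Toy "interaction model": exponentially falling spectrum with slope theta
        pdf = lambda x, theta: theta * np.exp(-theta * x)
        rng = np.random.default_rng(7)
        sample = rng.exponential(1.0 / 1.5, size=100_000)   # generated at theta = 1.5

        w = reweight(sample, pdf, 1.5, 2.0)
        # the weighted mean reproduces the theta = 2.0 expectation value, 1/2.0
        print(np.average(sample, weights=w))                # ~0.5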

  8. In-vivo study of blood flow in capillaries using μPIV method

    NASA Astrophysics Data System (ADS)

    Kurochkin, Maxim A.; Fedosov, Ivan V.; Tuchin, Valery V.

    2014-01-01

    A digital optical system for intravital capillaroscopy has been developed. It implements a particle image velocimetry (PIV) based approach for measuring red blood cell velocity in individual capillaries of the human nailfold. We propose the use of a digital real-time stabilization technique to compensate for the impact of involuntary finger movements on the measurements. The image stabilization algorithm is based on correlation-based feature tracking. The efficiency of the designed image stabilization algorithm was experimentally demonstrated.
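
    A common way to implement this kind of real-time stabilization is phase correlation: estimate the integer-pixel shift between a reference frame and the current frame from the FFT cross-power spectrum, then shift the frame back. The sketch below is a generic version of that idea (the authors' algorithm is described as correlation-based feature tracking, which may differ in detail):

        import numpy as np

        def estimate_shift(ref, frame):
            """Integer-pixel translation of `frame` relative to `ref` via phase
            correlation; np.roll(frame, (dy, dx)) then re-aligns the frame."""
            F1, F2 = np.fft.fft2(ref), np.fft.fft2(frame)
            cross = F1 * np.conj(F2)
            corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
            dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
            if dy > ref.shape[0] // 2: dy -= ref.shape[0]   # wrap to signed shifts
            if dx > ref.shape[1] // 2: dx -= ref.shape[1]
            return dy, dx

        rng = np.random.default_rng(3)
        ref = rng.random((128, 128))
        moved = np.roll(np.roll(ref, 5, axis=0), -8, axis=1)  # simulated finger motion
        print(estimate_shift(ref, moved))                     # -> (-5, 8)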

  9. ASDC Advances in the Utilization of Microservices and Hybrid Cloud Environments

    NASA Astrophysics Data System (ADS)

    Baskin, W. E.; Herbert, A.; Mazaika, A.; Walter, J.

    2017-12-01

    The Atmospheric Science Data Center (ASDC) is transitioning many of its software tools and applications to standalone microservices deployable in a hybrid cloud, offering benefits such as scalability and efficient environment management. This presentation features several projects the ASDC staff have implemented leveraging the OpenShift Container Application Platform and the OpenStack hybrid cloud environment, focusing on key tools and techniques applied to: Earth Science data processing; spatial-temporal metadata generation, validation, repair, and curation; and archived data discovery, visualization, and access.

  10. Design and implementation of robust controllers for a gait trainer.

    PubMed

    Wang, F C; Yu, C H; Chou, T Y

    2009-08-01

    This paper applies robust algorithms to control an active gait trainer for children with walking disabilities. Compared with traditional rehabilitation procedures, in which two or three trainers are required to assist the patient, a motor-driven mechanism was constructed to improve the efficiency of the procedures. First, a six-bar mechanism was designed and constructed to mimic the trajectory of children's ankles in walking. Second, system identification techniques were applied to obtain system transfer functions at different operating points by experiments. Third, robust control algorithms were used to design H-infinity robust controllers for the system. Finally, the designed controllers were implemented to experimentally verify the system performance. From the results, the proposed robust control strategies are shown to be effective.

  11. Churchill: an ultra-fast, deterministic, highly scalable and balanced parallelization strategy for the discovery of human genetic variation in clinical and population-scale genomics.

    PubMed

    Kelly, Benjamin J; Fitch, James R; Hu, Yangqiu; Corsmeier, Donald J; Zhong, Huachun; Wetzel, Amy N; Nordquist, Russell D; Newsom, David L; White, Peter

    2015-01-20

    While advances in genome sequencing technology make population-scale genomics a possibility, current approaches for analysis of these data rely upon parallelization strategies that have limited scalability, complex implementations, and a lack of reproducibility. Churchill, a balanced regional parallelization strategy, overcomes these challenges, fully automating the multiple steps required to go from raw sequencing reads to variant discovery. Through implementation of novel deterministic parallelization techniques, Churchill allows computationally efficient analysis of a high-depth whole genome sample in less than two hours. The method is highly scalable, enabling full analysis of the 1000 Genomes raw sequence dataset in a week using cloud resources. http://churchill.nchri.org/.

  12. Study of efficient video compression algorithms for space shuttle applications

    NASA Technical Reports Server (NTRS)

    Poo, Z.

    1975-01-01

    Results are presented of a study on video data compression techniques applicable to space flight communication. This study is directed towards monochrome (black and white) picture communication, with special emphasis on the feasibility of hardware implementation. The primary factors for such a communication system in space flight applications are picture quality, system reliability, power consumption, and hardware weight. In terms of hardware implementation, these are directly related to hardware complexity, effectiveness of the hardware algorithm, immunity of the source code to channel noise, and data transmission rate (or transmission bandwidth). A system is recommended, and its hardware requirements are summarized. Simulations for the study were performed on the improved LIM video controller, which is computer-controlled by the META-4 CPU.

  13. Techniques for efficient, real-time, 3D visualization of multi-modality cardiac data using consumer graphics hardware.

    PubMed

    Levin, David; Aladl, Usaf; Germano, Guido; Slomka, Piotr

    2005-09-01

    We exploit consumer graphics hardware to perform real-time processing and visualization of high-resolution, 4D cardiac data. We have implemented real-time, realistic volume rendering, interactive 4D motion segmentation of cardiac data, visualization of multi-modality cardiac data and 3D display of multiple series cardiac MRI. We show that an ATI Radeon 9700 Pro can render a 512x512x128 cardiac Computed Tomography (CT) study at 0.9 to 60 frames per second (fps) depending on rendering parameters and that 4D motion based segmentation can be performed in real-time. We conclude that real-time rendering and processing of cardiac data can be implemented on consumer graphics cards.

  14. Algorithms for tensor network renormalization

    NASA Astrophysics Data System (ADS)

    Evenbly, G.

    2017-01-01

    We discuss in detail algorithms for implementing tensor network renormalization (TNR) for the study of classical statistical and quantum many-body systems. First, we recall established techniques for how the partition function of a 2D classical many-body system or the Euclidean path integral of a 1D quantum system can be represented as a network of tensors, before describing how TNR can be implemented to efficiently contract the network via a sequence of coarse-graining transformations. The efficacy of the TNR approach is then benchmarked for the 2D classical statistical and 1D quantum Ising models; in particular, the ability of TNR to maintain a high level of accuracy over sustained coarse-graining transformations, even at a critical point, is demonstrated.

  15. LS-DYNA Implementation of Polymer Matrix Composite Model Under High Strain Rate Impact

    NASA Technical Reports Server (NTRS)

    Zheng, Xia-Hua; Goldberg, Robert K.; Binienda, Wieslaw K.; Roberts, Gary D.

    2003-01-01

    A recently developed constitutive model is implemented into LS-DYNA as a user defined material model (UMAT) to characterize the nonlinear strain rate dependent behavior of polymers. By utilizing this model within a micromechanics technique based on a laminate analogy, an algorithm to analyze the strain rate dependent, nonlinear deformation of a fiber reinforced polymer matrix composite is then developed as a UMAT to simulate the response of these composites under high strain rate impact. The models are designed for shell elements in order to ensure computational efficiency. Experimental and numerical stress-strain curves are compared for two representative polymers and a representative polymer matrix composite, with the analytical model predicting the experimental response reasonably well.

  16. Computationally efficient finite-difference modal method for the solution of Maxwell's equations.

    PubMed

    Semenikhin, Igor; Zanuccoli, Mauro

    2013-12-01

    In this work, a new implementation of the finite-difference (FD) modal method (FDMM), based on an iterative approach to calculating the eigenvalues and corresponding eigenfunctions of the Helmholtz equation, is presented. Two relevant enhancements that significantly increase the speed and accuracy of the method are introduced. First, the solution of the complete eigenvalue problem is avoided in favor of finding only the meaningful part of the eigenmodes by using iterative methods. Second, a multigrid algorithm and Richardson extrapolation are implemented. Simultaneous use of these techniques leads to an enhancement in accuracy which allows a simple method such as the FDMM with a typical three-point difference scheme to be significantly competitive with an analytical modal method.
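
    The Richardson extrapolation step mentioned above combines solutions at two grid spacings so the leading discretization error cancels. A minimal illustration on a three-point second-derivative stencil, showing the same O(h^2) to O(h^4) mechanism applied to a toy function rather than to the Helmholtz eigenproblem:

        import numpy as np

        def second_derivative(f, x, h):
            """Standard three-point stencil: leading error term is O(h^2)."""
            return (f(x - h) - 2 * f(x) + f(x + h)) / h**2

        # Richardson extrapolation: combine the h and h/2 estimates so the
        # O(h^2) error cancels, leaving an O(h^4) result.
        f, x, h = np.sin, 1.0, 0.1
        a_h = second_derivative(f, x, h)
        a_h2 = second_derivative(f, x, h / 2)
        extrapolated = (4 * a_h2 - a_h) / 3

        exact = -np.sin(1.0)
        print(abs(a_h - exact), abs(a_h2 - exact), abs(extrapolated - exact))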

  17. Membrane-Based Technologies in the Pharmaceutical Industry and Continuous Production of Polymer-Coated Crystals/Particles.

    PubMed

    Chen, Dengyue; Sirkar, Kamalesh K; Jin, Chi; Singh, Dhananjay; Pfeffer, Robert

    2017-01-01

    Membrane technologies are of increasing importance in a variety of separation and purification applications involving liquid phases and gaseous mixtures. Although the most widely used applications at this time are in water treatment, including desalination, there are many applications in the chemical, food, healthcare, paper and petrochemical industries. This brief review is concerned with existing and emerging applications of various membrane technologies in the pharmaceutical and biopharmaceutical industry. The goal of this review article is to identify important membrane processes and techniques which are being used, or are proposed for use, in pharmaceutical and biopharmaceutical operations. How novel membrane processes can be useful for the delivery of crystalline/particulate drugs is also of interest. Membrane separation technologies are extensively used in downstream biopharmaceutical separation and purification operations via microfiltration, ultrafiltration and diafiltration. The newer technique of membrane chromatography also allows efficient purification of monoclonal antibodies. The membrane filtration techniques of reverse osmosis and nanofiltration are being combined with bioreactors and advanced oxidation processes to treat wastewaters from pharmaceutical plants. Nanofiltration with organic-solvent-stable membranes can implement solvent exchange and catalyst recovery during organic-solvent-based synthesis of pharmaceutical compounds/intermediates. Membranes in the form of hollow fibers can be conveniently used to implement crystallization of pharmaceutical compounds. The novel crystallization methods of the solid hollow fiber cooling crystallizer (SHFCC) and porous hollow fiber anti-solvent crystallization (PHFAC) are being developed to provide efficient methods for continuous production of polymer-coated drug crystals for drug delivery. This brief review provides a general introduction to these applications, with special emphasis on novel membrane techniques for pharmaceutical purposes. The SHFCC method of coating a drug particle with a polymer is stable and ready for scale-up for operation over an extended period. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  18. On-chip integrated functional near infra-red spectroscopy (fNIRS) photoreceiver for portable brain imaging

    NASA Astrophysics Data System (ADS)

    Kamrani, Ehsan

    Optical brain imaging using functional near infra-red spectroscopy (fNIRS) offers a direct and noninvasive tool for monitoring blood oxygenation. fNIRS is a noninvasive, safe, minimally intrusive, high-temporal-resolution technique for real-time and long-term brain imaging that allows detection of both fast neuronal and slow hemodynamic signals. Despite their significant advantages, fNIRS systems still suffer from a few drawbacks, including low spatial resolution, moderately high noise levels and high sensitivity to movement. In order to overcome the limitations of currently available non-portable fNIRS systems, we have introduced a new low-power, miniaturized on-chip photodetector front-end intended for portable fNIRS systems. It includes a silicon avalanche photodiode (SiAPD), a transimpedance amplifier (TIA), and quench-reset circuitry implemented in standard CMOS technologies to operate in both linear and Geiger modes, so it can be applied to both continuous-wave fNIRS (CW-fNIRS) and single-photon counting applications. Several SiAPDs have been implemented in novel structures and shapes (rectangular, octagonal, dual, nested, netted, quadratic and hexadecagonal) using different premature-edge-breakdown prevention techniques. The main characteristics of the SiAPDs are validated, and the impact of each parameter has been studied with device simulators (TCAD, COMSOL, etc.) on the basis of simulation and measurement results. The proposed techniques yield SiAPDs with high avalanche gain (up to 119), low breakdown voltage (around 12 V) and high photon-detection efficiency (up to 72% in the NIR region), in addition to a low dark-count rate (down to 30 Hz at 1 V excess bias voltage). Three new high gain-bandwidth-product (GBW), low-noise TIAs, based on a distributed-gain concept, logarithmic amplification and automatic noise rejection, are introduced and applied in the linear mode of operation. The implemented TIAs offer a power consumption of around 0.4 mW, a transimpedance gain of 169 dBΩ, and input/output current/voltage noise in the fA/pV range, together with the ability to tune the gain, bandwidth and power consumption over a wide range. The implemented mixed quench-reset circuit (MQC) and controllable MQC (CMQC) front-ends offer a quench time of 10 ns, a maximum power consumption of 0.4 mW, and controllable hold-off and reset times. On-chip integration of the SiAPDs with the TIA and photon-counting circuitry has been demonstrated, showing improved photodetection efficiency, especially with regard to sensitivity, power consumption and signal-to-noise ratio (SNR).

  19. Multi-GPU implementation of a VMAT treatment plan optimization algorithm.

    PubMed

    Tian, Zhen; Peng, Fei; Folkerts, Michael; Tan, Jun; Jia, Xun; Jiang, Steve B

    2015-06-01

    Volumetric modulated arc therapy (VMAT) optimization is a computationally challenging problem due to its large data size, high degrees of freedom, and many hardware constraints. High-performance graphics processing units (GPUs) have been used to speed up the computations. However, a GPU's relatively small memory cannot handle cases with a large dose-deposition coefficient (DDC) matrix, e.g., those with a large target size, multiple targets, multiple arcs, and/or a small beamlet size. The main purpose of this paper is to report an implementation of a column-generation-based VMAT algorithm, previously developed in the authors' group, on a multi-GPU platform to solve the memory limitation problem. While the column-generation-based VMAT algorithm has been previously developed, the GPU implementation details have not been reported. Hence, another purpose is to present detailed techniques employed for GPU implementation. The authors also would like to utilize this particular problem as an example to study the feasibility of using a multi-GPU platform to solve large-scale problems in medical physics. The column-generation approach generates VMAT apertures sequentially by solving a pricing problem (PP) and a master problem (MP) iteratively. In the authors' method, the sparse DDC matrix is first stored on a CPU in coordinate list (COO) format. On the GPU side, this matrix is split into four submatrices according to beam angles, which are stored on four GPUs in compressed sparse row (CSR) format. Computation of the beamlet price, the first step in the PP, is accomplished using multiple GPUs. A fast inter-GPU data transfer scheme is accomplished using peer-to-peer access. The remaining steps of the PP and MP are implemented on a CPU or a single GPU due to their modest problem scale and computational loads. The Barzilai-Borwein algorithm with a subspace-step scheme is adopted to solve the MP. A head and neck (H&N) cancer case is then used to validate the authors' method. The authors also compare their multi-GPU implementation with three different single-GPU implementation strategies, i.e., truncating the DDC matrix (S1), repeatedly transferring the DDC matrix between CPU and GPU (S2), and porting computations involving the DDC matrix to the CPU (S3), in terms of both plan quality and computational efficiency. Two more H&N patient cases and three prostate cases are used to demonstrate the advantages of the authors' method. The authors' multi-GPU implementation can finish the optimization process within ∼1 min for the H&N patient case. S1 leads to inferior plan quality, although its total time was 10 s shorter than the multi-GPU implementation due to the reduced matrix size. S2 and S3 yield the same plan quality as the multi-GPU implementation but take ∼4 and ∼6 min, respectively. High computational efficiency was consistently achieved for the other five patient cases tested, with VMAT plans of clinically acceptable quality obtained within 23-46 s. Conversely, to obtain clinically comparable or acceptable plans for all six of the VMAT cases tested in this paper, the optimization time needed in a commercial TPS system on a CPU was found to be on the order of several minutes. The results demonstrate that the multi-GPU implementation of the authors' column-generation-based VMAT optimization can handle the large-scale VMAT optimization problem efficiently without sacrificing plan quality. The authors' study may serve as an example to shed some light on other large-scale medical physics problems that require multi-GPU techniques.
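
    The data layout described above (the sparse DDC matrix held on the CPU in COO format, then split by beam angle into per-GPU CSR blocks whose partial products are gathered) can be sketched in a few lines with scipy.sparse. The sizes here are invented, and plain NumPy stands in for the CUDA kernels and peer-to-peer transfers:

        import numpy as np
        from scipy.sparse import coo_matrix

        # Toy dose-deposition-style matrix in COO (rows: voxels, cols: beamlets).
        rng = np.random.default_rng(5)
        rows, cols = rng.integers(0, 1000, 20000), rng.integers(0, 400, 20000)
        ddc = coo_matrix((rng.random(20000), (rows, cols)), shape=(1000, 400)).tocsc()

        # Split the columns into 4 contiguous beam-angle groups, one per GPU,
        # converting each block to CSR (the row-oriented on-device format).
        n_gpus = 4
        bounds = np.linspace(0, ddc.shape[1], n_gpus + 1).astype(int)
        blocks = [ddc[:, bounds[g]:bounds[g + 1]].tocsr() for g in range(n_gpus)]

        # Each "GPU" computes its partial product; the sum stands in for the
        # peer-to-peer gather of per-device results.
        x = rng.random(ddc.shape[1])
        partial = [blocks[g] @ x[bounds[g]:bounds[g + 1]] for g in range(n_gpus)]
        dose = np.sum(partial, axis=0)
        print(np.allclose(dose, ddc @ x))   # True: the split preserves the product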

  20. Computational efficiency for the surface renewal method

    NASA Astrophysics Data System (ADS)

    Kelley, Jason; Higgins, Chad

    2018-04-01

    Measuring surface fluxes using the surface renewal (SR) method requires programmatic algorithms for tabulation, algebraic calculation, and data quality control. A number of different methods have been published describing automated calibration of SR parameters. Because the SR method utilizes high-frequency (10 Hz+) measurements, some steps in the flux calculation are computationally expensive, especially when automating SR to perform many iterations of these calculations. Several new algorithms were written that perform the required calculations more efficiently and rapidly, and they were tested for sensitivity to the length of the flux averaging period, the ability to measure over a large range of lag timescales, and overall computational efficiency. These algorithms utilize signal processing techniques and algebraic simplifications that demonstrate simple modifications that dramatically improve computational efficiency. The results here complement efforts by other authors to standardize a robust and accurate computational SR method. The increased computation speed grants flexibility in implementing the SR method, opening new avenues for SR to be used in research, for applied monitoring, and in novel field deployments.
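
    The computationally expensive core of SR processing is evaluating structure functions of the high-frequency scalar series at many lags, and the vectorized array-shift formulation below is the kind of algebraic simplification the abstract alludes to. The data here are synthetic, and the authors' full algorithms also cover calibration and quality control:

        import numpy as np

        def structure_function(T, lag, order):
            """S^n(r) for a scalar series: mean of (T[i] - T[i - lag])**n.
            Vectorized slicing replaces a per-sample loop, which makes
            sweeping many lags and orders cheap."""
            d = T[lag:] - T[:-lag]
            return np.mean(d ** order)

        rng = np.random.default_rng(11)
        temps = np.cumsum(rng.normal(0, 0.05, 36000))   # 1 h of 10 Hz temperature data
        lags = np.arange(1, 50)
        S2, S3 = (np.array([structure_function(temps, r, n) for r in lags])
                  for n in (2, 3))
        print(S2[:3], S3[:3])

    In a full SR implementation these moments feed the ramp-amplitude estimate at each candidate lag, which is why efficient sweeps over lag timescales matter.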

  1. Efficient Parallel Video Processing Techniques on GPU: From Framework to Implementation

    PubMed Central

    Su, Huayou; Wen, Mei; Wu, Nan; Ren, Ju; Zhang, Chunyuan

    2014-01-01

    Through reorganizing the execution order and optimizing the data structure, we propose an efficient parallel framework for the H.264/AVC encoder based on a massively parallel architecture. We implemented the proposed framework with CUDA on NVIDIA's GPU. Not only are the compute-intensive components of the H.264 encoder parallelized, but the control-intensive components are also realized effectively, such as CAVLC and the deblocking filter. In addition, we propose several optimization methods, including multiresolution multiwindow motion estimation, a multilevel parallel strategy to enhance the parallelism of intracoding as much as possible, component-based parallel CAVLC, and a direction-priority deblocking filter. More than 96% of the workload of the H.264 encoder is offloaded to the GPU. Experimental results show that the parallel implementation outperforms the serial program by a speedup factor of 20 and satisfies the requirement of real-time HD encoding at 30 fps. The loss of PSNR ranges from 0.14 dB to 0.77 dB at the same bitrate. Through analysis of the kernels, we found that the speedup ratios of the compute-intensive algorithms are proportional to the computational power of the GPU. However, the performance of the control-intensive parts (CAVLC) is strongly related to the memory bandwidth, which offers an insight for new architecture designs. PMID:24757432

  2. Improved adjoin-list for quality-guided phase unwrapping based on red-black trees

    NASA Astrophysics Data System (ADS)

    Cruz-Santos, William; López-García, Lourdes; Rueda-Paz, Juvenal; Redondo-Galvan, Arturo

    2016-08-01

    Quality-guided phase unwrapping is an important technique based on quality maps, which guide the unwrapping process. The efficiency of this technique depends on the implementation of the adjoin-list data structure. Several proposals improve the adjoin-list. Ming Zhao et al. proposed an Indexed Interwoven Linked List (I2L2), based on dividing the quality values into intervals of equal size and inserting into a linked list those pixels with quality values within a certain interval. Ming Zhao and Qian Kemao improved I2L2 by replacing the linked list in each interval with a heap data structure, which allows efficient insertion and deletion procedures. In this paper, we propose an improved I2L2 that uses red-black tree (RBT) data structures for each interval. The main goal of our proposal is to avoid the unbalanced behavior of the heap and thus reduce the time complexity of insertion. In order to maintain the same efficiency as the heap when deleting an element, we provide an efficient way to remove the pixel with the highest quality value in the RBT using a pointer to the rightmost element in the tree. We also provide a new partition strategy of the phase values based on a density criterion. Experimental results applied to phase-shifting profilometry are shown for large images.
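
    Whatever the container, linked lists per interval, heaps, or the red-black trees proposed here, the adjoin list must support two operations cheaply: insert a boundary pixel keyed by quality, and remove the highest-quality pixel. The sketch below shows that access pattern inside a minimal quality-guided unwrapper, using Python's heapq as a stand-in for the RBT-based structure:

        import heapq
        import numpy as np

        def quality_guided_unwrap(phase, quality):
            """Grow from the highest-quality pixel outward, unwrapping each
            neighbor relative to the pixel it was reached from. The heap plays
            the role of the adjoin list: pop-max and insert dominate the cost."""
            H, W = phase.shape
            unwrapped = phase.copy()
            done = np.zeros((H, W), bool)
            seed = np.unravel_index(np.argmax(quality), quality.shape)
            done[seed] = True
            heap = [(-quality[seed], seed)]            # negate for a max-queue
            while heap:
                _, (i, j) = heapq.heappop(heap)
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W and not done[ni, nj]:
                        step = phase[ni, nj] - unwrapped[i, j]
                        unwrapped[ni, nj] = phase[ni, nj] \
                            - 2 * np.pi * np.round(step / (2 * np.pi))
                        done[ni, nj] = True
                        heapq.heappush(heap, (-quality[ni, nj], (ni, nj)))
            return unwrapped

        ramp = np.linspace(0, 6 * np.pi, 128)[None, :] * np.ones((8, 1))
        wrapped = np.angle(np.exp(1j * ramp))
        result = quality_guided_unwrap(wrapped, np.ones_like(wrapped))
        print(np.allclose(result - result[0, 0], ramp - ramp[0, 0], atol=1e-6))  # True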

  3. High efficiency H6 single-phase transformerless grid-tied PV inverter with proposed modulation for reactive power generation

    NASA Astrophysics Data System (ADS)

    Almasoudi, Fahad M.; Alatawi, Khaled S.; Matin, Mohammad

    2017-08-01

    Implementation of transformerless inverters in PV grid-tied systems offers great benefits such as high efficiency, light weight, low cost, etc. Most of the transformerless inverters proposed in the literature are verified only for real power applications. Current international standards such as VDE-AR-N 4105 demand that PV grid-tied inverters have the ability to control a specific amount of reactive power. Generation of reactive power cannot be accomplished in single-phase transformerless inverter topologies because the existing modulation techniques do not provide a freewheeling path in the negative power region. This paper enhances a previously proposed high-efficiency H6 transformerless inverter with SiC MOSFETs and demonstrates new operating modes for the generation of reactive power. A proposed pulse width modulation (PWM) technique is applied to achieve bidirectional current flow through the freewheeling state. A comparison of the proposed H6 transformerless inverter using SiC MOSFETs and Si MOSFETs is presented in terms of power losses and efficiency. The results show that reactive power control is attained without adding any additional active devices or modifying the inverter structure. The proposed modulation also maintains a constant common-mode (CM) voltage during every operating mode and has low leakage current. The performance of the proposed system verifies its effectiveness for next-generation PV systems.

  4. Characterization of silicon photomultipliers and validation of the electrical model

    NASA Astrophysics Data System (ADS)

    Peng, Peng; Qiang, Yi; Ross, Steve; Burr, Kent

    2018-04-01

    This paper introduces a systematic way to measure most features of silicon photomultipliers (SiPMs). We implement an efficient two-laser procedure to measure the recovery time. The avalanche probability was found to play an important role in explaining the observed behavior of the SiPM recovery process. We also demonstrate how equivalent-circuit parameters measured by optical tests can be used in SPICE modeling to predict details of the time constants relevant to the pulse shape. The SiPM properties measured include breakdown voltage, gain, diode capacitance, quench resistance, quench capacitance, dark count rate, photodetection efficiency, cross-talk and after-pulsing probability, and recovery time. We apply these techniques to SiPMs from two companies: Hamamatsu and SensL.
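
    For orientation, the two-laser recovery measurement boils down to fitting the relative amplitude of the probe pulse versus inter-pulse delay. A minimal fit with an assumed single-exponential recharge model (real data also fold in the avalanche probability the abstract highlights, and all numbers here are synthetic):

        import numpy as np
        from scipy.optimize import curve_fit

        # First pulse fires a microcell; the second probes it after a delay dt.
        # The relative second-pulse amplitude recovers as the microcell
        # recharges through the quench resistor.
        recovery = lambda dt, tau: 1.0 - np.exp(-dt / tau)

        dt = np.linspace(5, 200, 30)                              # ns
        rng = np.random.default_rng(2)
        amp = recovery(dt, 35.0) + rng.normal(0, 0.01, dt.size)   # synthetic data

        (tau_fit,), _ = curve_fit(recovery, dt, amp, p0=[20.0])
        print(tau_fit)   # ~35 ns; compare with R_quench * C_diode from the model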

  5. Towards developing robust algorithms for solving partial differential equations on MIMD machines

    NASA Technical Reports Server (NTRS)

    Saltz, Joel H.; Naik, Vijay K.

    1988-01-01

    Methods for efficient computation of numerical algorithms on a wide variety of MIMD machines are proposed. These techniques reorganize the data dependency patterns to improve the processor utilization. The model problem finds the time-accurate solution to a parabolic partial differential equation discretized in space and implicitly marched forward in time. The algorithms are extensions of Jacobi and SOR. The extensions consist of iterating over a window of several timesteps, allowing efficient overlap of computation with communication. The methods increase the degree to which work can be performed while data are communicated between processors. The effect of the window size and of domain partitioning on the system performance is examined both by implementing the algorithm on a simulated multiprocessor system.

  6. Towards developing robust algorithms for solving partial differential equations on MIMD machines

    NASA Technical Reports Server (NTRS)

    Saltz, J. H.; Naik, V. K.

    1985-01-01

    Methods for efficient computation of numerical algorithms on a wide variety of MIMD machines are proposed. These techniques reorganize the data dependency patterns to improve the processor utilization. The model problem finds the time-accurate solution to a parabolic partial differential equation discretized in space and implicitly marched forward in time. The algorithms are extensions of Jacobi and SOR. The extensions consist of iterating over a window of several timesteps, allowing efficient overlap of computation with communication. The methods increase the degree to which work can be performed while data are communicated between processors. The effect of the window size and of domain partitioning on the system performance is examined both by implementing the algorithm on a simulated multiprocessor system.

  7. Three-dimensional optical memory systems based on photochromic materials: polarization control of two-color data writing and the possibility of nondestructive data reading

    NASA Astrophysics Data System (ADS)

    Akimov, D. A.; Fedotov, Andrei B.; Koroteev, Nikolai I.; Magnitskii, S. A.; Naumov, A. N.; Sidorov-Biryukov, Dmitri A.; Sokoluk, N. T.; Zheltikov, Alexei M.

    1998-04-01

    The possibilities of optimizing data writing and reading in devices of 3D optical memory using photochromic materials are discussed. We quantitatively analyze linear and nonlinear optical properties of induline spiropyran molecules, which allows us to estimate the efficiency of using such materials for implementing 3D optical-memory devices. It is demonstrated that, with an appropriate choice of polarization vectors of laser beams, one can considerably improve the efficiency of two-photon writing in photochromic materials. The problem of reading the data stored in a photochromic material is analyzed. The possibilities of data reading methods with the use of fluorescence and four-photon techniques are compared.

  8. On Improving Efficiency of Differential Evolution for Aerodynamic Shape Optimization Applications

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.

    2004-01-01

    Differential Evolution (DE) is a simple and robust evolutionary strategy that has been proven effective in determining the global optimum for several difficult optimization problems. Although DE offers several advantages over traditional optimization approaches, its use in applications such as aerodynamic shape optimization, where the objective function evaluations are computationally expensive, is limited by the large number of function evaluations often required. In this paper various approaches for improving the efficiency of DE are reviewed and discussed. Several approaches that have proven effective for other evolutionary algorithms are modified and implemented in a DE-based aerodynamic shape optimization method that uses a Navier-Stokes solver for the objective function evaluations. Parallelization techniques on distributed computers are used to reduce turnaround times. Results are presented for standard test optimization problems and for the inverse design of a turbine airfoil. The efficiency improvements achieved by the different approaches are evaluated and compared.
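
    For reference, the baseline DE/rand/1/bin strategy that such efficiency improvements start from fits in a few lines. In the aerodynamic setting the objective below would be an expensive Navier-Stokes evaluation, which is why reducing the evaluation count (and parallelizing the population loop) matters; everything here is a generic textbook sketch, not the paper's modified variants:

        import numpy as np

        def differential_evolution(f, bounds, np_pop=30, F=0.8, CR=0.9, gens=200):
            """Classic DE/rand/1/bin: mutate with a scaled difference vector,
            binomially cross with the parent, keep the trial if it is better."""
            rng = np.random.default_rng(0)
            lo, hi = np.array(bounds).T
            pop = rng.uniform(lo, hi, (np_pop, lo.size))
            cost = np.array([f(x) for x in pop])
            for _ in range(gens):
                for i in range(np_pop):
                    others = [k for k in range(np_pop) if k != i]
                    a, b, c = pop[rng.choice(others, 3, replace=False)]
                    mutant = np.clip(a + F * (b - c), lo, hi)
                    cross = rng.random(lo.size) < CR
                    cross[rng.integers(lo.size)] = True      # at least one gene
                    trial = np.where(cross, mutant, pop[i])
                    if (tc := f(trial)) < cost[i]:           # greedy selection
                        pop[i], cost[i] = trial, tc
            return pop[cost.argmin()], cost.min()

        rosenbrock = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
        print(differential_evolution(rosenbrock, [(-2, 2), (-2, 2)]))  # -> near (1, 1), 0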

  9. A nonlinear relaxation/quasi-Newton algorithm for the compressible Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Edwards, Jack R.; Mcrae, D. S.

    1992-01-01

    A highly efficient implicit method for the computation of steady, two-dimensional compressible Navier-Stokes flowfields is presented. The discretization of the governing equations is hybrid in nature, with flux-vector splitting utilized in the streamwise direction and central differences with flux-limited artificial dissipation used for the transverse fluxes. Line Jacobi relaxation is used to provide a suitable initial guess for a new nonlinear iteration strategy based on line Gauss-Seidel sweeps. The applicability of quasi-Newton methods as convergence accelerators for this and other line relaxation algorithms is discussed, and efficient implementations of such techniques are presented. Convergence histories and comparisons with experimental data are presented for supersonic flow over a flat plate and for several high-speed compression corner interactions. Results indicate a marked improvement in computational efficiency over more conventional upwind relaxation strategies, particularly for flowfields containing large pockets of streamwise subsonic flow.

  10. High Step-Up DC—DC Converter for AC Photovoltaic Module with MPPT Control

    NASA Astrophysics Data System (ADS)

    Sundar, Govindasamy; Karthick, Narashiman; Rama Reddy, Sasi

    2014-08-01

    This paper presents a high-gain step-up boost converter, which is essential to step up the low output voltage from a PV panel to the high voltage required by the application. A high-gain boost converter with a coupled-inductor technique is proposed, with MPPT control. Without an extreme duty ratio or a large turns ratio of the coupled inductor, this converter achieves a high step-up voltage-conversion ratio, and the leakage energy of the coupled inductor is efficiently recycled to the load. MPPT control is used to extract the maximum power from the PV panel by controlling the duty ratio of the converter. The PV panel, boost converter and MPPT are modeled using SimPowerSystems blocks in the MATLAB/SIMULINK environment. A prototype of the proposed converter has been implemented; the maximum measured efficiency is 95.4% and the full-load efficiency is 93.1%.
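
    MPPT by perturb-and-observe, one common way to realize the duty-ratio control mentioned above (the abstract does not specify which MPPT algorithm is used), can be sketched as a simple hill climb on the PV power curve. The panel model and step sizes below are invented:

        def perturb_and_observe(measure_pv, v_step=0.5, steps=100, v0=20.0):
            """Perturb & Observe MPPT: nudge the operating voltage (realized in
            hardware via the converter duty ratio) and keep moving in whichever
            direction the measured power increased."""
            v, direction = v0, +1
            p_prev = measure_pv(v)
            for _ in range(steps):
                v += direction * v_step
                p = measure_pv(v)
                if p < p_prev:           # power fell: reverse the perturbation
                    direction = -direction
                p_prev = p
            return v

        # Toy PV power curve P(v) = v * (10.5 - 0.25 v): maximum near 21 V
        pv_power = lambda v: max(0.0, v * (8.0 - 0.25 * (v - 10.0)))
        print(perturb_and_observe(pv_power))   # oscillates around ~21 V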

  11. Massively parallel multicanonical simulations

    NASA Astrophysics Data System (ADS)

    Gross, Jonathan; Zierenberg, Johannes; Weigel, Martin; Janke, Wolfhard

    2018-03-01

    Generalized-ensemble Monte Carlo simulations such as the multicanonical method and similar techniques are among the most efficient approaches for simulations of systems undergoing discontinuous phase transitions or with rugged free-energy landscapes. As Markov chain methods, they are inherently serial. It was demonstrated recently, however, that a combination of independent simulations that communicate weight updates at variable intervals allows for the efficient utilization of parallel computational resources for multicanonical simulations. Implementing this approach for the many-thread architecture provided by current generations of graphics processing units (GPUs), we show how it can be efficiently employed with on the order of 10^4 parallel walkers and beyond, thus constituting a versatile tool for Monte Carlo simulations in the era of massively parallel computing. We provide fully documented source code for the approach, applied to the paradigmatic example of the two-dimensional Ising model, as a starting point and reference for practitioners in the field.
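
    A serial skeleton of the multicanonical weight iteration for the 2D Ising model may help fix ideas: sample with the current weights W(E), accumulate the energy histogram H(E), and flatten via W <- W/H. The parallel scheme in the paper runs many such walkers sharing these weight updates; the sweep counts and the simple update rule below are a minimal textbook version:

        import numpy as np

        rng = np.random.default_rng(0)
        L = 16                                  # 2D Ising lattice, periodic
        spins = rng.choice([-1, 1], (L, L))
        n_bins = L * L + 1                      # E in {-2L^2, ..., 2L^2}, step 4
        logW = np.zeros(n_bins)                 # multicanonical weights

        def energy(s):
            return -int((s * np.roll(s, 1, 0)).sum() + (s * np.roll(s, 1, 1)).sum())

        E = energy(spins)
        for it in range(20):                    # weight-update iterations
            H = np.zeros(n_bins)
            for _ in range(20000):              # Metropolis steps with W(E) weights
                i, j = rng.integers(0, L, 2)
                dE = 2 * spins[i, j] * (spins[(i + 1) % L, j] + spins[i - 1, j]
                                        + spins[i, (j + 1) % L] + spins[i, j - 1])
                b_old, b_new = (E + 2 * L * L) // 4, (E + dE + 2 * L * L) // 4
                if np.log(rng.random()) < logW[b_new] - logW[b_old]:
                    spins[i, j] *= -1
                    E += dE
                H[(E + 2 * L * L) // 4] += 1
            logW[H > 0] -= np.log(H[H > 0])     # flatten: W <- W / H
        print(logW)    # approaches -log g(E) up to a constant: flat histogram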

  12. Ffuzz: Towards full system high coverage fuzz testing on binary executables.

    PubMed

    Zhang, Bin; Ye, Jiaxi; Bi, Xing; Feng, Chao; Tang, Chaojing

    2018-01-01

    Bugs and vulnerabilities in binary executables threaten cyber security. Current discovery methods, like fuzz testing, symbolic execution and manual analysis, each have advantages and disadvantages when exercising the deeper code areas of binary executables to find more bugs. In this paper, we designed and implemented a hybrid automatic bug-finding tool, Ffuzz, on top of fuzz testing and selective symbolic execution. It targets full-system software stack testing, including both user space and kernel space. Combining these two mainstream techniques enables us to achieve higher coverage and to avoid getting stuck in either fuzz testing or symbolic execution. We also proposed two key optimizations to improve the efficiency of full-system testing. We evaluated the efficiency and effectiveness of our method on real-world binary software and on 844 memory-corruption-vulnerable programs in the Juliet test suite. The results show that Ffuzz can discover software bugs in the full system software stack effectively and efficiently.

  13. Development of noSQL data storage for the ATLAS PanDA Monitoring System

    NASA Astrophysics Data System (ADS)

    Potekhin, M.; ATLAS Collaboration

    2012-06-01

    For several years the PanDA Workload Management System has been the basis for distributed production and analysis for the ATLAS experiment at the LHC. Since the start of data taking PanDA usage has ramped up steadily, typically exceeding 500k completed jobs/day by June 2011. The associated monitoring data volume has been rising as well, to levels that present a new set of challenges in the areas of database scalability and monitoring system performance and efficiency. These challenges are being met with a R&D effort aimed at implementing a scalable and efficient monitoring data storage based on a noSQL solution (Cassandra). We present our motivations for using this technology, as well as data design and the techniques used for efficient indexing of the data. We also discuss the hardware requirements as they were determined by testing with actual data and realistic loads.

  14. A European mobile satellite system concept exploiting CDMA and OBP

    NASA Technical Reports Server (NTRS)

    Vernucci, A.; Craig, A. D.

    1993-01-01

    This paper describes a novel Land Mobile Satellite System (LMSS) concept applicable to networks allowing access to a large number of gateway stations ('Hubs'), utilizing low-cost Very Small Aperture Terminals (VSAT's). Efficient operation of the Forward-Link (FL) repeater can be achieved by adopting a synchronous Code Division Multiple Access (CDMA) technique, whereby inter-code interference (self-noise) is virtually eliminated by synchronizing orthogonal codes. However, with a transparent FL repeater, the requirements imposed by the highly decentralized ground segment can lead to significant efficiency losses. The adoption of a FL On-Board Processing (OBP) repeater is proposed as a means of largely recovering this efficiency impairment. The paper describes the network architecture, the system design and performance, the OBP functions and impact on implementation. The proposed concept, applicable to a future generation of the European LMSS, was developed in the context of a European Space Agency (ESA) study contract.

  15. Gradient-based Optimization for Poroelastic and Viscoelastic MR Elastography

    PubMed Central

    Tan, Likun; McGarry, Matthew D.J.; Van Houten, Elijah E.W.; Ji, Ming; Solamen, Ligin; Weaver, John B.

    2017-01-01

    We describe an efficient gradient computation for solving inverse problems arising in magnetic resonance elastography (MRE). The algorithm can be considered as a generalized ‘adjoint method’ based on a Lagrangian formulation. One requirement for the classic adjoint method is assurance of the self-adjoint property of the stiffness matrix in the elasticity problem. In this paper, we show this property is no longer a necessary condition in our algorithm, but the computational performance can be as efficient as the classic method, which involves only two forward solutions and is independent of the number of parameters to be estimated. The algorithm is developed and implemented in material property reconstructions using poroelastic and viscoelastic modeling. Various gradient- and Hessian-based optimization techniques have been tested on simulation, phantom and in vivo brain data. The numerical results show the feasibility and the efficiency of the proposed scheme for gradient calculation. PMID:27608454
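
    The gradient computation described above can be illustrated with a deliberately simplified, generic sketch (not the authors' MRE formulation): for a discrete system K(theta)u = f and a least-squares data mismatch, one forward solve and one adjoint solve yield the full gradient, independent of the number of parameters, and without requiring K to be self-adjoint. All matrices and sizes below are made-up stand-ins.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(1)
        n, p = 40, 3                                   # state size, parameter count (assumed)
        K_parts = [np.diag(rng.uniform(1.0, 2.0, n)) for _ in range(p)]   # dK/dtheta_i
        f = rng.standard_normal(n)
        theta_true = np.array([1.0, 0.5, 2.0])

        def K(theta):
            return sum(t * Kp for t, Kp in zip(theta, K_parts)) + np.eye(n)

        d = np.linalg.solve(K(theta_true), f)          # synthetic "measured" field

        def loss_and_grad(theta):
            """J = 0.5*||u - d||^2 with K(theta)u = f; gradient via the adjoint."""
            A = K(theta)
            u = np.linalg.solve(A, f)                  # forward solve
            r = u - d
            lam = np.linalg.solve(A.T, r)              # adjoint solve (A.T, not A)
            grad = np.array([-lam @ (Kp @ u) for Kp in K_parts])
            return 0.5 * r @ r, grad

        res = minimize(loss_and_grad, np.ones(p), jac=True,
                       method="L-BFGS-B", bounds=[(0.1, 5.0)] * p)
        print("recovered:", res.x.round(3), "  true:", theta_true)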

  16. High efficiency laser-assisted H- charge exchange for microsecond duration beams

    DOE PAGES

    Cousineau, Sarah; Rakhman, Abdurahim; Kay, Martin; ...

    2017-12-26

    Laser-assisted stripping is a novel approach to H- charge exchange that overcomes long-standing limitations associated with the traditional, foil-based method of producing high-intensity, time-structured beams of protons. This paper reports on the first successful demonstration of the laser stripping technique for microsecond duration beams. The experiment represents a factor of 1000 increase in the stripped pulse duration compared with the previous proof-of-principle demonstration. The central theme of the experiment is the implementation of methods to reduce the required average laser power such that high efficiency stripping can be accomplished for microsecond duration beams using conventional laser technology. In conclusion, the experiment was performed on the Spallation Neutron Source 1 GeV H- beam using a 1 MW peak power UV laser and resulted in ~95% stripping efficiency.

  17. High efficiency laser-assisted H- charge exchange for microsecond duration beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cousineau, Sarah; Rakhman, Abdurahim; Kay, Martin

    Laser-assisted stripping is a novel approach to H- charge exchange that overcomes long-standing limitations associated with the traditional, foil-based method of producing high-intensity, time-structured beams of protons. This paper reports on the first successful demonstration of the laser stripping technique for microsecond duration beams. The experiment represents a factor of 1000 increase in the stripped pulse duration compared with the previous proof-of-principle demonstration. The central theme of the experiment is the implementation of methods to reduce the required average laser power such that high efficiency stripping can be accomplished for microsecond duration beams using conventional laser technology. In conclusion, the experiment was performed on the Spallation Neutron Source 1 GeV H- beam using a 1 MW peak power UV laser and resulted in ~95% stripping efficiency.

  18. Authentication techniques for smart cards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, R.A.

    1994-02-01

    Smart card systems are most cost efficient when implemented as a distributed system, which is a system without central host interaction or a local database of card numbers for verifying transaction approval. A distributed system, as such, presents special card and user authentication problems. Fortunately, smart cards offer processing capabilities that provide solutions to authentication problems, provided the system is designed with proper data integrity measures. Smart card systems maintain data integrity through a security design that controls data sources and limits data changes. A good security design is usually the result of a system analysis that provides a thorough understanding of the application needs. Once designers understand the application, they may specify authentication techniques that mitigate the risk of system compromise or failure. Current authentication techniques include cryptography, passwords, challenge/response protocols, and biometrics. The security design includes these techniques to help prevent counterfeit cards, unauthorized use, or information compromise. This paper discusses card authentication and user identity techniques that enhance security for microprocessor card systems. It also describes the analysis process used for determining proper authentication techniques for a system.
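
    As a minimal sketch of one technique named above, the snippet below implements a symmetric challenge/response exchange with per-card key diversification, so a distributed terminal can verify a card without contacting a central host. The key-derivation scheme and field layout are illustrative assumptions, not a specification of any deployed card system.

        import hmac, hashlib, os

        MASTER_KEY = os.urandom(16)               # held by the issuer / secure terminal module

        def diversified_key(card_id: bytes) -> bytes:
            """Per-card key: compromising one card does not expose the whole fleet."""
            return hmac.new(MASTER_KEY, card_id, hashlib.sha256).digest()

        class Card:
            def __init__(self, card_id: bytes):
                self.card_id = card_id
                self.key = diversified_key(card_id)    # personalized at issuance
            def respond(self, challenge: bytes) -> bytes:
                return hmac.new(self.key, challenge, hashlib.sha256).digest()

        def terminal_authenticate(card: Card) -> bool:
            challenge = os.urandom(16)                 # fresh nonce defeats replay attacks
            expected = hmac.new(diversified_key(card.card_id), challenge,
                                hashlib.sha256).digest()
            return hmac.compare_digest(expected, card.respond(challenge))

        print(terminal_authenticate(Card(b"card-0001")))   # True for a genuine card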

  19. Image space subdivision for fast ray tracing

    NASA Astrophysics Data System (ADS)

    Yu, Billy T.; Yu, William W.

    1999-09-01

    Ray tracing is notorious for its computational requirements. A number of techniques have been proposed to speed up the process; however, a well-known statistic indicates that ray-object intersections occupy over 95% of the total image generation time, so it is most beneficial to attack this bottleneck. Ray-object intersection reduction techniques can be classified into three major categories: bounding volume hierarchies, space subdivision, and directional subdivision. This paper introduces a technique falling into the third category. To further speed up the process, it takes advantage of hierarchy by adopting an MX-CIF quadtree in the image space. This special kind of quadtree provides simple object allocation and ease of implementation. The paper also includes a theoretical analysis of the expected performance: for ray-polygon comparisons, the technique reduces the order of complexity from linear to square root, i.e., from O(n) to O(√n). Experiments with scenes of various shapes, sizes, and complexities were conducted to verify this expectation. Results showed that the computational improvement grew with the complexity of the scenery; the experimental improvement was more than 90% and agreed with the theoretical value when the number of polygons exceeded 3000. The more complex the scene, the more efficient the acceleration. The algorithm described was implemented at the polygon level; however, it could easily be enhanced and extended to the object or higher levels.
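
    The MX-CIF quadtree at the heart of the method stores each object in the smallest quadrant that fully contains its bounding box, so insertion walks down at most the tree depth and straddling objects stay near the root. The sketch below, with an assumed unit-square image space and axis-aligned boxes, is our illustration rather than the paper's implementation.

        class MXCIFNode:
            def __init__(self, x, y, size):
                self.x, self.y, self.size = x, y, size   # lower-left corner, side length
                self.objects = []                        # objects pinned to this cell
                self.children = {}                       # quadrants, created lazily

            def _containing_quadrant(self, box):
                """Return the single quadrant fully containing box, or None."""
                half = self.size / 2.0
                for qx in (self.x, self.x + half):
                    for qy in (self.y, self.y + half):
                        if (qx <= box[0] and box[2] <= qx + half and
                                qy <= box[1] and box[3] <= qy + half):
                            return (qx, qy, half)
                return None

            def insert(self, box, obj, min_size=1e-3):
                quad = self._containing_quadrant(box)
                if quad is None or self.size <= min_size:
                    self.objects.append((box, obj))      # smallest fully containing cell
                    return
                if quad not in self.children:
                    self.children[quad] = MXCIFNode(*quad)
                self.children[quad].insert(box, obj, min_size)

        root = MXCIFNode(0.0, 0.0, 1.0)
        root.insert((0.10, 0.10, 0.20, 0.20), "triangle A")  # descends into one quadrant
        root.insert((0.40, 0.40, 0.60, 0.60), "triangle B")  # straddles the centre: stays at root
        print(root.objects, len(root.children))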

  20. Enhancement of Satellite Image Compression Using a Hybrid (DWT-DCT) Algorithm

    NASA Astrophysics Data System (ADS)

    Shihab, Halah Saadoon; Shafie, Suhaidi; Ramli, Abdul Rahman; Ahmad, Fauzan

    2017-12-01

    Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) image compression techniques have been utilized in most of the earth observation satellites launched during the last few decades. However, these techniques have some issues that should be addressed. The DWT method has proven to be more efficient than DCT for several reasons; nevertheless, the DCT can be exploited to improve high-resolution satellite image compression when combined with the DWT technique. Hence, a proposed hybrid (DWT-DCT) method was developed and implemented in the current work, simulating an image compression system on board a small remote sensing satellite, with the aim of achieving a higher compression ratio to decrease the onboard data storage and the downlink bandwidth, while avoiding further complex levels of DWT. This method also succeeded in maintaining the reconstructed satellite image quality by replacing the standard forward DWT thresholding and quantization processes with an alternative process that employed the zero-padding technique, which also helped to reduce the processing time of DWT compression. The DCT, DWT and the proposed hybrid methods were implemented individually, for comparison, on three LANDSAT 8 images, using the MATLAB software package. A comparison was also made between the proposed method and three other previously published hybrid methods. The evaluation of all the objective and subjective results indicated the feasibility of using the proposed hybrid (DWT-DCT) method to enhance the image compression process on board satellites.
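
    A rough sketch of such a hybrid transform stage (assuming the PyWavelets and SciPy packages; the wavelet choice, single decomposition level, and simple coefficient thresholding are our illustrative stand-ins for the paper's pipeline) looks like this:

        import numpy as np
        import pywt
        from scipy.fft import dctn, idctn

        rng = np.random.default_rng(0)
        img = rng.random((256, 256))                 # stand-in for a satellite image band

        # One DWT level: approximation band cA plus three detail bands.
        cA, (cH, cV, cD) = pywt.dwt2(img, "haar")

        # DCT on the approximation band, then discard small coefficients.
        coeffs = dctn(cA, norm="ortho")
        thresh = np.quantile(np.abs(coeffs), 0.90)   # keep roughly 10% of DCT terms
        coeffs[np.abs(coeffs) < thresh] = 0.0

        # Decoder side: inverse DCT, then inverse DWT.
        cA_rec = idctn(coeffs, norm="ortho")
        rec = pywt.idwt2((cA_rec, (cH, cV, cD)), "haar")

        mse = np.mean((img - rec) ** 2)
        print(f"kept {np.count_nonzero(coeffs)} DCT coefficients, MSE = {mse:.2e}")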

  1. MSC/NASTRAN DMAP Alter Used for Closed-Form Static Analysis With Inertia Relief and Displacement-Dependent Loads

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Solving for the displacements of free-free coupled systems acted upon by static loads is a common task in the aerospace industry. Often, these problems are solved by static analysis with inertia relief. This technique allows for a free-free static analysis by balancing the applied loads with the inertia loads generated by the applied loads. For some engineering applications, the displacements of the free-free coupled system induce additional static loads. Hence, the applied loads are equal to the original loads plus the displacement-dependent loads. A launch vehicle being acted upon by an aerodynamic loading can have such applied loads. The final displacements of such systems are commonly determined with iterative solution techniques. Unfortunately, these techniques can be time consuming and labor intensive. Because the coupled system equations for free-free systems with displacement-dependent loads can be written in closed form, it is advantageous to solve for the displacements in this manner. Implementing closed-form equations in static analysis with inertia relief is analogous to implementing transfer functions in dynamic analysis. An MSC/NASTRAN (MacNeal-Schwendler Corporation/NASA Structural Analysis) DMAP (Direct Matrix Abstraction Program) Alter was used to include displacement-dependent loads in static analysis with inertia relief. It efficiently solved a common aerospace problem that typically has been solved with an iterative technique.
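
    The closed-form idea can be shown in a few lines: if the applied load is a fixed part F0 plus a displacement-dependent part A u, then K u = F0 + A u can be solved directly as (K - A) u = F0 instead of iterating. The small matrices below are arbitrary stand-ins, not a NASTRAN model, and the free-free inertia-relief bookkeeping is omitted.

        import numpy as np

        rng = np.random.default_rng(2)
        K = np.diag(rng.uniform(10.0, 20.0, 5))      # stand-in stiffness matrix
        A = 0.5 * rng.standard_normal((5, 5))        # displacement-dependent load operator
        F0 = rng.standard_normal(5)                  # original applied load

        # Closed form: (K - A) u = F0, one solve.
        u_closed = np.linalg.solve(K - A, F0)

        # Classical fixed-point iteration: u <- K^-1 (F0 + A u).
        u = np.zeros(5)
        for _ in range(100):
            u = np.linalg.solve(K, F0 + A @ u)

        print(np.allclose(u, u_closed))              # both agree when the iteration converges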

  2. Custom instruction set NIOS-based OFDM processor for FPGAs

    NASA Astrophysics Data System (ADS)

    Meyer-Bäse, Uwe; Sunkara, Divya; Castillo, Encarnacion; Garcia, Antonio

    2006-05-01

    Orthogonal frequency-division multiplexing (OFDM), a spread-spectrum technique sometimes also called multi-carrier or discrete multi-tone modulation, is used in bandwidth-efficient communication systems in the presence of channel distortion. The benefits of OFDM are high spectral efficiency, resiliency to RF interference, and lower multi-path distortion. OFDM is the basis for the European digital audio broadcasting (DAB) standard, the global asymmetric digital subscriber line (ADSL) standard, and the IEEE 802.11 5.8 GHz band standard, and it features in ongoing development of wireless local area networks. The modulator and demodulator in an OFDM system can be implemented with a parallel bank of filters based on the discrete Fourier transform (DFT); when the number of subchannels is large (e.g., K > 25), the OFDM system is efficiently implemented by using the fast Fourier transform (FFT) to compute the DFT. We have developed a custom FPGA-based Altera NIOS system to increase performance and programmability and to lower power in mobile wireless systems. The overall gain observed for a 1024-point FFT ranges between a factor of 3 and 16, depending on the multiplier used by the NIOS processor. A careful optimization described in the appendix yields a performance gain of up to 77% when compared with our preliminary results.
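
    The FFT-based modulator/demodulator structure can be sketched as a generic textbook OFDM chain (the cyclic-prefix length, channel taps, and QPSK mapping are our assumptions, not the NIOS design):

        import numpy as np

        rng = np.random.default_rng(3)
        K, CP = 1024, 64                              # subcarriers, cyclic-prefix length

        # One QPSK symbol per subcarrier.
        bits = rng.integers(0, 2, (K, 2))
        syms = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

        # Modulator: a single IFFT replaces the parallel filter bank.
        block = np.fft.ifft(syms)
        tx = np.concatenate([block[-CP:], block])     # prepend cyclic prefix

        # Channel: short FIR multipath; the cyclic prefix absorbs the smearing.
        h = np.array([1.0, 0.4, 0.2j])
        rx = np.convolve(tx, h)[: len(tx)]

        # Demodulator: drop the prefix, FFT, one-tap equalizer per subcarrier.
        H = np.fft.fft(h, K)
        est = np.fft.fft(rx[CP:]) / H
        print("max symbol error:", np.max(np.abs(est - syms)))  # ~1e-15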

  3. A Planar Microfluidic Mixer Based on Logarithmic Spirals

    PubMed Central

    Scherr, Thomas; Quitadamo, Christian; Tesvich, Preston; Park, Daniel Sang-Won; Tiersch, Terrence; Hayes, Daniel; Choi, Jin-Woo; Nandakumar, Krishnaswamy

    2013-01-01

    A passive, planar micromixer design based on logarithmic spirals is presented. The device was fabricated using polydimethylsiloxane soft photolithography techniques, and mixing performance was characterized via numerical simulation and fluorescent microscopy. Mixing efficiency initially declined as Reynolds number increased, and this trend continued until a Reynolds number of 15 where a minimum was reached at 53%. Mixing efficiency then began to increase reaching a maximum mixing efficiency of 86% at Re = 67. Three-dimensional simulations of fluid mixing in this design were compared to other planar geometries such as the Archimedes spiral and Meandering-S mixers. The implementation of logarithmic curvature offers several unique advantages that enhance mixing, namely a variable cross-sectional area and a logarithmically varying radius of curvature that creates 3-D Dean vortices. These flow phenomena were observed in simulations with multilayered fluid folding and validated with confocal microscopy. This design provides improved mixing performance over a broader range of Reynolds numbers than other reported planar mixers, all while avoiding external force fields, more complicated fabrication processes, and the introduction of flow obstructions or cavities that may unintentionally affect sensitive or particulate-containing samples. Due to the planar design requiring only single-step lithographic features, this compact geometry could be easily implemented into existing micro-total analysis systems requiring effective rapid mixing. PMID:23956497

  4. A planar microfluidic mixer based on logarithmic spirals

    NASA Astrophysics Data System (ADS)

    Scherr, Thomas; Quitadamo, Christian; Tesvich, Preston; Sang-Won Park, Daniel; Tiersch, Terrence; Hayes, Daniel; Choi, Jin-Woo; Nandakumar, Krishnaswamy; Monroe, W. Todd

    2012-05-01

    A passive, planar micromixer design based on logarithmic spirals is presented. The device was fabricated using polydimethylsiloxane soft photolithography techniques, and mixing performance was characterized via numerical simulation and fluorescent microscopy. Mixing efficiency initially declined as the Reynolds number increased, and this trend continued until a Reynolds number of 15 where a minimum was reached at 53%. Mixing efficiency then began to increase reaching a maximum mixing efficiency of 86% at Re = 67. Three-dimensional (3D) simulations of fluid mixing in this design were compared to other planar geometries such as the Archimedes spiral and Meandering-S mixers. The implementation of logarithmic curvature offers several unique advantages that enhance mixing, namely a variable cross-sectional area and a logarithmically varying radius of curvature that creates 3D Dean vortices. These flow phenomena were observed in simulations with multilayered fluid folding and validated with confocal microscopy. This design provides improved mixing performance over a broader range of Reynolds numbers than other reported planar mixers, all while avoiding external force fields, more complicated fabrication processes and the introduction of flow obstructions or cavities that may unintentionally affect sensitive or particulate-containing samples. Due to the planar design requiring only single-step lithographic features, this compact geometry could be easily implemented into existing micro-total analysis systems requiring effective rapid mixing.

  5. A hybrid indoor ambient light and vibration energy harvester for wireless sensor nodes.

    PubMed

    Yu, Hua; Yue, Qiuqin; Zhou, Jielin; Wang, Wei

    2014-05-19

    To take advantage of applications where both light and vibration energy are available, a hybrid indoor ambient-light and vibration energy-harvesting scheme is proposed in this paper. This scheme uses only one power-conditioning circuit to condition the combined output power harvested from both energy sources, so as to reduce power dissipation. In order to more accurately predict the instantaneous power harvested from the solar panel, an improved five-parameter model for small-scale solar panels operating under low-light illumination is presented. The output voltage is increased by using a MEMS piezoelectric cantilever-array architecture, which overcomes the low output voltage of traditional MEMS vibration energy harvesters. Maximum power point tracking (MPPT) for indoor ambient light is implemented using discrete analog components, which improves the whole-harvester efficiency significantly compared to a digital-signal-processor implementation. The output power of the vibration energy harvester is improved by using an impedance-matching technique. An efficient mechanism of energy accumulation and bleed-off is also discussed. Experimental results obtained from an amorphous-silicon (a-Si) solar panel of 4.8 × 2.0 cm² and a fabricated piezoelectric MEMS generator of 11 × 12.4 mm² show that the hybrid energy harvester achieves a maximum efficiency of around 76.7%.

  6. Designing overall stoichiometric conversions and intervening metabolic reactions

    DOE PAGES

    Chowdhury, Anupam; Maranas, Costas D.

    2015-11-04

    Existing computational tools for de novo metabolic pathway assembly, either based on mixed integer linear programming techniques or graph-search applications, generally only find linear pathways connecting the source to the target metabolite. The overall stoichiometry of conversion along with alternate co-reactant (or co-product) combinations is not part of the pathway design. Therefore, global carbon and energy efficiency is in essence fixed with no opportunities to identify more efficient routes for recycling carbon flux closer to the thermodynamic limit. Here, we introduce a two-stage computational procedure that both identifies the optimum overall stoichiometry (i.e., optStoic) and selects for (non-)native reactions (i.e., minRxn/minFlux) that maximize carbon, energy or price efficiency while satisfying thermodynamic feasibility requirements. Implementation for recent pathway design studies identified non-intuitive designs with improved efficiencies. Specifically, multiple alternatives for non-oxidative glycolysis are generated and non-intuitive ways of co-utilizing carbon dioxide with methanol are revealed for the production of C2+ metabolites with higher carbon efficiency.

  7. An Efficient Framework for Compressed Sensing Reconstruction of Highly Accelerated Dynamic Cardiac MRI

    NASA Astrophysics Data System (ADS)

    Ting, Samuel T.

    The research presented in this work seeks to develop, validate, and deploy practical techniques for improving the diagnosis of cardiovascular disease. In the philosophy of biomedical engineering, we seek to identify an existing medical problem having significant societal and economic effects and to address this problem using engineering approaches. Cardiovascular disease is the leading cause of mortality in the United States, accounting for more deaths than any other major cause of death in every year since 1900 with the exception of the year 1918. Cardiovascular disease is estimated to account for almost one-third of all deaths in the United States, with more than 2150 deaths each day, or roughly 1 death every 40 seconds. In the past several decades, a growing array of imaging modalities has proven useful in aiding the diagnosis and evaluation of cardiovascular disease, including computed tomography, single photon emission computed tomography, and echocardiography. In particular, cardiac magnetic resonance imaging is an excellent diagnostic tool that can provide within a single exam a high quality evaluation of cardiac function, blood flow, perfusion, viability, and edema without the use of ionizing radiation. The scope of this work focuses on the application of engineering techniques for improving imaging using cardiac magnetic resonance, with the goal of improving the utility of this powerful imaging modality. Dynamic cine imaging, or the capturing of movies of a single slice or volume within the heart or great vessel region, is used in nearly every cardiac magnetic resonance imaging exam, and adequate evaluation of cardiac function and morphology for diagnosis and evaluation of cardiovascular disease depends heavily on both the spatial and temporal resolution as well as the image quality of the reconstructed cine images. This work focuses primarily on image reconstruction techniques utilized in cine imaging; however, the techniques discussed are also relevant to other dynamic and static imaging techniques based on cardiac magnetic resonance. Conventional segmented techniques for cardiac cine imaging require breath-holding as well as regular cardiac rhythm, and can be time-consuming to acquire. Inadequate breath-holding or irregular cardiac rhythm can result in completely non-diagnostic images, limiting the utility of these techniques in a significant patient population. Real-time single-shot cardiac cine imaging enables free-breathing acquisition with significantly shortened imaging time and promises to significantly improve the utility of cine imaging for diagnosis and evaluation of cardiovascular disease. However, the utility of real-time cine images depends heavily on the successful reconstruction of final cine images from undersampled data. Successful reconstruction from more highly undersampled data results directly in images exhibiting finer spatial and temporal resolution, provided that image quality is sufficient. This work focuses primarily on the development, validation, and deployment of practical techniques for enabling the reconstruction of real-time cardiac cine images at the spatial and temporal resolutions and image quality needed for diagnostic utility. Particular emphasis is placed on the development of reconstruction approaches with short computation times that can be used in the clinical environment.
Specifically, the use of compressed sensing signal recovery techniques is considered; such techniques show great promise in allowing successful reconstruction of highly undersampled data. The scope of this work concerns two primary topics related to signal recovery using compressed sensing: (1) the long reconstruction times of these techniques, and (2) improved sparsity models for signal recovery from more highly undersampled data. Both of these aspects are relevant to the practical application of compressed sensing techniques in the context of improving image reconstruction of real-time cardiac cine images. First, algorithmic and implementation approaches are proposed for reducing the computational time of a compressed sensing reconstruction framework. Specific optimization algorithms based on the fast iterative shrinkage-thresholding algorithm (FISTA) are applied in the context of real-time cine image reconstruction to achieve efficient per-iteration computation time. Implementation within a code framework utilizing commercially available graphics processing units (GPUs) allows for practical and efficient use directly within the clinical environment. Second, patch-based sparsity models are proposed to enable compressed sensing signal recovery from highly undersampled data. Numerical studies demonstrate that this approach can help improve image quality at higher undersampling ratios, enabling real-time cine imaging at higher acceleration rates. In this work, it is shown that these techniques yield a holistic framework for achieving efficient reconstruction of real-time cine images with spatial and temporal resolution sufficient for use in the clinical environment. A thorough description of these techniques is provided from both a theoretical and a practical view, both of which may be of interest to the reader in terms of future work.
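
    As an illustration of the first topic, a minimal FISTA iteration for an l1-regularized least-squares problem is sketched below, with a random matrix standing in for the undersampled MRI acquisition operator; it is the generic algorithm, not the patch-based GPU framework of this work.

        import numpy as np

        rng = np.random.default_rng(4)
        m, n = 80, 200                                  # fewer measurements than unknowns
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        x_true = np.zeros(n)
        x_true[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)
        y = A @ x_true                                  # undersampled "acquisition"

        lam = 0.01
        L_lip = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the gradient

        def soft(v, t):                                 # proximal map of t * ||.||_1
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        x = np.zeros(n); z = x.copy(); t = 1.0
        for _ in range(300):
            grad = A.T @ (A @ z - y)
            x_new = soft(z - grad / L_lip, lam / L_lip)   # gradient step + shrinkage
            t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2      # FISTA momentum schedule
            z = x_new + ((t - 1) / t_new) * (x_new - x)   # extrapolation step
            x, t = x_new, t_new
        print("recovered support:", np.flatnonzero(np.abs(x) > 0.05))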

  8. Arterial waveguide model for shear wave elastography: implementation and in vitro validation

    NASA Astrophysics Data System (ADS)

    Vaziri Astaneh, Ali; Urban, Matthew W.; Aquino, Wilkins; Greenleaf, James F.; Guddati, Murthy N.

    2017-07-01

    Arterial stiffness is found to be an early indicator of many cardiovascular diseases. Among various techniques, shear wave elastography has emerged as a promising tool for estimating local arterial stiffness through the observed dispersion of guided waves. In this paper, we develop efficient models for the computational simulation of guided wave dispersion in arterial walls. The models are capable of considering fluid-loaded tubes, immersed in fluid or embedded in a solid, which are encountered in in vitro/ex vivo, and in vivo experiments. The proposed methods are based on judiciously combining Fourier transformation and finite element discretization, leading to a significant reduction in computational cost while fully capturing complex 3D wave propagation. The developed methods are implemented in open-source code, and verified by comparing them with significantly more expensive, fully 3D finite element models. We also validate the models using the shear wave elastography of tissue-mimicking phantoms. The computational efficiency of the developed methods indicates the possibility of being able to estimate arterial stiffness in real time, which would be beneficial in clinical settings.

  9. A Proposal for the use of the Consortium Method in the Design-build system

    NASA Astrophysics Data System (ADS)

    Miyatake, Ichiro; Kudo, Masataka; Kawamata, Hiroyuki; Fueta, Toshiharu

    In view of the necessity of efficient implementation of public works projects, the advanced technical skills of private firms are expected to be utilized, for the purpose of reducing project costs, improving the performance and functions of constructed objects, and reducing work periods. The design-build system is a method of ordering design and construction as a single contract, including the design of structural forms and the main specifications of the construction object. It is a system in which the high technical skills of private firms can be utilized as a means to ensure the quality of design and construction, rational design, and the efficiency of the project. The objective of this study is to examine the use of a method to form a consortium of civil engineering consultants and construction companies, which is an issue related to the implementation of the design-build method. Furthermore, by studying various forms of consortiums to be introduced in the future, it proposes procedural items required to utilize this method during bidding and after signing a contract, such as estimate submission from the civil engineering consultants.

  10. Actors and networks in resource conflict resolution under climate change in rural Kenya

    NASA Astrophysics Data System (ADS)

    Ngaruiya, Grace W.; Scheffran, Jürgen

    2016-05-01

    The change from consensual decision-making arrangements into centralized hierarchical chieftaincy schemes through colonization disrupted many rural conflict resolution mechanisms in Africa. In addition, climate change impacts on land use have introduced additional socio-ecological factors that complicate rural conflict dynamics. Despite the current urgent need for conflict-sensitive adaptation, resolution efficiency of these fused rural institutions has hardly been documented. In this context, we analyse the Loitoktok network for implemented resource conflict resolution structures and identify potential actors to guide conflict-sensitive adaptation. This is based on social network data and processes that are collected using the saturation sampling technique to analyse mechanisms of brokerage. We find that there are three different forms of fused conflict resolution arrangements that integrate traditional institutions and private investors in the community. To effectively implement conflict-sensitive adaptation, we recommend the extension officers, the council of elders, local chiefs and private investors as potential conduits of knowledge in rural areas. In conclusion, efficiency of these fused conflict resolution institutions is aided by the presence of holistic resource management policies and diversification in conflict resolution actors and networks.

  11. Conceptual Design and Optimal Power Control Strategy for AN Eco-Friendly Hybrid Vehicle

    NASA Astrophysics Data System (ADS)

    Nasiri, N. Mir; Chieng, Frederick T. A.

    2011-06-01

    This paper presents a new concept for a hybrid vehicle using a torque- and speed-splitting technique. It is implemented by a newly developed controller in combination with a two-degree-of-freedom epicyclic gear transmission. This approach enables optimization of the power split between the less powerful electrical motor and the more powerful engine while driving a car load. The power split is fundamentally a dual-energy integration mechanism, as it is implemented by using the epicyclic gear transmission, which has two inputs and one output for proper power distribution. The developed power split control system manages the operation of both inputs to produce a known output under the condition of maintaining optimum operating efficiency of the internal combustion engine and the electrical motor. This system has huge potential, as it is possible to integrate all the features of hybrid vehicles known to date, such as the regenerative braking system, series hybrid, parallel hybrid, series/parallel hybrid, and even complex (bidirectional) hybrid. By using the new power split system it is possible to further reduce fuel consumption and increase overall efficiency.

  12. Monte Carlo simulation of prompt γ-ray emission in proton therapy using a specific track length estimator

    NASA Astrophysics Data System (ADS)

    El Kanawati, W.; Létang, J. M.; Dauvergne, D.; Pinto, M.; Sarrut, D.; Testa, É.; Freud, N.

    2015-10-01

    A Monte Carlo (MC) variance reduction technique is developed for calculations of prompt-γ emitters in proton therapy. Prompt γ-rays emitted through nuclear fragmentation reactions and exiting the patient during proton therapy could play an important role in helping to monitor the treatment. However, estimating the number and energy of prompt-γ emitted per primary proton with MC simulations is a slow process. In order to estimate the local distribution of prompt-γ emission in a volume of interest for a given proton beam of the treatment plan, a MC variance reduction technique based on a specific track length estimator (TLE) has been developed. First, an elemental database of prompt-γ emission spectra is established in the clinical energy range of incident protons for all elements in the composition of human tissues. This database of prompt-γ spectra is built offline with high statistics. Regarding the implementation of the prompt-γ TLE MC tally, each proton deposits along its track the expectation of the prompt-γ spectra from the database according to the proton kinetic energy and the local material composition. A detailed statistical study shows that the relative efficiency mainly depends on the geometrical distribution of the track length. Benchmarking of the proposed prompt-γ TLE MC technique against an analogous MC technique is carried out, and a large relative efficiency gain of about 10^5 is reported.
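
    The track-length-estimator idea can be sketched as follows: instead of scoring rare emission events, every proton step deposits the expected prompt-γ yield, looked up in the precomputed per-element database and weighted by the step length. The database numbers, bin layout, and material composition below are illustrative placeholders.

        import numpy as np

        rng = np.random.default_rng(5)
        ENERGY_BINS = np.linspace(0.0, 200.0, 21)      # proton kinetic energy (MeV), 20 bins

        # Offline database: expected prompt-gamma yield per unit track length,
        # per element and proton-energy bin (made-up numbers for illustration).
        YIELD_DB = {"H": rng.uniform(0.0, 1e-4, 20),
                    "O": rng.uniform(0.0, 5e-4, 20),
                    "C": rng.uniform(0.0, 4e-4, 20)}

        def tle_score(track_steps, composition):
            """Accumulate the expected prompt-gamma emission along one proton track.

            track_steps: list of (step_length_cm, proton_energy_MeV)
            composition: element -> mass fraction of the local material
            """
            total = 0.0
            for length, e in track_steps:
                b = int(np.clip(np.searchsorted(ENERGY_BINS, e, side="right") - 1, 0, 19))
                per_cm = sum(frac * YIELD_DB[el][b] for el, frac in composition.items())
                total += length * per_cm       # every step contributes, hence low variance
            return total

        water = {"H": 0.112, "O": 0.888}
        track = [(0.1, e) for e in np.linspace(150.0, 5.0, 40)]   # a slowing-down proton
        print("expected prompt-gamma per proton:", tle_score(track, water))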

  13. Matrix-Inversion-Free Compressed Sensing With Variable Orthogonal Multi-Matching Pursuit Based on Prior Information for ECG Signals.

    PubMed

    Cheng, Yih-Chun; Tsai, Pei-Yun; Huang, Ming-Hao

    2016-05-19

    Low-complexity compressed sensing (CS) techniques for monitoring electrocardiogram (ECG) signals in wireless body sensor networks (WBSN) are presented. The prior probability of ECG sparsity in the wavelet domain is first exploited. Then, a variable orthogonal multi-matching pursuit (vOMMP) algorithm that consists of two phases is proposed. In the first phase, an orthogonal matching pursuit (OMP) algorithm is adopted to effectively augment the support set with reliable indices, and in the second phase, orthogonal multi-matching pursuit (OMMP) is employed to rescue the missing indices. The reconstruction performance is thus enhanced with the prior information and the vOMMP algorithm. Furthermore, the computation-intensive pseudo-inverse operation is simplified by a matrix-inversion-free (MIF) technique based on QR decomposition. The vOMMP-MIF CS decoder is then implemented in 90 nm CMOS technology. The QR decomposition is accomplished by two systolic arrays working in parallel. The implementation supports three settings for obtaining 40, 44, and 48 coefficients in the sparse vector. From the measurement results, the power consumption is 11.7 mW at 0.9 V and 12 MHz. Compared to prior chip implementations, our design shows good hardware efficiency and is suitable for low-energy applications.
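
    For reference, the greedy OMP building block that vOMMP extends can be written in a few lines of NumPy (a textbook version, without the prior-information phase, the multi-matching rescue, or the matrix-inversion-free QR hardware of the paper):

        import numpy as np

        def omp(A, y, k):
            """Orthogonal matching pursuit: greedily pick k atoms, re-fit each time."""
            residual, support = y.copy(), []
            for _ in range(k):
                j = int(np.argmax(np.abs(A.T @ residual)))      # most correlated atom
                support.append(j)
                sub = A[:, support]
                coef, *_ = np.linalg.lstsq(sub, y, rcond=None)  # least-squares re-fit
                residual = y - sub @ coef
            x = np.zeros(A.shape[1])
            x[support] = coef
            return x

        rng = np.random.default_rng(6)
        A = rng.standard_normal((32, 128))
        A /= np.linalg.norm(A, axis=0)                 # unit-norm dictionary atoms
        x_true = np.zeros(128)
        x_true[[5, 40, 90]] = [1.0, -0.8, 0.5]         # a sparse coefficient vector
        x_hat = omp(A, A @ x_true, 3)
        print("recovered support:", np.flatnonzero(x_hat))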

  14. Unified commutation-pruning technique for efficient computation of composite DFTs

    NASA Astrophysics Data System (ADS)

    Castro-Palazuelos, David E.; Medina-Melendrez, Modesto Gpe.; Torres-Roman, Deni L.; Shkvarko, Yuriy V.

    2015-12-01

    An efficient computation of a composite-length discrete Fourier transform (DFT), as well as a fast Fourier transform (FFT), of both time and space data sequences in uncertain (non-sparse or sparse) computational scenarios requires specific processing algorithms. Traditional algorithms typically employ some pruning methods without any commutations, which prevents them from attaining the potential computational efficiency. In this paper, we propose an alternative unified approach with automatic commutations between three computational modalities aimed at efficient computation of pruned DFTs adapted for variable composite lengths of the non-sparse input-output data. The first modality is an implementation of the direct computation of a composite-length DFT, the second one employs the second-order recursive filtering method, and the third one performs the new pruned decomposed transform. The pruned decomposed transform algorithm performs decimation in time or space (DIT) in the data acquisition domain and, then, decimation in frequency (DIF). The unified combination of these three algorithms is referred to as the DFTCOMM technique. Based on the treatment of the combinational-type hypothesis-testing optimization problem of preferable allocations between all feasible commuting-pruning modalities, we have found a globally optimal solution to the pruning problem, one that always requires fewer or, at most, the same number of arithmetic operations as any other feasible modality. In this sense the DFTCOMM method outperforms the competing pruning techniques reported in the literature in terms of attainable savings in the number of required arithmetic operations. Finally, we provide a comparison of DFTCOMM with the recently developed sparse fast Fourier transform (SFFT) algorithmic family. We note that, in sensing scenarios with a sparse or non-sparse data Fourier spectrum, the DFTCOMM technique manifests robustness against such model uncertainties, in the sense of insensitivity to sparsity/non-sparsity restrictions and to the variability of the operating parameters.
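
    The second-order recursive filtering modality is, in its simplest textbook form, a Goertzel-type recursion that evaluates a single DFT bin in O(N) operations, which pays off when only a few pruned output bins are required. The sketch below shows the generic recursion, not the DFTCOMM commutation logic:

        import numpy as np

        def goertzel_bin(x, k):
            """Evaluate DFT bin k of x with a second-order recursive filter."""
            N = len(x)
            w = 2.0 * np.pi * k / N
            coeff = 2.0 * np.cos(w)
            s_prev, s_prev2 = 0.0, 0.0
            for sample in x:                       # one multiply-add per input sample
                s = sample + coeff * s_prev - s_prev2
                s_prev2, s_prev = s_prev, s
            return np.exp(1j * w) * s_prev - s_prev2

        rng = np.random.default_rng(7)
        x = rng.standard_normal(240)               # composite length N = 240 = 2^4 * 3 * 5
        for k in (0, 7, 120):                      # only a few pruned bins are needed
            assert np.allclose(goertzel_bin(x, k), np.fft.fft(x)[k])
        print("Goertzel output matches the FFT on the selected bins")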

  15. Decontamination of Nuclear Liquid Wastes Status of CEA and AREVA R and D: Application to Fukushima Waste Waters - 12312

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fournel, B.; Barre, Y.; Lepeytre, C.

    2012-07-01

    Liquid waste decontamination processes are mainly based on two techniques: bulk processes and so-called cartridge processes. The first technique has been developed for the French nuclear fuel reprocessing industry since the 1960s in Marcoule and La Hague. It is a proven and mature technology which has been successfully and quickly implemented by AREVA at the Fukushima site for the processing of contaminated waters. The second technique, involving cartridge processes, offers new opportunities for the use of innovative adsorbents. The AREVA process developed for Fukushima and some results obtained on site are presented, as well as laboratory-scale results obtained in CEA laboratories. Examples of the development of new adsorbents for liquid waste decontamination are also given. A chemical process unit based on the co-precipitation technique has been successfully and quickly implemented by AREVA at the Fukushima site for the processing of contaminated waters. The asset of this technique is its ability to process large volumes in a continuous mode. Several chemical products can be used to address specific radioelements such as Cs, Sr, and Ru. Its drawback is the production of sludge (about 1% in volume of the initial liquid volume). CEA developed strategies to model the co-precipitation phenomena in order, firstly, to minimize the quantity of added chemical reactants and, secondly, to minimize the size of co-precipitation units. We are on the way to designing compact units that could be mobilized very quickly and efficiently in case of an accidental situation. Addressing the problem of sludge conditioning, cementation appears to be a very attractive solution. The Fukushima accident has focused attention on optimizations that should be taken into account in future studies: to better account for non-typical aqueous matrixes like seawater; to enlarge the spectrum of radioelements that can be efficiently processed, especially short-lived radioelements that are usually less present in standard effluents resulting from nuclear activities; and to develop reversible solid adsorbents for cartridge-type applications in order to minimize wastes. (authors)

  16. A case study on implementing lean ergonomic manufacturing systems (LEMS) in an automobile industry

    NASA Astrophysics Data System (ADS)

    Srinivasa Rao, P.; Niraj, Malay

    2016-09-01

    Lean manufacturing is a business strategy developed in Japan. In the present scenario, the global market is developing new techniques for achieving higher production rates with good quality at low cost. In this context, human factors and working conditions have to be given due importance. The study demonstrates the adoption of ergonomic conditions in lean manufacturing for the improvement of the organizational performance of the industry. The aim of ergonomics is to adapt new techniques to the work in efficient and safe ways in order to optimize human health conditions and increase the production rate. A survey conducted across various disciplines shows how the production rate and human ergonomic conditions are affected.

  17. Iterated Gate Teleportation and Blind Quantum Computation.

    PubMed

    Pérez-Delgado, Carlos A; Fitzsimons, Joseph F

    2015-06-05

    Blind quantum computation allows a user to delegate a computation to an untrusted server while keeping the computation hidden. A number of recent works have sought to establish bounds on the communication requirements necessary to implement blind computation, and a bound based on the no-programming theorem of Nielsen and Chuang has emerged as a natural limiting factor. Here we show that this constraint only holds in limited scenarios, and show how to overcome it using a novel method of iterated gate teleportations. This technique enables drastic reductions in the communication required for distributed quantum protocols, extending beyond the blind computation setting. Applied to blind quantum computation, this technique offers significant efficiency improvements, and in some scenarios offers an exponential reduction in communication requirements.

  18. Operations automation using temporal dependency networks

    NASA Technical Reports Server (NTRS)

    Cooper, Lynne P.

    1991-01-01

    Precalibration activities for the Deep Space Network are time- and workforce-intensive. Significant gains in availability and efficiency could be realized by intelligently incorporating automation techniques. An approach to automation based on the use of Temporal Dependency Networks (TDNs) is presented. A TDN represents an activity by breaking it down into its component pieces and formalizing the precedence and other constraints associated with lower-level activities. The representations used to implement a TDN are described, along with the underlying system architecture needed to support its use. The commercial applications of this technique are numerous: it has potential for application in any system which requires real-time, system-level control and accurate monitoring of health, status, and configuration in an asynchronous environment.
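
    In essence, a TDN is a directed acyclic graph of activities with precedence constraints. The sketch below (generic, with invented precalibration activity names) dispatches every activity whose prerequisites have completed, which is the scheduling core such a system needs:

        from collections import deque

        # activity -> list of prerequisite activities (illustrative names)
        TDN = {
            "configure_antenna": [],
            "load_predicts": [],
            "point_antenna": ["configure_antenna", "load_predicts"],
            "calibrate_receiver": ["configure_antenna"],
            "verify_link": ["point_antenna", "calibrate_receiver"],
        }

        def run(tdn):
            """Dispatch activities in dependency order (Kahn's topological sort)."""
            pending = {a: len(pre) for a, pre in tdn.items()}
            dependents = {a: [] for a in tdn}
            for act, pres in tdn.items():
                for p in pres:
                    dependents[p].append(act)
            ready = deque(a for a, npre in pending.items() if npre == 0)
            while ready:
                act = ready.popleft()
                print("executing:", act)       # a real system would issue directives here
                for nxt in dependents[act]:
                    pending[nxt] -= 1
                    if pending[nxt] == 0:      # all precedence constraints satisfied
                        ready.append(nxt)

        run(TDN)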

  19. Nonparametric Methods in Astronomy: Think, Regress, Observe—Pick Any Three

    NASA Astrophysics Data System (ADS)

    Steinhardt, Charles L.; Jermyn, Adam S.

    2018-02-01

    Telescopes are much more expensive than astronomers, so it is essential to minimize required sample sizes by using the most data-efficient statistical methods possible. However, the most commonly used model-independent techniques for finding the relationship between two variables in astronomy are flawed. In the worst case they can lead without warning to subtly yet catastrophically wrong results, and even in the best case they require more data than necessary. Unfortunately, there is no single best technique for nonparametric regression. Instead, we provide a guide for how astronomers can choose the best method for their specific problem and provide a Python library with both wrappers for the most useful existing algorithms and implementations of two new algorithms developed here.

  20. Wavefront sensing with all-digital Stokes measurements

    NASA Astrophysics Data System (ADS)

    Dudley, Angela; Milione, Giovanni; Alfano, Robert R.; Forbes, Andrew

    2014-09-01

    A long-standing question in optics has been how to efficiently measure the phase (or wavefront) of an optical field. This has led to numerous publications and commercial devices based on phase-shift interferometry, wavefront reconstruction via modal decomposition, and Shack-Hartmann wavefront sensors. In this work we develop a new technique to extract the phase which, in contrast to the previously mentioned methods, is based on polarization (or Stokes) measurements. We outline a simple, all-digital approach using only a spatial light modulator and a polarization grating to exploit the amplitude and phase relationship between the orthogonal states of polarization to determine the phase of an optical field. We implement this technique to reconstruct the phase of static and propagating optical vortices.
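
    The relationship being exploited can be stated compactly: in one common convention, for orthogonal components Ex and Ey with relative phase delta, S2 = 2|Ex||Ey|cos(delta) and S3 = 2|Ex||Ey|sin(delta), so delta = atan2(S3, S2). The sketch below verifies this reconstruction step numerically; it is generic Stokes arithmetic, not the authors' SLM measurement chain.

        import numpy as np

        rng = np.random.default_rng(8)
        shape = (64, 64)
        Ex = rng.uniform(0.1, 1.0, shape) * np.exp(1j * rng.uniform(-np.pi, np.pi, shape))
        Ey = rng.uniform(0.1, 1.0, shape) * np.exp(1j * rng.uniform(-np.pi, np.pi, shape))

        # Stokes parameters built from the two orthogonal polarization components.
        S2 = 2.0 * np.real(np.conj(Ex) * Ey)
        S3 = 2.0 * np.imag(np.conj(Ex) * Ey)

        # Relative phase between the components, recovered from S2 and S3 alone.
        delta = np.arctan2(S3, S2)
        truth = np.angle(Ey) - np.angle(Ex)
        print(np.allclose(np.exp(1j * delta), np.exp(1j * truth)))   # True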

  1. GASPACHO: a generic automatic solver using proximal algorithms for convex huge optimization problems

    NASA Astrophysics Data System (ADS)

    Goossens, Bart; Luong, Hiêp; Philips, Wilfried

    2017-08-01

    Many inverse problems (e.g., demosaicking, deblurring, denoising, image fusion, HDR synthesis) share various similarities: degradation operators are often modeled by a specific data fitting function while image prior knowledge (e.g., sparsity) is incorporated by additional regularization terms. In this paper, we investigate automatic algorithmic techniques for evaluating proximal operators. These algorithmic techniques also enable efficient calculation of adjoints from linear operators in a general matrix-free setting. In particular, we study the simultaneous-direction method of multipliers (SDMM) and the parallel proximal algorithm (PPXA) solvers and show that the automatically derived implementations are well suited for both single-GPU and multi-GPU processing. We demonstrate this approach for an Electron Microscopy (EM) deconvolution problem.
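
    One ingredient mentioned above, obtaining and checking adjoints of matrix-free linear operators, has a standard "dot test": for a linear operator A and random vectors x and y, the inner products <Ax, y> and <x, A*y> must agree. A sketch under the assumption of simple function-handle operators (not the GASPACHO machinery):

        import numpy as np

        rng = np.random.default_rng(9)

        # A matrix-free linear operator and its hand-derived adjoint:
        # a forward difference and its transpose, never formed as matrices.
        def D(x):                        # R^n -> R^(n-1)
            return x[1:] - x[:-1]

        def D_adj(y):                    # R^(n-1) -> R^n
            out = np.zeros(len(y) + 1)
            out[:-1] -= y
            out[1:] += y
            return out

        def dot_test(fwd, adj, n, m, tries=5):
            """Verify <A x, y> == <x, A* y> for random vectors."""
            for _ in range(tries):
                x, y = rng.standard_normal(n), rng.standard_normal(m)
                if not np.isclose(fwd(x) @ y, x @ adj(y)):
                    return False
            return True

        print(dot_test(D, D_adj, 100, 99))   # True: the adjoint pair is consistent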

  2. Iris unwrapping using the Bresenham circle algorithm for real-time iris recognition

    NASA Astrophysics Data System (ADS)

    Carothers, Matthew T.; Ngo, Hau T.; Rakvic, Ryan N.; Broussard, Randy P.

    2015-02-01

    An efficient parallel architecture design for the iris unwrapping process in a real-time iris recognition system using the Bresenham circle algorithm is presented in this paper. Based on the characteristics of the model parameters, this algorithm was chosen over the widely used polar conversion technique as the iris unwrapping model. The architecture design is parallelized to increase the throughput of the system and is suitable for processing an input image of 320 × 240 pixels in real time using Field Programmable Gate Array (FPGA) technology. Quartus software is used to implement, verify, and analyze the design's performance using the VHSIC Hardware Description Language. The system's predicted processing time is faster than that of modern iris unwrapping techniques in use today.
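
    For reference, the core of the Bresenham (midpoint) circle algorithm is integer-only, which is what makes it attractive for FPGA parallelization: one octant is computed and mirrored eight ways, and sampling circles of increasing radius around the pupil centre yields the rows of the unwrapped iris image. The serial software sketch below, including the helper names, is our illustration of that idea.

        def bresenham_circle(cx, cy, r):
            """Integer midpoint circle: compute one octant, mirror to the other seven."""
            pts = []
            x, y, err = 0, r, 3 - 2 * r
            while x <= y:
                for dx, dy in ((x, y), (y, x), (-x, y), (-y, x),
                               (x, -y), (y, -x), (-x, -y), (-y, -x)):
                    pts.append((cx + dx, cy + dy))       # eight-way symmetry
                if err > 0:
                    err += 4 * (x - y) + 10
                    y -= 1
                else:
                    err += 4 * x + 6
                x += 1
            return pts

        def unwrap(image, cx, cy, r_pupil, r_iris):
            """Each radius becomes one row of the polar (unwrapped) iris image."""
            return [[image[py][px] for (px, py) in bresenham_circle(cx, cy, r)
                     if 0 <= py < len(image) and 0 <= px < len(image[0])]
                    for r in range(r_pupil, r_iris)]

        img = [[(i * 7 + j * 3) % 256 for j in range(320)] for i in range(240)]
        rows = unwrap(img, 160, 120, 30, 60)
        print(len(rows), "rings; first ring has", len(rows[0]), "samples")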

  3. Accelerating String Set Matching in FPGA Hardware for Bioinformatics Research

    PubMed Central

    Dandass, Yoginder S; Burgess, Shane C; Lawrence, Mark; Bridges, Susan M

    2008-01-01

    Background This paper describes techniques for accelerating the performance of the string set matching problem with particular emphasis on applications in computational proteomics. The process of matching peptide sequences against a genome translated in six reading frames is part of a proteogenomic mapping pipeline that is used as a case-study. The Aho-Corasick algorithm is adapted for execution in field programmable gate array (FPGA) devices in a manner that optimizes space and performance. In this approach, the traditional Aho-Corasick finite state machine (FSM) is split into smaller FSMs, operating in parallel, each of which matches up to 20 peptides in the input translated genome. Each of the smaller FSMs is further divided into five simpler FSMs such that each simple FSM operates on a single bit position in the input (five bits are sufficient for representing all amino acids and special symbols in protein sequences). Results This bit-split organization of the Aho-Corasick implementation enables efficient utilization of the limited random access memory (RAM) resources available in typical FPGAs. The use of on-chip RAM as opposed to FPGA logic resources for FSM implementation also enables rapid reconfiguration of the FPGA without the place and routing delays associated with complex digital designs. Conclusion Experimental results show storage efficiencies of over 80% for several data sets. Furthermore, the FPGA implementation executing at 100 MHz is nearly 20 times faster than an implementation of the traditional Aho-Corasick algorithm executing on a 2.67 GHz workstation. PMID:18412963
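
    A compact software rendition of the underlying Aho-Corasick automaton (the classic serial algorithm, without the bit-split FSM partitioning that makes the FPGA version efficient) can serve as a reference point:

        from collections import deque

        def build_ac(patterns):
            """Build the Aho-Corasick automaton: goto trie, failure links, outputs."""
            goto, fail, out = [{}], [0], [set()]
            for p in patterns:
                s = 0
                for ch in p:
                    if ch not in goto[s]:
                        goto.append({}); fail.append(0); out.append(set())
                        goto[s][ch] = len(goto) - 1
                    s = goto[s][ch]
                out[s].add(p)
            q = deque(goto[0].values())
            while q:                                   # BFS assigns failure links
                s = q.popleft()
                for ch, t in goto[s].items():
                    q.append(t)
                    f = fail[s]
                    while f and ch not in goto[f]:
                        f = fail[f]
                    fail[t] = goto[f][ch] if ch in goto[f] else 0
                    out[t] |= out[fail[t]]             # inherit matches ending here
            return goto, fail, out

        def search(text, automaton):
            goto, fail, out = automaton
            s, hits = 0, []
            for i, ch in enumerate(text):
                while s and ch not in goto[s]:
                    s = fail[s]
                s = goto[s].get(ch, 0)
                hits += [(i - len(p) + 1, p) for p in out[s]]
            return hits

        ac = build_ac(["PEPTIDE", "TIDE", "PEP"])
        print(search("XXPEPTIDEYY", ac))               # PEP and PEPTIDE at 2, TIDE at 5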

  4. Control of Smart Building Using Advanced SCADA

    NASA Astrophysics Data System (ADS)

    Samuel, Vivin Thomas

    For complete control of a building, a proper SCADA implementation and optimization strategy have to be built, and for better communication and efficiency a proper channel between the communication protocol and SCADA has to be designed. This paper concentrates mainly on the communication protocol and the SCADA implementation; an optimization and energy-saving strategy is derived for large-scale industrial buildings. The communication channel is used to completely control the building remotely from a distant place. For efficient results, we consider the temperature values and the power ratings of the equipment, setting threshold values for the implementation of an FDD (fault detection and diagnosis) technique while controlling the equipment. Building management systems have become vital for maintaining any building and for safety purposes. Smart buildings refer to various distinct features, including complete automation systems, office building controls, and data center controls. ELCs are used to communicate the load values of the building to a remote server at a far location with the help of an Ethernet communication channel. Based on demand fluctuation and peak voltage, the loads operate differently, increasing the consumption rate and thus the annual consumption bill. Nowadays, saving energy and reducing the consumption bill are essential for the long-term operation of any building. The equipment is monitored regularly and an optimization strategy is implemented for cost reduction in the automation system. This results in a reduction of the annual cost and an increase in load lifetime.

  5. A fast immersed boundary method for external incompressible viscous flows using lattice Green's functions

    NASA Astrophysics Data System (ADS)

    Liska, Sebastian; Colonius, Tim

    2017-02-01

    A new parallel, computationally efficient immersed boundary method for solving three-dimensional, viscous, incompressible flows on unbounded domains is presented. Immersed surfaces with prescribed motions are generated using the interpolation and regularization operators obtained from the discrete delta function approach of the original (Peskin's) immersed boundary method. Unlike Peskin's method, boundary forces are regarded as Lagrange multipliers that are used to satisfy the no-slip condition. The incompressible Navier-Stokes equations are discretized on an unbounded staggered Cartesian grid and are solved in a finite number of operations using lattice Green's function techniques. These techniques are used to automatically enforce the natural free-space boundary conditions and to implement a novel block-wise adaptive grid that significantly reduces the run-time cost of solutions by limiting operations to grid cells in the immediate vicinity and near-wake region of the immersed surface. These techniques also enable the construction of practical discrete viscous integrating factors that are used in combination with specialized half-explicit Runge-Kutta schemes to accurately and efficiently solve the differential algebraic equations describing the discrete momentum equation, incompressibility constraint, and no-slip constraint. Linear systems of equations resulting from the time integration scheme are efficiently solved using an approximation-free nested projection technique. The algebraic properties of the discrete operators are used to reduce projection steps to simple discrete elliptic problems, e.g. discrete Poisson problems, that are compatible with recent parallel fast multipole methods for difference equations. Numerical experiments on low-aspect-ratio flat plates and spheres at Reynolds numbers up to 3700 are used to verify the accuracy and physical fidelity of the formulation.

  6. The analysis of composite laminated beams using a 2D interpolating meshless technique

    NASA Astrophysics Data System (ADS)

    Sadek, S. H. M.; Belinha, J.; Parente, M. P. L.; Natal Jorge, R. M.; de Sá, J. M. A. César; Ferreira, A. J. M.

    2018-02-01

    Laminated composite materials are widely implemented in several engineering constructions. For their relatively light weight, these materials are suitable for aerospace, military, marine, and automotive structural applications. To obtain safe and economical structures, the accuracy of the modelling analysis is highly relevant. Since meshless methods have in recent years achieved remarkable progress in computational mechanics, the present work uses one of the most flexible and stable interpolating meshless techniques available in the literature, the Radial Point Interpolation Method (RPIM). Here, a 2D approach is considered to numerically analyse composite laminated beams. Both the meshless formulation and the equilibrium equations ruling the studied physical phenomenon are presented in detail. Several benchmark beam examples are studied and the results are compared with exact solutions available in the literature and with results obtained from commercial finite element software. The results show the efficiency and accuracy of the proposed numerical technique.

  7. Development of an evolutionary simulator and an overall control system for intelligent wheelchair

    NASA Astrophysics Data System (ADS)

    Imai, Makoto; Kawato, Koji; Hamagami, Tomoki; Hirata, Hironori

    The goal of this research is to develop an intelligent wheelchair (IWC) system which aids safe indoor mobility for elderly and disabled people, with a new conceptual architecture that realizes autonomy, cooperativeness, and collaborative behavior. In order to develop the IWC system in a real environment, we need design tools and a flexible architecture. In particular, this paper describes two key techniques: an evolutionary simulation and an overall control mechanism. The evolutionary simulation technique corrects the error between the virtual environment in the simulator and the real one during the learning of an IWC agent, and coevolves with the agent. The overall control mechanism is implemented with the subsumption architecture employed in autonomous robot controllers. By using these techniques in both simulations and experiments, we confirm that our IWC system acquires autonomy, cooperativeness, and collaborative behavior efficiently.

  8. Comparative performance evaluation of transform coding in image pre-processing

    NASA Astrophysics Data System (ADS)

    Menon, Vignesh V.; NB, Harikrishnan; Narayanan, Gayathri; CK, Niveditha

    2017-07-01

    We are in the midst of a communication transformation which drives the development and dissemination of pioneering communication systems with ever-increasing fidelity and resolution. Considerable research has gone into image processing techniques, driven by a growing demand for faster and easier encoding, storage, and transmission of visual information. In this paper, the researchers intend to shed light on techniques that could be used at the transmitter end in order to ease the transmission and reconstruction of images. The researchers investigate the performance of different image transform coding schemes used in pre-processing; their comparison and effectiveness; the necessary and sufficient conditions; and their properties and implementation complexity. Motivated by prior advancements in image processing techniques, the researchers compare the performance of various contemporary image pre-processing frameworks: Compressed Sensing, Singular Value Decomposition, and the Integer Wavelet Transform. The paper exposes the potential of the Integer Wavelet Transform to be an efficient pre-processing scheme.
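
    Of the frameworks compared, the SVD one is the simplest to sketch: keep only the top-k singular triplets of the image matrix and trade stored values against reconstruction error. A generic NumPy illustration (the image and ranks are arbitrary choices):

        import numpy as np

        rng = np.random.default_rng(10)
        img = rng.random((256, 256))                     # stand-in for a grayscale image

        U, s, Vt = np.linalg.svd(img, full_matrices=False)

        for k in (8, 32, 128):
            approx = (U[:, :k] * s[:k]) @ Vt[:k, :]      # rank-k reconstruction
            stored = k * (U.shape[0] + Vt.shape[1] + 1)  # values kept vs. 256*256 = 65536
            err = np.linalg.norm(img - approx) / np.linalg.norm(img)
            print(f"k={k:3d}: store {stored:6d} values, relative error {err:.3f}")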

  9. Bristol Ridge: A 28-nm x86 Performance-Enhanced Microprocessor Through System Power Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sundaram, Sriram; Grenat, Aaron; Naffziger, Samuel

    Power management techniques can be effective at extracting more performance and energy efficiency out of mature systems on chip (SoCs). For instance, the peak performance of microprocessors is often limited by worst-case technology (Vmax), infrastructure (thermal/electrical), and microprocessor usage assumptions. The performance/watt of microprocessors also typically suffers from guard bands associated with the test and binning processes as well as worst-case aging/lifetime degradation. Similarly, on multicore processors, shared voltage rails tend to limit the peak performance achievable in low-thread-count workloads. In this paper, we describe five power management techniques that maximize the per-part performance under the aforementioned constraints. Using these techniques, we demonstrate a net performance increase of up to 15%, depending on the application and TDP of the SoC, implemented on 'Bristol Ridge,' a 28-nm CMOS, dual-core x86 accelerated processing unit.

  10. Energy-efficient process-stacking multiplexing access for 60-GHz mm-wave wireless personal area networks.

    PubMed

    Estevez, Claudio; Kailas, Aravind

    2012-01-01

    Millimeter-wave technology shows high potential for future wireless personal area networks, reaching over 1 Gbps transmission rates using simple modulation techniques. Current specifications consider dividing the spectrum into effortlessly separable spectrum ranges. These low requirements open a research area in time and space multiplexing techniques for millimeter waves. In this work a process-stacking multiplexing access algorithm is designed for single-channel operation. The concept is intuitive, but its implementation is not trivial. The key to stacking single-channel events is to operate while simultaneously obtaining and handling a-posteriori time-frame information of scheduled events. This information is used to shift a global time pointer that the wireless access point manages and uses to synchronize all serviced nodes. The performance of the proposed multiplexing access technique is lower bounded by the performance of legacy TDMA and can significantly improve the effective throughput. The work is validated by simulation results.

  11. Herbal Extract Incorporated Nanofiber Fabricated by an Electrospinning Technique and its Application to Antimicrobial Air Filtration.

    PubMed

    Choi, Jeongan; Yang, Byeong Joon; Bae, Gwi-Nam; Jung, Jae Hee

    2015-11-18

    Recently, with the increased attention to indoor air quality, antimicrobial air filtration techniques have been studied widely to inactivate hazardous airborne microorganisms effectively. In this study, we demonstrate herbal extract incorporated (HEI) nanofibers synthesized by an electrospinning technique and their application to antimicrobial air filtration. As an antimicrobial herbal material, an ethanolic extract of Sophora flavescens, which exhibits great antibacterial activity against pathogens, was mixed with the polymer solution for the electrospinning process. We measured various characteristics of the synthesized HEI nanofibers, such as fiber morphology, fiber size distribution, and thermal stability. For application of the electrospun HEI nanofibers, we made highly effective air filters with 99.99% filtration efficiency and 99.98% antimicrobial activity against Staphylococcus epidermidis. The pressure drop across the HEI nanofiber air filter was 4.75 mmH2O at a face air velocity of 1.79 cm/s. These results will facilitate the implementation of electrospun HEI nanofiber techniques to control air quality and protect against hazardous airborne microorganisms.

  12. The aggregated unfitted finite element method for elliptic problems

    NASA Astrophysics Data System (ADS)

    Badia, Santiago; Verdugo, Francesc; Martín, Alberto F.

    2018-07-01

    Unfitted finite element techniques are valuable tools in different applications where the generation of body-fitted meshes is difficult. However, these techniques are prone to severe ill conditioning problems that obstruct the efficient use of iterative Krylov methods and, in consequence, hinder the practical usage of unfitted methods for realistic large scale applications. In this work, we present a technique that addresses such conditioning problems by constructing enhanced finite element spaces based on a cell aggregation technique. The presented method, called aggregated unfitted finite element method, is easy to implement, and can be used, in contrast to previous works, in Galerkin approximations of coercive problems with conforming Lagrangian finite element spaces. The mathematical analysis of the new method states that the condition number of the resulting linear system matrix scales as in standard finite elements for body-fitted meshes, without being affected by small cut cells, and that the method leads to the optimal finite element convergence order. These theoretical results are confirmed with 2D and 3D numerical experiments.

  13. Adaptive subdomain modeling: A multi-analysis technique for ocean circulation models

    NASA Astrophysics Data System (ADS)

    Altuntas, Alper; Baugh, John

    2017-07-01

    Many coastal and ocean processes of interest operate over large temporal and geographical scales and require a substantial amount of computational resources, particularly when engineering design and failure scenarios are also considered. This study presents an adaptive multi-analysis technique that improves the efficiency of these computations when multiple alternatives are being simulated. The technique, called adaptive subdomain modeling, concurrently analyzes any number of child domains, with each instance corresponding to a unique design or failure scenario, in addition to a full-scale parent domain providing the boundary conditions for its children. To contain the altered hydrodynamics originating from the modifications, the spatial extent of each child domain is adaptively adjusted during runtime depending on the response of the model. The technique is incorporated in ADCIRC++, a re-implementation of the popular ADCIRC ocean circulation model with an updated software architecture designed to facilitate this adaptive behavior and to utilize concurrent executions of multiple domains. The results of our case studies confirm that the method substantially reduces computational effort while maintaining accuracy.

  14. Computationally efficient multibody simulations

    NASA Technical Reports Server (NTRS)

    Ramakrishnan, Jayant; Kumar, Manoj

    1994-01-01

    Computationally efficient approaches to the solution of the dynamics of multibody systems are presented in this work. The computational efficiency is derived from both the algorithmic and implementational standpoint. Order(n) approaches provide a new formulation of the equations of motion eliminating the assembly and numerical inversion of a system mass matrix as required by conventional algorithms. Computational efficiency is also gained in the implementation phase by the symbolic processing and parallel implementation of these equations. Comparison of this algorithm with existing multibody simulation programs illustrates the increased computational efficiency.

  15. Telecommunication Platforms for Transmitting Sensor Data over Communication Networks-State of the Art and Challenges.

    PubMed

    Staniec, Kamil; Habrych, Marcin

    2016-07-19

    The importance of constructing wide-area sensor networks for holistic environmental state evaluation has been demonstrated. A general structure of such a network has been presented with distinction of three segments: local (based on ZigBee, Ethernet and ModBus techniques), core (based on cellular technologies) and the storage/application segment. The implementation of these techniques requires knowledge of their technical limitations and electromagnetic compatibility issues. The former refer to ZigBee performance degradation in multi-hop transmission, whereas the latter are associated with sharing the electromagnetic spectrum with other existing technologies or with undesired radiated emissions generated by the radio modules of the sensor network. In many cases, it is also necessary to provide a measurement station with an autonomous energy source, such as solar power. As measurements of the energetic efficiency of these sources show, one should apply them with care and perform a detailed power budget, since their real performance may turn out to be far from expected. This, in turn, may negatively affect, in particular, the operation of chemical sensors implemented in the network, as they often require additional heating.
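
    A rough power-budget check of the kind the authors recommend can be done in a few lines (all numbers below are illustrative assumptions, not values from the paper):

      # Back-of-the-envelope power budget for a solar-powered sensor station.
      panel_watts_peak   = 10.0   # rated panel output under ideal irradiance
      sun_hours_per_day  = 2.5    # pessimistic winter equivalent full-sun hours
      harvest_efficiency = 0.7    # real-world derating (dirt, angle, charging losses)

      node_watts         = 0.5    # radio + sensors, average draw
      heater_watts       = 1.5    # extra draw for heated chemical sensors
      hours_per_day      = 24

      harvested_wh = panel_watts_peak * sun_hours_per_day * harvest_efficiency
      consumed_wh  = (node_watts + heater_watts) * hours_per_day

      print(f"harvest {harvested_wh:.1f} Wh/day vs load {consumed_wh:.1f} Wh/day")
      # harvest 17.5 Wh/day vs load 48.0 Wh/day: the nominal 10 W panel falls
      # short once sensor heating is included, as the paper warns.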

  16. Power amplifier linearization technique with IQ imbalance and crosstalk compensation for broadband MIMO-OFDM transmitters

    NASA Astrophysics Data System (ADS)

    Gregorio, Fernando; Cousseau, Juan; Werner, Stefan; Riihonen, Taneli; Wichman, Risto

    2011-12-01

    The design of predistortion techniques for broadband multiple input multiple output-OFDM (MIMO-OFDM) systems raises several implementation challenges. First, the large bandwidth of the OFDM signal requires the introduction of memory effects in the PD model. In addition, it is usual to consider an imbalanced in-phase and quadrature (IQ) modulator to translate the predistorted baseband signal to RF. Furthermore, the coupling effects, which occur when the MIMO paths are implemented in the same reduced size chipset, cannot be avoided in MIMO transceiver structures. This study proposes a MIMO-PD system that linearizes the power amplifier response and compensates nonlinear crosstalk and IQ imbalance effects for each branch of the multiantenna system. Efficient recursive algorithms are presented to estimate the complete MIMO-PD coefficients. The algorithms avoid the high computational complexity of previous solutions based on least squares estimation. The performance of the proposed MIMO-PD structure is validated by simulations using a MIMO system with two transmit antennas. Error vector magnitude and adjacent channel power ratio are evaluated, showing significant improvement compared with conventional MIMO-PD systems.
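
    The model class involved can be made concrete with a small sketch (assuming numpy): a memory-polynomial predistorter fitted by ordinary least squares, i.e. the baseline whose computational cost the paper's recursive algorithms are designed to avoid. The toy PA model and all parameters are assumptions for illustration:

      import numpy as np

      def mp_basis(x, K=5, Q=2):
          """Memory-polynomial regressors x(n-q)*|x(n-q)|^(k-1) for odd k <= K."""
          N, cols = len(x), []
          for q in range(Q + 1):
              xq = np.concatenate([np.zeros(q, dtype=complex), x[:N - q]])
              for k in range(1, K + 1, 2):
                  cols.append(xq * np.abs(xq) ** (k - 1))
          return np.column_stack(cols)

      rng = np.random.default_rng(0)
      x = (rng.normal(size=1000) + 1j * rng.normal(size=1000)) / np.sqrt(2)
      y = x * (1 + 0.05 * np.abs(x) ** 2)        # toy memoryless PA nonlinearity
      # Indirect learning: fit a postdistorter mapping the PA output back to its
      # input, then copy the coefficients in front of the PA as the predistorter.
      Phi = mp_basis(y)
      coeffs, *_ = np.linalg.lstsq(Phi, x, rcond=None)
      print(np.abs(x - Phi @ coeffs).max())      # residual should be small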

  17. Correlation dynamics and enhanced signals for the identification of serial biomolecules and DNA bases.

    PubMed

    Ahmed, Towfiq; Haraldsen, Jason T; Rehr, John J; Di Ventra, Massimiliano; Schuller, Ivan; Balatsky, Alexander V

    2014-03-28

    Nanopore-based sequencing has demonstrated a significant potential for the development of fast, accurate, and cost-efficient fingerprinting techniques for next generation molecular detection and sequencing. We propose a specific multilayered graphene-based nanopore device architecture for the recognition of single biomolecules. Molecular detection and analysis can be accomplished through the detection of transverse currents as the molecule or DNA base translocates through the nanopore. To increase the overall signal-to-noise ratio and the accuracy, we implement a new 'multi-point cross-correlation' technique for identification of DNA bases or other molecules at the single-molecule level. We demonstrate that the cross-correlations between nanopore layers greatly enhance the transverse current signal for each molecule. We implement first-principles transport calculations for DNA bases surveyed across a multilayered graphene nanopore system to illustrate the advantages of the proposed geometry. A time-series analysis of the cross-correlation functions illustrates the potential of this method for enhancing the signal-to-noise ratio. This work constitutes a significant step forward in facilitating fingerprinting of single biomolecules using solid state technology.
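
    The noise-suppression principle behind multi-point cross-correlation can be sketched numerically (assuming numpy; the signals below are synthetic stand-ins, not nanopore data):

      import numpy as np

      rng = np.random.default_rng(1)
      n = 5000
      signal = np.sin(2 * np.pi * 0.01 * np.arange(n))   # common molecular signature
      s1 = signal + rng.normal(scale=2.0, size=n)        # layer-1 transverse current
      s2 = signal + rng.normal(scale=2.0, size=n)        # layer-2, independent noise

      # The zero-lag cross-correlation keeps the shared signal power while the
      # independent noise terms average toward zero.
      print(f"signal power      : {signal.var():.3f}")
      print(f"<s1*s2> (zero lag): {np.mean(s1 * s2):.3f}")    # ~ signal power
      print(f"single-layer SNR  : {signal.var() / 4.0:.3f}")  # noise variance is 4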

  18. Telecommunication Platforms for Transmitting Sensor Data over Communication Networks—State of the Art and Challenges

    PubMed Central

    Staniec, Kamil; Habrych, Marcin

    2016-01-01

    The importance of constructing wide-area sensor networks for holistic environmental state evaluation has been demonstrated. A general structure of such a network has been presented with distinction of three segments: local (based on ZigBee, Ethernet and ModBus techniques), core (based on cellular technologies) and the storage/application segment. The implementation of these techniques requires knowledge of their technical limitations and electromagnetic compatibility issues. The former refer to ZigBee performance degradation in multi-hop transmission, whereas the latter are associated with sharing the electromagnetic spectrum with other existing technologies or with undesired radiated emissions generated by the radio modules of the sensor network. In many cases, it is also necessary to provide a measurement station with an autonomous energy source, such as solar power. As measurements of the energetic efficiency of these sources show, one should apply them with care and perform a detailed power budget, since their real performance may turn out to be far from expected. This, in turn, may negatively affect, in particular, the operation of chemical sensors implemented in the network, as they often require additional heating. PMID:27447633

  19. Novel techniques for data decomposition and load balancing for parallel processing of vision systems: Implementation and evaluation using a motion estimation system

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.

    1989-01-01

    Computer vision systems employ a sequence of vision algorithms in which the output of an algorithm is the input of the next algorithm in the sequence. Algorithms that constitute such systems exhibit vastly different computational characteristics, and therefore, require different data decomposition techniques and efficient load balancing techniques for parallel implementation. However, since the input data for a task is produced as the output data of the previous task, this information can be exploited to perform knowledge based data decomposition and load balancing. Presented here are algorithms for a motion estimation system. The motion estimation is based on the point correspondence between the involved images which are a sequence of stereo image pairs. Researchers propose algorithms to obtain point correspondences by matching feature points among stereo image pairs at any two consecutive time instants. Furthermore, the proposed algorithms employ non-iterative procedures, which results in saving considerable amounts of computation time. The system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from consecutive time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters.

  20. Facile preparation of high density polyethylene superhydrophobic/superoleophilic coatings on glass, copper and polyurethane sponge for self-cleaning, corrosion resistance and efficient oil/water separation.

    PubMed

    Cheng, Yuanyuan; Wu, Bei; Ma, Xiaofan; Lu, Shixiang; Xu, Wenguo; Szunerits, Sabine; Boukherroub, Rabah

    2018-04-18

    Inspired by the lotus effect and water-repellent properties of water striders' legs, superhydrophobic surfaces have been intensively investigated from both fundamental and applied perspectives for daily and industrial applications. Various techniques are available for the fabrication of artificial superoleophilic/superhydrophobic (SS) surfaces. However, most of these techniques are tedious and often require hazardous or expensive equipment, which hampers their implementation for practical applications. In the present work, we used a versatile and straightforward technique based on polymer drop-casting for the preparation of SS materials that can be implemented on any substrate. High density polyethylene (HDPE) SS coatings were prepared on different substrates (glass, copper mesh and polyurethane (PU) sponge) by drop casting the parent polymer xylene-ethanol solution at room temperature. All the substrates exhibited a superhydrophobic behavior with a water contact angle (WCA) greater than 150°. Furthermore, the corrosion resistance, stability, self-cleaning property, and water/oil separation of the developed materials were also assessed. While copper mesh and PU sponge exhibited good ability for oil and organic solvent separation from water, the HDPE-functionalized PU sponge displayed good adsorption capacity, adsorbing 32-90 times its own weight. Copyright © 2018 Elsevier Inc. All rights reserved.

  1. Improving finite element results in modeling heart valve mechanics.

    PubMed

    Earl, Emily; Mohammadi, Hadi

    2018-06-01

    Finite element analysis is a well-established computational tool which can be used for the analysis of soft tissue mechanics. Due to the structural complexity of the leaflet tissue of the heart valve, the currently available finite element models do not adequately represent the leaflet tissue. A method of addressing this issue is to implement computationally expensive finite element models, characterized by precise constitutive models including high-order and high-density mesh techniques. In this study, we introduce a novel numerical technique that enhances the results obtained from coarse mesh finite element models to provide accuracy comparable to that of fine mesh finite element models while maintaining a relatively low computational cost. Introduced in this study is a method by which the computational expense required to solve linear and nonlinear constitutive models, commonly used in heart valve mechanics simulations, is reduced while continuing to account for large and infinitesimal deformations. This continuum model is developed based on the least square algorithm procedure coupled with the finite difference method adhering to the assumption that the components of the strain tensor are available at all nodes of the finite element mesh model. The suggested numerical technique is easy to implement, practically efficient, and requires less computational time compared to currently available commercial finite element packages such as ANSYS and/or ABAQUS.

  2. Development and Implementation of a Transport Method for the Transport and Reaction Simulation Engine (TaRSE) based on the Godunov-Mixed Finite Element Method

    USGS Publications Warehouse

    James, Andrew I.; Jawitz, James W.; Munoz-Carpena, Rafael

    2009-01-01

    A model to simulate transport of materials in surface water and ground water has been developed to numerically approximate solutions to the advection-dispersion equation. This model, known as the Transport and Reaction Simulation Engine (TaRSE), uses an algorithm that incorporates a time-splitting technique where the advective part of the equation is solved separately from the dispersive part. An explicit finite-volume Godunov method is used to approximate the advective part, while a mixed-finite element technique is used to approximate the dispersive part. The dispersive part uses an implicit discretization, which allows it to run stably with a larger time step than the explicit advective step. The potential exists to develop algorithms that run several advective steps, and then one dispersive step that encompasses the time interval of the advective steps. Because the dispersive step is computationally most expensive, schemes can be implemented that are more computationally efficient than non-time-split algorithms. This technique enables scientists to solve problems with high grid Peclet numbers, such as transport problems with sharp solute fronts, without spurious oscillations in the numerical approximation to the solution and with virtually no artificial diffusion.
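
    A minimal 1D sketch of the time-splitting idea (assuming numpy; first-order upwind advection plus an implicit dispersion step, far simpler than TaRSE's Godunov-mixed finite element scheme, but showing why the split form stays stable and oscillation-free):

      import numpy as np

      nx, dx, dt = 100, 1.0, 0.5
      v, D = 1.0, 0.5                       # velocity > 0, dispersion coefficient
      c = np.zeros(nx); c[10:20] = 1.0      # sharp initial solute front

      # Implicit dispersion matrix (I - dt*D*L), L = standard 1D Laplacian stencil.
      A = np.eye(nx)
      r = dt * D / dx**2
      for i in range(1, nx - 1):
          A[i, i - 1] -= r
          A[i, i] += 2 * r
          A[i, i + 1] -= r

      for _ in range(40):
          # explicit first-order upwind (Godunov-type) advection; CFL v*dt/dx <= 1
          c[1:] = c[1:] - v * dt / dx * (c[1:] - c[:-1])
          # implicit dispersion step, stable for larger time steps
          c = np.linalg.solve(A, c)

      print(c.max(), c.sum())   # front advects and spreads without oscillations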

  3. Implementation of a new fuzzy vector control of induction motor.

    PubMed

    Rafa, Souad; Larabi, Abdelkader; Barazane, Linda; Manceur, Malik; Essounbouli, Najib; Hamzaoui, Abdelaziz

    2014-05-01

    The aim of this paper is to present a new approach to control an induction motor using type-1 fuzzy logic. The induction motor has a nonlinear model, uncertain and strongly coupled. The vector control technique, which is based on the inverse model of the induction motor, solves the coupling problem. Unfortunately, in practice this is not assured because of model uncertainties. Indeed, the presence of the uncertainties led us to use human expertise, such as fuzzy logic techniques. In order to maintain the decoupling and to overcome the problem of sensitivity to parametric variations, the field-oriented control is replaced by a new control block. The simulation results show that both control schemes provide, in their basic configuration, comparable performance regarding the decoupling. However, the fuzzy vector control provides insensitivity to parametric variations compared to the classical one. The fuzzy vector control scheme is successfully implemented in real-time using a digital signal processor board dSPACE 1104. The efficiency of this technique is also verified experimentally under different dynamic operating conditions such as sudden load changes, parameter variations, speed changes, etc. The fuzzy vector control is thus found to be well suited for induction motor applications. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  4. A comparison of image processing techniques for bird recognition.

    PubMed

    Nadimpalli, Uma D; Price, Randy R; Hall, Steven G; Bomma, Pallavi

    2006-01-01

    Bird predation is one of the major concerns for fish culture in open ponds. A novel method for dispersing birds is the use of autonomous vehicles. Image recognition software can improve their efficiency. Several image processing techniques for recognition of birds have been tested. A series of morphological operations were implemented. We divided the images into three types, Type 1, Type 2, and Type 3, based on the level of difficulty of recognizing birds. Type 1 images were clear, Type 2 images were moderately clear, and Type 3 images were unclear. Local thresholding was implemented using the HSV (Hue, Saturation, and Value), GRAY, and RGB (Red, Green, and Blue) color models on all three types of images, and the results were tabulated. Template matching using normal correlation and artificial neural networks (ANN) are the other methods developed in this study in addition to image morphology. Template matching produced satisfactory results irrespective of the difficulty level of images, but artificial neural networks produced accuracies of 100, 60, and 50% on Type 1, Type 2, and Type 3 images, respectively. The correct classification rate can be increased by further training. Future research will focus on testing the recognition algorithms in natural or aquacultural settings on autonomous boats. Applications of such techniques to industrial, agricultural, or related areas are additional future possibilities.
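
    A sketch of HSV-based thresholding with morphological clean-up, the kind of pipeline the study evaluates (assuming OpenCV and numpy; the threshold values and file name are illustrative assumptions, not the study's):

      import cv2
      import numpy as np

      img = cv2.imread("pond_frame.jpg")                  # hypothetical input frame
      hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

      # keep dark, low-saturation pixels (bird silhouettes against bright water)
      mask = cv2.inRange(hsv, np.array([0, 0, 0]), np.array([180, 80, 90]))

      kernel = np.ones((5, 5), np.uint8)
      mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # drop speckle noise
      mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes

      n_labels, labels = cv2.connectedComponents(mask)
      print(f"candidate bird regions: {n_labels - 1}")        # label 0 is background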

  5. Perturbational treatment of spin-orbit coupling for generally applicable high-level multi-reference methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mai, Sebastian; Marquetand, Philipp; González, Leticia

    2014-08-21

    An efficient perturbational treatment of spin-orbit coupling within the framework of high-level multi-reference techniques has been implemented in the most recent version of the COLUMBUS quantum chemistry package, extending the existing fully variational two-component (2c) multi-reference configuration interaction singles and doubles (MRCISD) method. The proposed scheme follows related implementations of quasi-degenerate perturbation theory (QDPT) model space techniques. Our model space is built either from uncontracted, large-scale scalar relativistic MRCISD wavefunctions or based on the scalar-relativistic solutions of the linear-response-theory-based multi-configurational averaged quadratic coupled cluster method (LRT-MRAQCC). The latter approach allows for a consistent, approximately size-consistent and size-extensive treatment of spin-orbit coupling. The approach is described in detail and compared to a number of related techniques. The inherent accuracy of the QDPT approach is validated by comparing cuts of the potential energy surfaces of acrolein and its S, Se, and Te analogues with the corresponding data obtained from matching fully variational spin-orbit MRCISD calculations. The conceptual availability of approximate analytic gradients with respect to geometrical displacements is an attractive feature of the 2c-QDPT-MRCISD and 2c-QDPT-LRT-MRAQCC methods for structure optimization and ab initio molecular dynamics simulations.

  6. HAlign-II: efficient ultra-large multiple sequence alignment and phylogenetic tree reconstruction with distributed and parallel computing.

    PubMed

    Wan, Shixiang; Zou, Quan

    2017-01-01

    Multiple sequence alignment (MSA) plays a key role in biological sequence analyses, especially in phylogenetic tree construction. The extreme growth of next-generation sequencing data has produced a shortage of efficient alignment approaches for ultra-large biological sequences of different types. Distributed and parallel computing represents a crucial technique for accelerating ultra-large (e.g. files of more than 1 GB) sequence analyses. Based on HAlign and the Spark distributed computing system, we implement a highly cost-efficient and time-efficient HAlign-II tool to address ultra-large multiple biological sequence alignment and phylogenetic tree construction. Experiments on large-scale DNA and protein data sets (files larger than 1 GB) showed that HAlign-II saves both time and space and outperforms current software tools. HAlign-II can efficiently carry out MSA and construct phylogenetic trees with ultra-large numbers of biological sequences. HAlign-II shows extremely high memory efficiency and scales well with increasing computing resources. HAlign-II provides a user-friendly web server based on our distributed computing infrastructure. HAlign-II, with open-source code and datasets, is available at http://lab.malab.cn/soft/halign.

  7. Flow Control in Wells Turbines for Harnessing Maximum Wave Power.

    PubMed

    Lekube, Jon; Garrido, Aitor J; Garrido, Izaskun; Otaola, Erlantz; Maseda, Javier

    2018-02-10

    Oceans, and particularly waves, offer a huge potential for energy harnessing all over the world. Nevertheless, the performance of current energy converters does not yet allow us to use the wave energy efficiently. However, new control techniques can improve the efficiency of energy converters. In this sense, the plant sensors play a key role within the control scheme, as necessary tools for parameter measuring and monitoring that are then used as control input variables to the feedback loop. Therefore, the aim of this work is to manage the rotational speed control loop in order to optimize the output power. With the help of outward looking sensors, a Maximum Power Point Tracking (MPPT) technique is employed to maximize the system efficiency. Then, the control decisions are based on the pressure drop measured by pressure sensors located along the turbine. A complete wave-to-wire model is developed so as to validate the performance of the proposed control method. For this purpose, a novel sensor-based flow controller is implemented based on the different measured signals. Thus, the performance of the proposed controller has been analyzed and compared with a case of uncontrolled plant. The simulations demonstrate that the flow control-based MPPT strategy is able to increase the output power, and they confirm both the viability and the effectiveness of the approach.
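
    The MPPT logic can be illustrated with a perturb-and-observe sketch (a toy power curve stands in for the wave-to-wire model; this is the generic MPPT principle, not the paper's pressure-sensor-based controller):

      def power_curve(w):                  # toy plant: maximum power at w = 50
          return -(w - 50.0) ** 2 + 2500.0

      speed, step, direction = 30.0, 1.0, +1
      last_power = power_curve(speed)
      for _ in range(60):
          speed += direction * step        # perturb the speed reference
          power = power_curve(speed)       # observe the resulting output power
          if power < last_power:
              direction *= -1              # output dropped: reverse the search
          last_power = power
      print(round(speed))                  # settles into a small oscillation near 50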

  8. Flow Control in Wells Turbines for Harnessing Maximum Wave Power

    PubMed Central

    Garrido, Aitor J.; Garrido, Izaskun; Otaola, Erlantz; Maseda, Javier

    2018-01-01

    Oceans, and particularly waves, offer a huge potential for energy harnessing all over the world. Nevertheless, the performance of current energy converters does not yet allow us to use the wave energy efficiently. However, new control techniques can improve the efficiency of energy converters. In this sense, the plant sensors play a key role within the control scheme, as necessary tools for parameter measuring and monitoring that are then used as control input variables to the feedback loop. Therefore, the aim of this work is to manage the rotational speed control loop in order to optimize the output power. With the help of outward looking sensors, a Maximum Power Point Tracking (MPPT) technique is employed to maximize the system efficiency. Then, the control decisions are based on the pressure drop measured by pressure sensors located along the turbine. A complete wave-to-wire model is developed so as to validate the performance of the proposed control method. For this purpose, a novel sensor-based flow controller is implemented based on the different measured signals. Thus, the performance of the proposed controller has been analyzed and compared with a case of uncontrolled plant. The simulations demonstrate that the flow control-based MPPT strategy is able to increase the output power, and they confirm both the viability and the effectiveness of the approach. PMID:29439408

  9. vitisFlower®: Development and Testing of a Novel Android-Smartphone Application for Assessing the Number of Grapevine Flowers per Inflorescence Using Artificial Vision Techniques

    PubMed Central

    Aquino, Arturo; Millan, Borja; Gaston, Daniel; Diago, María-Paz; Tardaguila, Javier

    2015-01-01

    Grapevine flowering and fruit set greatly determine crop yield. This paper presents a new smartphone application for automatically counting, non-invasively and directly in the vineyard, the flower number in grapevine inflorescence photos by implementing artificial vision techniques. The application, called vitisFlower®, first guides the user to appropriately take an inflorescence photo using the smartphone’s camera. Then, by means of image analysis, the flowers in the image are detected and counted. vitisFlower® has been developed for Android devices and uses the OpenCV libraries to maximize computational efficiency. The application was tested on 140 inflorescence images of 11 grapevine varieties taken with two different devices. On average, more than 84% of flowers in the captures were found, with a precision exceeding 94%. Additionally, the application’s efficiency on four different devices covering a wide range of the market’s spectrum was also studied. The results of this benchmarking study showed significant differences among devices, while indicating that the application is efficiently usable even with low-range devices. vitisFlower® is one of the first viticulture applications currently freely available on Google Play. PMID:26343664

  10. EV Charging Through Wireless Power Transfer: Analysis of Efficiency Optimization and Technology Trends

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, John M; Rakouth, Heri; Suh, In-Soo

    This paper is aimed at reviewing the technology trends for wireless power transfer (WPT) for electric vehicles (EV). It also analyzes the factors affecting its efficiency and describes the techniques currently used for its optimization. The review of the technology trends encompasses both stationary and moving vehicle charging systems. The study of the stationary vehicle charging technology is based on current implementations and on-going developments at WiTricity and Oak Ridge National Lab (ORNL). The moving vehicle charging technology is primarily described through the results achieved by the Korean Advanced Institute of Technology (KAIST) along with on-going efforts at Stanford University. The factors affecting the efficiency are determined through the analysis of the equivalent circuit of magnetic resonant coupling. The air gap between both transmitting and receiving coils along with the magnetic field distribution and the relative impedance mismatch between the related circuits are the primary factors affecting the WPT efficiency. Currently the industry is looking at an air gap of 25 cm or below. To control the magnetic field distribution, KAIST has recently developed the Shaped Magnetic Field In Resonance (SMFIR) technology that uses conveniently shaped ferrite material to provide low reluctance path. The efficiency can be further increased by means of impedance matching. As a result, Delphi's implementation of the WiTricity's technology exhibits a WPT efficiency above 90% for stationary charging while KAIST has demonstrated a maximum efficiency of 83% for moving vehicle with its On Line Vehicle (OLEV) project. This study is restricted to near-field applications (short and mid-range) and does not address long-range technology such as microwave power transfer that has low efficiency as it is based on radiating electromagnetic waves. This paper exemplifies Delphi's work in powertrain electrification as part of its innovation for the real world program geared toward a safer, greener and more connected driving. Moreover, it draws from and adds to Dr. Andrew Brown Jr.'s SAE books 'Active Safety and the Mobility Industry', 'Connectivity and Mobility Industry', and 'Green Technologies and the Mobility Industry'. Magnetic resonant coupling is the foundation of modern wireless power transfer. Its efficiency can be controlled through impedance matching and magnetic field shaping. Current implementations use one or both of these control methods and enable both stationary and mobile charging with typical efficiency within the 80% and 90% range for an air gap up to 25 cm.
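
    A standard figure-of-merit result for a two-coil resonant link (a textbook bound, not a formula taken from the paper) makes the efficiency factors above concrete. With coupling coefficient k, which decays rapidly with the air gap, and coil quality factors Q_1 and Q_2, the maximum achievable link efficiency under ideal impedance matching is

      $$\eta_{\max} = \frac{k^2 Q_1 Q_2}{\left(1 + \sqrt{1 + k^2 Q_1 Q_2}\right)^2},$$

    which is why the cited implementations combine field shaping (to raise k across a 25-cm gap) with impedance matching (to approach this bound).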

  11. Alternative Fuels Data Center: Ten Ways You Can Implement Alternative Fuels

    Science.gov Websites


  12. Drone Mission Definition and Implementation for Automated Infrastructure Inspection Using Airborne Sensors

    PubMed Central

    Besada, Juan A.; Bergesio, Luca; Campaña, Iván; Vaquero-Melchor, Diego; Bernardos, Ana M.; Casar, José R.

    2018-01-01

    This paper describes a Mission Definition System and the automated flight process it enables to implement measurement plans for discrete infrastructure inspections using aerial platforms, and specifically multi-rotor drones. The mission definition aims at improving planning efficiency with respect to state-of-the-art waypoint-based techniques, using high-level mission definition primitives and linking them with realistic flight models to simulate the inspection in advance. It also provides flight scripts and measurement plans which can be executed by commercial drones. Its user interfaces facilitate mission definition, pre-flight 3D synthetic mission visualisation and flight evaluation. Results are delivered for a set of representative infrastructure inspection flights, showing the accuracy of the flight prediction tools in actual operations using automated flight control. PMID:29641506

  13. Tunable, Flexible, and Efficient Optimization of Control Pulses for Practical Qubits

    NASA Astrophysics Data System (ADS)

    Machnes, Shai; Assémat, Elie; Tannor, David; Wilhelm, Frank K.

    2018-04-01

    Quantum computation places very stringent demands on gate fidelities, and experimental implementations require both the controls and the resultant dynamics to conform to hardware-specific constraints. Superconducting qubits present the additional requirement that pulses must have simple parameterizations, so they can be further calibrated in the experiment, to compensate for uncertainties in system parameters. Other quantum technologies, such as sensing, require extremely high fidelities. We present a novel, conceptually simple and easy-to-implement gradient-based optimal control technique named gradient optimization of analytic controls (GOAT), which satisfies all the above requirements, unlike previous approaches. To demonstrate GOAT's capabilities, with emphasis on flexibility and ease of subsequent calibration, we optimize fast coherence-limited pulses for two leading superconducting qubits architectures—flux-tunable transmons and fixed-frequency transmons with tunable couplers.

  14. Algorithm 971: An Implementation of a Randomized Algorithm for Principal Component Analysis

    PubMed Central

    LI, HUAMIN; LINDERMAN, GEORGE C.; SZLAM, ARTHUR; STANTON, KELLY P.; KLUGER, YUVAL; TYGERT, MARK

    2017-01-01

    Recent years have witnessed intense development of randomized methods for low-rank approximation. These methods target principal component analysis and the calculation of truncated singular value decompositions. The present article presents an essentially black-box, foolproof implementation for Mathworks’ MATLAB, a popular software platform for numerical computation. As illustrated via several tests, the randomized algorithms for low-rank approximation outperform or at least match the classical deterministic techniques (such as Lanczos iterations run to convergence) in basically all respects: accuracy, computational efficiency (both speed and memory usage), ease-of-use, parallelizability, and reliability. However, the classical procedures remain the methods of choice for estimating spectral norms and are far superior for calculating the least singular values and corresponding singular vectors (or singular subspaces). PMID:28983138
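
    The core of such randomized methods fits in a short sketch (assuming numpy; a Gaussian range finder with power iterations, illustrating the idea rather than reproducing the ACM Algorithm 971 MATLAB code):

      import numpy as np

      def randomized_svd(A, k, oversample=10, n_iter=2):
          m, n = A.shape
          rng = np.random.default_rng(0)
          Omega = rng.normal(size=(n, k + oversample))   # random test matrix
          Y = A @ Omega
          for _ in range(n_iter):                        # power iterations sharpen the basis
              Y = A @ (A.T @ Y)
          Q, _ = np.linalg.qr(Y)                         # orthonormal range basis
          B = Q.T @ A                                    # small (k+p) x n problem
          Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
          return (Q @ Ub)[:, :k], s[:k], Vt[:k, :]

      rng = np.random.default_rng(1)
      A = rng.normal(size=(2000, 20)) @ rng.normal(size=(20, 300))   # low-rank matrix
      U, s, Vt = randomized_svd(A, k=10)
      exact = np.linalg.svd(A, compute_uv=False)[:10]
      print(np.max(np.abs(s - exact) / exact))   # matches the exact top-10 spectrum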

  15. Energy savings in Polish buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Markel, L.C.; Gula, A.; Reeves, G.

    1995-12-31

    A demonstration of low-cost insulation and weatherization techniques was a part of phase 1 of the Krakow Clean Fossil Fuels and Energy Efficient Project. The objectives were to identify a cost-effective set of measures to reduce energy used for space heating, determine how much energy could be saved, and foster widespread implementation of those measures. The demonstration project focused on 4 11-story buildings in a Krakow housing cooperative. Energy savings of over 20% were obtained. Most important, the procedures and materials implemented in the demonstration project have been adapted to Polish conditions and applied to other housing cooperatives, schools, and hospitals. Additional projects are being planned, in Krakow and other cities, under the direction of FEWE-Krakow, the Polish Energie Cities Network, and Biuro Rozwoju Krakowa.

  16. Systolic array processing of the sequential decoding algorithm

    NASA Technical Reports Server (NTRS)

    Chang, C. Y.; Yao, K.

    1989-01-01

    A systolic array processing technique is applied to implementing the stack algorithm form of the sequential decoding algorithm. It is shown that sorting, a key function in the stack algorithm, can be efficiently realized by a special type of systolic arrays known as systolic priority queues. Compared to the stack-bucket algorithm, this approach is shown to have the advantages that the decoding always moves along the optimal path, that it has a fast and constant decoding speed and that its simple and regular hardware architecture is suitable for VLSI implementation. Three types of systolic priority queues are discussed: random access scheme, shift register scheme and ripple register scheme. The property of the entries stored in the systolic priority queue is also investigated. The results are applicable to many other basic sorting type problems.
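
    A behavioral model of the shift-register scheme conveys the idea (a software sketch under simplifying assumptions, not a hardware description; extraction and refill cycles are omitted). The host touches only the head cell, and parallel neighbor compare-exchange waves keep the best entry at the head:

      def compare_exchange_wave(cells, phase):
          """One systolic cycle: disjoint neighbor pairs swap so larger keys move left."""
          for i in range(phase, len(cells) - 1, 2):
              if cells[i + 1] > cells[i]:
                  cells[i], cells[i + 1] = cells[i + 1], cells[i]

      cells = []
      for metric in [3, 9, 1, 7, 5]:       # stack-algorithm path metrics
          cells.insert(0, metric)          # host inserts at the head cell only
          compare_exchange_wave(cells, 0)  # even pairs this cycle
          compare_exchange_wave(cells, 1)  # odd pairs next cycle
      print(cells[0])                      # head holds the best (maximum) metric: 9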

  17. Research on application information system integration platform in medicine manufacturing enterprise.

    PubMed

    Deng, Wu; Zhao, Huimin; Zou, Li; Li, Yuanyuan; Li, Zhengguang

    2012-08-01

    Computer and information technology has become widespread in medicine manufacturing enterprises for its potential to improve working efficiency and service quality. Given the explosive growth of data and information in the application systems of current medicine manufacturing enterprises, we propose a novel application information system integration platform for medicine manufacturing enterprises, based on a combination of RFID technology and SOA, to implement information sharing and exchange. The method exploits the application integration platform across the service interface layer to invoke the RFID middleware. Loose coupling in the integration solution is realized through Web services. The key techniques of RFID event components and an expanded role-based security access mechanism are studied in detail. Finally, a case study is implemented and tested to validate our approach to application system integration in medicine manufacturing enterprises.

  18. Drone Mission Definition and Implementation for Automated Infrastructure Inspection Using Airborne Sensors.

    PubMed

    Besada, Juan A; Bergesio, Luca; Campaña, Iván; Vaquero-Melchor, Diego; López-Araquistain, Jaime; Bernardos, Ana M; Casar, José R

    2018-04-11

    This paper describes a Mission Definition System and the automated flight process it enables to implement measurement plans for discrete infrastructure inspections using aerial platforms, and specifically multi-rotor drones. The mission definition aims at improving planning efficiency with respect to state-of-the-art waypoint-based techniques, using high-level mission definition primitives and linking them with realistic flight models to simulate the inspection in advance. It also provides flight scripts and measurement plans which can be executed by commercial drones. Its user interfaces facilitate mission definition, pre-flight 3D synthetic mission visualisation and flight evaluation. Results are delivered for a set of representative infrastructure inspection flights, showing the accuracy of the flight prediction tools in actual operations using automated flight control.

  19. Lattice surgery on the Raussendorf lattice

    NASA Astrophysics Data System (ADS)

    Herr, Daniel; Paler, Alexandru; Devitt, Simon J.; Nori, Franco

    2018-07-01

    Lattice surgery is a method to perform quantum computation fault-tolerantly by using operations on boundary qubits between different patches of the planar code. This technique allows for universal planar code computation without eliminating the intrinsic two-dimensional nearest-neighbor properties of the surface code that ease physical hardware implementations. Lattice surgery approaches to algorithmic compilation and optimization have been demonstrated to be more resource efficient for resource-intensive components of a fault-tolerant algorithm, and consequently may be preferable over braid-based logic. Lattice surgery can be extended to the Raussendorf lattice, providing a measurement-based approach to the surface code. In this paper we describe how lattice surgery can be performed on the Raussendorf lattice and therefore give a viable alternative to computation using braiding in measurement-based implementations of topological codes.

  20. Economic barriers to implementation of innovations in health care: is the long run-short run efficiency discrepancy a paradox?

    PubMed

    Adang, Eddy M M; Wensing, Michel

    2008-12-01

    Favourable cost-effectiveness of innovative technologies is more and more a necessary condition for implementation in clinical practice. But proven cost-effectiveness itself does not guarantee successful implementation. The reason for this is a potential discrepancy between long run efficiency, on which cost-effectiveness is based, and short run efficiency. Long run and short run efficiency are dependent upon economies of scale. This paper addresses the potential discrepancy between long run and short run efficiency of innovative technologies in healthcare, explores diseconomies of scale in Dutch hospitals, and suggests strategies that might help overcome the hurdles to implementing innovations caused by that discrepancy.
