Electroluminescence Efficiency Enhancement using Metal Nanoparticles
2008-06-22
Khurgin, J. B.; Sun, G.; Soref, R. A.
We apply the “effective mode volume” theory to evaluate enhancement of the electroluminescence efficiency of semiconductor emitters placed in the vicinity of metal nanoparticles (published online 17 July 2008).
Land classification of south-central Iowa from computer enhanced images
NASA Technical Reports Server (NTRS)
Lucas, J. R. (Principal Investigator); Taranik, J. V.; Billingsley, F. C.
1976-01-01
The author has identified the following significant results. The Iowa Geological Survey developed its own capability for producing color products from digitally enhanced LANDSAT data. Research showed that efficient production of enhanced images required full utilization of both computer and photographic enhancement procedures. The 29 August 1972 photo-optically enhanced color composite was more easily interpreted for land classification purposes than standard color composites.
78 FR 39617 - Data Practices, Computer III Further Remand: BOC Provision of Enhanced Services
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-02
... Docket No. 10-132; FCC 13-69] Data Practices, Computer III Further Remand: BOC Provision of Enhanced... eliminates comparably efficient interconnection (CEI) and open network architecture (ONA) narrowband... disseminates data, including by altering or eliminating collections that are no longer useful or necessary to...
Computing Project, Marc develops high-fidelity turbulence models to enhance simulation accuracy and efficient numerical algorithms for future high-performance computing hardware architectures. Research interests: high-performance computing; high-order numerical methods for computational fluid dynamics; fluid...
An Initial Multi-Domain Modeling of an Actively Cooled Structure
NASA Technical Reports Server (NTRS)
Steinthorsson, Erlendur
1997-01-01
A methodology for the simulation of turbine cooling flows is being developed. The methodology seeks to combine numerical techniques that optimize both accuracy and computational efficiency. Key components of the methodology include the use of multiblock grid systems for modeling complex geometries, and multigrid convergence acceleration for enhancing computational efficiency in highly resolved fluid flow simulations. The use of the methodology has been demonstrated in several turbomachinery flow and heat transfer studies. Ongoing and future work involves implementing additional turbulence models, improving computational efficiency, and adding adaptive mesh refinement (AMR).
Saglam, Ali S; Chong, Lillian T
2016-01-14
An essential baseline for determining the extent to which electrostatic interactions enhance the kinetics of protein-protein association is the "basal" kon, which is the rate constant for association in the absence of electrostatic interactions. However, since such association events are beyond the millisecond time scale, it has not been practical to compute the basal kon by directly simulating the association with flexible models. Here, we computed the basal kon for barnase and barstar, two of the most rapidly associating proteins, using highly efficient, flexible molecular simulations. These simulations involved (a) pseudoatomic protein models that reproduce the molecular shapes and the electrostatic and diffusion properties of all-atom models, and (b) application of the weighted ensemble path sampling strategy, which enhanced the efficiency of generating association events by >130-fold. We also examined the extent to which the computed basal kon is affected by inclusion of intermolecular hydrodynamic interactions in the simulations.
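To make the weighted ensemble bookkeeping concrete, the sketch below shows one resampling step: trajectories ("walkers") are binned along a progress coordinate, then split or merged so each occupied bin holds a fixed number of walkers whose weights sum to the bin's total. This is an illustrative toy, not the authors' implementation; the bin edges, the per-bin target M, and the dict-based walkers are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 4                                  # target walkers per bin (arbitrary)
bin_edges = np.linspace(0.0, 1.0, 11)  # bins along a progress coordinate

def we_resample(walkers):
    """One weighted-ensemble resampling step: split/merge within each bin
    so the bin holds M walkers whose weights sum to the bin weight."""
    resampled = []
    bins = np.digitize([w["x"] for w in walkers], bin_edges)
    for b in np.unique(bins):
        group = [w for w, i in zip(walkers, bins) if i == b]
        bin_weight = sum(w["w"] for w in group)
        probs = np.array([w["w"] for w in group]) / bin_weight
        picks = rng.choice(len(group), size=M, p=probs)
        resampled += [{"x": group[k]["x"], "w": bin_weight / M} for k in picks]
    return resampled

# toy usage: each walker carries a progress coordinate x and a weight w
walkers = [{"x": rng.random(), "w": 1.0 / 20} for _ in range(20)]
walkers = we_resample(walkers)  # applied between short dynamics segments
```

Because the total weight in each bin is conserved, rare forward transitions accumulate walkers without biasing the estimated rate constant.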
Beyond Scale-Free Small-World Networks: Cortical Columns for Quick Brains
NASA Astrophysics Data System (ADS)
Stoop, Ralph; Saase, Victor; Wagner, Clemens; Stoop, Britta; Stoop, Ruedi
2013-03-01
We study to what extent cortical columns with their particular wiring boost neural computation. Upon a vast survey of columnar networks performing various real-world cognitive tasks, we detect no signs of enhancement. It is on a mesoscopic—intercolumnar—scale that the existence of columns, largely irrespective of their inner organization, enhances the speed of information transfer and minimizes the total wiring length required to bind distributed columnar computations towards spatiotemporally coherent results. We suggest that brain efficiency may be related to a doubly fractal connectivity law, resulting in networks with efficiency properties beyond those of scale-free networks.
Using Neural Net Technology To Enhance the Efficiency of a Computer Adaptive Testing Application.
ERIC Educational Resources Information Center
Van Nelson, C.; Henriksen, Larry W.
The potential for computer adaptive testing (CAT) has been well documented. In order to improve the efficiency of this process, it may be possible to utilize a neural network, or more specifically, a back propagation neural network. The paper asserts that in order to accomplish this end, it must be shown that grouping examinees by ability as…
Additional development of the XTRAN3S computer program
NASA Technical Reports Server (NTRS)
Borland, C. J.
1989-01-01
Additional developments and enhancements to the XTRAN3S computer program, a code for calculation of steady and unsteady aerodynamics, and associated aeroelastic solutions, for 3-D wings in the transonic flow regime are described. Algorithm improvements for the XTRAN3S program were provided, including an implicit finite difference scheme to enhance the allowable time step, and vectorization for improved computational efficiency. The code was modified to treat configurations with a fuselage, multiple stores/nacelles/pylons, and winglets. Computer program changes (updates) for error corrections and updates for version control are provided.
ERIC Educational Resources Information Center
Chien, Tien-Chen
2008-01-01
The computer is not only a powerful technology for managing information and enhancing productivity, but also an efficient tool for education and training. Computer anxiety can be one of the major problems that affect the effectiveness of learning. Through analyzing related literature, this study describes the phenomenon of computer anxiety,…
Formation of embedded plasmonic Ga nanoparticle arrays and their influence on GaAs photoluminescence
NASA Astrophysics Data System (ADS)
Kang, M.; Jeon, S.; Jen, T.; Lee, J.-E.; Sih, V.; Goldman, R. S.
2017-07-01
We introduce a novel approach to the seamless integration of plasmonic nanoparticle (NP) arrays into semiconductor layers and demonstrate their enhanced photoluminescence (PL) efficiency. Our approach utilizes focused ion beam-induced self-assembly of close-packed arrays of Ga NPs with tailorable NP diameters, followed by overgrowth of GaAs layers using molecular beam epitaxy. Using a combination of PL spectroscopy and electromagnetic computations, we identify a regime of Ga NP diameter and overgrown GaAs layer thickness where NP-array-enhanced absorption in GaAs leads to enhanced GaAs near-band-edge (NBE) PL efficiency, surpassing that of high-quality epitaxial GaAs layers. As the NP array depth and size are increased, the reduction in spontaneous emission rate overwhelms the NP-array-enhanced absorption, leading to a reduced NBE PL efficiency. This approach provides an opportunity to enhance the PL efficiency of a wide variety of semiconductor heterostructures.
Progress and challenges in bioinformatics approaches for enhancer identification
Kleftogiannis, Dimitrios; Kalnis, Panos
2016-01-01
Enhancers are cis-acting DNA elements that play critical roles in distal regulation of gene expression. Identifying enhancers is an important step for understanding distinct gene expression programs that may reflect normal and pathogenic cellular conditions. Experimental identification of enhancers is constrained by the set of conditions used in the experiment. This requires multiple experiments to identify enhancers, as they can be active under specific cellular conditions but not in different cell types/tissues or cellular states. This has opened prospects for computational prediction methods that can be used for high-throughput identification of putative enhancers to complement experimental approaches. Potential functions and properties of predicted enhancers have been catalogued and summarized in several enhancer-oriented databases. Because the current methods for the computational prediction of enhancers produce significantly different enhancer predictions, it will be beneficial for the research community to have an overview of the strategies and solutions developed in this field. In this review, we focus on the identification and analysis of enhancers by bioinformatics approaches. First, we describe a general framework for computational identification of enhancers, present relevant data types and discuss possible computational solutions. Next, we cover over 30 existing computational enhancer identification methods that were developed since 2000. Our review highlights advantages, limitations and potentials, while suggesting pragmatic guidelines for development of more efficient computational enhancer prediction methods. Finally, we discuss challenges and open problems of this topic, which require further consideration. PMID:26634919
ERIC Educational Resources Information Center
Ishola, Bashiru Abayomi
2017-01-01
Cloud computing has recently emerged as a potential alternative to the traditional on-premise computing that businesses can leverage to achieve operational efficiencies. Consequently, technology managers are often tasked with the responsibilities to analyze the barriers and variables critical to organizational cloud adoption decisions. This…
Semi-autonomous parking for enhanced safety and efficiency.
DOT National Transportation Integrated Search
2017-06-01
This project focuses on the use of tools from a combination of computer vision and localization based navigation schemes to aid the process of efficient and safe parking of vehicles in high density parking spaces. The principles of collision avoidanc...
Adly, Amr A.; Abd-El-Hafiz, Salwa K.
2012-01-01
Incorporation of hysteresis models in electromagnetic analysis approaches is indispensable to accurate field computation in complex magnetic media. Throughout those computations, vector nature and computational efficiency of such models become especially crucial when sophisticated geometries requiring massive sub-region discretization are involved. Recently, an efficient vector Preisach-type hysteresis model constructed from only two scalar models having orthogonally coupled elementary operators has been proposed. This paper presents a novel Hopfield neural network approach for the implementation of Stoner–Wohlfarth-like operators that could lead to a significant enhancement in the computational efficiency of the aforementioned model. Advantages of this approach stem from the non-rectangular nature of these operators that substantially minimizes the number of operators needed to achieve an accurate vector hysteresis model. Details of the proposed approach, its identification and experimental testing are presented in the paper. PMID:25685446
Newman, D M; Hawley, R W; Goeckel, D L; Crawford, R D; Abraham, S; Gallagher, N C
1993-05-10
An efficient storage format was developed for computer-generated holograms for use in electron-beam lithography. This method employs run-length encoding and Lempel-Ziv-Welch compression and succeeds in exposing holograms that were previously infeasible owing to the hologram's tremendous pattern-data file size. These holograms also require significant computation; thus the algorithm was implemented on a parallel computer, which improved performance by 2 orders of magnitude. The decompression algorithm was integrated into the Cambridge electron-beam machine's front-end processor. Although this provides much-needed capability, some hardware enhancements will be required in the future to overcome inadequacies in the current front-end processor that result in a lengthy exposure time.
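Run-length encoding pays off here because e-beam pattern data for holograms contains long runs of identical exposure values. A minimal sketch of the RLE stage follows; the LZW pass the paper combines it with is omitted (Python's standard library has no LZW codec), and the byte-oriented format is an assumption.

```python
def rle_encode(data: bytes) -> list[tuple[int, int]]:
    """Run-length encode a byte string as (value, run_length) pairs."""
    runs: list[list[int]] = []
    for b in data:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1       # extend the current run
        else:
            runs.append([b, 1])    # start a new run
    return [tuple(r) for r in runs]

def rle_decode(runs: list[tuple[int, int]]) -> bytes:
    """Invert rle_encode."""
    return b"".join(bytes([v]) * n for v, n in runs)

row = b"\x00" * 500 + b"\xff" * 12 + b"\x00" * 1000  # toy pattern row
assert rle_decode(rle_encode(row)) == row            # lossless round trip
```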
PP-SWAT: A Python-based computing software for efficient multiobjective calibration of SWAT
USDA-ARS?s Scientific Manuscript database
With enhanced data availability, distributed watershed models for large areas with high spatial and temporal resolution are increasingly used to understand water budgets and examine effects of human activities and climate change/variability on water resources. Developing parallel computing software...
Computer Assisted Communication within the Classroom: Interactive Lecturing.
ERIC Educational Resources Information Center
Herr, Richard B.
At the University of Delaware student-teacher communication within the classroom was enhanced through the implementation of a versatile, yet cost efficient, application of computer technology. A single microcomputer at a teacher's station controls a network of student keypad/display stations to provide individual channels of continuous…
Heterogeneous concurrent computing with exportable services
NASA Technical Reports Server (NTRS)
Sunderam, Vaidy
1995-01-01
Heterogeneous concurrent computing, based on the traditional process-oriented model, is approaching its functionality and performance limits. An alternative paradigm, based on the concept of services, supporting data driven computation, and built on a lightweight process infrastructure, is proposed to enhance the functional capabilities and the operational efficiency of heterogeneous network-based concurrent computing. TPVM is an experimental prototype system supporting exportable services, thread-based computation, and remote memory operations that is built as an extension of and an enhancement to the PVM concurrent computing system. TPVM offers a significantly different computing paradigm for network-based computing, while maintaining a close resemblance to the conventional PVM model in the interest of compatibility and ease of transition. Preliminary experiences have demonstrated that the TPVM framework presents a natural yet powerful concurrent programming interface, while being capable of delivering performance improvements of up to thirty percent.
Computer-Supported Instruction in Enhancing the Performance of Dyscalculics
ERIC Educational Resources Information Center
Kumar, S. Praveen; Raja, B. William Dharma
2010-01-01
The use of instructional media is an essential component of teaching-learning process which contributes to the efficiency as well as effectiveness of the teaching-learning process. Computer-supported instruction has a very important role to play as an advanced technological instruction as it employs different instructional techniques like…
Guo, Kai; Zhang, Yong-Liang; Qian, Cheng; Fung, Kin-Hung
2018-04-30
In this work, we demonstrate computationally that electric dipole-quadrupole hybridization (EDQH) could be utilized to enhance plasmonic second-harmonic generation (SHG) efficiency. To this end, we construct T-shaped plasmonic heterodimers consisting of a short and a long gold nanorod and study them with finite-element-method simulations. By controlling the strength of capacitive coupling between the two gold nanorods, we explore the effect of EDQH evolution on the SHG process, including the SHG efficiency enhancement, the corresponding near-field distribution, and the far-field radiation pattern. Simulation results demonstrate that EDQH could enhance the SHG efficiency by a factor >100 in comparison with that achieved by an isolated gold nanorod. Additionally, the far-field pattern of the SHG could be adjusted beyond the well-known quadrupolar distribution, confirming that EDQH plays an important role in the SHG process.
Computer assisted audit techniques for UNIX (UNIX-CAATS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polk, W.T.
1991-12-31
Federal and DOE regulations impose specific requirements for internal controls of computer systems. These controls include adequate separation of duties and sufficient controls for access of system and data. The DOE Inspector General's Office has the responsibility to examine internal controls, as well as efficient use of computer system resources. As a result, DOE supported NIST development of computer assisted audit techniques to examine BSD UNIX computers (UNIX-CAATS). These systems were selected due to the increasing number of UNIX workstations in use within DOE. This paper describes the design and development of these techniques, as well as the results of testing at NIST and the first audit at a DOE site. UNIX-CAATS consists of tools which examine security of passwords, file systems, and network access. In addition, a tool was developed to examine efficiency of disk utilization. Test results at NIST indicated inadequate password management, as well as weak network resource controls. File system security was considered adequate. Audit results at a DOE site indicated weak password management and inefficient disk utilization. During the audit, we also found improvements to UNIX-CAATS were needed when applied to large systems. NIST plans to enhance the techniques developed for DOE/IG in future work. This future work would leverage currently available tools, along with needed enhancements. These enhancements would enable DOE/IG to audit large systems, such as supercomputers.
Sedlár, Drahomír; Potomková, Jarmila; Rehorová, Jarmila; Seckár, Pavel; Sukopová, Vera
2003-11-01
Information explosion and globalization make great demands on keeping pace with the new trends in the healthcare sector. The contemporary level of computer and information literacy among most health care professionals in the Teaching Hospital Olomouc (Czech Republic) is not satisfactory for efficient exploitation of modern information technology in diagnostics, therapy and nursing. The present contribution describes the application of two basic problem solving techniques (brainstorming, SWOT analysis) to develop a project aimed at information literacy enhancement.
NASA Technical Reports Server (NTRS)
Dunbar, P. M.; Hauser, J. R.
1976-01-01
Various mechanisms which limit the conversion efficiency of silicon solar cells were studied. The effects of changes in solar cell geometry such as layer thickness on performance were examined. The effects of various antireflecting layers were also examined. It was found that any single film antireflecting layer results in a significant surface loss of photons. The use of surface texturing techniques or low loss antireflecting layers can enhance by several percentage points the conversion efficiency of silicon cells. The basic differences between n(+)-p-p(+) and p(+)-n-n(+) cells are treated. A significant part of the study was devoted to the importance of surface region lifetime and heavy doping effects on efficiency. Heavy doping bandgap reduction effects are enhanced by low surface layer lifetimes, and conversely, the reduction in solar cell efficiency due to low surface layer lifetime is further enhanced by heavy doping effects. A series of computer studies is reported which seeks to determine the best cell structure and doping levels for maximum efficiency.
Enhancing battery efficiency for pervasive health-monitoring systems based on electronic textiles.
Zheng, Nenggan; Wu, Zhaohui; Lin, Man; Yang, Laurence Tianruo
2010-03-01
Electronic textiles are regarded as one of the most important computation platforms for future computer-assisted health-monitoring applications. In these novel systems, multiple batteries are used in order to prolong their operational lifetime, which is a significant metric for system usability. However, due to the nonlinear features of batteries, computing systems with multiple batteries cannot achieve the same battery efficiency as those powered by a monolithic battery of equal capacity. In this paper, we propose an algorithm aiming to maximize battery efficiency globally for computer-assisted health-care systems with multiple batteries. Based on an accurate analytical battery model, the concept of weighted battery fatigue degree is introduced and a novel battery-scheduling algorithm called predicted weighted fatigue degree least first (PWFDLF) is developed. We also discuss two alternatives examined while developing PWFDLF: a weighted round-robin (WRR) policy and a greedy algorithm that achieves the highest local battery efficiency, which reduces to the sequential discharging policy. Evaluation results show that a considerable improvement in battery efficiency can be obtained by PWFDLF under various battery configurations and current profiles compared to conventional sequential and WRR discharging policies.
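The scheduling idea reduces to "always discharge the battery whose predicted weighted fatigue degree is lowest." A schematic sketch follows; the fatigue predictor is a placeholder (the paper derives it from an accurate analytical battery model), and all field names are illustrative.

```python
def predict_fatigue(battery, load):
    # placeholder predictor: fatigue grows with charge drawn relative to
    # capacity, scaled by a battery-specific weight (an assumption, not
    # the paper's analytical battery model)
    return battery["weight"] * (battery["drawn"] + load) / battery["capacity"]

def pick_battery(batteries, load):
    """Predicted weighted fatigue degree least first (PWFDLF), schematically:
    serve the next load from the battery with the lowest predicted fatigue."""
    return min(batteries, key=lambda b: predict_fatigue(b, load))

batteries = [
    {"name": "A", "capacity": 100.0, "drawn": 30.0, "weight": 1.0},
    {"name": "B", "capacity": 80.0, "drawn": 10.0, "weight": 1.2},
]
chosen = pick_battery(batteries, load=5.0)
chosen["drawn"] += 5.0  # account for the delivered charge
```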
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacob, Reed B.; Ding, Feng; Ye, Dongmei
Organophosphates are widely used for peaceful (agriculture) and military purposes (chemical warfare agents). The extraordinary toxicity of organophosphates and the risk of deployment make it critical to develop means for their rapid and efficient deactivation. Organophosphate hydrolase (OPH) already plays an important role in organophosphate remediation, but is insufficient for therapeutic or prophylactic purposes primarily due to low substrate affinity. Current efforts focus on directly modifying the active site to differentiate substrate specificity and increase catalytic activity. Here, we present a novel strategy for enhancing the general catalytic efficiency of OPH through computational redesign of the residues that are allosterically coupled to the active site and validated our design by mutagenesis. Specifically, we identify five such hot-spot residues for allosteric regulation and assay these mutants for hydrolysis activity against paraoxon, a chemical-weapons simulant. A high percentage of the predicted mutants exhibit enhanced activity over wild-type (kcat = 16.63 s^-1), such as T199I/T54I (899.5 s^-1) and C227V/T199I/T54I (848 s^-1), while the Km remains relatively unchanged in our high-throughput cell-free expression system. Further computational studies of protein dynamics reveal four distinct distal regions coupled to the active site that display significant changes in conformation dynamics upon these identified mutations. These results validate a computational design method that is both efficient and easily adapted as a general procedure for enzymatic enhancement.
Image detection and compression for memory efficient system analysis
NASA Astrophysics Data System (ADS)
Bayraktar, Mustafa
2015-02-01
The advances in digital signal processing have been progressing towards efficient use of memory and processing. Both factors can be exploited by storing only the minimum information needed to represent an image, which speeds up later processing. The Scale Invariant Feature Transform (SIFT) can be utilized to estimate and retrieve an image. In computer vision, SIFT can be implemented to recognize an image by comparing its key features against saved SIFT keypoint descriptors. The main advantage of SIFT is that it not only removes redundant information from an image but also reduces the key points by matching their orientation and adding them together in different windows of the image [1]. Another key property of this approach is that it works more efficiently on highly contrasted images, because its design is based on collecting key points from the contrast shades of the image.
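As a concrete illustration of recognition by SIFT keypoint matching (using OpenCV's stock implementation, not anything from this paper; the filenames and the 0.75 ratio-test threshold are assumptions):

```python
import cv2

query = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)
scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_q, des_q = sift.detectAndCompute(query, None)  # keypoints + descriptors
kp_s, des_s = sift.detectAndCompute(scene, None)

# match each query descriptor to its two nearest scene descriptors and
# keep only matches that pass Lowe's ratio test
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des_q, des_s, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} confident keypoint matches")
```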
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bryan, Frank; Dennis, John; MacCready, Parker
This project aimed to improve long-term global climate simulations by resolving and enhancing the representation of the processes involved in the cycling of freshwater through estuaries and coastal regions. This was a collaborative multi-institution project consisting of physical oceanographers, climate model developers, and computational scientists. It specifically targeted the DOE objectives of advancing simulation and predictive capability of climate models through improvements in resolution and physical process representation. The main computational objectives were: 1. To develop computationally efficient, but physically based, parameterizations of estuary and continental shelf mixing processes for use in an Earth System Model (CESM). 2. To develop a two-way nested regional modeling framework in order to dynamically downscale the climate response of particular coastal ocean regions and to upscale the impact of the regional coastal processes to the global climate in an Earth System Model (CESM). 3. To develop computational infrastructure to enhance the efficiency of data transfer between specific sources and destinations, i.e., a point-to-point communication capability (used in objective 1), within POP, the ocean component of CESM.
Accurate Monotonicity - Preserving Schemes With Runge-Kutta Time Stepping
NASA Technical Reports Server (NTRS)
Suresh, A.; Huynh, H. T.
1997-01-01
A new class of high-order monotonicity-preserving schemes for the numerical solution of conservation laws is presented. The interface value in these schemes is obtained by limiting a higher-order polynomial reconstruction. The limiting is designed to preserve accuracy near extrema and to work well with Runge-Kutta time stepping. Computational efficiency is enhanced by a simple test that determines whether the limiting procedure is needed. These schemes are shown to preserve monotonicity for linear advection in one dimension; numerical experiments with the Euler equations also confirm their high accuracy, good shock resolution, and computational efficiency.
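To make the mechanism concrete, here is a heavily simplified sketch: form the unlimited fifth-order interface value, run a cheap test, and limit only where needed. The clip to the neighboring cell averages is a crude stand-in for the paper's full monotonicity-preserving limiter (which, unlike this clip, preserves accuracy near extrema).

```python
import numpy as np

def interface_values(u):
    """Fifth-order interface values u_{j+1/2} for cells j = 2..n-3, with
    limiting applied only where a simple monotonicity test fails."""
    # unlimited fifth-order upwind reconstruction at the interface
    ul = (2*u[:-4] - 13*u[1:-3] + 47*u[2:-2] + 27*u[3:-1] - 3*u[4:]) / 60.0
    lo = np.minimum(u[2:-2], u[3:-1])
    hi = np.maximum(u[2:-2], u[3:-1])
    # cheap test: skip the limiter where the value already lies between
    # the adjacent cell averages
    needs_limit = (ul < lo) | (ul > hi)
    return np.where(needs_limit, np.clip(ul, lo, hi), ul)
```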
Khashan, S. A.; Alazzam, A.; Furlani, E. P.
2014-01-01
A microfluidic design is proposed for realizing greatly enhanced separation of magnetically-labeled bioparticles using integrated soft-magnetic elements. The elements are fixed and intersect the carrier fluid (flow-invasive) with their length transverse to the flow. They are magnetized using a bias field to produce a particle capture force. Multiple stair-step elements are used to provide efficient capture throughout the entire flow channel. This is in contrast to conventional systems wherein the elements are integrated into the walls of the channel, which restricts efficient capture to limited regions of the channel due to the short range nature of the magnetic force. This severely limits the channel size and hence throughput. Flow-invasive elements overcome this limitation and enable microfluidic bioseparation systems with superior scalability. This enhanced functionality is quantified for the first time using a computational model that accounts for the dominant mechanisms of particle transport including fully-coupled particle-fluid momentum transfer. PMID:24931437
Remote sensing of land-based voids using computer enhanced infrared thermography
NASA Astrophysics Data System (ADS)
Weil, Gary J.
1989-10-01
Experiments are described in which computer-enhanced infrared thermography techniques are used to detect and describe subsurface land-based voids, such as voids surrounding buried utility pipes, voids in concrete structures such as airport taxiways, abandoned buried utility storage tanks, and caves and underground shelters. Infrared thermography also helps to evaluate bridge deck systems, highway pavements, and garage concrete. The IR thermography techniques make it possible to survey large areas quickly and efficiently. The paper also surveys the advantages and limitations of thermographic testing in comparison with other forms of NDT.
A Study on the Methods of Assessment and Strategy of Knowledge Sharing in Computer Course
ERIC Educational Resources Information Center
Chan, Pat P. W.
2014-01-01
With the advancement of information and communication technology, collaboration and knowledge sharing through technology is facilitated which enhances the learning process and improves the learning efficiency. The purpose of this paper is to review the methods of assessment and strategy of collaboration and knowledge sharing in a computer course,…
ERIC Educational Resources Information Center
Schoppek, Wolfgang; Tulis, Maria
2010-01-01
The fluency of basic arithmetical operations is a precondition for mathematical problem solving. However, the training of skills plays a minor role in contemporary mathematics instruction. The authors proposed individualization of practice as a means to improve its efficiency, so that the time spent with the training of skills is minimized. As a…
Recursive Newton-Euler formulation of manipulator dynamics
NASA Technical Reports Server (NTRS)
Nasser, M. G.
1989-01-01
A recursive Newton-Euler procedure is presented for the formulation and solution of manipulator dynamical equations. The procedure includes rotational and translational joints and a topological tree. This model was verified analytically using a planar two-link manipulator. Also, the model was tested numerically against the Walker-Orin model using the Shuttle Remote Manipulator System data. The hinge accelerations obtained from both models were identical. The computational requirements of the model vary linearly with the number of joints. The computational efficiency of this method exceeds that of Walker-Orin methods. This procedure may be viewed as a considerable generalization of Armstrong's method. A six-by-six formulation is adopted which enhances both the computational efficiency and simplicity of the model.
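For intuition, the planar two-link verification case reduces to a tiny instance of the recursive structure: an outward sweep for kinematics, then an inward sweep for joint torques. The sketch below assumes point masses at the link tips and revolute joints only; the paper's formulation is far more general (translational joints, topological trees).

```python
import numpy as np

def rne_planar_two_link(q, qd, qdd, m=(1.0, 1.0), l=(1.0, 1.0), g=9.81):
    """Inverse dynamics of a planar 2-link arm with point masses at the tips."""
    th1, th12 = q[0], q[0] + q[1]
    w1, w12 = qd[0], qd[0] + qd[1]
    a1, a12 = qdd[0], qdd[0] + qdd[1]
    e1 = np.array([np.cos(th1), np.sin(th1)])     # unit vector along link 1
    e12 = np.array([np.cos(th12), np.sin(th12)])  # unit vector along link 2
    n1 = np.array([-e1[1], e1[0]])
    n12 = np.array([-e12[1], e12[0]])
    # outward recursion: accelerations of the two point masses
    acc1 = l[0] * (a1 * n1 - w1**2 * e1)
    acc2 = acc1 + l[1] * (a12 * n12 - w12**2 * e12)
    grav = np.array([0.0, g])
    F1 = m[0] * (acc1 + grav)   # force each mass must receive
    F2 = m[1] * (acc2 + grav)
    # inward recursion: joint torques, tip to base
    cross_z = lambda r, f: r[0] * f[1] - r[1] * f[0]
    p1 = l[0] * e1
    p2 = p1 + l[1] * e12
    tau2 = cross_z(p2 - p1, F2)
    tau1 = cross_z(p1, F1) + cross_z(p2, F2)
    return np.array([tau1, tau2])
```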
An Analysis of Performance Enhancement Techniques for Overset Grid Applications
NASA Technical Reports Server (NTRS)
Djomehri, J. J.; Biswas, R.; Potsdam, M.; Strawn, R. C.; Biegel, Bryan (Technical Monitor)
2002-01-01
The overset grid methodology has significantly reduced time-to-solution of high-fidelity computational fluid dynamics (CFD) simulations about complex aerospace configurations. The solution process resolves the geometrical complexity of the problem domain by using separately generated but overlapping structured discretization grids that periodically exchange information through interpolation. However, high performance computations of such large-scale realistic applications must be handled efficiently on state-of-the-art parallel supercomputers. This paper analyzes the effects of various performance enhancement techniques on the parallel efficiency of an overset grid Navier-Stokes CFD application running on an SGI Origin2000 machine. Specifically, the role of asynchronous communication, grid splitting, and grid grouping strategies are presented and discussed. Results indicate that performance depends critically on the level of latency hiding and the quality of load balancing across the processors.
Change Detection of Mobile LIDAR Data Using Cloud Computing
NASA Astrophysics Data System (ADS)
Liu, Kun; Boehm, Jan; Alis, Christian
2016-06-01
Change detection has long been a challenging problem although a lot of research has been conducted in different fields such as remote sensing and photogrammetry, computer vision, and robotics. In this paper, we blend a voxel grid and Apache Spark together to propose an efficient method to address the problem in the context of big data. A voxel grid is a regular geometric representation consisting of voxels of the same size, which suits parallel computation well. Apache Spark is a popular distributed parallel computing platform which provides fault tolerance and in-memory caching. These features can significantly enhance the performance of Apache Spark and result in an efficient and robust implementation. In our experiments, both synthetic and real point cloud data are employed to demonstrate the quality of our method.
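A minimal sketch of how voxel keys pair with Spark's key-based operators for occupancy differencing follows; the XYZ text input, the 0.5 m voxel size, and the simple added/removed logic are assumptions, and the paper's pipeline is more involved.

```python
from pyspark import SparkContext

VOXEL = 0.5  # voxel edge length in metres (assumption)

def voxel_key(p):
    return (int(p[0] // VOXEL), int(p[1] // VOXEL), int(p[2] // VOXEL))

sc = SparkContext(appName="voxel-change-detection")

def occupancy(path):
    # one "x y z" point per line -> (voxel, point count) pairs
    return (sc.textFile(path)
              .map(lambda line: tuple(map(float, line.split()[:3])))
              .map(lambda p: (voxel_key(p), 1))
              .reduceByKey(lambda a, b: a + b))

occ1, occ2 = occupancy("epoch1.xyz"), occupancy("epoch2.xyz")
removed = occ1.subtractByKey(occ2)  # voxels occupied only in epoch 1
added = occ2.subtractByKey(occ1)    # voxels occupied only in epoch 2
print(removed.count(), "voxels removed;", added.count(), "voxels added")
```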
Enhanced electrocaloric cooling in ferroelectric single crystals by electric field reversal
NASA Astrophysics Data System (ADS)
Ma, Yang-Bin; Novak, Nikola; Koruza, Jurij; Yang, Tongqing; Albe, Karsten; Xu, Bai-Xiang
2016-09-01
An improved thermodynamic cycle is validated in ferroelectric single crystals, where the cooling effect of an electrocaloric refrigerant is enhanced by applying a reversed electric field. In contrast to the conventional adiabatic heating or cooling by on-off cycles of the external electric field, applying a reversed field significantly improves the cooling efficiency, since the variation in configurational entropy is increased. By comparing results from computer simulations using Monte Carlo algorithms and experiments using direct electrocaloric measurements, we show that the electrocaloric cooling efficiency can be enhanced by more than 20% in standard ferroelectrics and also relaxor ferroelectrics, like Pb(Mg1/3Nb2/3)0.71Ti0.29O3.
Project : semi-autonomous parking for enhanced safety and efficiency.
DOT National Transportation Integrated Search
2016-04-01
Index coding, a coding formulation traditionally analyzed in the theoretical computer science and : information theory communities, has received considerable attention in recent years due to its value in : wireless communications and networking probl...
PRESAGE: Protecting Structured Address Generation against Soft Errors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram
Modern computer scaling trends in pursuit of larger component counts and power efficiency have, unfortunately, led to less reliable hardware and consequently soft errors escaping into application data ("silent data corruptions"). Techniques to enhance system resilience hinge on the availability of efficient error detectors that have high detection rates, low false positive rates, and low computational overhead. Unfortunately, efficient detectors to detect faults during address generation (to index large arrays) have not been widely researched. We present a novel lightweight compiler-driven technique called PRESAGE for detecting bit-flips affecting structured address computations. A key insight underlying PRESAGE is that any address computation scheme that flows an already incurred error is better than a scheme that corrupts one particular array access but otherwise (falsely) appears to compute perfectly. Enabling the flow of errors allows one to situate detectors at loop exit points, and helps turn silent corruptions into easily detectable error situations. Our experiments using the PolyBench benchmark suite indicate that PRESAGE-based error detectors have a high error-detection rate while incurring low overheads.
Solving search problems by strongly simulating quantum circuits
Johnson, T. H.; Biamonte, J. D.; Clark, S. R.; Jaksch, D.
2013-01-01
Simulating quantum circuits using classical computers lets us analyse the inner workings of quantum algorithms. The most complete type of simulation, strong simulation, is believed to be generally inefficient. Nevertheless, several efficient strong simulation techniques are known for restricted families of quantum circuits and we develop an additional technique in this article. Further, we show that strong simulation algorithms perform another fundamental task: solving search problems. Efficient strong simulation techniques allow solutions to a class of search problems to be counted and found efficiently. This enhances the utility of strong simulation methods, known or yet to be discovered, and extends the class of search problems known to be efficiently simulable. Relating strong simulation to search problems also bounds the computational power of efficiently strongly simulable circuits; if they could solve all problems in P this would imply that all problems in NP and #P could be solved in polynomial time. PMID:23390585
Pan, Jui-Wen; Tsai, Pei-Jung; Chang, Kao-Der; Chang, Yung-Yuan
2013-03-01
In this paper, we propose a method to analyze the light extraction efficiency (LEE) enhancement of a nanopatterned sapphire substrate (NPSS) light-emitting diode (LED) by comparing wave optics software with ray optics software. Finite-difference time-domain (FDTD) simulations represent the wave optics software and Light Tools (LTs) simulations represent the ray optics software. First, we find the trends of and an optimal solution for the LEE enhancement when 2D-FDTD simulations are used to save on simulation time and computational memory. The rigorous coupled-wave analysis method is utilized to explain the trend we get from the 2D-FDTD algorithm. The optimal solution is then applied in 3D-FDTD and LTs simulations. The results are similar, and the difference in LEE enhancement between the two simulations does not exceed 8.5% in the small LED chip area. More than 10^4 times computational memory is saved during the LTs simulation in comparison to the 3D-FDTD simulation. Moreover, LEE enhancement from the side of the LED can be obtained in the LTs simulation. An actual-size NPSS LED is simulated using LTs. The results show a more than 307% improvement in the total LEE enhancement of the NPSS LED with the optimal solution compared to the conventional LED.
Ion flux through membrane channels--an enhanced algorithm for the Poisson-Nernst-Planck model.
Dyrka, Witold; Augousti, Andy T; Kotulska, Malgorzata
2008-09-01
A novel algorithmic scheme for numerical solution of the 3D Poisson-Nernst-Planck model is proposed. The algorithmic improvements are universal and independent of the detailed physical model. They include three major steps: an adjustable gradient-based step value, an adjustable relaxation coefficient, and an optimized segmentation of the modeled space. The enhanced algorithm significantly accelerates the speed of computation and reduces the computational demands. The theoretical model was tested on a regular artificial channel and validated on a real protein channel, alpha-hemolysin, proving its efficiency. (c) 2008 Wiley Periodicals, Inc.
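To illustrate the flavor of an adjustable relaxation coefficient in this kind of iterative solver (a loose analogue only; the actual scheme also adapts a gradient-based step value and segments the modeled space), consider a weighted-Jacobi Poisson relaxation whose coefficient backs off whenever the residual grows:

```python
import numpy as np

def relax_poisson(phi, rho, h, omega=0.8, sweeps=20000, tol=1e-8):
    """Weighted-Jacobi relaxation for -laplacian(phi) = rho on a uniform
    3-D grid with fixed boundaries and a crude adaptive relaxation factor."""
    prev_res = np.inf
    for _ in range(sweeps):
        nb = (phi[2:, 1:-1, 1:-1] + phi[:-2, 1:-1, 1:-1] +
              phi[1:-1, 2:, 1:-1] + phi[1:-1, :-2, 1:-1] +
              phi[1:-1, 1:-1, 2:] + phi[1:-1, 1:-1, :-2])
        jacobi = (nb + h * h * rho[1:-1, 1:-1, 1:-1]) / 6.0
        res = np.max(np.abs(jacobi - phi[1:-1, 1:-1, 1:-1]))
        phi[1:-1, 1:-1, 1:-1] += omega * (jacobi - phi[1:-1, 1:-1, 1:-1])
        if res < tol:
            break
        # adjustable relaxation: back off if the residual grew, creep up otherwise
        omega = max(0.5 * omega, 0.1) if res > prev_res else min(1.02 * omega, 1.0)
        prev_res = res
    return phi
```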
A Human Factors Framework for Payload Display Design
NASA Technical Reports Server (NTRS)
Dunn, Mariea C.; Hutchinson, Sonya L.
1998-01-01
During missions to space, one charge of the astronaut crew is to conduct research experiments. These experiments, referred to as payloads, typically are controlled by computers. Crewmembers interact with payload computers by using visual interfaces or displays. To enhance the safety, productivity, and efficiency of crewmember interaction with payload displays, particular attention must be paid to the usability of these displays. Enhancing display usability requires adoption of a design process that incorporates human factors engineering principles at each stage. This paper presents a proposed framework for incorporating human factors engineering principles into the payload display design process.
NASA Astrophysics Data System (ADS)
Schwörer, Magnus; Lorenzen, Konstantin; Mathias, Gerald; Tavan, Paul
2015-03-01
Recently, a novel approach to hybrid quantum mechanics/molecular mechanics (QM/MM) molecular dynamics (MD) simulations has been suggested [Schwörer et al., J. Chem. Phys. 138, 244103 (2013)]. Here, the forces acting on the atoms are calculated by grid-based density functional theory (DFT) for a solute molecule and by a polarizable molecular mechanics (PMM) force field for a large solvent environment composed of several 10^3-10^5 molecules as negative gradients of a DFT/PMM hybrid Hamiltonian. The electrostatic interactions are efficiently described by a hierarchical fast multipole method (FMM). Adopting recent progress of this FMM technique [Lorenzen et al., J. Chem. Theory Comput. 10, 3244 (2014)], which particularly entails a strictly linear scaling of the computational effort with the system size, and adapting this revised FMM approach to the computation of the interactions between the DFT and PMM fragments of a simulation system, here, we show how one can further enhance the efficiency and accuracy of such DFT/PMM-MD simulations. The resulting gain of total performance, as measured for alanine dipeptide (DFT) embedded in water (PMM) by the product of the gains in efficiency and accuracy, amounts to about one order of magnitude. We also demonstrate that the jointly parallelized implementation of the DFT and PMM-MD parts of the computation enables the efficient use of high-performance computing systems. The associated software is available online.
Computer-intensive simulation of solid-state NMR experiments using SIMPSON.
Tošner, Zdeněk; Andersen, Rasmus; Stevensson, Baltzar; Edén, Mattias; Nielsen, Niels Chr; Vosegaard, Thomas
2014-09-01
Conducting large-scale solid-state NMR simulations requires fast computer software, potentially in combination with efficient computational resources, to complete within a reasonable time frame. Such simulations may involve large spin systems, multiple-parameter fitting of experimental spectra, or multiple-pulse experiment design using parameter scans, non-linear optimization, or optimal control procedures. To efficiently accommodate such simulations, we here present an improved version of the widely distributed open-source SIMPSON NMR simulation software package adapted to contemporary high-performance hardware setups. The software is optimized for fast performance on standard stand-alone computers, multi-core processors, and large clusters of identical nodes. We describe the novel features for fast computation, including internal matrix manipulations, propagator setups, and acquisition strategies. For efficient calculation of powder averages, we implemented the interpolation method of Alderman, Solum, and Grant, as well as the recently introduced fast Wigner transform interpolation technique. The potential of the optimal control toolbox is greatly enhanced by higher-precision gradients in combination with the efficient optimization algorithm known as limited-memory Broyden-Fletcher-Goldfarb-Shanno. In addition, advanced parallelization can be used in all types of calculations, providing significant time reductions. SIMPSON thus reflects current knowledge in the field of numerical simulations of solid-state NMR experiments. The efficiency and novel features are demonstrated on representative simulations. Copyright © 2014 Elsevier Inc. All rights reserved.
Performance Enhancement Strategies for Multi-Block Overset Grid CFD Applications
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Biswas, Rupak
2003-01-01
The overset grid methodology has significantly reduced time-to-solution of high-fidelity computational fluid dynamics (CFD) simulations about complex aerospace configurations. The solution process resolves the geometrical complexity of the problem domain by using separately generated but overlapping structured discretization grids that periodically exchange information through interpolation. However, high performance computations of such large-scale realistic applications must be handled efficiently on state-of-the-art parallel supercomputers. This paper analyzes the effects of various performance enhancement strategies on the parallel efficiency of an overset grid Navier-Stokes CFD application running on an SGI Origin2000 machine. Specifically, the role of asynchronous communication, grid splitting, and grid grouping strategies are presented and discussed. Details of a sophisticated graph partitioning technique for grid grouping are also provided. Results indicate that performance depends critically on the level of latency hiding and the quality of load balancing across the processors.
Navier-Stokes calculations for DFVLR F5-wing in wind tunnel using Runge-Kutta time-stepping scheme
NASA Technical Reports Server (NTRS)
Vatsa, V. N.; Wedan, B. W.
1988-01-01
A three-dimensional Navier-Stokes code using an explicit multistage Runge-Kutta type of time-stepping scheme is used for solving the transonic flow past a finite wing mounted inside a wind tunnel. Flow past the same wing in free air was also computed to assess the effect of wind-tunnel walls on such flows. Numerical efficiency is enhanced through vectorization of the computer code. A Cyber 205 computer with 32 million words of internal memory was used for these computations.
Using computer graphics to enhance astronaut and systems safety
NASA Technical Reports Server (NTRS)
Brown, J. W.
1985-01-01
Computer graphics is being employed at the NASA Johnson Space Center as a tool to perform rapid, efficient and economical analyses for man-machine integration, flight operations development and systems engineering. The Operator Station Design System (OSDS), a computer-based facility featuring a highly flexible and versatile interactive software package, PLAID, is described. This unique evaluation tool, with its expanding data base of Space Shuttle elements, various payloads, experiments, crew equipment and man models, supports a multitude of technical evaluations, including spacecraft and workstation layout, definition of astronaut visual access, flight techniques development, cargo integration and crew training. As OSDS is being applied to the Space Shuttle, Orbiter payloads (including the European Space Agency's Spacelab) and future space vehicles and stations, astronaut and systems safety are being enhanced. Typical OSDS examples are presented. By performing physical and operational evaluations during early conceptual phases, supporting systems verification for flight readiness, and applying its capabilities to real-time mission support, the OSDS provides the wherewithal to satisfy a growing need of the current and future space programs for efficient, economical analyses.
An enhanced mobile-healthcare emergency system based on extended chaotic maps.
Lee, Cheng-Chi; Hsu, Che-Wei; Lai, Yan-Ming; Vasilakos, Athanasios
2013-10-01
Mobile Healthcare (m-Healthcare) systems, namely smartphone applications of pervasive computing that utilize wireless body sensor networks (BSNs), have recently been proposed to provide smartphone users with health monitoring services and have received great attention. An m-Healthcare system with flaws, however, may leak out the smartphone user's personal information and cause security, privacy preservation, or user anonymity problems. In 2012, Lu et al. proposed a secure and privacy-preserving opportunistic computing (SPOC) framework for mobile-Healthcare emergency. The brilliant SPOC framework can opportunistically gather resources on the smartphone such as computing power and energy to process the computing-intensive personal health information (PHI) in case of an m-Healthcare emergency with minimal privacy disclosure. To balance between the hazard of PHI privacy disclosure and the necessity of PHI processing and transmission in an m-Healthcare emergency, in their SPOC framework, Lu et al. introduced an efficient user-centric privacy access control system which they built on the basis of an attribute-based access control mechanism and a new privacy-preserving scalar product computation (PPSPC) technique. However, we found that Lu et al.'s protocol still has some security flaws, particularly in user anonymity and mutual authentication. To fix those problems and further enhance the computational efficiency of Lu et al.'s protocol, in this article, the authors present an improved mobile-Healthcare emergency system based on extended chaotic maps. The new system is capable of not only providing flawless user anonymity and mutual authentication but also reducing the computation cost.
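Protocols in the extended chaotic-map family typically rest on the semigroup property of Chebyshev polynomials, T_r(T_s(x)) = T_rs(x), which enables Diffie-Hellman-style key agreement. A minimal numeric check of that primitive follows (an illustration only, not Lee et al.'s full protocol; real schemes use large parameters over finite fields):

```python
import math

def chebyshev(n: int, x: float) -> float:
    """T_n(x) = cos(n * arccos(x)) for x in [-1, 1]."""
    return math.cos(n * math.acos(x))

x = 0.3        # public seed
r, s = 11, 7   # the two parties' private keys (toy sizes)
shared_a = chebyshev(r, chebyshev(s, x))  # party A computes T_r(T_s(x))
shared_b = chebyshev(s, chebyshev(r, x))  # party B computes T_s(T_r(x))
assert abs(shared_a - shared_b) < 1e-9    # both equal T_{r*s}(x)
```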
Northwest Trajectory Analysis Capability: A Platform for Enhancing Computational Biophysics Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peterson, Elena S.; Stephan, Eric G.; Corrigan, Abigail L.
2008-07-30
As computational resources continue to increase, the ability of computational simulations to effectively complement, and in some cases replace, experimentation in scientific exploration also increases. Today, large-scale simulations are recognized as an effective tool for scientific exploration in many disciplines including chemistry and biology. A natural side effect of this trend has been the need for an increasingly complex analytical environment. In this paper, we describe Northwest Trajectory Analysis Capability (NTRAC), an analytical software suite developed to enhance the efficiency of computational biophysics analyses. Our strategy is to layer higher-level services and introduce improved tools within the user's familiar environment without preventing researchers from using traditional tools and methods. Our desire is to share these experiences to serve as an example for effectively analyzing data-intensive, large-scale simulation data.
NASA Astrophysics Data System (ADS)
Zhang, Ling; Nan, Zhuotong; Liang, Xu; Xu, Yi; Hernández, Felipe; Li, Lianxia
2018-03-01
Although process-based distributed hydrological models (PDHMs) have evolved rapidly over the last few decades, their extensive application is still challenged by computational expense. This study attempted, for the first time, to apply the numerically efficient MacCormack algorithm to overland flow routing in a representative high-spatial-resolution PDHM, i.e., the distributed hydrology-soil-vegetation model (DHSVM), in order to improve its computational efficiency. The analytical verification indicates that both the semi and full versions of the MacCormack scheme exhibit robust numerical stability and are more computationally efficient than the conventional explicit linear scheme. The full version outperforms the semi version in terms of simulation accuracy when the same time step is adopted. The semi-MacCormack scheme was implemented into DHSVM (version 3.1.2) to solve the kinematic wave equations for overland flow routing. The performance and practicality of the enhanced DHSVM-MacCormack model were assessed by performing two groups of modeling experiments in the Mercer Creek watershed, a small urban catchment near Bellevue, Washington. The experiments show that DHSVM-MacCormack can considerably improve the computational efficiency without compromising the simulation accuracy of the original DHSVM model. More specifically, with the same computational environment and model settings, the computational time required by DHSVM-MacCormack can be reduced to several dozen minutes for a simulation period of three months (in contrast with a day and a half for the original DHSVM model) without noticeable sacrifice of accuracy. The MacCormack scheme proves to be applicable to overland flow routing in DHSVM, which implies that it can be coupled into other PDHMs to either significantly improve their computational efficiency or make kinematic wave routing computationally feasible for high-resolution modeling.
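As a sketch of the scheme's structure (not DHSVM-MacCormack itself), the predictor-corrector pair for the 1-D kinematic wave equation dh/dt + d(alpha*h^m)/dx = r looks like the following; alpha, m, and the fixed boundary treatment are assumptions.

```python
import numpy as np

def maccormack_step(h, dt, dx, alpha=1.0, m=5.0/3.0, r=0.0):
    """One MacCormack step for dh/dt + d(alpha*h**m)/dx = r in 1-D;
    boundary cells are held fixed for simplicity."""
    q = alpha * h**m
    # predictor: forward difference in space
    h_pred = h.copy()
    h_pred[:-1] = h[:-1] - dt / dx * (q[1:] - q[:-1]) + dt * r
    q_pred = alpha * h_pred**m
    # corrector: backward difference on the predicted state, then average
    h_new = h.copy()
    h_new[1:] = 0.5 * (h[1:] + h_pred[1:]
                       - dt / dx * (q_pred[1:] - q_pred[:-1]) + dt * r)
    return h_new
```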
Crowd Sensing-Enabling Security Service Recommendation for Social Fog Computing Systems
Wu, Jun; Su, Zhou; Li, Jianhua
2017-01-01
Fog computing, shifting intelligence and resources from the remote cloud to edge networks, has the potential of providing low-latency for the communication from sensing data sources to users. For the objects from the Internet of Things (IoT) to the cloud, it is a new trend that the objects establish social-like relationships with each other, which efficiently brings the benefits of developed sociality to a complex environment. As fog service become more sophisticated, it will become more convenient for fog users to share their own services, resources, and data via social networks. Meanwhile, the efficient social organization can enable more flexible, secure, and collaborative networking. Aforementioned advantages make the social network a potential architecture for fog computing systems. In this paper, we design an architecture for social fog computing, in which the services of fog are provisioned based on “friend” relationships. To the best of our knowledge, this is the first attempt at an organized fog computing system-based social model. Meanwhile, social networking enhances the complexity and security risks of fog computing services, creating difficulties of security service recommendations in social fog computing. To address this, we propose a novel crowd sensing-enabling security service provisioning method to recommend security services accurately in social fog computing systems. Simulation results show the feasibilities and efficiency of the crowd sensing-enabling security service recommendation method for social fog computing systems. PMID:28758943
IEEE TRANSACTIONS ON CYBERNETICS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Craig R. Rieger; David H. Scheidt; William D. Smart
2014-11-01
MODERN societies depend on complex and critical infrastructures for energy, transportation, sustenance, medical care, emergency response, communications, and security. As computers, automation, and information technology (IT) have advanced, these technologies have been exploited to enhance the efficiency of operating the processes that make up these infrastructures.
Supplementary Computer Generated Cueing to Enhance Air Traffic Controller Efficiency
2013-03-01
…assess the complexity of air traffic control (Mogford, Guttman, Morrow, & Kopardekar, 1995; Laudeman, Shelden, Branstrom, & Brasil, 1998).
Power Efficient Hardware Architecture of SHA-1 Algorithm for Trusted Mobile Computing
NASA Astrophysics Data System (ADS)
Kim, Mooseop; Ryou, Jaecheol
The Trusted Mobile Platform (TMP) is developed and promoted by the Trusted Computing Group (TCG), an industry standards body working to enhance the security of the mobile computing environment. The built-in SHA-1 engine in TMP is one of the most important circuit blocks and contributes to the performance of the whole platform, because it serves as a key primitive supporting platform integrity and command authentication. Mobile platforms have very stringent limitations with respect to available power, physical circuit area, and cost, so special architectures and design methods for a low-power SHA-1 circuit are required. In this paper, we present a novel and efficient hardware architecture of a low-power SHA-1 design for TMP. Our low-power SHA-1 hardware can compute a 512-bit data block using fewer than 7,000 gates and draws about 1.1 mA on a 0.25 μm CMOS process.
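For reference, the round structure such an engine implements is the standard FIPS 180 SHA-1 compression function; a toy, unoptimized software rendering of one padded 512-bit block (not the paper's hardware architecture) looks like this:

```python
import struct

def rotl(x, n):
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def sha1_block(block, state=(0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476, 0xC3D2E1F0)):
    w = list(struct.unpack(">16I", block))
    for t in range(16, 80):                       # message schedule expansion
        w.append(rotl(w[t-3] ^ w[t-8] ^ w[t-14] ^ w[t-16], 1))
    a, b, c, d, e = state
    for t in range(80):                           # 80 rounds, 4 stages of 20
        if t < 20:
            f, k = (b & c) | (~b & d), 0x5A827999
        elif t < 40:
            f, k = b ^ c ^ d, 0x6ED9EBA1
        elif t < 60:
            f, k = (b & c) | (b & d) | (c & d), 0x8F1BBCDC
        else:
            f, k = b ^ c ^ d, 0xCA62C1D6
        a, b, c, d, e = (rotl(a, 5) + f + e + k + w[t]) & 0xFFFFFFFF, a, rotl(b, 30), c, d
    return tuple((s + v) & 0xFFFFFFFF for s, v in zip(state, (a, b, c, d, e)))

# one-block message "abc", padded per FIPS 180
msg = b"abc" + b"\x80" + b"\x00" * 52 + (24).to_bytes(8, "big")
print("".join(f"{v:08x}" for v in sha1_block(msg)))
```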
Accelerating Full Configuration Interaction Calculations for Nuclear Structure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Chao; Sternberg, Philip; Maris, Pieter
2008-04-14
One of the emerging computational approaches in nuclear physics is the full configuration interaction (FCI) method for solving the many-body nuclear Hamiltonian in a sufficiently large single-particle basis space to obtain exact answers - either directly or by extrapolation. The lowest eigenvalues and corresponding eigenvectors of very large, sparse, and unstructured nuclear Hamiltonian matrices are obtained and used to evaluate additional experimental quantities. These matrices pose a significant challenge to the design and implementation of efficient and scalable algorithms for obtaining solutions on massively parallel computer systems. In this paper, we describe the computational strategies employed in a state-of-the-art FCI code, MFDn (Many Fermion Dynamics - nuclear), as well as techniques we recently developed to enhance its computational efficiency. We demonstrate the current capability of MFDn, report the latest performance improvements we have achieved, and outline our future research directions.
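The computational kernel described here, extracting the lowest eigenpairs of a huge sparse symmetric matrix, can be illustrated generically (this is not MFDn itself) with SciPy's Lanczos-based solver on a random sparse stand-in Hamiltonian:

```python
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Random sparse symmetric stand-in for a nuclear Hamiltonian matrix;
# size and density are illustrative only.
n = 50_000
h = sp.random(n, n, density=1e-4, format="csr", random_state=0)
h = h + h.T                                   # symmetrize
vals, vecs = eigsh(h, k=5, which="SA")        # 5 lowest eigenvalues via Lanczos
print(vals)
```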
SAGE: The Self-Adaptive Grid Code. 3
NASA Technical Reports Server (NTRS)
Davies, Carol B.; Venkatapathy, Ethiraj
1999-01-01
The multi-dimensional self-adaptive grid code, SAGE, is an important tool in the field of computational fluid dynamics (CFD). It provides an efficient method to improve the accuracy of flow solutions while simultaneously reducing computer processing time. Briefly, SAGE enhances an initial computational grid by redistributing the mesh points into more appropriate locations. The movement of these points is driven by an equal-error-distribution algorithm that utilizes the relationship between high flow gradients and excessive solution errors. The method also provides a balance between clustering points in high-gradient regions and maintaining the smoothness and continuity of the adapted grid. The latest version, Version 3, includes the ability to change the boundaries of a given grid to more efficiently enclose flow structures and provides alternative redistribution algorithms.
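In the spirit of (but far simpler than) SAGE's equal-error-distribution algorithm, the following toy 1-D sketch redistributes mesh points by equidistributing a gradient-based weight; the weight choice and smoothing floor are assumptions:

```python
import numpy as np

def redistribute(x, u, smooth=0.1):
    """Move mesh points so the integral of |du/dx| + floor is equal per cell."""
    w = np.abs(np.gradient(u, x)) + smooth
    cdf = np.concatenate([[0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))])
    cdf /= cdf[-1]
    targets = np.linspace(0.0, 1.0, x.size)
    return np.interp(targets, cdf, x)          # equidistribute the weight

x = np.linspace(0.0, 1.0, 101)
u = np.tanh(50.0 * (x - 0.5))                  # sharp internal layer
x_new = redistribute(x, u)                     # points cluster near x = 0.5
```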
NASA Astrophysics Data System (ADS)
Shang, J. S.; Andrienko, D. A.; Huang, P. G.; Surzhikov, S. T.
2014-06-01
An efficient computational capability for nonequilibrium radiation simulation via the ray tracing technique has been accomplished. The radiative rate equation is iteratively coupled with the aerodynamic conservation laws, including nonequilibrium chemical and chemical-physical kinetic models. The spectral properties along tracing rays are determined by a space-partition algorithm with a nearest-neighbor search, and the numerical accuracy is further enhanced by local resolution refinement using Gauss-Lobatto polynomials. The interdisciplinary governing equations are solved by an implicit delta formulation through the diminishing-residual approach. The axisymmetric radiating flow fields over the reentry RAM-CII probe have been simulated and verified against flight data and previous solutions by traditional methods. A computational efficiency gain of nearly forty times is realized over existing simulation procedures.
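The nearest-neighbor lookup at the heart of the space-partition step can be sketched with a k-d tree; the flow-field points and ray samples below are synthetic placeholders, not the RAM-CII solution:

```python
import numpy as np
from scipy.spatial import cKDTree

# Cell centers with known spectral properties (synthetic stand-ins)
flow_points = np.random.rand(100_000, 3)
tree = cKDTree(flow_points)                    # space-partitioning structure

# Sample points along one tracing ray, then find the nearest flow point
ray = np.linspace(0.0, 1.0, 256)[:, None] * np.array([1.0, 0.5, 0.2])
dist, idx = tree.query(ray)                    # per-sample nearest neighbors
```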
Efficient least angle regression for identification of linear-in-the-parameters models
Beach, Thomas H.; Rezgui, Yacine
2017-01-01
Least angle regression, as a promising model selection method, differentiates itself from conventional stepwise and stagewise methods in that it is neither too greedy nor too slow. It is closely related to L1-norm optimization, which trades part of the model bias property for low prediction variance in order to enhance model generalization capability. In this paper, we propose an efficient least angle regression algorithm for model selection for a large class of linear-in-the-parameters models, with the purpose of accelerating the model selection process. The entire algorithm works completely in a recursive manner, where the correlations between model terms and residuals, the evolving directions, and other pertinent variables are derived explicitly and updated successively at every subset selection step. The model coefficients are only computed when the algorithm finishes, so direct matrix inversion is avoided. A detailed computational complexity analysis indicates that the proposed algorithm possesses significant computational efficiency compared with the original approach, in which the well-known efficient Cholesky decomposition is used to solve least angle regression. Three artificial and real-world examples are employed to demonstrate the effectiveness, efficiency, and numerical stability of the proposed algorithm. PMID:28293140
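For orientation, a reference (non-recursive) LARS fit on a toy linear-in-the-parameters problem can be run with scikit-learn; the paper's recursive reformulation of this selection process is not reproduced here:

```python
import numpy as np
from sklearn.linear_model import Lars

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))             # 50 candidate model terms
beta = np.zeros(50)
beta[:5] = [3.0, -2.0, 1.5, 1.0, -1.0]         # only 5 terms are truly active
y = X @ beta + 0.1 * rng.standard_normal(200)

model = Lars(n_nonzero_coefs=5).fit(X, y)
print(model.active_)                           # indices of the selected terms
```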
Zero-block mode decision algorithm for H.264/AVC.
Lee, Yu-Ming; Lin, Yinyi
2009-03-01
In a previous paper, we proposed a zero-block inter-mode decision algorithm for H.264 video coding based upon the number of zero-blocks of 4 x 4 DCT coefficients between the current macroblock and the co-located macroblock. That algorithm achieves a significant reduction in computation, but the gain is limited for high bit-rate coding. To improve computational efficiency, in this paper we suggest an enhanced zero-block decision algorithm, which uses an early zero-block detection method to compute the number of zero-blocks instead of direct DCT and quantization (DCT/Q) calculation, and which incorporates two adequate decision methods for the semi-stationary and nonstationary regions of a video sequence. In addition, the zero-block decision algorithm is also applied to intra-mode prediction in P frames. The enhanced zero-block decision algorithm yields an average 27% reduction in total encoding time compared with the original zero-block decision algorithm.
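A toy sketch of the zero-block test itself: a 4 x 4 residual block counts as a zero-block if every transform coefficient quantizes to zero. H.264 uses an integer transform; the floating-point DCT and flat quantization step below are simplifying assumptions:

```python
import numpy as np
from scipy.fftpack import dct

def is_zero_block(residual, qstep=20.0):
    """True if every quantized coefficient of the 4x4 block is zero."""
    coef = dct(dct(residual, axis=0, norm="ortho"), axis=1, norm="ortho")
    return not np.any(np.round(coef / qstep))

mb = np.random.randn(16, 16) * 2.0             # one 16x16 macroblock residual
blocks = [mb[i:i+4, j:j+4] for i in range(0, 16, 4) for j in range(0, 16, 4)]
n_zero = sum(is_zero_block(b) for b in blocks) # zero-block count per macroblock
```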
Computational Study of Fluidic Thrust Vectoring using Separation Control in a Nozzle
NASA Technical Reports Server (NTRS)
Deere, Karen; Berrier, Bobby L.; Flamm, Jeffrey D.; Johnson, Stuart K.
2003-01-01
A computational investigation of a two-dimensional nozzle was completed to assess the use of fluidic injection to manipulate flow separation and cause thrust vectoring of the primary jet thrust. The nozzle was designed with a recessed cavity to enhance the throat-shifting method of fluidic thrust vectoring. The structured-grid computational fluid dynamics code PAB3D was used to guide the design and analyze over 60 configurations. Nozzle design variables included cavity convergence angle, cavity length, fluidic injection angle, upstream minimum height, aft deck angle, and aft deck shape. All simulations were computed with a static freestream Mach number of 0.05, a nozzle pressure ratio of 3.858, and a fluidic injection flow rate equal to 6 percent of the primary flow rate. Results indicate that the recessed cavity enhances the throat-shifting method of fluidic thrust vectoring and allows for greater thrust-vector angles without compromising thrust efficiency.
ERIC Educational Resources Information Center
Becker, David S.; Pyrce, Sharon R.
The goal of this project was to find ways of enhancing the efficiency of searching machine readable data bases. Ways are sought to transfer to the computer some of the tasks that are normally performed by the user, i.e., to further automate information retrieval. Four experiments were conducted to test the feasibility of a sequential processing…
Yang, Mingjun; Huang, Jing; MacKerell, Alexander D
2015-06-09
Replica exchange (REX) is a powerful computational tool for overcoming the quasi-ergodic sampling problem of complex molecular systems. Recently, several multidimensional extensions of this method have been developed that realize exchanges in both temperature and biasing-potential space, or that use multiple biasing potentials to improve sampling efficiency. However, the increased computational cost due to the multidimensionality of exchanges becomes challenging for complex systems under explicit solvent conditions. In this study, we develop a one-dimensional (1D) REX algorithm that concurrently combines the advantages of overall enhanced sampling from Hamiltonian solute scaling with the specific enhancement of collective variables using Hamiltonian biasing potentials. In the present Hamiltonian replica exchange method, termed HREST-BP, Hamiltonian solute scaling is applied to the solute subsystem and its interactions with the environment to enhance overall conformational transitions, and biasing potentials are added along selected collective variables associated with specific conformational transitions, thereby balancing the sampling of different hierarchical degrees of freedom. The two enhanced sampling approaches are implemented concurrently, allowing for the use of a small number of replicas (e.g., 6 to 8) in 1D, thus greatly reducing the computational cost of complex system simulations. The present method is applied to conformational sampling of two nitrogen-linked glycans (N-glycans) found on the HIV gp120 envelope protein. Considering the general importance of the conformational sampling problem, HREST-BP represents an efficient procedure for the study of complex saccharides and, more generally, the method is anticipated to be of general utility for conformational sampling in a wide range of macromolecular systems.
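The neighbor-swap step that such a 1-D replica ladder relies on is the standard Metropolis exchange criterion; a minimal sketch, assuming a list x of replica configurations and a list U of per-replica (scaled/biased) potential functions, both placeholders:

```python
import numpy as np

def try_exchange(i, j, x, U, beta=1.0, rng=np.random.default_rng()):
    """Attempt to swap configurations of neighboring replicas i and j."""
    # Detailed balance: accept with probability
    # min(1, exp(-beta * [U_i(x_j) + U_j(x_i) - U_i(x_i) - U_j(x_j)]))
    delta = (U[i](x[j]) + U[j](x[i])) - (U[i](x[i]) + U[j](x[j]))
    if delta <= 0.0 or rng.random() < np.exp(-beta * delta):
        x[i], x[j] = x[j], x[i]                # accept the swap
```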
NASA Astrophysics Data System (ADS)
Kaganskiy, Arsenty; Fischbach, Sarah; Strittmatter, André; Rodt, Sven; Heindel, Tobias; Reitzenstein, Stephan
2018-04-01
We report on the realization of scalable single-photon sources (SPSs) based on single site-controlled quantum dots (SCQDs) and deterministically fabricated microlenses. The fabrication process combines the buried-stressor growth technique with low-temperature in-situ electron-beam lithography for the integration of SCQDs into microlens structures with high yield and high alignment accuracy. The microlens approach leads to a broadband enhancement of the photon-extraction efficiency of up to (21 ± 2)% and a high suppression of multi-photon events with g(2)(τ = 0) < 0.06 without background subtraction. The demonstrated combination of site-controlled growth of QDs and in-situ electron-beam lithography is relevant for arrays of efficient SPSs, which can be applied in photonic quantum circuits and advanced quantum computation schemes.
Speech Enhancement, Gain, and Noise Spectrum Adaptation Using Approximate Bayesian Estimation
Hao, Jiucang; Attias, Hagai; Nagarajan, Srikantan; Lee, Te-Won; Sejnowski, Terrence J.
2010-01-01
This paper presents a new approximate Bayesian estimator for enhancing a noisy speech signal. The speech model is assumed to be a Gaussian mixture model (GMM) in the log-spectral domain, in contrast to most current models, which work in the frequency domain. Exact signal estimation is a computationally intractable problem. We derive three approximations to enhance the efficiency of signal estimation. The Gaussian approximation transforms the log-spectral domain GMM into the frequency domain using a minimum Kullback–Leibler (KL) divergence criterion. The frequency-domain Laplace method computes the maximum a posteriori (MAP) estimator for the spectral amplitude. Correspondingly, the log-spectral domain Laplace method computes the MAP estimator for the log-spectral amplitude. Further, gain and noise spectrum adaptation are implemented using the expectation–maximization (EM) algorithm within the GMM under the Gaussian approximation. The proposed algorithms are evaluated by applying them to enhance speech corrupted by speech-shaped noise (SSN). The experimental results demonstrate that the proposed algorithms offer improved signal-to-noise ratio, lower word recognition error rate, and less spectral distortion. PMID:20428253
GAPIT version 2: an enhanced integrated tool for genomic association and prediction
USDA-ARS?s Scientific Manuscript database
Most human diseases and agriculturally important traits are complex. Dissecting their genetic architecture requires continued development of innovative and powerful statistical methods. Corresponding advances in computing tools are critical to efficiently use these statistical innovations and to enh...
Technology Enhanced Teacher Evaluation
ERIC Educational Resources Information Center
Teter, Richard B.
2010-01-01
The purpose of this research and development study was to design and develop an affordable, computer-based, pre-service teacher assessment and reporting system to allow teacher education institutions and supervising teachers to efficiently enter evaluation criteria, record pre-service teacher evaluations, and generate evaluation reports. The…
Parallel Computing Strategies for Irregular Algorithms
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Oliker, Leonid; Shan, Hongzhang; Biegel, Bryan (Technical Monitor)
2002-01-01
Parallel computing promises several orders of magnitude increase in our ability to solve realistic computationally-intensive problems, but relies on their efficient mapping and execution on large-scale multiprocessor architectures. Unfortunately, many important applications are irregular and dynamic in nature, making their effective parallel implementation a daunting task. Moreover, with the proliferation of parallel architectures and programming paradigms, the typical scientist is faced with a plethora of questions that must be answered in order to obtain an acceptable parallel implementation of the solution algorithm. In this paper, we consider three representative irregular applications: unstructured remeshing, sparse matrix computations, and N-body problems, and parallelize them using various popular programming paradigms on a wide spectrum of computer platforms ranging from state-of-the-art supercomputers to PC clusters. We present the underlying problems, the solution algorithms, and the parallel implementation strategies. Smart load-balancing, partitioning, and ordering techniques are used to enhance parallel performance. Overall results demonstrate the complexity of efficiently parallelizing irregular algorithms.
Enhancing Scalability and Efficiency of the TOUGH2_MP for LinuxClusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Keni; Wu, Yu-Shu
2006-04-17
TOUGH2_MP, the parallel version of the TOUGH2 code, has been enhanced by implementing more efficient communication schemes. This enhancement is achieved by reducing the number of small messages and the volume of large ones. The message exchange speed is further improved by using non-blocking communication for both linear and nonlinear iterations. In addition, we have modified the AZTEC parallel linear-equation solver to use non-blocking communication. Through improved code structure and bug fixes, the new version of the code is more stable, while demonstrating similar or even better nonlinear iteration convergence than the original TOUGH2 code. As a result, the new version of TOUGH2_MP is significantly more efficient. In this paper, the scalability and efficiency of the parallel code are demonstrated by solving two large-scale problems. The testing results indicate that the speedup of the code may depend on both problem size and complexity. In general, the code has excellent scalability in memory requirement as well as computing time.
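The blocking-to-non-blocking pattern described above can be sketched with mpi4py; the ring topology and buffer sizes are placeholders, not the actual TOUGH2_MP communication structure:

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left, right = (rank - 1) % size, (rank + 1) % size

send = np.full(1000, rank, dtype="d")          # boundary data for the neighbor
recv = np.empty(1000, dtype="d")

# Post non-blocking send/receive, then overlap with local work
reqs = [comm.Isend(send, dest=right), comm.Irecv(recv, source=left)]
# ... local linear/nonlinear iteration work proceeds here ...
MPI.Request.Waitall(reqs)                      # complete the exchange
```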
Efficient and Extensible Quasi-Explicit Modular Nonlinear Multiscale Battery Model: GH-MSMD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Gi-Heon; Smith, Kandler; Lawrence-Simon, Jake
Complex physics and long computation times hinder the adoption of computer-aided engineering models in the design of large-format battery cells and systems. A modular, efficient battery simulation model -- the multiscale multidomain (MSMD) model -- was previously introduced to aid the scale-up of Li-ion material and electrode designs to complete cell and pack designs, capturing electrochemical interplay with 3-D electronic current pathways and thermal response. Here, this paper enhances the computational efficiency of the MSMD model using a separation-of-time-scales principle to decompose model field variables. The decomposition provides a quasi-explicit linkage between the multiple length-scale domains and thus reduces time-consuming nested iteration when solving model equations across multiple domains. In addition to the particle-, electrode-, and cell-length scales treated in previous work, the present formulation extends to bus-bar and multi-cell module length scales. We provide example simulations for several variants of GH electrode-domain models.
Korolkov, Victor P; Nasyrov, Ruslan K; Shimansky, Ruslan V
2006-01-01
Enhancing the diffraction efficiency of continuous-relief diffractive optical elements fabricated by direct laser writing is discussed. A new method of zone-boundary optimization is proposed to correct exposure data only in narrow areas along the boundaries of diffractive zones. The optimization decreases the loss of diffraction efficiency caused by convolution of the desired phase profile with the writing-beam intensity distribution. A simplified stepped transition function that describes optimized exposure data near zone boundaries can be made universal for a wide range of zone periods. The approach permits an increase in diffraction efficiency similar to that of individual-pixel optimization, but with less computational effort. Computer simulations demonstrated that zone-boundary optimization for a 6 µm period grating increases the efficiency by 7% and 14.5% for 0.6 µm and 1.65 µm writing-spot diameters, respectively. Diffraction efficiencies of 65%-90% for 4-10 µm zone periods were obtained experimentally with this method.
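The efficiency loss being corrected can be reproduced in a toy calculation: smear an ideal sawtooth phase profile with a Gaussian writing beam and evaluate the first-order efficiency. The period, spot size, and sampling below are illustrative assumptions:

```python
import numpy as np

period, spot = 6.0, 0.6                         # zone period and spot FWHM, um
x = np.linspace(0.0, period, 4096, endpoint=False)
phase = 2.0 * np.pi * x / period                # ideal 0..2*pi sawtooth relief
sigma = spot / 2.355                            # FWHM -> standard deviation
d = np.minimum(x, period - x)                   # wrapped distance to x = 0
kernel = np.exp(-0.5 * (d / sigma) ** 2)
kernel /= kernel.sum()

# Circular convolution: writing-beam smearing of the phase profile
blurred = np.fft.ifft(np.fft.fft(phase) * np.fft.fft(kernel)).real
eta = np.abs(np.mean(np.exp(1j * (blurred - phase)))) ** 2
print(f"first-order efficiency ~ {eta:.2f}")    # 1.00 for the unblurred profile
```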
Free Energy Calculations using a Swarm-Enhanced Sampling Molecular Dynamics Approach.
Burusco, Kepa K; Bruce, Neil J; Alibay, Irfan; Bryce, Richard A
2015-10-26
Free energy simulations are an established computational tool in modelling chemical change in the condensed phase. However, sampling of kinetically distinct substates remains a challenge to these approaches. As a route to addressing this, we link the methods of thermodynamic integration (TI) and swarm-enhanced sampling molecular dynamics (sesMD), where simulation replicas interact cooperatively to aid transitions over energy barriers. We illustrate the approach by using alchemical alkane transformations in solution, comparing them with the multiple independent trajectory TI (IT-TI) method. Free energy changes for transitions computed by using IT-TI grew increasingly inaccurate as the intramolecular barrier was heightened. By contrast, swarm-enhanced sampling TI (sesTI) calculations showed clear improvements in sampling efficiency, leading to more accurate computed free energy differences, even in the case of the highest barrier height. The sesTI approach, therefore, has potential in addressing chemical change in systems where conformations exist in slow exchange. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
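The TI estimator that sesTI feeds is itself simple; a minimal sketch with synthetic window averages of dU/dlambda standing in for simulation output:

```python
import numpy as np

rng = np.random.default_rng(1)
lambdas = np.linspace(0.0, 1.0, 11)             # coupling-parameter windows
# Synthetic <dU/dlambda> curve plus sampling noise, standing in for MD output
dudl = 5.0 * (1.0 - 2.0 * lambdas) + rng.normal(0.0, 0.2, lambdas.size)
delta_F = np.trapz(dudl, lambdas)               # free energy change by TI
print(f"dF = {delta_F:.2f} (model units)")
```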
Mishra, Dheerendra; Mukhopadhyay, Sourav; Kumari, Saru; Khan, Muhammad Khurram; Chaturvedi, Ankita
2014-05-01
Telecare medicine information systems (TMIS) provide a platform to deliver clinical services door to door. Technological advances in mobile computing are enhancing the quality of healthcare, and a user can access these services using a mobile device. However, the user and the Telecare system communicate via public channels in these online services, which increases the security risk. It is therefore required to ensure that only an authorized user is accessing the system and that the user is interacting with the correct system; mutual authentication provides a way to achieve this. Existing schemes are either vulnerable to attacks or have high computational cost, while a scalable authentication scheme for mobile devices should be both secure and efficient. Recently, Awasthi and Srivastava presented a biometric-based authentication scheme for TMIS with nonce. Their scheme only requires the computation of hash and XOR functions and thus fits TMIS. However, we observe that Awasthi and Srivastava's scheme does not achieve an efficient password change phase. Moreover, their scheme does not resist off-line password guessing attacks. Further, we propose an improvement of Awasthi and Srivastava's scheme with the aim of removing these drawbacks.
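For readers unfamiliar with this class of protocol, here is a toy illustration of the hash-and-XOR masking style such lightweight schemes rely on (not Awasthi and Srivastava's actual protocol; all names and steps are hypothetical):

```python
import hashlib
import os

def h(*parts: bytes) -> bytes:
    """Hash primitive, the only cryptographic operation assumed."""
    return hashlib.sha256(b"".join(parts)).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

server_secret = os.urandom(32)
identity, password = b"alice", b"pw123"

# Card stores the server secret blinded by a hash of identity and password,
# so neither value appears on the card in the clear.
card_value = xor(server_secret, h(identity, password))

# At login, the card recomputes the hash and unmasks the secret.
assert xor(card_value, h(identity, password)) == server_secret
```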
Space shuttle main engine numerical modeling code modifications and analysis
NASA Technical Reports Server (NTRS)
Ziebarth, John P.
1988-01-01
The user of computational fluid dynamics (CFD) codes must be concerned with the accuracy and efficiency of the codes if they are to be used for timely design and analysis of complicated three-dimensional fluid flow configurations. A brief discussion of how accuracy and efficiency affect the CFD solution process is given. A more detailed discussion of how efficiency can be enhanced by using a few Cray Research Inc. utilities to address vectorization is presented, and these utilities are applied to a three-dimensional Navier-Stokes CFD code (INS3D).
Valuing uncertain cash flows from investments that enhance energy efficiency.
Abadie, Luis M; Chamorro, José M; González-Eguino, Mikel
2013-02-15
There is a broad consensus that investments to enhance energy efficiency quickly pay for themselves in lower energy bills and spared emission allowances. However, investments that at first glance seem worthwhile often are not undertaken. One plausible, non-exclusive explanation is the numerous uncertainties that these investments face. This paper deals with the optimal time to invest in an energy efficiency enhancement at a facility already in place that consumes huge amounts of a fossil fuel (coal) and operates under carbon constraints. We follow the Real Options approach. Our model comprises three sources of uncertainty following different stochastic processes, which allows for application in a broad range of settings. We assess the investment option by means of a three-dimensional binomial lattice. We compute the trigger investment cost, i.e., the threshold level below which immediate investment would be optimal, and analyze the major drivers of this decision, thus pointing at the most promising policies in this regard. Copyright © 2012 Elsevier Ltd. All rights reserved.
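A one-factor binomial-lattice sketch of the "option to invest" idea (the paper's lattice is three-dimensional, over three stochastic factors); all parameter values are illustrative:

```python
import numpy as np

def invest_option(V0=100.0, I=90.0, r=0.04, sigma=0.3, T=5.0, n=500):
    """Value of an American-style option to pay cost I for project value V."""
    dt = T / n
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    p = (np.exp(r * dt) - d) / (u - d)               # risk-neutral probability
    V = V0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)
    value = np.maximum(V - I, 0.0)                   # payoff at maturity
    for _ in range(n):                               # backward induction
        value = np.exp(-r * dt) * (p * value[:-1] + (1 - p) * value[1:])
        V = V[:-1] * d                               # project values one step back
        value = np.maximum(value, V - I)             # option to invest immediately
    return value[0]

print(invest_option())
```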
NASA Astrophysics Data System (ADS)
Manoonpong, Poramate; Petersen, Dennis; Kovalev, Alexander; Wörgötter, Florentin; Gorb, Stanislav N.; Spinner, Marlene; Heepe, Lars
2016-12-01
Based on the principles of morphological computation, we propose a novel approach that exploits the interaction between a passive anisotropic scale-like material (e.g., shark skin) and a non-smooth substrate to enhance locomotion efficiency of a robot walking on inclines. Real robot experiments show that passive tribologically-enhanced surfaces of the robot belly or foot allow the robot to grip on specific surfaces and move effectively with reduced energy consumption. Supplementing the robot experiments, we investigated tribological properties of the shark skin as well as its mechanical stability. It shows high frictional anisotropy due to an array of sloped denticles. The orientation of the denticles to the underlying collagenous material also strongly influences their mechanical interlocking with the substrate. This study not only opens up a new way of achieving energy-efficient legged robot locomotion but also provides a better understanding of the functionalities and mechanical properties of anisotropic surfaces. That understanding will assist developing new types of material for other real-world applications.
High Power Klystrons for Efficient Reliable High Power Amplifiers.
1980-11-01
…techniques to obtain high overall efficiency. One is second-harmonic space-charge bunching, a process whereby the fundamental and second-harmonic … components of the space-charge waves in the electron beam of a microwave tube are combined to produce more highly concentrated electron bunches, raising the … the drift lengths to enhance the 2nd-harmonic component in the space-charge waves. The latter method was utilized in the VKC-7790. Computer…
Enhancing the efficiency of polymerase chain reaction using graphene nanoflakes.
Abdul Khaliq, R; Kafafy, Raed; Salleh, Hamzah Mohd; Faris, Waleed Fekry
2012-11-16
The effect of the recently developed graphene nanoflakes (GNFs) on the polymerase chain reaction (PCR) has been investigated in this paper. The rationale behind the use of GNFs is their unique physical and thermal properties. Experiments show that GNFs can enhance the thermal conductivity of base fluids, and the results also revealed that GNFs are a potential enhancer of PCR efficiency; moreover, the PCR enhancements are strongly dependent on GNF concentration. It was found that GNFs yield DNA product equivalent to the positive control with up to a 65% reduction in PCR cycles. It was also observed that the PCR yield depends on GNF size, as increased surface area augments thermal conductivity. Computational fluid dynamics (CFD) simulations were performed to analyze the heat transfer through the PCR tube model in the presence and absence of GNFs. The results suggest that the superior thermal conductivity of GNFs may be the main cause of the PCR enhancement.
Accounting for receptor flexibility and enhanced sampling methods in computer-aided drug design.
Sinko, William; Lindert, Steffen; McCammon, J Andrew
2013-01-01
Protein flexibility plays a major role in biomolecular recognition. In many cases, it is not obvious how molecular structure will change upon association with other molecules. In proteins, these changes can be major, with large deviations in overall backbone structure, or they can be more subtle, as in a side-chain rotation. Either way, the algorithms that predict the favorability of biomolecular association require relatively accurate predictions of the bound structure to give an accurate assessment of the energy involved in association. Here, we review a number of techniques that have been proposed to accommodate receptor flexibility in the simulation of small molecules binding to protein receptors. We investigate modifications to standard rigid-receptor docking algorithms and also explore enhanced sampling techniques, as well as the combination of free energy calculations and enhanced sampling techniques. The understanding of and allowance for receptor flexibility are helping to make computer simulations of ligand-protein binding more accurate. These developments may help improve the efficiency of drug discovery and development. Efficiency will be essential as we begin to see personalized medicine tailored to individual patients, which means specific drugs are needed for each patient's genetic makeup. © 2012 John Wiley & Sons A/S.
Optimized blind gamma-ray pulsar searches at fixed computing budget
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pletsch, Holger J.; Clark, Colin J., E-mail: holger.pletsch@aei.mpg.de
The sensitivity of blind gamma-ray pulsar searches in multiple years' worth of photon data, as from the Fermi LAT, is primarily limited by the finite computational resources available. Addressing this 'needle in a haystack' problem, here we present methods for optimizing blind searches to achieve the highest sensitivity at fixed computing cost. For both coherent and semicoherent methods, we consider their statistical properties and study their search sensitivity under computational constraints. The results validate a multistage strategy, where the first stage scans the entire parameter space using an efficient semicoherent method and promising candidates are then refined through a fully coherent analysis. We also find that for the first stage of a blind search, incoherent harmonic summing of powers is not worthwhile at fixed computing cost for typical gamma-ray pulsars. Further enhancing sensitivity, we present efficiency-improved interpolation techniques for the semicoherent search stage. Via realistic simulations we demonstrate that overall these optimizations can significantly lower the minimum detectable pulsed fraction by almost 50% at the same computational expense.
Development and application of unified algorithms for problems in computational science
NASA Technical Reports Server (NTRS)
Shankar, Vijaya; Chakravarthy, Sukumar
1987-01-01
A framework is presented for developing computationally unified numerical algorithms for solving nonlinear equations that arise in modeling various problems in mathematical physics. The concept of computational unification is an attempt to encompass efficient solution procedures for computing various nonlinear phenomena that may occur in a given problem. For example, in computational fluid dynamics (CFD), a unified algorithm is one that allows for solutions to subsonic (elliptic), transonic (mixed elliptic-hyperbolic), and supersonic (hyperbolic) flows for both steady and unsteady problems. The objectives are: development of superior unified algorithms emphasizing accuracy and efficiency; development of codes based on selected algorithms leading to validation; application of mature codes to realistic problems; and extension/application of CFD-based algorithms to problems in other areas of mathematical physics. The ultimate objective is to achieve integration of multidisciplinary technologies to enhance synergism in the design process through computational simulation. Specific unified algorithms are presented for a hierarchy of gas dynamics equations, together with their applications to two other areas: electromagnetic scattering, and laser-materials interaction accounting for melting.
Enhancing Privacy in Participatory Sensing Applications with Multidimensional Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forrest, Stephanie; He, Wenbo; Groat, Michael
2013-01-01
Participatory sensing applications rely on individuals to share personal data to produce aggregated models and knowledge. In this setting, privacy concerns can discourage widespread adoption of new applications. We present a privacy-preserving participatory sensing scheme based on negative surveys for both continuous and multivariate categorical data. Without relying on encryption, our algorithms enhance the privacy of sensed data in an energy- and computation-efficient manner. Simulations and an implementation on Android smartphones illustrate how multidimensional data can be aggregated in a useful and privacy-enhancing manner.
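For one categorical question, the negative-survey reconstruction admits a compact sketch: each participant reports a category they do not belong to, and the server recovers the histogram with the unbiased estimator n_i = N - (k-1)*y_i. Uniform selection of the reported category is an assumption of this toy:

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 5, 10_000
truth = rng.choice(k, size=n, p=[0.4, 0.3, 0.15, 0.1, 0.05])

# Each participant reports a uniformly chosen category they are NOT in
reported = np.array([rng.choice([c for c in range(k) if c != t]) for t in truth])

y = np.bincount(reported, minlength=k)         # counts of "not my category"
estimate = n - (k - 1) * y                     # unbiased estimate of true counts
print(estimate / n)                            # ~ the true proportions
```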
Aerothermal modeling program, phase 2. Element B: Flow interaction experiment
NASA Technical Reports Server (NTRS)
Nikjooy, M.; Mongia, H. C.; Murthy, S. N. B.; Sullivan, J. P.
1986-01-01
The objective was to improve the design process and to enhance the efficiency and life of the turbine engine hot section while reducing its maintenance costs. Recently, there has been much emphasis on the need for improved numerical codes for the design of efficient combustors. For the development of improved computational codes, there is a need for an experimentally obtained database to be used as test cases for assessing the accuracy of the computations. The purpose of Element B is to establish benchmark-quality velocity and scalar measurements of the flow interaction of circular jets with swirling flow typical of that in the dome region of an annular combustor. In addition to the detailed experimental effort, extensive computations of the swirling flows are to be compared with the measurements for the purpose of assessing the accuracy of current and advanced turbulence and scalar transport models.
Enhancement of the CAVE computer code
NASA Astrophysics Data System (ADS)
Rathjen, K. A.; Burk, H. O.
1983-12-01
The computer code CAVE (Conduction Analysis via Eigenvalues) is a convenient and efficient computer code for predicting two-dimensional temperature histories within thermal protection systems for hypersonic vehicles. The capabilities of CAVE were enhanced by incorporating the following features into the code: real-gas effects in the aerodynamic heating predictions; a geometry and aerodynamic heating package for analyses of cone-shaped bodies; an input option to change from laminar to turbulent heating predictions on leading edges; a modification to account for the reduction in adiabatic wall temperature with increased leading-edge sweep; a geometry package for a two-dimensional scramjet engine sidewall, with an option for heat transfer to external and internal surfaces; a printout modification to provide tables of selected temperatures for plotting and storage; and modifications to the radiation calculation procedure to eliminate temperature oscillations induced by high heating rates. These new features are described.
SCUT: clinical data organization for physicians using pen computers.
Wormuth, D. W.
1992-01-01
The role of computers in assisting physicians with patient care is rapidly advancing. One of the significant obstacles to efficient use of computers in patient care has been the unavailability of reasonably configured portable computers. Lightweight portable computers are becoming more attractive as physician data-management devices, but still pose significant problems for bedside use. The advent of computers designed to accept input from a pen and having no keyboard presents a usable platform enabling physicians to perform clinical computing at the bedside. This paper describes a prototype system to maintain an electronic "scut" sheet. SCUT makes use of pen input and background rule checking to enhance patient care. GO Corporation's PenPoint Operating System is used to implement the SCUT project. PMID:1483012
Blanson Henkemans, O. A.; Rogers, W. A.; Fisk, A. D.; Neerincx, M. A.; Lindenberg, J.; van der Mast, C. A. P. G.
2014-01-01
Summary Objectives We developed an adaptive computer assistant for the supervision of diabetics' self-care, intended to help limit illness and the need for acute treatment, and to improve health literacy. This assistant monitors self-care activities logged in the patient's electronic diary and accordingly provides context-aware feedback. The objective was to evaluate whether older adults in general can make use of the computer assistant, and to compare an adaptive computer assistant with a fixed one concerning usability and contribution to health literacy. Methods We conducted a laboratory experiment in the Georgia Tech Aware Home in which 28 older adults participated in a usability evaluation of the computer assistant while engaged in scenarios reflecting normal and health-critical situations. We evaluated the assistant on effectiveness, efficiency, satisfaction, and educational value. Finally, we studied the moderating effects of the subjects' personal characteristics. Results Logging self-care tasks and receiving feedback from the computer assistant enhanced the subjects' knowledge of diabetes. The adaptive assistant was more effective in dealing with normal and health-critical situations and generally led to greater time efficiency. Subjects' personal characteristics had substantial effects on the effectiveness and efficiency of the two computer assistants. Conclusions Older adults were able to use the adaptive computer assistant. In addition, it had a positive effect on the development of health literacy. The assistant has the potential to support older diabetics' self-care while maintaining quality of life. PMID:18213433
Nguyen, N; Milanfar, P; Golub, G
2001-01-01
In many image restoration/resolution enhancement applications, the blurring process, i.e., the point spread function (PSF) of the imaging system, is not known or is known only to within a set of parameters. We estimate these PSF parameters for this ill-posed class of inverse problems from raw data, along with the regularization parameters required to stabilize the solution, using the generalized cross-validation (GCV) method. We propose efficient approximation techniques based on the Lanczos algorithm and Gauss quadrature theory, reducing the computational complexity of the GCV. Data-driven PSF and regularization parameter estimation experiments with synthetic and real image sequences are presented to demonstrate the effectiveness and robustness of our method.
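For a small problem, the GCV score can be evaluated directly through one SVD of the blurring matrix (the paper's Lanczos/Gauss-quadrature machinery approximates the same quantities without an explicit SVD); the synthetic deblurring setup is illustrative:

```python
import numpy as np

def gcv_curve(A, b, lams):
    """GCV(lam) = ||(I - A_lam) b||^2 / tr(I - A_lam)^2 via the SVD of A."""
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    scores = []
    for lam in lams:
        f = s**2 / (s**2 + lam**2)              # Tikhonov filter factors
        resid = np.linalg.norm((1.0 - f) * beta) ** 2
        scores.append(resid / (A.shape[0] - f.sum()) ** 2)
    return np.array(scores)

# Synthetic 1-D Gaussian blur and noisy observation
rng = np.random.default_rng(0)
A = np.exp(-0.1 * (np.arange(100)[:, None] - np.arange(100)) ** 2)
b = A @ np.sin(np.arange(100) / 7.0) + 0.01 * rng.standard_normal(100)

lams = np.logspace(-4, 1, 50)
lam_best = lams[np.argmin(gcv_curve(A, b, lams))]   # GCV-selected parameter
```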
Characteristics of a semi-custom library development system
NASA Technical Reports Server (NTRS)
Yancey, M.; Cannon, R.
1990-01-01
Standard cell and gate array macro libraries are in common use with workstation computer-aided design (CAD) tools for application-specific integrated circuit (ASIC) semi-custom applications, and have resulted in significant improvements in overall design efficiency as contrasted with custom design methodologies. Similar design methodology enhancements that provide for efficient development of the library cells are an important factor in responding to the need for continuous technology improvement. The characteristics of a library development system that provides design flexibility and productivity enhancements for library development engineers working in state-of-the-art process technologies are presented. An overview of Gould's library development system ('Accolade') is also presented.
NASA Astrophysics Data System (ADS)
Divya, A.; Mathavan, T.; Asath, R. Mohamed; Archana, J.; Hayakawa, Y.; Benial, A. Milton Franklin
2016-05-01
A series of strontium oxide functionalized graphene nanoflakes were designed and their optoelectronic properties were studied for enhanced photocatalytic activity. The efficiency of the designed molecules was studied using parameters such as the HOMO-LUMO energy gap, light-harvesting efficiency, and exciton binding energy. The computed results show that increasing the degree of strontium oxide functionalization lowers the band gap of hydrogen-terminated graphene nanoflakes. Furthermore, the study explores the role of strontium oxide functionalization in the frontier molecular orbitals, ionization potential, electron affinity, exciton binding energy, and light-harvesting efficiency of the designed molecules. The infrared and Raman spectra were simulated for pure and SrO-functionalized graphene nanoflakes. The electron-rich and electron-deficient regions, which are favorable for electrophilic and nucleophilic attacks respectively, were analyzed using molecular electrostatic potential surface analysis.
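One of the screening quantities named above has a closed form: the light-harvesting efficiency follows from the oscillator strength f of the dominant excitation as LHE = 1 - 10^(-f). The f value below is an assumed placeholder, not a result from the study:

```python
# Light-harvesting efficiency from a computed oscillator strength f,
# via the standard relation LHE = 1 - 10**(-f); f here is an assumed value.
f = 1.2
lhe = 1.0 - 10.0 ** (-f)
print(f"LHE = {lhe:.3f}")   # -> 0.937 for f = 1.2
```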
NASA Technical Reports Server (NTRS)
Gordon, H. R.
1979-01-01
The radiative transfer equation is modified to include the effect of fluorescent substances and solved in the quasi-single scattering approximation for a homogeneous ocean containing fluorescent particles with wavelength independent quantum efficiency and a Gaussian shaped emission line. The results are applied to the in vivo fluorescence of chlorophyll a (in phytoplankton) in the ocean to determine if the observed quantum efficiencies are large enough to explain the enhancement of the ocean's diffuse reflectance near 685 nm in chlorophyll rich waters without resorting to anomalous dispersion. The computations indicate that the required efficiencies are sufficiently low to account completely for the enhanced reflectance. The validity of the theory is further demonstrated by deriving values for the upwelling irradiance attenuation coefficient at 685 nm which are in close agreement with the observations.
Scemama, Anthony; Caffarel, Michel; Oseret, Emmanuel; Jalby, William
2013-04-30
Various strategies for efficiently implementing quantum Monte Carlo (QMC) simulations of large chemical systems are presented. These include: (i) an efficient algorithm to calculate the computationally expensive Slater matrices -- a novel scheme based on the highly localized character of atomic Gaussian basis functions (rather than the molecular orbitals, as is usually done); (ii) the possibility of keeping the memory footprint minimal; (iii) the important enhancement of single-core performance when efficient optimization tools are used; and (iv) the definition of a universal, dynamic, fault-tolerant, and load-balanced framework adapted to all kinds of computational platforms (massively parallel machines, clusters, or distributed grids). These strategies have been implemented in the QMC=Chem code developed at Toulouse and are illustrated with numerical applications on small peptides of increasing size (158, 434, 1056, and 1731 electrons). Using 10-80 k computing cores of the Curie machine (GENCI-TGCC-CEA, France), QMC=Chem has been shown to be capable of running at the petascale level, thus demonstrating that for this machine a large part of the peak performance can be achieved. Implementation of large-scale QMC simulations for future exascale platforms with a comparable level of efficiency is expected to be feasible. Copyright © 2013 Wiley Periodicals, Inc.
Delayed Slater determinant update algorithms for high efficiency quantum Monte Carlo
McDaniel, Tyler; D’Azevedo, Ed F.; Li, Ying Wai; ...
2017-11-07
Within ab initio Quantum Monte Carlo simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunction. Each Monte Carlo step requires finding the determinant of a dense matrix. This is most commonly evaluated iteratively using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. The overall computational cost is therefore formally cubic in the number of electrons or matrix size. To improve the numerical efficiency of this procedure, we propose a novel multiple-rank delayed update scheme. This strategy enables probability evaluation with application of accepted moves to the matrices delayed until after a predetermined number of moves, K. The accepted events are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency via matrix-matrix operations instead of matrix-vector operations. This procedure does not change the underlying Monte Carlo sampling or its statistical efficiency. For calculations on large systems and algorithms such as diffusion Monte Carlo where the acceptance ratio is high, order-of-magnitude improvements in the update time can be obtained on both multi-core CPUs and GPUs.
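The contrast between the rank-1 and the delayed, en-bloc update can be sketched with dense NumPy stand-ins for the Slater matrices (a toy, not the production QMC kernel):

```python
import numpy as np

def replace_row(Ainv, d, r):
    """Sherman-Morrison: update A^{-1} after adding delta d to row r of A."""
    col = Ainv[:, r]                                  # A^{-1} e_r
    return Ainv - np.outer(col, d @ Ainv) / (1.0 + d @ col)

def replace_rows_blocked(Ainv, D, rows):
    """Woodbury: apply K accumulated row deltas en bloc (matrix-matrix work)."""
    U = Ainv[:, rows]                                 # n x K
    C = np.linalg.inv(np.eye(len(rows)) + D @ U)      # small K x K solve
    return Ainv - U @ C @ (D @ Ainv)

rng = np.random.default_rng(0)
n, K = 64, 8
A = rng.standard_normal((n, n))
Ainv = np.linalg.inv(A)
rows = np.arange(K)
D = 0.1 * rng.standard_normal((K, n))                 # K delayed row changes
A2 = A.copy()
A2[rows] += D
assert np.allclose(replace_rows_blocked(Ainv, D, rows), np.linalg.inv(A2))
```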
A Computational Study of a New Dual Throat Fluidic Thrust Vectoring Nozzle Concept
NASA Technical Reports Server (NTRS)
Deere, Karen A.; Berrier, Bobby L.; Flamm, Jeffrey D.; Johnson, Stuart K.
2005-01-01
A computational investigation of a two-dimensional nozzle was completed to assess the use of fluidic injection to manipulate flow separation and cause thrust vectoring of the primary jet thrust. The nozzle was designed with a recessed cavity to enhance the throat-shifting method of fluidic thrust vectoring. Several design cycles with the structured-grid computational fluid dynamics code PAB3D and with experiments in the NASA Langley Research Center Jet Exit Test Facility have been completed to guide the nozzle design and analyze performance. This paper presents computational results on potential design improvements for the best experimental configuration tested to date. Nozzle design variables included cavity divergence angle, cavity convergence angle, and upstream throat height. Pulsed fluidic injection was also investigated for its ability to decrease mass flow requirements. Internal nozzle performance (wind-off conditions) and thrust-vector angles were computed for several configurations over a range of nozzle pressure ratios from 2 to 7, with the fluidic injection flow rate equal to 3 percent of the primary flow rate. Computational results indicate that increasing the cavity divergence angle beyond 10° is detrimental to thrust-vectoring efficiency, while increasing the cavity convergence angle from 20° to 30° improves thrust-vectoring efficiency at nozzle pressure ratios greater than 2, albeit at the expense of discharge coefficient. Pulsed injection was no more efficient than steady injection for the Dual Throat Nozzle concept.
Twisted Vanes Would Enhance Fuel/Air Mixing In Turbines
NASA Technical Reports Server (NTRS)
Nguyen, H. Lee; Micklow, Gerald J.; Dogra, Anju S.
1994-01-01
Computations of flow show that the performance of a high-shear airblast fuel injector in a gas-turbine engine is enhanced by use of appropriately proportioned twisted (instead of flat) dome swirl vanes. The resulting more nearly uniform fuel/air mixture burns more efficiently, emitting smaller amounts of nitrogen oxides. Twisted-vane high-shear airblast injectors could also be incorporated into paint sprayers, providing the low pressure drop characteristic of airblast injectors in general and the finer atomization of the advanced twisted-blade design.
A Prudent Access Control Behavioral Intention Model for the Healthcare Domain
ERIC Educational Resources Information Center
Mussa, Constance C.
2011-01-01
In recent years, many health care organizations have begun to take advantage of computerized information systems to facilitate more effective and efficient management and processing of information. However, commensurate with the vastly innovative enhancements that computer technology has contributed to traditional paper-based health care…
Kargar, Soudabeh; Borisch, Eric A; Froemming, Adam T; Kawashima, Akira; Mynderse, Lance A; Stinson, Eric G; Trzasko, Joshua D; Riederer, Stephen J
2018-05-01
We describe an efficient numerical optimization technique using non-linear least squares to estimate perfusion parameters for the Tofts and extended Tofts models from dynamic contrast-enhanced (DCE) MRI data, and apply the technique to prostate cancer. Parameters were estimated by fitting the two Tofts-based perfusion models to the acquired data via non-linear least squares. We apply variable projection (VP) to convert the fitting problem from a multi-dimensional to a one-dimensional line search, improving computational efficiency and robustness. Using simulation and DCE-MRI studies in twenty patients with suspected prostate cancer, the VP-based solver was compared against the traditional Levenberg-Marquardt (LM) strategy for accuracy, noise amplification, robustness of convergence, and computation time. The simulation demonstrated that VP and LM were both accurate, in that the medians closely matched assumed values across typical signal-to-noise ratio (SNR) levels for both Tofts models. VP and LM showed similar noise sensitivity. Studies using the patient data showed that the VP method reliably converged and matched results from LM, with approximately 3× and 2× reductions in computation time for the standard (two-parameter) and extended (three-parameter) Tofts models. While LM failed to converge in 14% of the patient data, VP converged in 100%. The VP-based method for non-linear least squares estimation of perfusion parameters for prostate MRI is equivalent to the LM-based method in accuracy and robustness to noise, while being more reliably convergent (100%) and computationally about 3× (TM) and 2× (ETM) faster. Copyright © 2017 Elsevier Inc. All rights reserved.
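The core of variable projection can be shown on a toy one-exponential model y(t) = c*exp(-a*t): for each trial nonlinear parameter a, the linear coefficient c is eliminated in closed form, reducing the fit to a 1-D search (the paper applies the same idea to the Tofts-model bases); all values here are synthetic:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 60)
y = 2.0 * np.exp(-0.8 * t) + 0.05 * rng.standard_normal(t.size)

def vp_residual(a):
    """Residual after projecting out the linear coefficient c for a given a."""
    phi = np.exp(-a * t)                        # basis for the linear part
    c = (phi @ y) / (phi @ phi)                 # optimal c in closed form
    return np.sum((y - c * phi) ** 2)

a_hat = minimize_scalar(vp_residual, bounds=(1e-3, 10.0), method="bounded").x
print(f"estimated a = {a_hat:.3f}")             # ~0.8 for this synthetic data
```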
Influence of wire-coil inserts on the thermo-hydraulic performance of a flat-plate solar collector
NASA Astrophysics Data System (ADS)
Herrero Martín, R.; García, A.; Pérez-García, J.
2012-11-01
Enhancement techniques can be applied to flat-plate liquid solar collectors to achieve more compact and efficient designs. For the typical operating mass flow rates in flat-plate solar collectors, the most suitable technique is inserted devices. Based on previous studies by the authors, wire coils were selected for enhancing heat transfer. This type of inserted device provides better results in laminar, transitional, and low-turbulence flow regimes. To test the enhanced solar collector and compare it with a standard one, an experimental side-by-side solar collector test bed was designed and constructed. The test setup was fully designed following the requirements of EN 12975-2 and allowed performance tests under the same operating conditions (mass flow rate, inlet fluid temperature, and weather conditions). This work presents the thermal efficiency curves of a commercial and an enhanced solar collector for the standardized mass flow rate per unit absorber area of 0.02 kg/(s·m²) (in useful engineering units, 144 kg/h for water as the working fluid and a 2 m² flat-plate solar collector absorber area). The enhanced collector was modified by inserting spiral wire coils of dimensionless pitch p/D = 1 and wire diameter e/D = 0.0717. The friction factor per tube was computed from the overall pressure drop tests across the solar collectors. The thermal efficiency curves of both solar collectors, standard and enhanced, are presented. The enhanced solar collector increases the thermal efficiency by 15%. To account for the overall enhancement, a modified performance evaluation criterion (R3m) is proposed. The maximum value encountered reaches 1.105, which represents an increase in useful power of 10.5% for the same pumping power consumption.
A Computational Framework for Efficient Low Temperature Plasma Simulations
NASA Astrophysics Data System (ADS)
Verma, Abhishek Kumar; Venkattraman, Ayyaswamy
2016-10-01
Over the past years, scientific computing has emerged as an essential tool for the investigation and prediction of low-temperature plasma (LTP) applications, which include electronics, nanomaterial synthesis, and metamaterials. To further explore LTP behavior with greater fidelity, we present a computational toolbox developed to perform LTP simulations. This framework will allow us to enhance our understanding of multiscale plasma phenomena using high-performance computing tools, based mainly on the OpenFOAM FVM distribution. Although aimed at microplasma simulations, the modular framework is able to perform multiscale, multiphysics simulations of physical systems comprising LTPs. Salient introductory features include the capability to perform parallel, 3D simulations of LTP applications on unstructured meshes. Performance of the solver is assessed with numerical benchmarks for accuracy and efficiency on problems in microdischarge devices. Numerical simulation of a microplasma reactor at atmospheric pressure with hemispherical dielectric-coated electrodes will be discussed, providing an overview of the applicability and future scope of this framework.
Time and learning efficiency in Internet-based learning: a systematic review and meta-analysis.
Cook, David A; Levinson, Anthony J; Garside, Sarah
2010-12-01
BACKGROUND: Authors have claimed that Internet-based instruction promotes greater learning efficiency than non-computer methods. PURPOSE: To determine, through a systematic synthesis of evidence in health professions education, how Internet-based instruction compares with non-computer instruction in time spent learning, and what features of Internet-based instruction are associated with improved learning efficiency. DATA SOURCES: We searched databases including MEDLINE, CINAHL, EMBASE, and ERIC from 1990 through November 2008. STUDY SELECTION AND DATA ABSTRACTION: We included all studies quantifying learning time for Internet-based instruction for health professionals, compared with other instruction. Reviewers worked independently, in duplicate, to abstract information on interventions, outcomes, and study design. RESULTS: We identified 20 eligible studies. Random-effects meta-analysis of 8 studies comparing Internet-based with non-Internet instruction (positive numbers indicating Internet longer) revealed a pooled effect size (ES) for time of -0.10 (p = 0.63). Among comparisons of two Internet-based interventions, providing feedback adds time (ES 0.67, p = 0.003, two studies), and greater interactivity generally takes longer (ES 0.25, p = 0.089, five studies). One study demonstrated that adapting to learner prior knowledge saves time without significantly affecting knowledge scores. Other studies revealed that audio narration, video clips, interactive models, and animations increase learning time but also facilitate higher knowledge and/or satisfaction. Across all studies, time correlated positively with knowledge outcomes (r = 0.53, p = 0.021). CONCLUSIONS: On average, Internet-based instruction and non-computer instruction require similar time. Instructional strategies to enhance feedback and interactivity typically prolong learning time, but in many cases also enhance learning outcomes. Isolated examples suggest potential for improving efficiency in Internet-based instruction.
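The pooled effect sizes quoted above come from random-effects meta-analysis. As a hedged illustration (not the authors' code), the sketch below implements standard DerSimonian-Laird pooling from per-study effect sizes and variances; the input numbers are placeholders, not the study data.

```python
import numpy as np

def dersimonian_laird(es, var):
    """Pool effect sizes with the DerSimonian-Laird random-effects model."""
    es, var = np.asarray(es, float), np.asarray(var, float)
    w = 1.0 / var                          # fixed-effect weights
    es_fixed = np.sum(w * es) / np.sum(w)
    q = np.sum(w * (es - es_fixed) ** 2)   # Cochran's Q heterogeneity statistic
    k = len(es)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1.0 / (var + tau2)            # random-effects weights
    pooled = np.sum(w_star * es) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se

# e.g. eight time-outcome comparisons (illustrative numbers only)
pooled, se = dersimonian_laird([-0.3, 0.1, -0.2, 0.0, -0.1, 0.2, -0.4, 0.05],
                               [0.04, 0.05, 0.03, 0.06, 0.04, 0.05, 0.03, 0.06])
```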
Application of a Scalable, Parallel, Unstructured-Grid-Based Navier-Stokes Solver
NASA Technical Reports Server (NTRS)
Parikh, Paresh
2001-01-01
A parallel version of an unstructured-grid based Navier-Stokes solver, USM3Dns, previously developed for efficient operation on a variety of parallel computers, has been enhanced to incorporate upgrades made to the serial version. The resultant parallel code has been extensively tested on a variety of problems of aerospace interest and on two sets of parallel computers to understand and document its characteristics. An innovative grid renumbering construct and use of non-blocking communication are shown to produce superlinear computing performance. Preliminary results from parallelization of a recently introduced "porous surface" boundary condition are also presented.
Advances in Computational Capabilities for Hypersonic Flows
NASA Technical Reports Server (NTRS)
Kumar, Ajay; Gnoffo, Peter A.; Moss, James N.; Drummond, J. Philip
1997-01-01
The paper reviews the growth and advances in computational capabilities for hypersonic applications over the period from the mid-1980's to the present day. The current status of the code development issues such as surface and field grid generation, algorithms, physical and chemical modeling, and validation is provided. A brief description of some of the major codes being used at NASA Langley Research Center for hypersonic continuum and rarefied flows is provided, along with their capabilities and deficiencies. A number of application examples are presented, and future areas of research to enhance accuracy, reliability, efficiency, and robustness of computational codes are discussed.
Indoor Pedestrian Localization Using iBeacon and Improved Kalman Filter.
Sung, Kwangjae; Lee, Dong Kyu 'Roy'; Kim, Hwangnam
2018-05-26
Reliable and accurate indoor pedestrian positioning is one of the biggest challenges for location-based systems and applications. Most pedestrian positioning systems suffer from drift error and large bias due to low-cost inertial sensors and the random motions of human beings, as well as unpredictable and time-varying radio-frequency (RF) signals used for position determination. To solve this problem, many indoor positioning approaches that integrate the user's motion estimated by the dead reckoning (DR) method and the location data obtained by RSS fingerprinting through a Bayesian filter, such as the Kalman filter (KF), unscented Kalman filter (UKF), and particle filter (PF), have recently been proposed to achieve higher positioning accuracy in indoor environments. Among Bayesian filtering methods, PF is the most popular integrating approach and can provide the best localization performance. However, since PF uses a large number of particles to reach that performance, it can incur considerable computational cost. This paper presents an indoor positioning system implemented on a smartphone, which uses simple dead reckoning (DR), RSS fingerprinting using iBeacon and a machine learning scheme, and an improved KF. The core of the system is the enhanced KF called a sigma-point Kalman particle filter (SKPF), which localizes the user by leveraging both the unscented transform of the UKF and the weighting method of the PF. The SKPF algorithm proposed in this study provides enhanced positioning accuracy by fusing positional data obtained from both DR and fingerprinting with uncertainty. The SKPF algorithm achieves better positioning accuracy than the KF and UKF, and performance comparable to the PF at higher computational efficiency. iBeacon is used in our positioning system for energy-efficient localization and RSS fingerprinting. We aim to design a localization scheme that realizes high positioning accuracy, computational efficiency, and energy efficiency indoors through the SKPF and iBeacon. Empirical experiments in real environments show that the use of the SKPF algorithm and iBeacon in our indoor localization scheme achieves very satisfactory performance in terms of localization accuracy, computational cost, and energy efficiency.
Improved CO sub 2 enhanced oil recovery -- Mobility control by in-situ chemical precipitation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ameri, S.; Aminian, K.; Wasson, J.A.
1991-06-01
The overall objective of this study has been to evaluate the feasibility of chemical precipitation to improve CO{sub 2} sweep efficiency and mobility control. The laboratory experiments have indicated that carbonate precipitation can alter the permeability of the core samples under reservoir conditions. Furthermore, the relative permeability measurements have revealed that precipitation reduces the gas permeability in favor of liquid permeability. This indicates that precipitation is occurring preferentially in the larger pores. Additional experimental work with a series of connected cores has indicated that the permeability profile can be successfully modified. However, pH control plays a critical role in propagation of the chemical precipitation reaction. A numerical reservoir model has been utilized to evaluate the effects of permeability heterogeneity and permeability modification on the CO{sub 2} sweep efficiency. The computer simulation results indicate that the permeability profile modification can significantly enhance CO{sub 2} vertical and horizontal sweep efficiencies. The scoping studies with the model have further revealed that only a fraction of high-permeability zones need to be altered to achieve sweep efficiency enhancement. 64 refs., 30 figs., 16 tabs.
A variational eigenvalue solver on a photonic quantum processor
Peruzzo, Alberto; McClean, Jarrod; Shadbolt, Peter; Yung, Man-Hong; Zhou, Xiao-Qi; Love, Peter J.; Aspuru-Guzik, Alán; O’Brien, Jeremy L.
2014-01-01
Quantum computers promise to efficiently solve important problems that are intractable on a conventional computer. For quantum systems, where the physical dimension grows exponentially, finding the eigenvalues of certain operators is one such intractable problem and remains a fundamental challenge. The quantum phase estimation algorithm efficiently finds the eigenvalue of a given eigenvector but requires fully coherent evolution. Here we present an alternative approach that greatly reduces the requirements for coherent evolution and combine this method with a new approach to state preparation based on ansätze and classical optimization. We implement the algorithm by combining a highly reconfigurable photonic quantum processor with a conventional computer. We experimentally demonstrate the feasibility of this approach with an example from quantum chemistry—calculating the ground-state molecular energy for He–H+. The proposed approach drastically reduces the coherence time requirements, enhancing the potential of quantum resources available today and in the near future. PMID:25055053
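The variational loop described here alternates a quantum expectation-value estimate with classical parameter optimization. The toy Python sketch below mimics that structure entirely classically, with a stand-in 2×2 Hamiltonian and a one-parameter ansatz; it only illustrates the algorithm's shape, not the photonic experiment.

```python
import numpy as np
from scipy.optimize import minimize

# Toy 2x2 Hermitian "Hamiltonian" standing in for the He-H+ problem; the real
# experiment estimates expectation values on a photonic processor.
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

def ansatz(theta):
    """One-parameter trial state |psi> = cos(theta)|0> + sin(theta)|1>."""
    return np.array([np.cos(theta), np.sin(theta)])

def energy(params):
    """Expectation value <psi|H|psi> -- the quantity a quantum device measures."""
    psi = ansatz(params[0])
    return psi @ H @ psi

# Classical outer loop minimizes the measured energy over the ansatz parameter.
result = minimize(energy, x0=[0.1], method="Nelder-Mead")
print(result.fun, np.linalg.eigvalsh(H)[0])  # variational vs exact ground energy
```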
Efficient Simulation Budget Allocation for Selecting an Optimal Subset
NASA Technical Reports Server (NTRS)
Chen, Chun-Hung; He, Donghai; Fu, Michael; Lee, Loo Hay
2008-01-01
We consider a class of the subset selection problem in ranking and selection. The objective is to identify the top m out of k designs based on simulated output. Traditional procedures are conservative and inefficient. Using the optimal computing budget allocation framework, we formulate the problem as that of maximizing the probability of correctly selecting all of the top-m designs subject to a constraint on the total number of samples available. For an approximation of this correct selection probability, we derive an asymptotically optimal allocation and propose an easy-to-implement heuristic sequential allocation procedure. Numerical experiments indicate that the resulting allocations are superior to other methods in the literature that we tested, and the relative efficiency increases for larger problems. In addition, preliminary numerical results indicate that the proposed new procedure has the potential to enhance computational efficiency for simulation optimization.
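As a rough illustration of the allocation idea (a simplified stand-in, not the paper's derived rule), the sketch below distributes a simulation budget so that designs whose means lie close to the top-m boundary, relative to their noise, receive more replications; all names and constants are assumptions.

```python
import numpy as np

def ocba_m_allocation(means, stds, m, budget):
    """Simplified OCBA-style allocation for selecting the top-m designs.

    Designs near the boundary between the m-th and (m+1)-th best means get
    more samples; allocation ratios follow (std / distance-to-boundary)^2.
    """
    means, stds = np.asarray(means, float), np.asarray(stds, float)
    order = np.argsort(means)[::-1]                    # larger mean = better
    c = 0.5 * (means[order[m - 1]] + means[order[m]])  # top-m boundary value
    delta = np.abs(means - c) + 1e-12                  # distance to boundary
    ratio = (stds / delta) ** 2
    return np.floor(budget * ratio / ratio.sum()).astype(int)

# ten designs, select top 3, 1000 replications to distribute
print(ocba_m_allocation(means=[0.9, 0.85, 0.8, 0.78, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1],
                        stds=[0.3] * 10, m=3, budget=1000))
```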
Molecular dynamics simulations using temperature-enhanced essential dynamics replica exchange.
Kubitzki, Marcus B; de Groot, Bert L
2007-06-15
Today's standard molecular dynamics simulations of moderately sized biomolecular systems at full atomic resolution are typically limited to the nanosecond timescale and therefore suffer from limited conformational sampling. Efficient ensemble-preserving algorithms like replica exchange (REX) may alleviate this problem somewhat but are still computationally prohibitive due to the large number of degrees of freedom involved. Aiming at increased sampling efficiency, we present a novel simulation method combining the ideas of essential dynamics and REX. Unlike standard REX, in each replica only a selection of essential collective modes of a subsystem of interest (essential subspace) is coupled to a higher temperature, with the remainder of the system staying at a reference temperature, T(0). This selective excitation along with the replica framework permits efficient approximate ensemble-preserving conformational sampling and allows much larger temperature differences between replicas, thereby considerably enhancing sampling efficiency. Ensemble properties and sampling performance of the method are discussed using dialanine and guanylin test systems, with multi-microsecond molecular dynamics simulations of these test systems serving as references.
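The replica-exchange machinery underlying this method rests on a Metropolis swap criterion between neighboring temperatures. The snippet below shows the generic REX acceptance rule for reference; in the method above only the essential-subspace degrees of freedom are heated, which this simplified sketch does not reproduce.

```python
import numpy as np

KB = 0.0083145  # Boltzmann constant in kJ/(mol*K), typical MD units

def exchange_accepted(e_i, e_j, t_i, t_j, rng=None):
    """Generic replica-exchange (Metropolis) swap criterion.

    Accepts the swap of configurations between replicas at temperatures
    t_i and t_j with energies e_i and e_j, with probability
    min(1, exp[(1/kT_i - 1/kT_j) * (e_i - e_j)]).
    """
    rng = np.random.default_rng() if rng is None else rng
    delta = (1.0 / (KB * t_i) - 1.0 / (KB * t_j)) * (e_i - e_j)
    return delta >= 0 or rng.random() < np.exp(delta)
```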
Using speech recognition to enhance the Tongue Drive System functionality in computer access.
Huo, Xueliang; Ghovanloo, Maysam
2011-01-01
Tongue Drive System (TDS) is a wireless tongue operated assistive technology (AT), which can enable people with severe physical disabilities to access computers and drive powered wheelchairs using their volitional tongue movements. TDS offers six discrete commands, simultaneously available to the users, for pointing and typing as a substitute for mouse and keyboard in computer access, respectively. To enhance the TDS performance in typing, we have added a microphone, an audio codec, and a wireless audio link to its readily available 3-axial magnetic sensor array, and combined it with a commercially available speech recognition software, the Dragon Naturally Speaking, which is regarded as one of the most efficient ways for text entry. Our preliminary evaluations indicate that the combined TDS and speech recognition technologies can provide end users with significantly higher performance than using each technology alone, particularly in completing tasks that require both pointing and text entry, such as web surfing.
Soft computing techniques toward modeling the water supplies of Cyprus.
Iliadis, L; Maris, F; Tachos, S
2011-10-01
This research effort aims at the application of soft computing techniques to water resources management. More specifically, the target is the development of reliable soft computing models capable of estimating the water supply for the case of the "Germasogeia" mountainous watersheds in Cyprus. Initially, ε-regression support vector machine (ε-RSVM) and fuzzy-weighted ε-RSVM models have been developed that accept five input parameters. At the same time, reliable artificial neural networks have been developed to perform the same job. The 5-fold cross-validation approach has been employed in order to eliminate bad local behaviors and to produce a more representative training data set. Thus, fuzzy-weighted support vector regression (SVR) combined with the fuzzy partition has been employed in an effort to enhance the quality of the results. Several rational and reliable models have been produced that can enhance the efficiency of water policy designers. Copyright © 2011 Elsevier Ltd. All rights reserved.
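As a hedged sketch of the modeling pipeline described here, the following Python/scikit-learn code (an assumed stand-in for the paper's tooling) fits an ε-SVR with 5-fold cross-validation on synthetic five-parameter inputs; the data are placeholders, not the Cyprus records.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score, KFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Illustrative stand-ins for five watershed inputs and a water-supply target.
rng = np.random.default_rng(0)
X = rng.random((120, 5))
y = X @ np.array([0.5, 1.2, -0.3, 0.8, 0.1]) + 0.05 * rng.standard_normal(120)

# epsilon-SVR with feature scaling, scored by 5-fold cross-validation
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", epsilon=0.05, C=10.0))
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
print(scores.mean())
```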
Dynamic contrast enhanced CT in nodule characterization: How we review and report.
Qureshi, Nagmi R; Shah, Andrew; Eaton, Rosemary J; Miles, Ken; Gilbert, Fiona J
2016-07-18
Incidental indeterminate solitary pulmonary nodules (SPN) that measure less than 3 cm in size are an increasingly common finding on computed tomography (CT) worldwide. Once identified, there are a number of imaging strategies that can be performed to help with nodule characterization. These include interval CT, dynamic contrast enhanced computed tomography (DCE-CT), and (18)F-fluorodeoxyglucose positron emission tomography-computed tomography ((18)F-FDG-PET-CT). To date, the most cost-effective and efficient non-invasive test or combination of tests for optimal nodule characterization has yet to be determined. DCE-CT is a functional test that involves the acquisition of a dynamic series of images of a nodule before and following the administration of intravenous iodinated contrast medium. This article provides an overview of the current indications and limitations of DCE-CT in nodule characterization and a systematic approach to how to perform, analyse and interpret a DCE-CT scan.
VLSI implementation of a new LMS-based algorithm for noise removal in ECG signal
NASA Astrophysics Data System (ADS)
Satheeskumaran, S.; Sabrigiriraj, M.
2016-06-01
Least mean square (LMS)-based adaptive filters are widely deployed for removing artefacts in the electrocardiogram (ECG) because of their low computational cost. However, they possess a high mean square error (MSE) in noisy environments. The transform-domain variable step-size LMS algorithm reduces the MSE at the cost of computational complexity. In this paper, a variable step-size delayed LMS adaptive filter is used to remove the artefacts from the ECG signal for improved feature extraction. Dedicated digital signal processors provide fast processing, but they are not flexible. By using field-programmable gate arrays, pipelined architectures can be employed to enhance system performance. The pipelined architecture can enhance the operational efficiency of the adaptive filter and reduce power consumption. This technique provides a high signal-to-noise ratio and low MSE with reduced computational complexity; hence, it is a useful method for monitoring patients with heart-related problems.
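The core of any LMS-based canceller is a stochastic-gradient weight update driven by the error between the noisy ECG and a filtered noise reference. The sketch below shows only the plain LMS loop; the paper's variable step-size delayed variant modifies this update for pipelined hardware and is not reproduced here.

```python
import numpy as np

def lms_filter(d, x, n_taps=8, mu=0.01):
    """Plain LMS adaptive noise canceller.

    d: noisy ECG samples, x: noise reference samples.
    Returns the error signal e, which approximates the cleaned ECG.
    """
    w = np.zeros(n_taps)
    e = np.zeros(len(d))
    for n in range(n_taps, len(d)):
        u = x[n - n_taps:n][::-1]        # most recent reference samples
        y = w @ u                        # estimated noise component
        e[n] = d[n] - y                  # noise-cancelled output
        w += 2 * mu * e[n] * u           # stochastic-gradient weight update
    return e
```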
NASA Technical Reports Server (NTRS)
Rathjen, K. A.; Burk, H. O.
1983-01-01
The computer code CAVE (Conduction Analysis via Eigenvalues) is a convenient and efficient computer code for predicting two-dimensional temperature histories within thermal protection systems for hypersonic vehicles. The capabilities of CAVE were enhanced by incorporating the following features into the code: real-gas effects in the aerodynamic heating predictions; a geometry and aerodynamic heating package for analyses of cone-shaped bodies; an input option to change from laminar to turbulent heating predictions on leading edges; a modification to account for the reduction in adiabatic wall temperature with increase in leading-edge sweep; a geometry package for a two-dimensional scramjet engine sidewall, with an option for heat transfer to external and internal surfaces; a printout modification to provide tables of selected temperatures for plotting and storage; and modifications to the radiation calculation procedure to eliminate temperature oscillations induced by high heating rates. These new features are described.
Computer enhancement through interpretive techniques
NASA Technical Reports Server (NTRS)
Foster, G.; Spaanenburg, H. A. E.; Stumpf, W. E.
1972-01-01
The improvement in the usage of the digital computer through interpretation rather than compilation of higher-order languages was investigated by studying the efficiency of coding and execution of programs written in FORTRAN, ALGOL, PL/I and COBOL. FORTRAN was selected as the high-level language for examining programs which were compiled, and A Programming Language (APL) was chosen for the interpretive language. It is concluded that APL is competitive, not because it and the algorithms being executed are well written, but rather because batch processing is less efficient than has been admitted. There is not a broad base of experience founded on trying different implementation strategies targeted at open competition with traditional processing methods.
Parallel-vector out-of-core equation solver for computational mechanics
NASA Technical Reports Server (NTRS)
Qin, J.; Agarwal, T. K.; Storaasli, O. O.; Nguyen, D. T.; Baddourah, M. A.
1993-01-01
A parallel/vector out-of-core equation solver is developed for shared-memory computers, such as the Cray Y-MP machine. The input/output (I/O) time is reduced by using the asynchronous BUFFER IN and BUFFER OUT, which can be executed simultaneously with the CPU instructions. The parallel and vector capability provided by the supercomputers is also exploited to enhance the performance. Numerical applications in large-scale structural analysis are given to demonstrate the efficiency of the present out-of-core solver.
Implications of a quadratic stream definition in radiative transfer theory.
NASA Technical Reports Server (NTRS)
Whitney, C.
1972-01-01
An explicit definition of the radiation-stream concept is stated and applied to approximate the integro-differential equation of radiative transfer with a set of twelve coupled differential equations. Computational efficiency is enhanced by distributing the corresponding streams in three-dimensional space in a totally symmetric way. Polarization is then incorporated in this model. A computer program based on the model is briefly compared with a Monte Carlo program for simulation of horizon scans of the earth's atmosphere. It is found to be considerably faster.
NASA Astrophysics Data System (ADS)
Yu, Hyeonseung; Lee, KyeoReh; Park, YongKeun
2017-02-01
Developing an efficient strategy for light focusing through scattering media is an important topic in the study of multiple light scattering. The enhancement factor of the light focusing, defined as the ratio between the optimized intensity and the background intensity, is proportional to the number of controlling modes in a spatial light modulator (SLM). The enhancement factors demonstrated in previous studies are typically less than 1,000 due to several limiting factors, such as the slow refresh rate of an LCoS SLM, long optimization time, and the lack of an efficient algorithm for large numbers of controlling modes. A digital micromirror device (DMD) is an amplitude modulator that has recently been widely used for fast optimization through dynamic biological tissues. The fast frame rate of the DMD, up to 16 kHz, can also be exploited to increase the number of controlling modes. However, the manipulation of large pattern data and the efficient calculation of the optimized pattern have remained an issue. In this work, we demonstrate an enhancement factor of more than 100,000 in focusing through scattering media by using one million controlling modes of a DMD. Through careful synchronization between the DMD, a photodetector and an additional computer for parallel optimization, we achieved this unprecedented enhancement factor with 75 minutes of optimization time. We discuss the design principles of the system and possible applications of the enhanced light focusing.
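Binary-amplitude wavefront optimization of this kind is often described as a greedy search over mirror segments. The sketch below shows one such loop, assuming a measure(pattern) callback that returns the detector intensity; it is a conceptual serial illustration, not the parallel synchronized system described above.

```python
import numpy as np

def optimize_dmd(measure, n_modes, n_passes=1, rng=None):
    """Greedy binary-amplitude optimization of a DMD mirror pattern.

    `measure(pattern)` is assumed to return the focus intensity (photodetector
    reading) for a given on/off pattern; each segment is flipped in turn and
    the flip is kept only if the measured intensity improves.
    """
    rng = np.random.default_rng() if rng is None else rng
    pattern = rng.integers(0, 2, n_modes)
    best = measure(pattern)
    for _ in range(n_passes):
        for i in range(n_modes):
            pattern[i] ^= 1                 # flip one mirror segment
            val = measure(pattern)
            if val > best:
                best = val                  # keep the improving flip
            else:
                pattern[i] ^= 1             # revert otherwise
    return pattern, best
```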
Static Extended Trailing Edge for Lift Enhancement: Experimental and Computational Studies
NASA Technical Reports Server (NTRS)
Liu, Tianshu; Montefort; Liou, William W.; Pantula, Srinivasa R.; Shams, Qamar A.
2007-01-01
A static extended trailing edge attached to a NACA0012 airfoil section is studied for achieving lift enhancement at a small drag penalty. It is indicated that the thin extended trailing edge can enhance the lift while the zero-lift drag is not significantly increased. Experiments and calculations are conducted to compare the aerodynamic characteristics of the extended trailing edge with those of Gurney flap and conventional flap. The extended trailing edge, as a simple mechanical device added on a wing without altering the basic configuration, has a good potential to improve the cruise flight efficiency.
Alternate Multilayer Gratings with Enhanced Diffraction Efficiency in the 500-5000 eV Energy Domain
NASA Astrophysics Data System (ADS)
Polack, François; Lagarde, Bruno; Idir, Mourad; Cloup, Audrey Liard; Jourdain, Erick; Roulliay, Marc; Delmotte, Franck; Gautier, Julien; Ravet-Krill, Marie-Françoise
2007-01-01
An alternate multilayer (AML) grating is a two-dimensional diffraction structure formed on an optical surface, having a 0.5 duty cycle in both the in-plane and in-depth directions. It can be made by covering a shallow-depth laminar grating with a multilayer stack. We show here that their 2D structure confers on AML gratings a high angular and energetic selectivity and therefore enhanced diffraction properties when used at grazing incidence. In the tender X-ray range (500 eV - 5000 eV) they behave much like blazed gratings. Over 15% efficiency has been measured on a 1200 lines/mm Mo/Si AML grating in the 1.2 - 1.5 keV energy range. Computer simulations show that selected multilayer materials such as Cr/C should allow diffraction efficiency over 50% at photon energies over 3 keV.
Increasingly mobile: How new technologies can enhance qualitative research
Moylan, Carrie Ann; Derr, Amelia Seraphia; Lindhorst, Taryn
2015-01-01
Advances in technology, such as the growth of smart phones, tablet computing, and improved access to the internet have resulted in many new tools and applications designed to increase efficiency and improve workflow. Some of these tools will assist scholars using qualitative methods with their research processes. We describe emerging technologies for use in data collection, analysis, and dissemination that each offer enhancements to existing research processes. Suggestions for keeping pace with the ever-evolving technological landscape are also offered. PMID:25798072
Ikebe, Jinzen; Umezawa, Koji; Higo, Junichi
2016-03-01
Molecular dynamics (MD) simulations using all-atom and explicit solvent models provide valuable information on the detailed behavior of protein-partner substrate binding at the atomic level. As the power of computational resources increases, MD simulations are being used more widely and easily. However, it is still difficult to investigate the thermodynamic properties of protein-partner substrate binding and protein folding with conventional MD simulations. Enhanced sampling methods have been developed to sample conformations that reflect equilibrium conditions more efficiently than conventional MD simulations, thereby allowing the construction of accurate free-energy landscapes. In this review, we discuss these enhanced sampling methods using a series of case-by-case examples. In particular, we review enhanced sampling methods conforming to trivial trajectory parallelization, virtual-system coupled multicanonical MD, and adaptive lambda square dynamics. These methods have been recently developed based on the existing method of multicanonical MD simulation. Their applications are reviewed with an emphasis on describing their practical implementation. In our concluding remarks we explore extensions of the enhanced sampling methods that may allow for even more efficient sampling.
Emerging Nanophotonic Applications Explored with Advanced Scientific Parallel Computing
NASA Astrophysics Data System (ADS)
Meng, Xiang
The domain of nanoscale optical science and technology is a combination of the classical world of electromagnetics and the quantum mechanical regime of atoms and molecules. Recent advancements in fabrication technology allow optical structures to be scaled down to nanoscale size or even to the atomic level, far smaller than the wavelength they are designed for. These nanostructures can have unique, controllable, and tunable optical properties, and their interactions with quantum materials can have important near-field and far-field optical responses. Undoubtedly, these optical properties can have many important applications, ranging from efficient and tunable light sources, detectors, filters, modulators, and high-speed all-optical switches to next-generation classical and quantum computation and biophotonic medical sensors. This emerging research field of nanoscience, known as nanophotonics, is highly interdisciplinary, requiring expertise in materials science, physics, electrical engineering, and scientific computing, modeling and simulation. It has also become an important research field for investigating the science and engineering of light-matter interactions that take place on wavelength and subwavelength scales, where the nature of the nanostructured matter controls the interactions. In addition, fast advancements in computing capabilities, such as parallel computing, have become a critical element for investigating advanced nanophotonic devices. This role has taken on even greater urgency with the scale-down of device dimensions, as the design of these devices requires extensive memory and extremely long core hours. Thus, distributed computing platforms associated with parallel computing are required for faster design processes. Scientific parallel computing constructs mathematical models and quantitative analysis techniques, and uses computing machines to analyze and solve otherwise intractable scientific challenges. In particular, parallel computing is a form of computation operating on the principle that large problems can often be divided into smaller ones, which are then solved concurrently. In this dissertation, we report a series of new nanophotonic developments using advanced parallel computing techniques. The applications include structure optimizations at the nanoscale, both to control the electromagnetic response of materials and to manipulate nanoscale structures for enhanced field concentration, which enable breakthroughs in imaging and sensing systems (chapters 3 and 4) and improve the spatio-temporal resolutions of spectroscopies (chapter 5). We also report investigations of the confinement of optical-matter interactions in the quantum mechanical regime, where novel size-dependent properties enhance a wide range of technologies, from tunable and efficient light sources and detectors to other nanophotonic elements with enhanced functionality (chapters 6 and 7).
Enhancing Image Findability through a Dual-Perspective Navigation Framework
ERIC Educational Resources Information Center
Lin, Yi-Ling
2013-01-01
This dissertation focuses on investigating whether users will locate desired images more efficiently and effectively when they are provided with information descriptors from both experts and the general public. This study develops a way to support image finding through a human-computer interface by providing subject headings and social tags about…
A hybrid architecture for the implementation of the Athena neural net model
NASA Technical Reports Server (NTRS)
Koutsougeras, C.; Papachristou, C.
1989-01-01
The implementation of a previously introduced neural-net model for pattern classification is considered. Data-flow principles are employed in the development of a machine that efficiently implements the model and can be useful for real-time classification tasks. Further enhancement with optical computing structures is also considered.
USDA-ARS?s Scientific Manuscript database
BACKGROUND: Next-generation sequencing projects commonly commence by aligning reads to a reference genome assembly. While improvements in alignment algorithms and computational hardware have greatly enhanced the efficiency and accuracy of alignments, a significant percentage of reads often remain u...
Image Perception Wavelet Simulation and Enhancement for the Visually Impaired.
1994-12-01
Computations of Combustion-Powered Actuation for Dynamic Stall Suppression
NASA Technical Reports Server (NTRS)
Jee, Solkeun; Bowles, Patrick O.; Matalanis, Claude G.; Min, Byung-Young; Wake, Brian E.; Crittenden, Tom; Glezer, Ari
2016-01-01
A computational framework for the simulation of dynamic stall suppression with combustion-powered actuation (COMPACT) is validated against wind tunnel experimental results on a VR-12 airfoil. COMPACT slots are located at 10% chord from the leading edge of the airfoil and directed tangentially along the suction-side surface. Helicopter rotor-relevant flow conditions are used in the study. A computationally efficient two-dimensional approach, based on unsteady Reynolds-averaged Navier-Stokes (RANS), is compared in detail against the baseline and the modified airfoil with COMPACT, using aerodynamic forces, pressure profiles, and flow-field data. The two-dimensional RANS approach predicts baseline static and dynamic stall very well. Most of the differences between the computational and experimental results are within two standard deviations of the experimental data. The current framework demonstrates an ability to predict COMPACT efficacy across the experimental dataset. Enhanced aerodynamic lift on the downstroke of the pitching cycle due to COMPACT is well predicted, and the cycle-averaged lift enhancement computed is within 3% of the test data. Differences with experimental data are discussed with a focus on three-dimensional features not included in the simulations and the limited computational model for COMPACT.
Lee, Chany; Jung, Young-Jin; Lee, Sang Jun; Im, Chang-Hwan
2017-02-01
Since there is no way to measure the electric current generated by transcranial direct current stimulation (tDCS) inside the human head through in vivo experiments, numerical analysis based on the finite element method has been widely used to estimate the electric field inside the head. In 2013, we released a MATLAB toolbox named COMETS, which has been used by a number of groups and has helped researchers to gain insight into the electric field distribution during stimulation. The aim of this study was to develop an advanced MATLAB toolbox, named COMETS2, for the numerical analysis of the electric field generated by tDCS. COMETS2 can generate rectangular pad electrodes of any size at any position on the scalp surface. To reduce the large computational burden incurred when repeatedly testing multiple electrode locations and sizes, a new technique to decompose the global stiffness matrix was proposed. As examples of potential applications, we observed the effects of electrode size and displacement on the results of the electric field analysis. The proposed mesh decomposition method significantly enhanced the overall computational efficiency. We implemented an automatic electrode modeler for the first time, and proposed a new technique to enhance the computational efficiency. In this paper, an efficient toolbox for tDCS analysis is introduced (freely available at http://www.cometstool.com). It is expected that COMETS2 will be a useful toolbox for researchers who want to benefit from the numerical analysis of electric fields generated by tDCS. Copyright © 2016. Published by Elsevier B.V.
Kruskal-Wallis-based computationally efficient feature selection for face recognition.
Ali Khan, Sajid; Hussain, Ayyaz; Basit, Abdul; Akram, Sheeraz
2014-01-01
Face recognition and its applications have attained much importance in today's technological world. Most of the existing work used frontal face images for classification; however, these techniques fail when applied to real-world face images. The proposed technique effectively extracts the prominent facial features. Many of the features are redundant and do not contribute to representing the face. In order to eliminate these redundant features, a computationally efficient Kruskal-Wallis-based algorithm is used to select the more discriminative face features. The extracted features are then passed to the classification step, in which different classifiers are ensembled to enhance the recognition accuracy rate, as a single classifier is unable to achieve high accuracy. Experiments are performed on standard face database images and the results are compared with existing techniques.
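The selection step can be illustrated with SciPy's Kruskal-Wallis test: features whose distributions differ most across classes (largest H statistic) are kept. The sketch below is an assumed illustration of that idea on synthetic data, not the paper's implementation.

```python
import numpy as np
from scipy.stats import kruskal

def kw_select(X, y, k):
    """Rank features by the Kruskal-Wallis H statistic and keep the top k.

    A larger H means the feature's distribution differs more across classes,
    i.e. the feature is more discriminative.
    """
    classes = np.unique(y)
    h_stats = [kruskal(*(X[y == c, j] for c in classes)).statistic
               for j in range(X.shape[1])]
    return np.argsort(h_stats)[::-1][:k]   # indices of the k best features

# toy example: 100 face feature vectors of length 50, 5 subject classes
rng = np.random.default_rng(1)
X, y = rng.standard_normal((100, 50)), rng.integers(0, 5, 100)
selected = kw_select(X, y, k=10)
```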
Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu
2015-11-11
To address the high computational cost of the traditional Kalman filter in SINS/GPS integration, a practical optimization algorithm with offline-derivation and parallel-processing methods based on the numerical characteristics of the system is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure. Thus, plenty of invalid operations can be avoided by offline derivation using a block-matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring the calculation processes after analyzing the extracted "useful" data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed by a classical Kalman filter. Meanwhile, the method, as a numerical approach, needs no precision-loss transformation/approximation of system modules, and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency.
Identifying Factors that Contribute to the Satisfaction of Students in E-Learning
ERIC Educational Resources Information Center
Calli, Levent; Balcikanli, Cem; Calli, Fatih; Cebeci, Halil Ibrahim; Seymen, Omer Faruk
2013-01-01
There has been an increasing interest in the application of e-learning through the enhancement of internet and computer technologies. Satisfaction has appeared as a key factor in order to develop efficient course content in line with students' demands and expectations. Thus, a lot of research has been conducted on the concept of satisfaction in…
NASA Technical Reports Server (NTRS)
Tomayko, James E.
1986-01-01
Twenty-five years of spacecraft onboard computer development have resulted in a better understanding of the requirements for effective, efficient, and fault-tolerant flight computer systems. Lessons from eight flight programs (Gemini, Apollo, Skylab, Shuttle, Mariner, Voyager, and Galileo) and three research programs (digital fly-by-wire, STAR, and the Unified Data System) are useful in projecting the computer hardware configuration of the Space Station and the ways in which the Ada programming language will enhance the development of the necessary software. The evolution of hardware technology, fault protection methods, and software architectures used in space flight is reviewed to provide insight into the pending development of such items for the Space Station.
Hadoop-MCC: Efficient Multiple Compound Comparison Algorithm Using Hadoop.
Hua, Guan-Jie; Hung, Che-Lun; Tang, Chuan Yi
2018-01-01
In the past decade, drug design technologies have improved enormously. Computer-aided drug design (CADD) has played an important role in analysis and prediction in drug development, making the procedure more economical and efficient. However, computation with big data, such as ZINC, containing more than 60 million compounds, and GDB-13, with more than 930 million small molecules, poses a notable time-consumption problem. Therefore, we propose a novel heterogeneous high-performance computing method, named Hadoop-MCC, integrating Hadoop and GPU, to cope with big chemical structure data efficiently. Hadoop-MCC gains high availability and fault tolerance from Hadoop, as Hadoop is used to scatter input data to GPU devices and gather the results from them. The Hadoop framework adopts a mapper/reducer computation model. In the proposed method, mappers are responsible for fetching SMILES data segments and performing the LINGO method on the GPU, and reducers then collect all comparison results produced by the mappers. Due to the high availability of Hadoop, all LINGO computational jobs on the mappers can be completed even if some of the mappers encounter problems. LINGO comparisons are performed on each GPU device in parallel. According to the experimental results, the proposed method on multiple GPU devices can achieve better computational performance than CUDA-MCC on a single GPU device. Hadoop-MCC is able to achieve scalability, high availability, and fault tolerance granted by Hadoop, as well as high performance, by integrating the computational power of both Hadoop and GPU. It has been shown that a heterogeneous architecture such as Hadoop-MCC can effectively deliver better computational performance than a single GPU device. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
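A common form of the LINGO comparison splits each SMILES string into overlapping q-character substrings and scores pairs by a Tanimoto coefficient over those multisets. The serial Python sketch below illustrates one such variant (the actual system runs these comparisons on GPUs under Hadoop); the compound strings are placeholders.

```python
from collections import Counter

def lingos(smiles, q=4):
    """Multiset of overlapping q-character substrings (LINGOs) of a SMILES."""
    return Counter(smiles[i:i + q] for i in range(len(smiles) - q + 1))

def lingo_tanimoto(sm_a, sm_b):
    """Tanimoto coefficient over LINGO multisets (one common LINGO variant)."""
    a, b = lingos(sm_a), lingos(sm_b)
    union = sum((a | b).values())
    return sum((a & b).values()) / union if union else 0.0

# a map step would score SMILES segments on the GPU; this is a serial stand-in
compounds = ["CCOC(=O)C", "CC(=O)Oc1ccccc1C(=O)O", "c1ccccc1O"]
query = "CC(=O)OCC"
scores = [(c, lingo_tanimoto(query, c)) for c in compounds]
```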
Customizing FP-growth algorithm to parallel mining with Charm++ library
NASA Astrophysics Data System (ADS)
Puścian, Marek
2017-08-01
This paper presents a frequent-item mining algorithm that was customized to handle growing data repositories. The proposed solution applies a master-slave scheme to the frequent pattern growth (FP-growth) technique. Efficient utilization of the available computation units is achieved by dynamic reallocation of tasks: conditional frequent trees are assigned to parallel workers based on their workload. The proposed enhancements have been successfully implemented using the Charm++ library. This paper discusses the performance of the parallelized FP-growth algorithm on different datasets. The approach is illustrated with many experiments and measurements performed on a multiprocessor, multithreaded computer.
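The workload-driven master-slave idea can be sketched in a few lines: sort conditional-tree tasks by estimated cost and let idle workers pull the next task. The Python multiprocessing analogue below is only illustrative; the paper's implementation uses the Charm++ library in C++, and all names here are assumptions.

```python
from multiprocessing import Pool

def mine_conditional_tree(task):
    """Stand-in for mining one conditional FP-tree; returns found patterns."""
    item, workload = task
    return [f"{item}-pattern-{i}" for i in range(workload % 3 + 1)]

if __name__ == "__main__":
    # The master hands out heavier conditional trees first so that idle
    # workers dynamically pick up remaining tasks -- a simple analogue of
    # workload-based reallocation.
    tasks = [("a", 120), ("b", 45), ("c", 300), ("d", 80)]
    tasks.sort(key=lambda t: t[1], reverse=True)   # largest workload first
    with Pool(processes=4) as pool:
        results = pool.map(mine_conditional_tree, tasks, chunksize=1)
```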
Computer tools for systems engineering at LaRC
NASA Technical Reports Server (NTRS)
Walters, J. Milam
1994-01-01
The Systems Engineering Office (SEO) has been established to provide life-cycle systems engineering support to Langley Research Center projects. Over the last two years, the computing market has been reviewed for tools which could enhance the effectiveness and efficiency of activities directed towards this mission. A group of interrelated applications has been procured, or is under development, including a requirements management tool, a system design and simulation tool, and a project and engineering database. This paper will review the current configuration of these tools and provide information on future milestones and directions.
High-efficiency wavefunction updates for large scale Quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Kent, Paul; McDaniel, Tyler; Li, Ying Wai; D'Azevedo, Ed
Within ab initio Quantum Monte Carlo (QMC) simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunctions. The evaluation of each Monte Carlo move requires finding the determinant of a dense matrix, which is traditionally evaluated iteratively using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. For calculations with thousands of electrons, this operation dominates the execution profile. We propose a novel rank-k delayed update scheme. This strategy enables probability evaluation for multiple successive Monte Carlo moves, with application of accepted moves to the matrices delayed until after a predetermined number of moves, k. Accepted events grouped in this manner are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency. This procedure does not change the underlying Monte Carlo sampling or the sampling efficiency. For large systems and algorithms such as diffusion Monte Carlo where the acceptance ratio is high, order-of-magnitude speedups can be obtained on both multi-core CPUs and GPUs, making this algorithm highly advantageous for current petascale and future exascale computations.
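The contrast between the traditional rank-1 update and a delayed rank-k update is captured by the Sherman-Morrison and Woodbury identities. The numpy sketch below verifies both against a direct inverse; the matrix size and the rank k = 8 are arbitrary illustrative choices, not the paper's code.

```python
import numpy as np

def sherman_morrison(Ainv, u, v):
    """Rank-1 update of an inverse: (A + u v^T)^-1 computed from A^-1."""
    Au = Ainv @ u
    return Ainv - np.outer(Au, v @ Ainv) / (1.0 + v @ Au)

def woodbury(Ainv, U, V):
    """Delayed rank-k update: (A + U V^T)^-1 via the Woodbury identity.

    Applying k accepted moves at once replaces k rank-1 sweeps with one
    small k x k solve plus matrix-matrix products of high arithmetic
    intensity -- the idea behind the delayed update scheme.
    """
    k = U.shape[1]
    S = np.eye(k) + V.T @ Ainv @ U        # small k x k capacitance matrix
    return Ainv - Ainv @ U @ np.linalg.solve(S, V.T @ Ainv)

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200)) + 200 * np.eye(200)
Ainv = np.linalg.inv(A)
U, V = rng.standard_normal((200, 8)), rng.standard_normal((200, 8))
assert np.allclose(woodbury(Ainv, U, V), np.linalg.inv(A + U @ V.T))
```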
Enabling smart personalized healthcare: a hybrid mobile-cloud approach for ECG telemonitoring.
Wang, Xiaoliang; Gui, Qiong; Liu, Bingwei; Jin, Zhanpeng; Chen, Yu
2014-05-01
The severe challenges of the skyrocketing healthcare expenditure and the fast aging population highlight the needs for innovative solutions supporting more accurate, affordable, flexible, and personalized medical diagnosis and treatment. Recent advances of mobile technologies have made mobile devices a promising tool to manage patients' own health status through services like telemedicine. However, the inherent limitations of mobile devices make them less effective in computation- or data-intensive tasks such as medical monitoring. In this study, we propose a new hybrid mobile-cloud computational solution to enable more effective personalized medical monitoring. To demonstrate the efficacy and efficiency of the proposed approach, we present a case study of mobile-cloud based electrocardiograph monitoring and analysis and develop a mobile-cloud prototype. The experimental results show that the proposed approach can significantly enhance the conventional mobile-based medical monitoring in terms of diagnostic accuracy, execution efficiency, and energy efficiency, and holds the potential in addressing future large-scale data analysis in personalized healthcare.
Development of Semi-Automatic Lathe by using Intelligent Soft Computing Technique
NASA Astrophysics Data System (ADS)
Sakthi, S.; Niresh, J.; Vignesh, K.; Anand Raj, G.
2018-03-01
This paper discusses the enhancement of a conventional lathe machine to a semi-automated lathe machine by implementing a soft computing method. In the present scenario, the lathe machine plays a vital role in the engineering division of the manufacturing industry. While manual lathe machines are economical, their accuracy and efficiency are not up to the mark. On the other hand, CNC machines provide the desired accuracy and efficiency, but require a huge capital investment. To overcome this situation, a semi-automated approach to the conventional lathe machine is developed by fitting stepper motors, controlled by an Arduino UNO microcontroller, to the horizontal and vertical drives. Based on the input parameters of the lathe operation, the Arduino code is generated and transferred to the UNO board. Upgrading from manual to semi-automatic lathe machines can thus significantly increase accuracy and efficiency while, at the same time, keeping a check on investment cost, and consequently provide a much-needed boost to the manufacturing industry.
Performance and reliability enhancement of linear coolers
NASA Astrophysics Data System (ADS)
Mai, M.; Rühlich, I.; Schreiter, A.; Zehner, S.
2010-04-01
The highest possible efficiency is a crucial requirement for modern tactical IR cryocooling systems. To enhance overall efficiency, AIM cryocooler designs were reassessed, considering all relevant loss mechanisms and associated components. The investigation was based on state-of-the-art simulation software featuring magnet circuitry analysis as well as computational fluid dynamics (CFD) to realistically replicate thermodynamic interactions. As a result, an improved design for AIM linear coolers could be derived. This paper gives an overview of performance enhancement activities and major results. An additional key requirement for cryocoolers is reliability. Recently, AIM has introduced linear coolers with full flexure-bearing suspension on both ends of the driving mechanism, incorporating a moving-magnet piston drive. In conjunction with a pulse-tube coldfinger, these coolers are capable of meeting MTTFs (mean time to failure) in excess of 50,000 hours, offering superior reliability for space applications. Ongoing development also focuses on reliability enhancement, deriving space technology into tactical solutions that combine excellent specific performance with space-grade reliability. This publication summarizes the progress of this reliability program and gives further prospects.
Pernice, W H; Payne, F P; Gallagher, D F
2007-09-03
We present a novel numerical scheme for the simulation of the field enhancement by metal nano-particles in the time domain. The algorithm is based on a combination of the finite-difference time-domain method and the pseudo-spectral time-domain method for dispersive materials. The hybrid solver leads to an efficient subgridding algorithm that does not suffer from spurious field spikes as do FDTD schemes. Simulation of the field enhancement by gold particles shows the expected exponential field profile. The enhancement factors are computed for single particles and particle arrays. Due to the geometry conforming mesh the algorithm is stable for long integration times and thus suitable for the simulation of resonance phenomena in coupled nano-particle structures.
On configurational forces for gradient-enhanced inelasticity
NASA Astrophysics Data System (ADS)
Floros, Dimosthenis; Larsson, Fredrik; Runesson, Kenneth
2018-04-01
In this paper we discuss how configurational forces can be computed in an efficient and robust manner when a constitutive continuum model of gradient-enhanced viscoplasticity is adopted, whereby a suitably tailored mixed variational formulation in terms of displacements and micro-stresses is used. It is demonstrated that such a formulation produces sufficient regularity to overcome numerical difficulties that are notorious for a local constitutive model. In particular, no nodal smoothing of the internal variable fields is required. Moreover, the pathological mesh sensitivity that has been reported in the literature for a standard local model is no longer present. Numerical results in terms of configurational forces are shown for (1) a smooth interface and (2) a discrete edge crack. The corresponding configurational forces are computed for different values of the intrinsic length parameter. It is concluded that the convergence of the computed configurational forces with mesh refinement depends strongly on this parameter value. Moreover, the convergence behavior for the limit situation of rate-independent plasticity is unaffected by the relaxation time parameter.
Metasurface holograms reaching 80% efficiency.
Zheng, Guoxing; Mühlenbernd, Holger; Kenney, Mitchell; Li, Guixin; Zentgraf, Thomas; Zhang, Shuang
2015-04-01
Surfaces covered by ultrathin plasmonic structures--so-called metasurfaces--have recently been shown to be capable of completely controlling the phase of light, representing a new paradigm for the design of innovative optical elements such as ultrathin flat lenses, directional couplers for surface plasmon polaritons and wave plates for vortex beam generation. Among the various types of metasurfaces, geometric metasurfaces, which consist of an array of plasmonic nanorods with spatially varying orientations, have shown superior phase control due to the geometric nature of their phase profile. Metasurfaces have recently been used to make computer-generated holograms, but the hologram efficiency remained too low at visible wavelengths for practical purposes. Here, we report the design and realization of a geometric metasurface hologram reaching diffraction efficiencies of 80% at 825 nm and a broad bandwidth between 630 nm and 1,050 nm. The 16-level-phase computer-generated hologram demonstrated here combines the advantages of a geometric metasurface for the superior control of the phase profile and of reflectarrays for achieving high polarization conversion efficiency. Specifically, the design of the hologram integrates a ground metal plane with a geometric metasurface that enhances the conversion efficiency between the two circular polarization states, leading to high diffraction efficiency without complicating the fabrication process. Because of these advantages, our strategy could be viable for various practical holographic applications.
Optimized distributed computing environment for mask data preparation
NASA Astrophysics Data System (ADS)
Ahn, Byoung-Sup; Bang, Ju-Mi; Ji, Min-Kyu; Kang, Sun; Jang, Sung-Hoon; Choi, Yo-Han; Ki, Won-Tai; Choi, Seong-Woon; Han, Woo-Sung
2005-11-01
As the critical dimension (CD) becomes smaller, various resolution enhancement techniques (RET) are widely adopted. In developing sub-100 nm devices, the complexity of optical proximity correction (OPC) is severely increased, and OPC is applied to non-critical layers as well. The transformation of designed pattern data by the OPC operation adds complexity, which causes runtime overheads in subsequent steps such as mask data preparation (MDP) and collapses the existing design hierarchy. Therefore, many mask shops exploit distributed computing to reduce the runtime of mask data preparation rather than exploit the design hierarchy. Distributed computing uses a cluster of computers connected to a local network system. However, two things limit the benefit of distributed computing in MDP. First, a sequential MDP job that uses the maximum number of available CPUs is not efficient compared with parallel MDP job execution, due to the input data characteristics. Second, the runtime enhancement over input cost is not sufficient, since the scalability of fracturing tools is limited. In this paper, we discuss an optimal load-balancing environment that is useful in increasing the uptime of a distributed computing system by assigning an appropriate number of CPUs for each input design data set. We also describe distributed processing (DP) parameter optimization to obtain maximum throughput in MDP job processing.
NASA Astrophysics Data System (ADS)
Kim, Euiyoung; Cho, Maenghyo
2017-11-01
In most non-linear analyses, the construction of a system matrix uses a large amount of computation time, comparable to the computation time required by the solving process. If the process for computing non-linear internal force matrices is substituted with an effective equivalent model that enables the bypass of numerical integrations and assembly processes used in matrix construction, efficiency can be greatly enhanced. A stiffness evaluation procedure (STEP) establishes non-linear internal force models using polynomial formulations of displacements. To efficiently identify an equivalent model, the method has evolved such that it is based on a reduced-order system. The reduction process, however, makes the equivalent model difficult to parameterize, which significantly affects the efficiency of the optimization process. In this paper, therefore, a new STEP, E-STEP, is proposed. Based on the element-wise nature of the finite element model, the stiffness evaluation is carried out element-by-element in the full domain. Since the unit of computation for the stiffness evaluation is restricted by element size, and since the computation is independent, the equivalent model can be constructed efficiently in parallel, even in the full domain. Due to the element-wise nature of the construction procedure, the equivalent E-STEP model is easily characterized by design parameters. Various reduced-order modeling techniques can be applied to the equivalent system in a manner similar to how they are applied in the original system. The reduced-order model based on E-STEP is successfully demonstrated for the dynamic analyses of non-linear structural finite element systems under varying design parameters.
An adaptive discontinuous Galerkin solver for aerodynamic flows
NASA Astrophysics Data System (ADS)
Burgess, Nicholas K.
This work considers the accuracy, efficiency, and robustness of an unstructured high-order accurate discontinuous Galerkin (DG) solver for computational fluid dynamics (CFD). Recently, there has been a drive to reduce the discretization error of CFD simulations using high-order methods on unstructured grids. However, high-order methods are often criticized for lacking robustness and having high computational cost. The goal of this work is to investigate methods that enhance the robustness of high-order discontinuous Galerkin (DG) methods on unstructured meshes, while maintaining low computational cost and high accuracy of the numerical solutions. This work investigates robustness enhancement of high-order methods by examining effective non-linear solvers, shock capturing methods, turbulence model discretizations and adaptive refinement techniques. The goal is to develop an all-encompassing solver that can simulate a large range of physical phenomena, where all aspects of the solver work together to achieve a robust, efficient and accurate solution strategy. The components and framework for a robust high-order accurate solver that is capable of solving viscous, Reynolds Averaged Navier-Stokes (RANS) and shocked flows are presented. In particular, this work discusses robust discretizations of the turbulence model equation used to close the RANS equations, as well as stable shock capturing strategies that are applicable across a wide range of discretization orders and applicable to very strong shock waves. Furthermore, refinement techniques are considered as both efficiency and robustness enhancement strategies. Additionally, efficient non-linear solvers based on multigrid and Krylov subspace methods are presented. The accuracy, efficiency, and robustness of the solver are demonstrated using a variety of challenging aerodynamic test problems, which include turbulent high-lift and viscous hypersonic flows. Adaptive mesh refinement was found to play a critical role in obtaining a robust and efficient high-order accurate flow solver. A goal-oriented error estimation technique has been developed to estimate the discretization error of simulation outputs. For high-order discretizations, it is shown that functional output error super-convergence can be obtained, provided the discretization satisfies a property known as dual consistency. The dual consistency of the DG methods developed in this work is shown via mathematical analysis and numerical experimentation. Goal-oriented error estimation is also used to drive an hp-adaptive mesh refinement strategy, where a combination of mesh or h-refinement, and order or p-enrichment, is employed based on the smoothness of the solution. The results demonstrate that the combination of goal-oriented error estimation and hp-adaptation yields superior accuracy, as well as enhanced robustness and efficiency for a variety of aerodynamic flows including flows with strong shock waves. This work demonstrates that DG discretizations can be the basis of an accurate, efficient, and robust CFD solver. Furthermore, enhancing the robustness of DG methods does not adversely impact the accuracy or efficiency of the solver for challenging and complex flow problems. In particular, when considering the computation of shocked flows, this work demonstrates that the available shock capturing techniques are sufficiently accurate and robust, particularly when used in conjunction with adaptive mesh refinement.
This work also demonstrates that robust solutions of the Reynolds Averaged Navier-Stokes (RANS) and turbulence model equations can be obtained for complex and challenging aerodynamic flows. In this context, the most robust strategy was determined to be a low-order turbulence model discretization coupled to a high-order discretization of the RANS equations. Although RANS solutions using high-order accurate discretizations of the turbulence model were obtained, the behavior of current-day RANS turbulence models discretized to high order was found to be problematic, leading to solver robustness issues. This suggests that future work is warranted in the area of turbulence model formulation for use with high-order discretizations. Alternatively, the use of Large-Eddy Simulation (LES) subgrid scale models with high-order DG methods offers the potential to leverage the high accuracy of these methods for very high fidelity turbulent simulations. This thesis has developed the algorithmic improvements that will lay the foundation for the development of a three-dimensional high-order flow solution strategy that can be used as the basis for future LES simulations.
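To make the hp-decision concrete, the following is a minimal sketch (illustrative only, not the thesis code; the field names, error indicator, and thresholds are assumptions) of how an adjoint-weighted error indicator and a smoothness sensor can drive the choice between h-refinement and p-enrichment:

```python
# Illustrative hp-decision loop (a sketch, not the thesis code; the field
# names, error indicator, and thresholds are assumptions). Elements whose
# adjoint-weighted error exceeds the tolerance are either p-enriched (smooth
# solution) or flagged for h-refinement (non-smooth, e.g. near a shock).
def hp_adapt(elements, error_tol=1e-6, smooth_tol=0.8):
    for elem in elements:
        if elem["error"] <= error_tol:
            continue                      # meets the output-error target
        if elem["smoothness"] >= smooth_tol:
            elem["p"] += 1                # smooth: raise polynomial order
        else:
            elem["h_refine"] = True       # non-smooth: subdivide the cell
    return elements
```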
Mori, Takaharu; Miyashita, Naoyuki; Im, Wonpil; Feig, Michael; Sugita, Yuji
2016-01-01
This paper reviews various enhanced conformational sampling methods and explicit/implicit solvent/membrane models, as well as their recent applications to the exploration of the structure and dynamics of membranes and membrane proteins. Molecular dynamics simulations have become an essential tool to investigate biological problems, and their success relies on proper molecular models together with efficient conformational sampling methods. The implicit representation of solvent/membrane environments is a reasonable approximation to the explicit all-atom models, considering the balance between computational cost and simulation accuracy. Implicit models can be easily combined with replica-exchange molecular dynamics methods to explore a wider conformational space of a protein. Other molecular models and enhanced conformational sampling methods are also briefly discussed. As application examples, we introduce recent simulation studies of glycophorin A, phospholamban, amyloid precursor protein, and mixed lipid bilayers and discuss the accuracy and efficiency of each simulation model and method. This article is part of a Special Issue entitled: Membrane Proteins. Guest Editors: J.C. Gumbart and Sergei Noskov. PMID:26766517
Li, Congcong; Zhang, Xi; Wang, Haiping; Li, Dongfeng
2018-01-11
Vehicular sensor networks have been widely applied in intelligent traffic systems in recent years. Because of the specificity of vehicular sensor networks, they require an enhanced, secure and efficient authentication scheme. Existing authentication protocols are vulnerable to problems such as high computational overhead for certificate distribution and revocation, strong reliance on tamper-proof devices, limited scalability when building many secure channels, and an inability to detect hardware tampering attacks. In this paper, an improved authentication scheme using certificateless public key cryptography is proposed to address these problems. A security analysis of our scheme shows that our protocol provides enhanced, secure anonymous authentication that is resilient against major security threats. Furthermore, the proposed scheme reduces the incidence of node compromise and replication attacks. The scheme also provides a malicious-node detection and warning mechanism, which can quickly identify compromised static nodes and immediately alert the administrative department. Performance evaluations show that the scheme obtains better trade-offs between security and efficiency than the well-known available schemes.
A computer vision for animal ecology.
Weinstein, Ben G
2018-05-01
A central goal of animal ecology is to observe species in the natural world. The cost and challenge of data collection often limit the breadth and scope of ecological study. Ecologists often use image capture to bolster data collection in time and space. However, the ability to process these images remains a bottleneck. Computer vision can greatly increase the efficiency, repeatability and accuracy of image review. Computer vision uses image features, such as colour, shape and texture to infer image content. I provide a brief primer on ecological computer vision to outline its goals, tools and applications to animal ecology. I reviewed 187 existing applications of computer vision and divided articles into ecological description, counting and identity tasks. I discuss recommendations for enhancing the collaboration between ecologists and computer scientists and highlight areas for future growth of automated image analysis. © 2017 The Author. Journal of Animal Ecology © 2017 British Ecological Society.
NASA Technical Reports Server (NTRS)
Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary
2013-01-01
With the wide availability of affordable multiple-core parallel supercomputers, next generation numerical simulations of flow physics are being focused on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most algorithms and computational fluid dynamics codes currently available. This paper focuses on the developmental effort for high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: 1) time-accurate local time stepping and 2) the high-order CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across cells with different marching time steps. Such an approach relieves the stringent time step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all the cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.
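A rough illustration of time-accurate local time stepping with dyadic step levels follows (a sketch under assumed data structures; the space-time flux matching that preserves conservation at level interfaces is omitted here):

```python
def advance_one_global_step(cells, dt_global, update_cell):
    """Advance every cell from t to t + dt_global. A cell at level k takes
    2**k substeps of size dt_global / 2**k, so all cells meet at the same
    time level while fine cells avoid imposing a global time step limit."""
    for cell in cells:
        n_sub = 2 ** cell["level"]          # finer cells: more, smaller steps
        dt_local = dt_global / n_sub
        for _ in range(n_sub):
            update_cell(cell, dt_local)     # one conservative flux update
    return cells
```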
Integrating structure-based and ligand-based approaches for computational drug design.
Wilson, Gregory L; Lill, Markus A
2011-04-01
Methods utilized in computer-aided drug design can be classified into two major categories: structure based and ligand based, using information on the structure of the protein or on the biological and physicochemical properties of bound ligands, respectively. In recent years there has been a trend towards integrating these two methods in order to enhance the reliability and efficiency of computer-aided drug-design approaches by combining information from both the ligand and the protein. This trend resulted in a variety of methods that include: pseudoreceptor methods, pharmacophore methods, fingerprint methods and approaches integrating docking with similarity-based methods. In this article, we will describe the concepts behind each method and selected applications.
NASA Technical Reports Server (NTRS)
Oluwole, Oluwayemisi O.; Wong, Hsi-Wu; Green, William
2012-01-01
AdapChem software enables high efficiency, low computational cost, and enhanced accuracy in computational fluid dynamics (CFD) numerical simulations used for combustion studies. The software dynamically allocates smaller, reduced chemical models instead of the larger, full chemistry models to evolve the calculation while ensuring that the same accuracy is obtained for steady-state CFD reacting flow simulations. The software enables detailed chemical kinetic modeling in combustion CFD simulations. AdapChem adapts the reaction mechanism used in the CFD to the local reaction conditions. Instead of a single, comprehensive reaction mechanism throughout the computation, a dynamic distribution of smaller, reduced models is used to accurately capture the chemical kinetics at a fraction of the cost of the traditional single-mechanism approach.
ERIC Educational Resources Information Center
Plavnick, Joshua B.
2012-01-01
Video modeling is an effective and efficient methodology for teaching new skills to individuals with autism. New technology may enhance video modeling as smartphones or tablet computers allow for portable video displays. However, the reduced screen size may decrease the likelihood of attending to the video model for some children. The present…
ERIC Educational Resources Information Center
Shih, Ching-Hsiang
2013-01-01
This study provided that people with multiple disabilities can have a collaborative working chance in computer operations through an Enhanced Multiple Cursor Dynamic Pointing Assistive Program (EMCDPAP, a new kind of software that replaces the standard mouse driver, changes a mouse wheel into a thumb/finger poke detector, and manages mouse…
Enzyme stabilization via computationally guided protein stapling.
Moore, Eric J; Zorine, Dmitri; Hansen, William A; Khare, Sagar D; Fasan, Rudi
2017-11-21
Thermostabilization represents a critical and often obligatory step toward enhancing the robustness of enzymes for organic synthesis and other applications. While directed evolution methods have provided valuable tools for this purpose, these protocols are laborious and time-consuming and typically require the accumulation of several mutations, potentially at the expense of catalytic function. Here, we report a minimally invasive strategy for enzyme stabilization that relies on the installation of genetically encoded, nonreducible covalent staples in a target protein scaffold using computational design. This methodology enables the rapid development of myoglobin-based cyclopropanation biocatalysts featuring dramatically enhanced thermostability (ΔTm = +18.0 °C and ΔT50 = +16.0 °C) as well as increased stability against chemical denaturation [ΔCm(GndHCl) = 0.53 M], without altering their catalytic efficiency and stereoselectivity properties. In addition, the stabilized variants offer superior performance and selectivity compared with the parent enzyme in the presence of a high concentration of organic cosolvents, enabling the more efficient cyclopropanation of a water-insoluble substrate. This work introduces and validates an approach for protein stabilization which should be applicable to a variety of other proteins and enzymes.
NASA Astrophysics Data System (ADS)
Hobson, T.; Clarkson, V.
2012-09-01
As a result of continual space activity since the 1950s, there are now a large number of man-made Resident Space Objects (RSOs) orbiting the Earth. Because of the large number of items and their relative speeds, the possibility of destructive collisions involving important space assets is now of significant concern to users and operators of space-borne technologies. As a result, a growing number of international agencies are researching methods for improving techniques to maintain Space Situational Awareness (SSA). Computer simulation is a method commonly used by many countries to validate competing methodologies prior to full-scale adoption. The use of supercomputing and/or reduced-scale testing is often necessary to effectively simulate such a complex problem on today's computers. Recently, the authors presented a simulation aimed at reducing the computational burden by selecting the minimum level of fidelity necessary for contrasting methodologies and by utilising multi-core CPU parallelism for increased computational efficiency. The resulting simulation runs on a single PC while maintaining the ability to effectively evaluate competing methodologies. Nonetheless, the ability to control the scale and expand upon the computational demands of the sensor management system is limited. In this paper, we examine the advantages of increasing the parallelism of the simulation by means of General Purpose computing on Graphics Processing Units (GPGPU). As many sub-processes pertaining to SSA management are independent, we demonstrate how parallelisation via GPGPU has the potential to significantly enhance not only research into techniques for maintaining SSA, but also the level of sophistication of existing space surveillance sensors and sensor management systems. Nonetheless, the use of GPGPU imposes certain limitations and adds to the implementation complexity, both of which require consideration to achieve an effective system. We discuss these challenges and how they can be overcome. We further describe an application of the parallelised system where visibility prediction is used to enhance sensor management. This facilitates significant improvement in maximum catalogue error when RSOs become temporarily unobservable. The objective is to demonstrate the enhanced scalability and increased computational capability of the system.
Reeves, Rustin E; Aschenbrenner, John E; Wordinger, Robert J; Roque, Rouel S; Sheedlo, Harold J
2004-05-01
The need to increase the efficiency of dissection in the gross anatomy laboratory has been the driving force behind the technologic changes we have recently implemented. With the introduction of an integrated systems-based medical curriculum and a reduction in laboratory teaching hours, anatomy faculty at the University of North Texas Health Science Center (UNTHSC) developed a computer-based dissection manual to adjust to these curricular changes and time constraints. At each cadaver workstation, Apple iMac computers were added and a new dissection manual, running in a browser-based format, was installed. Within the text of the manual, anatomical structures required for dissection were linked to digital images from prosected materials; in addition, for each body system, the dissection manual included images from cross sections, radiographs, CT scans, and histology. Although we have placed a high priority on computerization of the anatomy laboratory, we remain strong advocates of the importance of cadaver dissection. It is our belief that the utilization of computers for dissection is a natural evolution of technology and fosters creative teaching strategies adapted for anatomy laboratories in the 21st century. Our strategy has significantly enhanced the independence and proficiency of our students, the efficiency of their dissection time, and the quality of laboratory instruction by the faculty. Copyright 2004 Wiley-Liss, Inc.
Design of a portable artificial heart drive system based on efficiency analysis.
Kitamura, T
1986-11-01
This paper discusses a computer simulation of a pneumatic portable piston-type artificial heart drive system with a linear d-c motor. The purpose of the design is to obtain an artificial heart drive system with high efficiency and small dimensions to enhance portability. The design considers two factors contributing to the total efficiency of the drive system. First, the dimensions of the pneumatic actuator were optimized under a cost function of the total efficiency. Second, the motor performance was studied in terms of efficiency. More than 50 percent of the input energy of the actuator with practical loads is consumed in the armature circuit in all linear d-c motors with brushes. An optimal design is: a piston cross-sectional area of 10.5 cm² and a cylinder length of 10 cm. The total efficiency could reach up to 25 percent by improving the gasket to reduce the frictional force.
NASA Technical Reports Server (NTRS)
Rockey, D. E.
1979-01-01
A general approach is developed for predicting the power output of a concentrator enhanced photovoltaic space array. A ray trace routine determines the concentrator intensity arriving at each solar cell. An iterative calculation determines the cell's operating temperature since cell temperature and cell efficiency are functions of one another. The end result of the iterative calculation is that the individual cell's power output is determined as a function of temperature and intensity. Circuit output is predicted by combining the individual cell outputs using the single diode model of a solar cell. Concentrated array characteristics such as uniformity of intensity and operating temperature at various points across the array are examined using computer modeling techniques. An illustrative example is given showing how the output of an array can be enhanced using solar concentration techniques.
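The temperature/efficiency coupling described above is a fixed-point problem; a minimal sketch follows (the linear efficiency model, its coefficients, and the radiative-balance closure are illustrative assumptions, not the paper's model):

```python
# Sketch of the coupled cell-temperature/efficiency iteration: efficiency
# depends on temperature, and the equilibrium temperature depends on how
# much of the absorbed power the cell converts. All numbers are toy values.
def cell_operating_point(intensity_suns, t_guess=300.0, tol=1e-6):
    t = t_guess
    for _ in range(100):
        eff = 0.14 * (1.0 - 0.004 * (t - 298.0))   # efficiency falls with T
        # Toy radiative balance: absorbed minus converted power sets T.
        t_new = 300.0 * (intensity_suns * (1.0 - eff)) ** 0.25
        if abs(t_new - t) < tol:
            break
        t = t_new
    return t, eff

print(cell_operating_point(3.0))  # e.g. 3-sun concentration (illustrative)
```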
NASA Astrophysics Data System (ADS)
Chen, Biao; Jing, Zhenxue; Smith, Andrew
2005-04-01
Contrast enhanced digital mammography (CEDM), which is based upon the analysis of a series of x-ray projection images acquired before/after the administration of contrast agents, may provide physicians with critical physiologic and morphologic information about breast lesions to determine their malignancy. This paper proposes to combine the kinetic analysis (KA) of the contrast agent uptake/washout process and dual-energy (DE) contrast enhancement to formulate a hybrid contrast enhanced breast-imaging framework. The quantitative characteristics of materials and imaging components in the x-ray imaging chain, including the x-ray tube (tungsten) spectrum, filter, breast tissues/lesions, contrast agents (non-ionized iodine solution), and selenium detector, were systematically modeled. The contrast-to-noise ratio (CNR) of iodinated lesions and the mean absorbed glandular dose were estimated mathematically. The x-ray technique optimization was conducted through a series of computer simulations to find the optimal tube voltage, filter thickness, and exposure levels for various breast thicknesses, breast densities, and detectable contrast agent concentration levels in terms of detection efficiency (CNR²/dose). A phantom study was performed on a modified Selenia full field digital mammography system to verify the simulated results. The dose level was comparable to the dose in diagnostic mode (less than 4 mGy for an average 4.2 cm compressed breast). The results from the computer simulations and phantom study are being used to optimize an ongoing clinical study.
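The figure of merit driving the optimization, detection efficiency = CNR²/dose, is simple to evaluate once CNR and mean glandular dose are estimated; the numbers below are made up purely for illustration:

```python
def detection_efficiency(cnr, dose_mgy):
    """Detection efficiency figure of merit: CNR**2 per unit dose (mGy)."""
    return cnr ** 2 / dose_mgy

# Comparing two hypothetical technique settings (illustrative values only):
print(detection_efficiency(cnr=5.0, dose_mgy=2.0))  # 12.5
print(detection_efficiency(cnr=6.0, dose_mgy=3.5))  # ~10.3, so first wins
```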
Heterogeneous multiscale Monte Carlo simulations for gold nanoparticle radiosensitization.
Martinov, Martin P; Thomson, Rowan M
2017-02-01
To introduce the heterogeneous multiscale (HetMS) model for Monte Carlo simulations of gold nanoparticle dose-enhanced radiation therapy (GNPT), a model characterized by its varying levels of detail on different length scales within a single phantom; to apply the HetMS model in two different scenarios relevant for GNPT and to compare computed results with others published. The HetMS model is implemented using an extended version of the EGSnrc user-code egs_chamber; the extended code is tested and verified via comparisons with recently published data from independent GNP simulations. Two distinct scenarios for the HetMS model are then considered: (a) monoenergetic photon beams (20 keV to 1 MeV) incident on a cylinder (1 cm radius, 3 cm length); (b) isotropic point source (brachytherapy source spectra) at the center of a 2.5 cm radius sphere with gold nanoparticles (GNPs) diffusing outwards from the center. Dose enhancement factors (DEFs) are compared for different source energies, depths in phantom, gold concentrations, GNP sizes, and modeling assumptions, as well as with independently published values. Simulation efficiencies are investigated. The HetMS MC simulations account for the competing effects of photon fluence perturbation (due to gold in the scatter media) coupled with enhanced local energy deposition (due to modeling discrete GNPs within subvolumes). DEFs are most sensitive to these effects for the lower source energies, varying with distance from the source; DEFs below unity (i.e., dose decreases, not enhancements) can occur at energies relevant for brachytherapy. For example, in the cylinder scenario, the 20 keV photon source has a DEF of 3.1 near the phantom's surface, decreasing to less than unity by 0.7 cm depth (for 20 mg/g). Compared to discrete modeling of GNPs throughout the gold-containing (treatment) volume, efficiencies are enhanced by up to a factor of 122 with the HetMS approach. For the spherical phantom, DEFs vary with time for diffusion, radionuclide, and radius; DEFs differ considerably from those computed using a widely applied analytic approach. By combining geometric models of varying complexity on different length scales within a single simulation, the HetMS model can effectively account for both macroscopic and microscopic effects which must both be considered for accurate computation of energy deposition and DEFs for GNPT. Efficiency gains with the HetMS approach enable diverse calculations which would otherwise be prohibitively long. The HetMS model may be extended to diverse scenarios relevant for GNPT, providing further avenues for research and development. © 2016 American Association of Physicists in Medicine.
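For reference, the dose enhancement factor reported throughout is a simple ratio of scored doses; a trivial helper (the values below are illustrative, echoing the paper's observation that DEF can drop below unity at depth):

```python
def dose_enhancement_factor(dose_with_gold, dose_without_gold):
    """DEF: dose scored with GNPs present over dose in the same voxel of a
    gold-free phantom. DEF < 1 means the gold's fluence attenuation
    outweighs its local energy-deposition enhancement."""
    return dose_with_gold / dose_without_gold

print(dose_enhancement_factor(3.1, 1.0))   # ~3.1 near the surface (20 keV)
print(dose_enhancement_factor(0.9, 1.0))   # < 1 beyond ~0.7 cm depth
```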
Facile Affinity Maturation of Antibody Variable Domains Using Natural Diversity Mutagenesis
Tiller, Kathryn E.; Chowdhury, Ratul; Li, Tong; Ludwig, Seth D.; Sen, Sabyasachi; Maranas, Costas D.; Tessier, Peter M.
2017-01-01
The identification of mutations that enhance antibody affinity while maintaining high antibody specificity and stability is a time-consuming and laborious process. Here, we report an efficient methodology for systematically and rapidly enhancing the affinity of antibody variable domains while maximizing specificity and stability using novel synthetic antibody libraries. Our approach first uses computational and experimental alanine scanning mutagenesis to identify sites in the complementarity-determining regions (CDRs) that are permissive to mutagenesis while maintaining antigen binding. Next, we mutagenize the most permissive CDR positions using degenerate codons to encode wild-type residues and a small number of the most frequently occurring residues at each CDR position based on natural antibody diversity. This mutagenesis approach results in antibody libraries with variants that have a wide range of numbers of CDR mutations, including antibody domains with single mutations and others with tens of mutations. Finally, we sort the modest size libraries (~10 million variants) displayed on the surface of yeast to identify CDR mutations with the greatest increases in affinity. Importantly, we find that single-domain (VHH) antibodies specific for the α-synuclein protein (whose aggregation is associated with Parkinson’s disease) with the greatest gains in affinity (>5-fold) have several (four to six) CDR mutations. This finding highlights the importance of sampling combinations of CDR mutations during the first step of affinity maturation to maximize the efficiency of the process. Interestingly, we find that some natural diversity mutations simultaneously enhance all three key antibody properties (affinity, specificity, and stability) while other mutations enhance some of these properties (e.g., increased specificity) and display trade-offs in others (e.g., reduced affinity and/or stability). Computational modeling reveals that improvements in affinity are generally not due to direct interactions involving CDR mutations but rather due to indirect effects that enhance existing interactions and/or promote new interactions between the antigen and wild-type CDR residues. We expect that natural diversity mutagenesis will be useful for efficient affinity maturation of a wide range of antibody fragments and full-length antibodies. PMID:28928732
Distributed memory compiler design for sparse problems
NASA Technical Reports Server (NTRS)
Wu, Janet; Saltz, Joel; Berryman, Harry; Hiranandani, Seema
1991-01-01
A compiler and runtime support mechanism is described and demonstrated. The methods presented are capable of solving a wide range of sparse and unstructured problems in scientific computing. The compiler takes as input a FORTRAN 77 program enhanced with specifications for distributing data, and the compiler outputs a message passing program that runs on a distributed memory computer. The runtime support for this compiler is a library of primitives designed to efficiently support irregular patterns of distributed array accesses and irregular distributed array partitions. A variety of Intel iPSC/860 performance results obtained through the use of this compiler are presented.
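A minimal sketch of the inspector/executor idea behind such runtime primitives follows (function names and the fetch callback are illustrative assumptions, not the actual library interface):

```python
# Inspector/executor sketch for irregular distributed array accesses: the
# inspector precomputes which remote indices a processor needs; the executor
# reuses that schedule to gather values before each irregular loop.
def build_schedule(global_indices, owner_of, my_rank):
    """Inspector: collect the indices this rank touches that live remotely."""
    remote = {}
    for i in global_indices:
        r = owner_of(i)
        if r != my_rank:
            remote.setdefault(r, []).append(i)
    return remote            # rank -> list of indices to fetch (the schedule)

def gather(schedule, fetch):
    """Executor: fetch remote values once per loop using the saved schedule."""
    return {i: fetch(r, i) for r, idxs in schedule.items() for i in idxs}
```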
Computer-assisted diagnosis of melanoma.
Fuller, Collin; Cellura, A Paul; Hibler, Brian P; Burris, Katy
2016-03-01
The computer-assisted diagnosis of melanoma is an exciting area of research where imaging techniques are combined with diagnostic algorithms in an attempt to improve detection and outcomes for patients with skin lesions suspicious for malignancy. Once an image has been acquired, it undergoes a processing pathway which includes preprocessing, enhancement, segmentation, feature extraction, feature selection, change detection, and ultimately classification. Practicality for everyday clinical use remains a vital question. A successful model must obtain results that are on par or outperform experienced dermatologists, keep costs at a minimum, be user-friendly, and be time efficient with high sensitivity and specificity. ©2015 Frontline Medical Communications.
Network Community Detection based on the Physarum-inspired Computational Framework.
Gao, Chao; Liang, Mingxin; Li, Xianghua; Zhang, Zili; Wang, Zhen; Zhou, Zhili
2016-12-13
Community detection is a crucial and essential problem in the structural analytics of complex networks, which can help us understand and predict the characteristics and functions of complex networks. Many methods, ranging from optimization-based algorithms to heuristic-based algorithms, have been proposed for solving such a problem. Due to the inherent complexity of identifying network structure, how to design an effective algorithm with higher accuracy and lower computational cost still remains an open problem. Inspired by the computational capability and positive feedback mechanism in the foraging process of Physarum, a large amoeba-like cell consisting of a dendritic network of tube-like pseudopodia, a general Physarum-based computational framework for community detection is proposed in this paper. Based on the proposed framework, the inter-community edges can be identified from the intra-community edges in a network, and the positive feedback of the solving process in an algorithm can be further enhanced; these are used to improve the efficiency of original optimization-based and heuristic-based community detection algorithms, respectively. Some typical algorithms (e.g., genetic algorithm, ant colony optimization algorithm, and Markov clustering algorithm) and real-world datasets have been used to evaluate the efficiency of our proposed computational framework. Experiments show that the algorithms optimized by the Physarum-inspired computational framework perform better than the original ones, in terms of accuracy and computational cost. Moreover, a computational complexity analysis verifies the scalability of our framework.
New Parallel Algorithms for Structural Analysis and Design of Aerospace Structures
NASA Technical Reports Server (NTRS)
Nguyen, Duc T.
1998-01-01
Subspace and Lanczos iterations have been developed, well documented, and widely accepted as efficient methods for obtaining p-lowest eigen-pair solutions of large-scale, practical engineering problems. The focus of this paper is to incorporate recent developments in vectorized sparse technologies in conjunction with Subspace and Lanczos iterative algorithms for computational enhancements. Numerical performance, in terms of accuracy and efficiency of the proposed sparse strategies for Subspace and Lanczos algorithm, is demonstrated by solving for the lowest frequencies and mode shapes of structural problems on the IBM-R6000/590 and SunSparc 20 workstations.
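For scale, the underlying computation is the p-lowest eigenpair problem K x = λ M x; a minimal modern sketch using SciPy's sparse Lanczos driver on a toy stand-in for a structural model (the matrices here are placeholders, not the paper's test problems):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n, p = 2000, 6
# Toy sparse stiffness/mass matrices standing in for a structural model.
K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
M = sp.identity(n, format="csc")

# sigma=0 invokes shift-invert, targeting the lowest eigenvalues, i.e. the
# lowest vibration frequencies of the model.
vals, vecs = eigsh(K, k=p, M=M, sigma=0, which="LM")
print(np.sqrt(vals))  # natural frequencies (up to scaling) of the toy model
```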
An Efficient VLSI Architecture of the Enhanced Three Step Search Algorithm
NASA Astrophysics Data System (ADS)
Biswas, Baishik; Mukherjee, Rohan; Saha, Priyabrata; Chakrabarti, Indrajit
2016-09-01
The intense computational complexity of any video codec is largely due to the motion estimation unit. The Enhanced Three Step Search is a popular technique that can be adopted for fast motion estimation. This paper proposes a novel VLSI architecture for the implementation of the Enhanced Three Step Search Technique. A new addressing mechanism has been introduced which enhances the speed of operation and reduces the area requirements. The proposed architecture when implemented in Verilog HDL on Virtex-5 Technology and synthesized using Xilinx ISE Design Suite 14.1 achieves a critical path delay of 4.8 ns while the area comes out to be 2.9K gate equivalent. It can be incorporated in commercial devices like smart-phones, camcorders, video conferencing systems etc.
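For context, the classic Three Step Search that the enhanced variant and this architecture build on can be sketched in a few lines (a software sketch of the algorithm only; the paper's contribution is the hardware addressing scheme, which is not modeled here):

```python
import numpy as np

def three_step_search(ref, cur, bx, by, block=16, step=4):
    """Classic TSS block matching: probe the centre and 8 neighbours at a
    coarse step, recentre on the best SAD, halve the step, and repeat."""
    target = cur[by:by+block, bx:bx+block].astype(int)
    best = (0, 0)
    def sad(dx, dy):
        y, x = by + best[1] + dy, bx + best[0] + dx
        if y < 0 or x < 0 or y+block > ref.shape[0] or x+block > ref.shape[1]:
            return np.inf                      # candidate falls off the frame
        return np.abs(ref[y:y+block, x:x+block].astype(int) - target).sum()
    while step >= 1:
        candidates = [(dx, dy) for dx in (-step, 0, step)
                               for dy in (-step, 0, step)]
        dx, dy = min(candidates, key=lambda c: sad(*c))
        best = (best[0] + dx, best[1] + dy)    # recentre on the best match
        step //= 2
    return best  # estimated motion vector for the block
```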
Enhancements in medicine by integrating content based image retrieval in computer-aided diagnosis
NASA Astrophysics Data System (ADS)
Aggarwal, Preeti; Sardana, H. K.
2010-02-01
Computer-aided diagnosis (CAD) has become one of the major research subjects in medical imaging and diagnostic radiology. With CAD, radiologists use the computer output as a "second opinion" and make the final decisions. Image retrieval is a useful tool to help radiologists check medical images and diagnoses. The impact of content-based access to medical images is frequently reported, but existing systems are designed for only a particular diagnostic context. The challenge in medical informatics is to develop tools for analyzing the content of medical images and to represent them in a way that can be efficiently searched and compared by physicians. CAD is a concept established by taking into account equally the roles of physicians and computers. To build a successful computer-aided diagnostic system, all the relevant technologies, especially retrieval, need to be integrated in such a manner that they provide effective and efficient pre-diagnosed cases with proven pathology for the current case at the right time. In this paper, it is suggested that the integration of content-based image retrieval (CBIR) in CAD can bring enormous benefits to medicine, especially in diagnosis. This approach is also compared with other approaches by highlighting its advantages over them.
Initiative for safe driving and enhanced utilization of crash data
NASA Astrophysics Data System (ADS)
Wagner, John F.
1994-03-01
This initiative addresses the utilization of current technology to increase the efficiency of police officers to complete required Driving Under the Influence (DUI) forms and to enhance their ability to acquire and record crash and accident information. The project is a cooperative program among the New Mexico Alliance for Transportation Research (ATR), Science Applications International Corporation (SAIC), Los Alamos National Laboratory, and the New Mexico State Highway and Transportation Department. The approach utilizes an in-car computer and associated sensors for information acquisition and recording. Los Alamos artificial intelligence technology is leveraged to ensure ease of data entry and use.
Inverse halftoning via robust nonlinear filtering
NASA Astrophysics Data System (ADS)
Shen, Mei-Yin; Kuo, C.-C. Jay
1999-10-01
A new blind inverse halftoning algorithm based on a nonlinear filtering technique of low computational complexity and low memory requirement is proposed in this research. It is called blind since we do not require the knowledge of the halftone kernel. The proposed scheme performs nonlinear filtering in conjunction with edge enhancement to improve the quality of an inverse halftoned image. Distinct features of the proposed approach include: efficiently smoothing halftone patterns in large homogeneous areas, additional edge enhancement capability to recover the edge quality and an excellent PSNR performance with only local integer operations and a small memory buffer.
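A crude sketch of the smoothing-plus-edge-enhancement idea follows (floating-point and SciPy-based for clarity, unlike the paper's integer-only, low-memory design; the filter sizes and sharpening gain are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import median_filter, gaussian_filter

def inverse_halftone(ht, smooth_size=5, sharpen=0.6):
    """Blind inverse halftoning in the spirit described above: suppress the
    halftone dot pattern with a nonlinear (median) filter, then restore edge
    contrast with a simple unsharp-mask step."""
    ht = ht.astype(float)
    smooth = median_filter(ht, size=smooth_size)   # kill dots in flat areas
    blur = gaussian_filter(smooth, sigma=1.0)
    edge_boosted = smooth + sharpen * (smooth - blur)  # recover edge quality
    return np.clip(edge_boosted, 0, 255)
```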
Viscous fingering and channeling in chemical enhanced oil recovery
NASA Astrophysics Data System (ADS)
Daripa, Prabir; Dutta, Sourav
2017-11-01
We have developed a hybrid numerical method based on the discontinuous finite element method and a modified method of characteristics to compute multiphase multicomponent fluid flow in porous media in the context of chemical enhanced oil recovery. We use this method to study the effect of various chemical components on viscous fingering and channeling in rectilinear and radial flow configurations. We will also discuss the efficiency of various flooding schemes based on these findings. Time permitting, we will discuss the effect of variable injection rates in these practical settings. U.S. National Science Foundation Grant DMS-1522782.
A numerical study of mixing enhancement in supersonic reacting flow fields. [in scramjets
NASA Technical Reports Server (NTRS)
Drummond, J. Philip; Mukunda, H. S.
1988-01-01
NASA Langley has intensively investigated the components of ramjet and scramjet systems for endoatmospheric, airbreathing hypersonic propulsion; attention is presently given to the optimization of scramjet combustor fuel-air mixing and reaction characteristics. A supersonic, spatially developing and reacting mixing layer has been found to serve as an excellent physical model for the mixing and reaction process. Attention is presently given to techniques that have been applied to the enhancement of the mixing processes and the overall combustion efficiency of the mixing layer. A fuel injector configuration has been computationally designed which significantly increases mixing and reaction rates.
Quantum-enhanced Sensing and Efficient Quantum Computation
2015-07-27
…accuracy. The system was used to improve quantum boson sampling tests. Subject terms: EOARD, Quantum Information Processing, Transition Edge Sensors. …quantum boson sampling (QBS) problem are reported in Ref. [7]. To substantially increase the scale of feasible tests, we developed a new variation…
Static Extended Trailing Edge for Lift Enhancement: Experimental and Computational Studies
2007-06-01
Presented at the 3rd International Symposium on Integrating CFD and Experiments in Aerodynamics, 20-21 June 2007, U.S. Air Force Academy, CO, USA. …is not significantly increased. Experiments and calculations are conducted to compare the aerodynamic characteristics of the extended trailing edge… basic configuration, has a good potential to improve the cruise flight efficiency. Key words: trailing edge, airfoil, wing, lift, drag, aerodynamics
Deep learning with coherent nanophotonic circuits
NASA Astrophysics Data System (ADS)
Shen, Yichen; Harris, Nicholas C.; Skirlo, Scott; Prabhu, Mihika; Baehr-Jones, Tom; Hochberg, Michael; Sun, Xin; Zhao, Shijie; Larochelle, Hugo; Englund, Dirk; Soljačić, Marin
2017-07-01
Artificial neural networks are computational network models inspired by signal processing in the brain. These models have dramatically improved performance for many machine-learning tasks, including speech and image recognition. However, today's computing hardware is inefficient at implementing neural networks, in large part because much of it was designed for von Neumann computing schemes. Significant effort has been made towards developing electronic architectures tuned to implement artificial neural networks that exhibit improved computational speed and accuracy. Here, we propose a new architecture for a fully optical neural network that, in principle, could offer an enhancement in computational speed and power efficiency over state-of-the-art electronics for conventional inference tasks. We experimentally demonstrate the essential part of the concept using a programmable nanophotonic processor featuring a cascaded array of 56 programmable Mach-Zehnder interferometers in a silicon photonic integrated circuit and show its utility for vowel recognition.
Computational Screening of 2D Materials for Photocatalysis.
Singh, Arunima K; Mathew, Kiran; Zhuang, Houlong L; Hennig, Richard G
2015-03-19
Two-dimensional (2D) materials exhibit a range of extraordinary electronic, optical, and mechanical properties different from their bulk counterparts, with potential applications for 2D materials emerging in energy storage and conversion technologies. In this Perspective, we summarize the recent developments in the field of solar water splitting using 2D materials and review a computational screening approach to rapidly and efficiently discover more 2D materials that possess properties suitable for solar water splitting. Computational tools based on density-functional theory can predict the intrinsic properties of potential photocatalysts, such as their electronic properties, optical absorbance, and solubility in aqueous solutions. Computational tools enable the exploration of possible routes to enhance the photocatalytic activity of 2D materials by use of mechanical strain, bias potential, doping, and pH. We discuss future research directions and needed method developments for the computational design and optimization of 2D materials for photocatalysis.
Wurdack, C M
1997-01-01
Computers are changing the way we do everything from paying our bills to programming our home entertainment systems. If you thought that dental education was not likely to benefit from computers, consider this: computer technology is revolutionizing dental instruction in ways that promise to improve the quality and efficiency of dental education. It is providing a challenging learning opportunity for dental educators as well. Since much of dental education involves the visual transfer of both concepts and procedures from the instructor to the student, it makes sense to use computer technology to enhance conventional teaching techniques--with materials that include clear, informative images and real-time demonstrations melding sound and animation--to deliver to the student in the classroom material that complements textbooks, 35mm slides, and the lecture format. Use of computers at UOP is about teaching students to be competent dentists by making instruction more direct, better visualized, and more comprehensible.
Fang, Jing-Jing; Liu, Jia-Kuang; Wu, Tzu-Chieh; Lee, Jing-Wei; Kuo, Tai-Hong
2013-05-01
Computer-aided design has gained increasing popularity in clinical practice, and the advent of rapid prototyping technology has further enhanced the quality and predictability of surgical outcomes. It provides target guides for complex bony reconstruction during surgery, so surgeons can efficiently and precisely carry out fracture restorations. Based on three-dimensional models generated from a computed tomographic scan, precise preoperative planning simulation on a computer is possible. Combining the interdisciplinary knowledge of surgeons and engineers, this study proposes a novel surgical guidance method that incorporates a built-in occlusal wafer that serves as the positioning reference. Two patients with complex facial deformity suffering from severe facial asymmetry were recruited. In vitro facial reconstruction was first rehearsed on physical models, where a customized surgical guide incorporating a built-in occlusal stent as the positioning reference was designed to implement the surgical plan. This study presents the authors' preliminary experience with a complex facial reconstruction procedure. It suggests that in settings where intraoperative computed tomographic scans or navigation systems are not available, our approach could be an effective, expedient, and straightforward aid to enhance surgical outcomes in complex facial repair.
Batching System for Superior Service
NASA Technical Reports Server (NTRS)
2001-01-01
Veridian's Portable Batch System (PBS) was the recipient of the 1997 NASA Space Act Award for outstanding software. A batch system is a set of processes for managing queues and jobs. Without a batch system, it is difficult to manage the workload of a computer system. By bundling the enterprise's computing resources, the PBS technology offers users a single coherent interface, resulting in efficient management of the batch services. Users choose which information to package into "containers" for system-wide use. PBS also provides detailed system usage data, a procedure not easily executed without this software. PBS operates on networked, multi-platform UNIX environments. Veridian's new version, PBS Pro™, has additional features and enhancements, including support for additional operating systems. Veridian distributes the original version of PBS as Open Source software via the PBS website. Customers can register and download the software at no cost. PBS Pro is also available via the web and offers additional features such as increased stability, reliability, and fault tolerance. A company using PBS can expect a significant increase in the effective management of its computing resources. Tangible benefits include increased utilization of costly resources and enhanced understanding of computational requirements and user needs.
The use of self-organising maps for anomalous behaviour detection in a digital investigation.
Fei, B K L; Eloff, J H P; Olivier, M S; Venter, H S
2006-10-16
The dramatic increase in crime relating to the Internet and computers has caused a growing need for digital forensics. Digital forensic tools have been developed to assist investigators in conducting a proper investigation into digital crimes. In general, the bulk of the digital forensic tools available on the market permit investigators to analyse data that has been gathered from a computer system. However, current state-of-the-art digital forensic tools simply cannot handle large volumes of data in an efficient manner. With the advent of the Internet, many employees have been given access to new and more interesting possibilities via their desktop. Consequently, excessive Internet usage for non-job purposes and even blatant misuse of the Internet have become a problem in many organisations. Since storage media are steadily growing in size, the process of analysing multiple computer systems during a digital investigation can easily consume an enormous amount of time. Identifying a single suspicious computer from a set of candidates can therefore reduce human processing time and monetary costs involved in gathering evidence. The focus of this paper is to demonstrate how, in a digital investigation, digital forensic tools and the self-organising map (SOM)--an unsupervised neural network model--can aid investigators to determine anomalous behaviours (or activities) among employees (or computer systems) in a far more efficient manner. By analysing the different SOMs (one for each computer system), anomalous behaviours are identified and investigators are assisted to conduct the analysis more efficiently. The paper will demonstrate how the easy visualisation of the SOM enhances the ability of the investigators to interpret and explore the data generated by digital forensic tools so as to determine anomalous behaviours.
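To fix ideas, a tiny self-organising map and the quantization-error score that flags outlying behaviour can be sketched as follows (a from-scratch toy, not the forensic tool's implementation; feature extraction from the forensic data is assumed to have already produced numeric vectors):

```python
import numpy as np

def train_som(data, grid=(8, 8), iters=2000, lr0=0.5, sigma0=2.0, seed=0):
    """Tiny SOM trainer: one weight vector per grid node, updated toward
    random samples with a Gaussian neighbourhood that shrinks over time."""
    rng = np.random.default_rng(seed)
    h, w = grid
    W = rng.random((h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w),
                                  indexing="ij"), -1)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.unravel_index(np.argmin(((W - x) ** 2).sum(-1)), (h, w))
        frac = t / iters
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
        g = np.exp(-((coords - bmu) ** 2).sum(-1) / (2 * sigma ** 2))
        W += lr * g[..., None] * (x - W)       # pull neighbourhood toward x
    return W

def quantization_error(W, x):
    """Distance to the best-matching unit; unusually large values flag
    behaviour that does not fit the map, i.e. anomaly candidates."""
    return np.sqrt(((W - x) ** 2).sum(-1)).min()
```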
3D Space Radiation Transport in a Shielded ICRU Tissue Sphere
NASA Technical Reports Server (NTRS)
Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.
2014-01-01
A computationally efficient 3DHZETRN code capable of simulating High Charge (Z) and Energy (HZE) and light ions (including neutrons) under space-like boundary conditions with enhanced neutron and light ion propagation was recently developed for a simple homogeneous shield object. Monte Carlo benchmarks were used to verify the methodology in slab and spherical geometry, and the 3D corrections were shown to provide significant improvement over the straight-ahead approximation in some cases. In the present report, the new algorithms with well-defined convergence criteria are extended to inhomogeneous media within a shielded tissue slab and a shielded tissue sphere and tested against Monte Carlo simulation to verify the solution methods. The 3D corrections are again found to more accurately describe the neutron and light ion fluence spectra as compared to the straight-ahead approximation. These computationally efficient methods provide a basis for software capable of space shield analysis and optimization.
An Efficient Offloading Scheme For MEC System Considering Delay and Energy Consumption
NASA Astrophysics Data System (ADS)
Sun, Yanhua; Hao, Zhe; Zhang, Yanhua
2018-01-01
With the increasing number of mobile devices, mobile edge computing (MEC), which provides cloud computing capabilities proximate to mobile devices in 5G networks, has been envisioned as a promising paradigm to enhance the user experience. In this paper, we investigate a joint consideration of delay and energy consumption offloading scheme (JCDE) for MEC systems in 5G heterogeneous networks. An optimization is formulated to minimize the delay as well as the energy consumption of the offloading system, in which the delay and energy consumption of transmitting and computing tasks are taken into account. We adopt an iterative greedy algorithm to solve the optimization problem. Furthermore, simulations were carried out to validate the utility and effectiveness of our proposed scheme. The effect of parameter variations on the system is analysed as well. Numerical results demonstrate the delay and energy efficiency improvements of our proposed scheme compared with a scheme from the prior literature.
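One sweep of the kind of greedy decision rule such schemes iterate can be sketched as follows (the cost model, field names, and weights are assumptions for illustration, not the paper's exact formulation):

```python
# Illustrative greedy offloading pass under a weighted delay+energy objective.
def greedy_offload(tasks, w_delay=0.5, w_energy=0.5):
    """tasks: list of dicts with local/edge delay and energy estimates.
    Each task picks the cheaper execution site, mimicking one sweep of an
    iterative greedy scheme (iteration over channel allocations omitted)."""
    decisions = []
    for t in tasks:
        local_cost = w_delay * t["d_local"] + w_energy * t["e_local"]
        edge_cost = w_delay * (t["d_tx"] + t["d_edge"]) + w_energy * t["e_tx"]
        decisions.append("edge" if edge_cost < local_cost else "local")
    return decisions
```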
Hybrid discrete/continuum algorithms for stochastic reaction networks
Safta, Cosmin; Sargsyan, Khachik; Debusschere, Bert; ...
2014-10-22
Direct solutions of the Chemical Master Equation (CME) governing Stochastic Reaction Networks (SRNs) are generally prohibitively expensive due to excessive numbers of possible discrete states in such systems. To enhance computational efficiency we develop a hybrid approach where the evolution of states with low molecule counts is treated with the discrete CME model while that of states with large molecule counts is modeled by the continuum Fokker-Planck equation. The Fokker-Planck equation is discretized using a 2nd order finite volume approach with appropriate treatment of flux components to avoid negative probability values. The numerical construction at the interface between the discrete and continuum regions implements the transfer of probability reaction by reaction according to the stoichiometry of the system. Finally, the performance of this novel hybrid approach is explored for a two-species circadian model, with computational efficiency gains of about one order of magnitude.
ElGamal cryptosystem with embedded compression-crypto technique
NASA Astrophysics Data System (ADS)
Mandangan, Arif; Yin, Lee Souk; Hung, Chang Ee; Hussin, Che Haziqah Che
2014-12-01
The key distribution problem in symmetric cryptography has been solved by the emergence of asymmetric cryptosystems. Due to their mathematical complexity, computational efficiency becomes a major problem in the real-life application of asymmetric cryptosystems. This scenario has encouraged various research efforts regarding the enhancement of the computational efficiency of asymmetric cryptosystems. The ElGamal cryptosystem is one of the most established asymmetric cryptosystems. By using proper parameters, the ElGamal cryptosystem is able to provide a good level of information security. On the other hand, the Compression-Crypto technique is a technique used to reduce the number of plaintexts to be encrypted from k ∈ Z+, k > 2 plaintexts to only 2 plaintexts. Instead of encrypting k plaintexts, we only need to encrypt these 2 plaintexts. In this paper, we embed the Compression-Crypto technique into the ElGamal cryptosystem. To show that the embedded ElGamal cryptosystem works, we provide proofs that the decryption process recovers the encrypted plaintext.
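For notation, a toy textbook ElGamal round trip is shown below (insecure, tiny parameters chosen only for illustration; the Compression-Crypto mapping of k > 2 plaintexts down to 2, which is the paper's subject, is not reproduced here):

```python
import random

p, g = 30803, 2                 # toy prime and base; NOT secure parameters
x = random.randrange(2, p - 1)  # private key
h = pow(g, x, p)                # public key

def encrypt(m):
    k = random.randrange(2, p - 1)             # fresh ephemeral key
    return pow(g, k, p), (m * pow(h, k, p)) % p

def decrypt(c1, c2):
    s = pow(c1, x, p)                          # shared secret g^(x*k)
    return (c2 * pow(s, -1, p)) % p            # multiply by modular inverse

c = encrypt(1234)
assert decrypt(*c) == 1234                     # round trip recovers plaintext
```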
Fast adaptive flat-histogram ensemble to enhance the sampling in large systems
NASA Astrophysics Data System (ADS)
Xu, Shun; Zhou, Xin; Jiang, Yi; Wang, YanTing
2015-09-01
An efficient novel algorithm was developed to estimate the Density of States (DOS) for large systems by calculating the ensemble means of an extensive physical variable, such as the potential energy U, in generalized canonical ensembles to interpolate the interior reverse temperature curve β(U) = dS(U)/dU, where S(U) is the logarithm of the DOS. This curve is computed with different accuracies in different energy regions to capture the dependence of the reverse temperature on U without setting a prior grid in the U space. By combining with a U-compression transformation, we decrease the computational complexity from O(N^(3/2)) in the normal Wang-Landau type method to O(N^(1/2)) in the current algorithm, where N is the number of degrees of freedom of the system. The efficiency of the algorithm is demonstrated by applying it to Lennard-Jones fluids with various N, along with its ability to find different macroscopic states, including metastable states.
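As a baseline for comparison, the standard Wang-Landau flat-histogram procedure that such methods improve on can be sketched on a toy 1D Ising ring (illustrative from-scratch code, not the authors' algorithm):

```python
import numpy as np

def wang_landau_ising1d(n=20, lnf_final=1e-4, flat=0.8, seed=1):
    """Wang-Landau estimate of the log-DOS ln g(E) for a 1D Ising ring:
    flips are accepted with min(1, g(E)/g(E')), ln g is bumped by ln f at
    the current energy, and f is refined once the histogram is flat."""
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], n)
    energies = np.arange(-n, n + 1, 4)        # accessible energies of a ring
    idx = {int(e): i for i, e in enumerate(energies)}
    logg = np.zeros(len(energies))
    hist = np.zeros(len(energies))
    E = -int((spins * np.roll(spins, 1)).sum())
    lnf, step = 1.0, 0
    while lnf > lnf_final:
        i = rng.integers(n)
        dE = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % n])
        if np.log(rng.random()) < logg[idx[E]] - logg[idx[E + dE]]:
            spins[i] *= -1
            E += dE
        logg[idx[E]] += lnf
        hist[idx[E]] += 1
        step += 1
        if step % 1000 == 0 and hist.min() > flat * hist.mean():
            hist[:] = 0
            lnf /= 2                          # refine the modification factor
    return energies, logg - logg.min()
```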
NASA Astrophysics Data System (ADS)
Zhang, Yunju; Chen, Zhongyi; Guo, Ming; Lin, Shunsheng; Yan, Yinyang
2018-01-01
With the growing capacity of the power system and the development trend toward large units and high voltages, scheduling operations are becoming more frequent and complicated, and the probability of operation error increases. To address the lack of anti-error functions, the single scheduling function, and the low working efficiency of technical support systems in regional regulation and integration, this paper proposes an integrated architecture for power network dispatching anti-error checking based on cloud computing. An integrated anti-error system covering the Energy Management System (EMS) and the Operation Management System (OMS) has been constructed as well. The system architecture has good scalability and adaptability, which can improve computational efficiency, reduce the cost of system operation and maintenance, and enhance the ability of regional regulation and anti-error checking, with broad development prospects.
Fuzzy wavelet plus a quantum neural network as a design base for power system stability enhancement.
Ganjefar, Soheil; Tofighi, Morteza; Karami, Hamidreza
2015-11-01
In this study, we introduce an indirect adaptive fuzzy wavelet neural controller (IAFWNC) as a power system stabilizer to damp inter-area modes of oscillations in a multi-machine power system. Quantum computing is an efficient method for improving the computational efficiency of neural networks, so we developed an identifier based on a quantum neural network (QNN) to train the IAFWNC in the proposed scheme. All of the controller parameters are tuned online based on the Lyapunov stability theory to guarantee the closed-loop stability. A two-machine, two-area power system equipped with a static synchronous series compensator as a series flexible ac transmission system was used to demonstrate the effectiveness of the proposed controller. The simulation and experimental results demonstrated that the proposed IAFWNC scheme can achieve favorable control performance. Copyright © 2015 Elsevier Ltd. All rights reserved.
Computing Properties of Hadrons, Nuclei and Nuclear Matter from Quantum Chromodynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Savage, Martin J.
This project was part of a coordinated software development effort which the nuclear physics lattice QCD community pursues in order to ensure that lattice calculations can make optimal use of present and forthcoming leadership-class and dedicated hardware, including those of the national laboratories, and prepares for the exploitation of future computational resources in the exascale era. The UW team improved and extended software libraries used in lattice QCD calculations related to multi-nucleon systems, enhanced production running codes related to load balancing multi-nucleon production on large-scale computing platforms, developed SQLite (addressable database) interfaces to efficiently archive and analyze multi-nucleon data, and developed a Mathematica interface for the SQLite databases.
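In the same spirit, archiving measurement data in SQLite looks roughly like this (a generic sketch with an assumed schema and fabricated placeholder numbers, not the project's actual interface):

```python
import sqlite3

con = sqlite3.connect("multi_nucleon.db")
con.execute("""CREATE TABLE IF NOT EXISTS correlators (
                 cfg INTEGER, channel TEXT, t INTEGER, value REAL)""")
# Placeholder rows standing in for per-configuration correlator measurements.
rows = [(100, "deuteron", t, 1.0e-3 * (0.9 ** t)) for t in range(16)]
con.executemany("INSERT INTO correlators VALUES (?, ?, ?, ?)", rows)
con.commit()
# Analysis queries run directly in the database, e.g. per-timeslice averages:
for t, avg in con.execute(
        "SELECT t, AVG(value) FROM correlators WHERE channel=? GROUP BY t",
        ("deuteron",)):
    print(t, avg)
```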
An Enhanced Privacy-Preserving Authentication Scheme for Vehicle Sensor Networks.
Zhou, Yousheng; Zhao, Xiaofeng; Jiang, Yi; Shang, Fengjun; Deng, Shaojiang; Wang, Xiaojun
2017-12-08
Vehicle sensor networks (VSNs) are ushering in a promising future by enabling more intelligent transportation systems and providing a more efficient driving experience. However, because of their inherent openness, VSNs are subject to a large number of potential security threats. Although various authentication schemes have been proposed for addressing security problems, they are not suitable for VSN applications because of their high computation and communication costs. Chuang and Lee have developed a trust-extended authentication mechanism (TEAM) for vehicle-to-vehicle communication using a transitive trust relationship, which they claim can resist various attacks. However, it fails to counter internal attacks because of the utilization of a shared secret key. In this paper, to eliminate the vulnerability of TEAM, an enhanced privacy-preserving authentication scheme for VSNs is constructed. The security of our proposed scheme is proven under the random oracle model based on the assumption of the computational Diffie-Hellman problem.
Enhancing Team Composition in Professional Networks: Problem Definitions and Fast Solutions
Li, Liangyue; Tong, Hanghang; Cao, Nan; Ehrlich, Kate; Lin, Yu-Ru; Buchler, Norbou
2017-01-01
In this paper, we study ways to enhance the composition of teams based on new requirements in a collaborative environment. We focus on recommending team members who can maintain the team's performance by minimizing changes to the team's skills and social structure. Our recommendations are based on computing team-level similarity, which includes skill similarity, structural similarity, as well as the synergy between the two. Current heuristic approaches are one-dimensional and not comprehensive, as they consider the two aspects independently. To formalize team-level similarity, we adopt the notion of the graph kernel of attributed graphs to encompass the two aspects and their interaction. To tackle the computational challenges, we propose a family of fast algorithms by (a) designing effective pruning strategies, and (b) exploring the smoothness between the existing and the new team structures. Extensive empirical evaluations on real-world datasets validate the effectiveness and efficiency of our algorithms. PMID:29104408
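The graph-kernel ingredient can be illustrated with the classical random-walk kernel on the Kronecker product graph (an unlabeled baseline only; the paper's kernel handles attributed graphs and adds pruning, which this sketch omits):

```python
import numpy as np

def random_walk_kernel(A1, A2, lam=0.05):
    """Random-walk kernel: a weighted count of walks common to both graphs,
    computed on the Kronecker (direct) product graph. Converges only when
    lam < 1 / spectral_radius(kron(A1, A2))."""
    Ax = np.kron(A1, A2)                     # product-graph adjacency
    n = Ax.shape[0]
    one = np.ones(n)
    # k = 1^T (I - lam*Ax)^(-1) 1 sums lam^length over all common walks.
    return one @ np.linalg.solve(np.eye(n) - lam * Ax, one)

# Toy usage: a triangle vs. a path graph (illustrative adjacency matrices).
A_tri = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)
A_path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
print(random_walk_kernel(A_tri, A_tri), random_walk_kernel(A_tri, A_path))
```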
Preliminary results in large bone segmentation from 3D freehand ultrasound
NASA Astrophysics Data System (ADS)
Fanti, Zian; Torres, Fabian; Arámbula Cosío, Fernando
2013-11-01
Computer Assisted Orthopedic Surgery (CAOS) requires a correct registration between the patient in the operating room and the virtual models representing the patient in the computer. In order to increase the precision and accuracy of the registration, a set of new techniques that eliminate the need for fiducial markers has been developed. The majority of these newly developed registration systems are based on costly intraoperative imaging systems like Computed Tomography (CT) or Magnetic Resonance Imaging (MRI). An alternative to these methods is the use of an Ultrasound (US) imaging system for the implementation of a more cost-efficient intraoperative registration solution. In order to develop the registration solution with the US imaging system, the bone surface is segmented in both preoperative and intraoperative images, and the registration is done using the acquired surfaces. In this paper, we present preliminary results of a new approach to segment bone surfaces from ultrasound volumes acquired by means of 3D freehand ultrasound. The method is based on the enhancement of the voxels that belong to the bone surface and their subsequent segmentation. The enhancement process is based on the information provided by eigenanalysis of the multiscale 3D Hessian matrix. The preliminary results show that the final bone surfaces can be extracted from the enhanced volume using singular value thresholding.
Enhanced limonene production in cyanobacteria reveals photosynthesis limitations.
Wang, Xin; Liu, Wei; Xin, Changpeng; Zheng, Yi; Cheng, Yanbing; Sun, Su; Li, Runze; Zhu, Xin-Guang; Dai, Susie Y; Rentzepis, Peter M; Yuan, Joshua S
2016-12-13
Terpenes are the major secondary metabolites produced by plants, and have diverse industrial applications as pharmaceuticals, fragrance, solvents, and biofuels. Cyanobacteria are equipped with an efficient carbon fixation mechanism, and are ideal cell factories to produce various fuel and chemical products. Past efforts to produce terpenes in photosynthetic organisms have gained only limited success. Here we engineered the cyanobacterium Synechococcus elongatus PCC 7942 to efficiently produce limonene through a modeling-guided study. Computational modeling of limonene flux in response to photosynthetic output has revealed the downstream terpene synthase as a key metabolic flux-controlling node in the MEP (2-C-methyl-d-erythritol 4-phosphate) pathway-derived terpene biosynthesis. By enhancing the downstream limonene carbon sink, we achieved an over 100-fold increase in limonene productivity, in contrast to the marginal increase achieved through stepwise metabolic engineering. The establishment of a strong limonene flux revealed potential synergy between photosynthate output and terpene biosynthesis, leading to enhanced carbon flux into the MEP pathway. Moreover, we show that enhanced limonene flux would lead to NADPH accumulation, and slow down photosynthesis electron flow. Fine-tuning ATP/NADPH toward terpene biosynthesis could be a key parameter to adapt photosynthesis to support biofuel/bioproduct production in cyanobacteria. PMID:27911807
Peng, Kuan; He, Ling; Zhu, Ziqiang; Tang, Jingtian; Xiao, Jiaying
2013-12-01
Compared with commonly used analytical reconstruction methods, the frequency-domain finite element method (FEM) based approach has proven to be an accurate and flexible algorithm for photoacoustic tomography. However, the FEM-based algorithm is computationally demanding, especially for three-dimensional cases. To enhance the algorithm's efficiency, in this work a parallel computational strategy is implemented within the FEM-based reconstruction algorithm using the graphics-processing-unit (GPU) parallel framework known as the compute unified device architecture (CUDA). A series of simulation experiments is carried out to test the accuracy and the acceleration achieved by the improved method. The results indicate that the parallel calculation does not change the accuracy of the reconstruction algorithm, while its computational cost is reduced by a factor of 38.9 on a GTX 580 graphics card.
Efficient and Effective Change Principles in Active Videogames
Straker, Leon M.; Fenner, Ashley A.; Howie, Erin K.; Feltz, Deborah L.; Gray, Cindy M.; Lu, Amy Shirong; Mueller, Florian “Floyd”; Simons, Monique; Barnett, Lisa M.
2015-01-01
Active videogames have the potential to enhance population levels of physical activity but have not been successful in achieving this aim to date. This article considers a range of principles that may be important to the design of effective and efficient active videogames from diverse discipline areas, including behavioral sciences (health behavior change, motor learning, and serious games), business production (marketing and sales), and technology engineering and design (human–computer interaction/ergonomics and flow). Both direct and indirect pathways to impact on population levels of habitual physical activity are proposed, along with the concept of a game use lifecycle. Examples of current active and sedentary electronic games are used to understand how such principles may be applied. Furthermore, limitations of the current usage of theoretical principles are discussed. A suggested list of principles for best practice in active videogame design is proposed along with suggested research ideas to inform practice to enhance physical activity. PMID:26181680
Enhancement of Electrokinetically-Driven Flow Mixing in Microchannel with Added Side Channels
NASA Astrophysics Data System (ADS)
Yang, Ruey-Jen; Wu, Chien-Hsien; Tseng, Tzu-I; Huang, Sung-Bin; Lee, Gwo-Bin
2005-10-01
Electroosmotic flow (EOF) in microchannels is restricted to low Reynolds number regimes. Since the inertial forces are extremely weak in such regimes, turbulent conditions do not readily develop. Therefore, species mixing occurs primarily via diffusion, with the result that extended mixing channels are generally required. The present study considers a T-shaped microchannel configuration with a mixing channel of width W=280 μm. Computational fluid dynamics simulations and experiments were performed to investigate the influence on the mixing efficiency of various geometrical parameters, including the side-channel width, the side-channel separation, and the number of side-channel pairs. The influence of different applied voltages is also considered. The numerical results reveal that the mixing efficiency can be enhanced to yield a fourfold improvement by incorporating two pairs of side channels into the mixing channel. It was also found that the mixing performance depends significantly upon the magnitudes of the applied voltages.
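Mixing efficiency in such studies is typically computed from the spread of the concentration profile across the outlet of the mixing channel; a minimal sketch using one common index (the authors' exact definition may differ, and the profile data here are hypothetical):

```python
import numpy as np

def mixing_efficiency(c):
    """One common mixing index for a normalized (0..1) concentration profile
    sampled across the channel outlet: 1 - sigma/sigma_0, where sigma_0 = 0.5
    is the standard deviation of a completely unmixed 50/50 stream."""
    sigma = np.sqrt(np.mean((c - 0.5) ** 2))
    return 1.0 - sigma / 0.5

unmixed = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])     # efficiency 0.0
mixed = np.array([0.48, 0.52, 0.55, 0.45, 0.50, 0.51])  # efficiency near 1.0
print(mixing_efficiency(unmixed), mixing_efficiency(mixed))
```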
Agarwal, Pratul K.
2015-11-24
A method for analysis, control, and manipulation for improvement of the chemical reaction rate of a protein-mediated reaction is provided. Enzymes, which typically comprise protein molecules, are very efficient catalysts that enhance chemical reaction rates by many orders of magnitude. Enzymes are widely used for a number of functions in chemical, biochemical, pharmaceutical, and other purposes. The method identifies key protein vibration modes that control the chemical reaction rate of the protein-mediated reaction, providing identification of the factors that enable the enzymes to achieve the high rate of reaction enhancement. By controlling these factors, the function of enzymes may be modulated, i.e., the activity can either be increased for faster enzyme reaction or it can be decreased when a slower enzyme is desired. This method provides an inexpensive and efficient solution by utilizing computer simulations, in combination with available experimental data, to build suitable models and investigate the enzyme activity.
Sharifahmadian, Ershad
2006-01-01
The set partitioning in hierarchical trees (SPIHT) algorithm is a very effective and computationally simple technique for image and signal compression. Here the author modifies the algorithm to provide even better performance than standard SPIHT. The enhanced set partitioning in hierarchical trees (ESPIHT) algorithm runs faster than SPIHT and, in addition, reduces the number of bits in the stored or transmitted bit stream. The algorithm was applied to the compression of multichannel ECG data, together with a specific procedure, based on the modified algorithm, for more efficient compression of multichannel ECG data. The method was evaluated on selected records from the MIT-BIH arrhythmia database. According to the experiments, the proposed method attained significant results for the compression of multichannel ECG data. Furthermore, the proposed multichannel compression method can also be utilized efficiently to compress a single signal that is stored for a long time.
NASA Astrophysics Data System (ADS)
Albert, Brice J.; Pahng, Seong Ho; Alaniva, Nicholas; Sesti, Erika L.; Rand, Peter W.; Saliba, Edward P.; Scott, Faith J.; Choi, Eric J.; Barnes, Alexander B.
2017-10-01
Cryogenic sample temperatures can enhance NMR sensitivity by extending spin relaxation times to improve dynamic nuclear polarization (DNP) and by increasing Boltzmann spin polarization. We have developed an efficient heat exchanger with a liquid nitrogen consumption rate of only 90 L per day to perform magic-angle spinning (MAS) DNP experiments below 85 K. In this heat exchanger implementation, cold exhaust gas from the NMR probe is returned to the outer portion of a counterflow coil within an intermediate cooling stage to improve cooling efficiency of the spinning and variable temperature gases. The heat exchange within the counterflow coil is calculated with computational fluid dynamics to optimize the heat transfer. Experimental results using the novel counterflow heat exchanger demonstrate MAS DNP signal enhancements of 328 ± 3 at 81 ± 2 K, and 276 ± 4 at 105 ± 2 K.
Efficiency improvement of technological preparation of power equipment manufacturing
NASA Astrophysics Data System (ADS)
Milukov, I. A.; Rogalev, A. N.; Sokolov, V. P.; Shevchenko, I. V.
2017-11-01
Competitiveness of power equipment depends primarily on speeding up the development and mastering of new equipment samples and technologies, and on enhancing the organisation and management of design, manufacturing and operation. Current political, technological and economic conditions create an acute need to change the strategy and tactics of process planning, while also addressing equipment maintenance together with improvement of its efficiency and compatibility with domestically produced components. To solve these problems, using computer-aided process planning systems for process design at all stages of the power equipment life cycle is economically viable. Computer-aided process planning is developed for the purpose of improving process planning by using mathematical methods and optimisation of design and management processes on the basis of CALS technologies, which allows for simultaneous process design, process planning organisation and management based on mathematical and physical modelling of interrelated design objects and the production system. An integration of computer-aided systems providing the interaction of information and material processes at all stages of the product life cycle is proposed as an effective solution to the challenges of new equipment design and process planning.
Computer Controlled Portable Greenhouse Climate Control System for Enhanced Energy Efficiency
NASA Astrophysics Data System (ADS)
Datsenko, Anthony; Myer, Steve; Petties, Albert; Hustek, Ryan; Thompson, Mark
2010-04-01
This paper discusses a student project at Kettering University focusing on the design and construction of an energy efficient greenhouse climate control system. In order to maintain acceptable temperatures and stabilize temperature fluctuations in a portable plastic greenhouse economically, a computer controlled climate control system was developed to capture and store thermal energy incident on the structure during daylight periods and release the stored thermal energy during dark periods. The thermal storage mass for the greenhouse system consisted of a water filled base unit. The heat exchanger consisted of a system of PVC tubing. The control system used a programmable LabView computer interface to meet functional specifications that minimized temperature fluctuations and recorded data during operation. The greenhouse was a portable sized unit with a 5' x 5' footprint. Control input sensors were temperature, water level, and humidity sensors and output control devices were fan actuating relays and water fill solenoid valves. A Graphical User Interface was developed to monitor the system, set control parameters, and to provide programmable data recording times and intervals.
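A sketch of the kind of bang-bang control loop such a system runs; the sensor and relay interfaces are hypothetical placeholders (the project used a LabVIEW interface), and the setpoints are illustrative:

```python
import time

# Illustrative setpoints, not the project's actual values.
TEMP_HIGH_F = 85.0
TEMP_LOW_F = 60.0
WATER_MIN = 0.2

def control_step(sensors, actuators):
    """One pass of a bang-bang climate loop: run the fans to push excess
    daytime heat into the water storage mass, or to recover stored heat at
    night; keep the storage tank topped up. `sensors`/`actuators` are
    hypothetical driver objects standing in for the LabVIEW I/O."""
    t = sensors.read_temperature_f()
    # Circulate air through the PVC heat exchanger whenever the greenhouse
    # is outside the comfort band, charging or discharging the storage mass.
    actuators.fan_relay(on=(t > TEMP_HIGH_F or t < TEMP_LOW_F))
    # Maintain the thermal-storage mass (the water-filled base unit).
    actuators.fill_valve(open_=(sensors.read_water_level() < WATER_MIN))
    return t

def run(sensors, actuators, log, interval_s=60):
    """Programmable data-recording loop, as the GUI described above allows."""
    while True:
        log.append((time.time(), control_step(sensors, actuators)))
        time.sleep(interval_s)
```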
NASA Technical Reports Server (NTRS)
Carlson, Harry W.; Darden, Christine M.
1988-01-01
Extensive correlations of computer code results with experimental data are employed to illustrate the use of linearized theory attached flow methods for the estimation and optimization of the aerodynamic performance of simple hinged flap systems. Use of attached flow methods is based on the premise that high levels of aerodynamic efficiency require a flow that is as nearly attached as circumstances permit. A variety of swept wing configurations are considered ranging from fighters to supersonic transports, all with leading- and trailing-edge flaps for enhancement of subsonic aerodynamic efficiency. The results indicate that linearized theory attached flow computer code methods provide a rational basis for the estimation and optimization of flap system aerodynamic performance at subsonic speeds. The analysis also indicates that vortex flap design is not an opposing approach but is closely related to attached flow design concepts. The successful vortex flap design actually suppresses the formation of detached vortices to produce a small vortex which is restricted almost entirely to the leading edge flap itself.
Computer-aided diagnosis based on enhancement of degraded fundus photographs.
Jin, Kai; Zhou, Mei; Wang, Shaoze; Lou, Lixia; Xu, Yufeng; Ye, Juan; Qian, Dahong
2018-05-01
Retinal imaging is an important and effective tool for detecting retinal diseases. However, degraded images caused by the aberrations of the eye can disguise lesions, so that a diseased eye can be mistakenly diagnosed as normal. In this work, we propose a new method to enhance degraded-quality fundus images. The image is converted from the input RGB colour space to LAB colour space, and each normalized component is then enhanced using contrast-limited adaptive histogram equalization. Human visual system (HVS)-based fundus image quality assessment, combined with diagnosis by experts, is used to evaluate the enhancement. The study included 191 degraded-quality fundus photographs of 143 subjects with optic media opacity. Objective quality assessment of image enhancement (range: 0-1) indicated that our method improved colour retinal image quality from an average of 0.0773 (variance 0.0801) to an average of 0.3973 (variance 0.0756). Following enhancement, areas under the curve (AUC) were 0.996 for the glaucoma classifier, 0.989 for the diabetic retinopathy (DR) classifier, 0.975 for the age-related macular degeneration (AMD) classifier, and 0.979 for the classifier for other retinal diseases. This relatively simple method for enhancing degraded-quality fundus images achieves superior image enhancement, as demonstrated in a qualitative HVS-based image quality assessment. This retinal image enhancement may, therefore, be employed to assist ophthalmologists in more efficient screening of retinal diseases and in the development of computer-aided diagnosis. © 2017 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
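A minimal sketch of the described pipeline using OpenCV; the clip limit, tile size, and file names are illustrative defaults rather than the paper's tuned parameters:

```python
import cv2

def enhance_fundus(bgr):
    """Convert to LAB, apply contrast-limited adaptive histogram equalization
    (CLAHE) to each component, and convert back, per the method above."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    channels = [clahe.apply(c) for c in cv2.split(lab)]  # 8-bit channels
    return cv2.cvtColor(cv2.merge(channels), cv2.COLOR_LAB2BGR)

img = cv2.imread("degraded_fundus.png")            # hypothetical input file
cv2.imwrite("enhanced_fundus.png", enhance_fundus(img))
```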
Papież, Bartłomiej W; Franklin, James M; Heinrich, Mattias P; Gleeson, Fergus V; Brady, Michael; Schnabel, Julia A
2018-04-01
Deformable image registration, a key component of motion correction in medical imaging, needs to be efficient and provide plausible spatial transformations that reliably approximate biological aspects of complex human organ motion. Standard approaches, such as Demons registration, mostly use Gaussian regularization for organ motion, which, though computationally efficient, rules out application to intrinsically more complex organ motions, such as sliding interfaces. We propose regularization of motion based on supervoxels, which provides an integrated discontinuity-preserving prior for motions such as sliding. More precisely, we replace Gaussian smoothing by fast, structure-preserving, guided filtering to provide efficient, locally adaptive regularization of the estimated displacement field. We illustrate the approach by applying it to estimate sliding motions at lung and liver interfaces on challenging four-dimensional computed tomography (CT) and dynamic contrast-enhanced magnetic resonance imaging datasets. The results show that guided filter-based regularization improves the accuracy of lung and liver motion correction as compared to Gaussian smoothing. Furthermore, our framework achieves state-of-the-art results on a publicly available CT liver dataset. PMID:29662918
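A sketch of the core substitution described above, swapping Gaussian smoothing of the displacement field for guided filtering; it assumes the opencv-contrib ximgproc module, and the radius and eps values are illustrative:

```python
import numpy as np
import cv2

def regularize_displacement(disp, guide, radius=8, eps=1e-2):
    """Demons-style regularization step: smooth each displacement component,
    guided by the anatomical image so smoothing adapts to structure
    boundaries (e.g., sliding lung/liver interfaces) instead of blurring
    across them. disp: (H, W, 2) field; guide: (H, W) image."""
    guide32 = guide.astype(np.float32)
    out = np.empty_like(disp, dtype=np.float32)
    for c in range(disp.shape[-1]):          # one pass per component
        out[..., c] = cv2.ximgproc.guidedFilter(
            guide32, disp[..., c].astype(np.float32), radius, eps)
    return out

# Gaussian baseline the paper compares against would instead be:
#   out[..., c] = cv2.GaussianBlur(disp[..., c], (0, 0), sigma)
```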
NASA Astrophysics Data System (ADS)
Wu, T.; Li, T.; Li, J.; Wang, G.
2017-12-01
Improved drainage network extraction can be achieved by flow enforcement, whereby information from known river maps is imposed on the flow-path modeling process. However, the common elevation-based stream burning method can sometimes cause unintended topological errors and misinterpret the overall drainage pattern. We present an enhanced flow enforcement method to facilitate accurate and efficient drainage network extraction. Both the topology of the mapped hydrography and the initial landscape of the DEM are preserved and fully utilized in the proposed method. An improved stream rasterization is achieved, yielding a continuous, unambiguous and stream-collision-free raster equivalent of the stream vectors for flow enforcement. By imposing priority-based enforcement with a complementary flow-direction enhancement procedure, the drainage patterns of the mapped hydrography are fully represented in the derived results. The proposed method was tested over the Rogue River Basin using DEMs of various resolutions. As indicated by visual and statistical analyses, the proposed method has three major advantages: (1) it significantly reduces the occurrence of topological errors, yielding very accurate watershed partition and channel delineation; (2) it ensures scale-consistent performance on DEMs of various resolutions; and (3) the entire extraction process is well designed to achieve great computational efficiency.
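For contrast, a minimal sketch of the conventional elevation-based stream burning that the abstract critiques; the enhanced method replaces this fixed decrement with topology-preserving, priority-based enforcement:

```python
import numpy as np

def burn_streams(dem, stream_mask, drop=10.0):
    """Classical stream burning: lower DEM cells on the rasterized stream
    network so derived flow paths follow the known hydrography. The fixed
    decrement is what can introduce the topological errors discussed above."""
    burned = dem.copy()
    burned[stream_mask] -= drop     # fixed elevation decrement along streams
    return burned
```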
Assessment of Preconditioner for a USM3D Hierarchical Adaptive Nonlinear Method (HANIM) (Invited)
NASA Technical Reports Server (NTRS)
Pandya, Mohagna J.; Diskin, Boris; Thomas, James L.; Frink, Neal T.
2016-01-01
Enhancements to the previously reported mixed-element USM3D Hierarchical Adaptive Nonlinear Iteration Method (HANIM) framework have been made to further improve robustness, efficiency, and accuracy of computational fluid dynamic simulations. The key enhancements include a multi-color line-implicit preconditioner, a discretely consistent symmetry boundary condition, and a line-mapping method for the turbulence source term discretization. The USM3D iterative convergence for the turbulent flows is assessed on four configurations. The configurations include a two-dimensional (2D) bump-in-channel, the 2D NACA 0012 airfoil, a three-dimensional (3D) bump-in-channel, and a 3D hemisphere cylinder. The Reynolds Averaged Navier Stokes (RANS) solutions have been obtained using a Spalart-Allmaras turbulence model and families of uniformly refined nested grids. Two types of HANIM solutions using line- and point-implicit preconditioners have been computed. Additional solutions using the point-implicit preconditioner alone (PA) method that broadly represents the baseline solver technology have also been computed. The line-implicit HANIM shows superior iterative convergence in most cases with progressively increasing benefits on finer grids.
Enhanced conformational sampling using enveloping distribution sampling.
Lin, Zhixiong; van Gunsteren, Wilfred F
2013-10-14
To lessen the problem of insufficient conformational sampling in biomolecular simulations is still a major challenge in computational biochemistry. In this article, an application of the method of enveloping distribution sampling (EDS) is proposed that addresses this challenge and its sampling efficiency is demonstrated in simulations of a hexa-β-peptide whose conformational equilibrium encompasses two different helical folds, i.e., a right-handed 2.7(10∕12)-helix and a left-handed 3(14)-helix, separated by a high energy barrier. Standard MD simulations of this peptide using the GROMOS 53A6 force field did not reach convergence of the free enthalpy difference between the two helices even after 500 ns of simulation time. The use of soft-core non-bonded interactions in the centre of the peptide did enhance the number of transitions between the helices, but at the same time led to neglect of relevant helical configurations. In the simulations of a two-state EDS reference Hamiltonian that envelops both the physical peptide and the soft-core peptide, sampling of the conformational space of the physical peptide ensures that physically relevant conformations can be visited, and sampling of the conformational space of the soft-core peptide helps to enhance the transitions between the two helices. The EDS simulations sampled many more transitions between the two helices and showed much faster convergence of the relative free enthalpy of the two helices compared with the standard MD simulations with only a slightly larger computational effort to determine optimized EDS parameters. Combined with various methods to smoothen the potential energy surface, the proposed EDS application will be a powerful technique to enhance the sampling efficiency in biomolecular simulations.
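For orientation, the two-state EDS reference Hamiltonian referred to above has, in the standard enveloping distribution sampling formulation, the form below, where s is a smoothness parameter and E_1^R, E_2^R are energy offsets (the paper's exact parametrization may differ):

$$ H_{\mathrm{ref}} = -\frac{1}{\beta s}\,\ln\!\left[ e^{-\beta s\,\left(H_{1}-E_{1}^{R}\right)} + e^{-\beta s\,\left(H_{2}-E_{2}^{R}\right)} \right], \qquad \beta = \frac{1}{k_{B}T} $$

With suitable s and offsets, the reference state envelops both end states, so a single simulation can visit the low-energy regions of both the physical and the soft-core peptide.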
Enhancing atlas based segmentation with multiclass linear classifiers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sdika, Michaël, E-mail: michael.sdika@creatis.insa-lyon.fr
Purpose: To present a method to enrich atlases for atlas-based segmentation. Such enriched atlases can then be used as a single atlas or within a multiatlas framework. Methods: In this paper, machine learning techniques have been used to enhance the atlas-based segmentation approach. The enhanced atlas defined in this work is a pair composed of a gray-level image alongside an image of multiclass classifiers, with one classifier per voxel. Each classifier embeds local information from the whole training dataset that allows for the correction of some systematic errors in the segmentation and accounts for possible local registration errors. The authors also propose to use these images of classifiers within a multiatlas framework: results produced by a set of such local classifier atlases can be combined using a label fusion method. Results: Experiments have been made on the in vivo images of the IBSR dataset and a comparison has been made with several state-of-the-art methods such as FreeSurfer and the multiatlas nonlocal patch-based method of Coupé or Rousseau. These experiments show that the method is competitive with state-of-the-art methods while having a low computational cost. Further enhancement has also been obtained with a multiatlas version of the method. It is also shown that, in this case, nonlocal fusion is unnecessary, so the multiatlas fusion can be done efficiently. Conclusions: The single-atlas version has similar quality to state-of-the-art multiatlas methods but with the computational cost of a naive single-atlas segmentation. The multiatlas version offers an improvement in quality and can be done efficiently without a nonlocal strategy.
Yu, Si; Gui, Xiaolin; Lin, Jiancai; Tian, Feng; Zhao, Jianqiang; Dai, Min
2014-01-01
Cloud computing gets increasing attention for its capacity to free developers from infrastructure management tasks. However, recent works reveal that side channel attacks can lead to privacy leakage in the cloud. Enhancing isolation between users is an effective solution to eliminate such attacks. In this paper, to eliminate side channel attacks, we investigate the isolation enhancement scheme from the aspect of virtual machine (VM) management. The security-awareness VMs management scheme (SVMS), a VM isolation enhancement scheme to defend against side channel attacks, is proposed. First, we use the aggressive conflict of interest relation (ACIR) and aggressive in ally with relation (AIAR) to describe user constraint relations. Second, based on the Chinese wall policy, we put forward four isolation rules. Third, VM placement and migration algorithms are designed to enforce VM isolation between conflicting users. Finally, based on the normal distribution, we conduct a series of experiments to evaluate SVMS. The experimental results show that SVMS is efficient in guaranteeing isolation between VMs owned by conflicting users, while the resource utilization rate decreases, but not by much. PMID:24688434
NASA Technical Reports Server (NTRS)
1986-01-01
Digital Imaging is the computer processed numerical representation of physical images. Enhancement of images results in easier interpretation. Quantitative digital image analysis by Perceptive Scientific Instruments, locates objects within an image and measures them to extract quantitative information. Applications are CAT scanners, radiography, microscopy in medicine as well as various industrial and manufacturing uses. The PSICOM 327 performs all digital image analysis functions. It is based on Jet Propulsion Laboratory technology, is accurate and cost efficient.
ERIC Educational Resources Information Center
Swarlis, Linda L.
2008-01-01
The test scores of spatial ability for women lag behind those of men in many spatial tests. On the Mental Rotations Test (MRT), a significant gender gap has existed for over 20 years and continues to exist. High spatial ability has been linked to efficiencies in typical computing tasks including Web and database searching, text editing, and…
Liu, Yanchi; Wang, Xue; Liu, Youda; Cui, Sujin
2016-06-27
Power quality analysis issues, especially the measurement of harmonic and interharmonic in cyber-physical energy systems, are addressed in this paper. As new situations are introduced to the power system, the impact of electric vehicles, distributed generation and renewable energy has introduced extra demands to distributed sensors, waveform-level information and power quality data analytics. Harmonics and interharmonics, as the most significant disturbances, require carefully designed detection methods for an accurate measurement of electric loads whose information is crucial to subsequent analyzing and control. This paper gives a detailed description of the power quality analysis framework in networked environment and presents a fast and resolution-enhanced method for harmonic and interharmonic measurement. The proposed method first extracts harmonic and interharmonic components efficiently using the single-channel version of Robust Independent Component Analysis (RobustICA), then estimates the high-resolution frequency from three discrete Fourier transform (DFT) samples with little additional computation, and finally computes the amplitudes and phases with the adaptive linear neuron network. The experiments show that the proposed method is time-efficient and leads to a better accuracy of the simulated and experimental signals in the presence of noise and fundamental frequency deviation, thus providing a deeper insight into the (inter)harmonic sources or even the whole system. PMID:27355946
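The "high-resolution frequency from three DFT samples" step can be illustrated with a standard three-sample interpolator (Jacobsen-style); this is a generic sketch, not necessarily the authors' exact estimator:

```python
import numpy as np

def interp_freq(x, fs):
    """Estimate a tone's frequency beyond DFT bin resolution using the peak
    bin and its two neighbors (Jacobsen-style complex interpolation,
    rectangular window)."""
    X = np.fft.rfft(x)
    k = np.argmax(np.abs(X[1:-1])) + 1           # peak bin (skip DC/Nyquist)
    num = X[k - 1] - X[k + 1]
    den = 2 * X[k] - X[k - 1] - X[k + 1]
    delta = np.real(num / den)                   # fractional-bin correction
    return (k + delta) * fs / len(x)

fs = 10_000.0
t = np.arange(4096) / fs
sig = np.sin(2 * np.pi * 123.4 * t)              # synthetic test tone
print(interp_freq(sig, fs))                      # close to 123.4 Hz
```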
Review of Slow-Wave Structures
NASA Technical Reports Server (NTRS)
Wallett, Thomas M.; Qureshi, A. Haq
1994-01-01
The majority of recent theoretical and experimental reports published in the literature dealing with helical slow-wave structures focus on the dispersion characteristics and their effects due to the finite helix wire thickness and attenuation, dielectric loading, metal loading, and the introduction of plasma. In many papers, an effective dielectric constant is used to take into account helix wire dimensions and conductivity losses, while the propagation constant of the signal and the interaction impedance of the structure are found to depend on the surface resistivity of the helix. Also, various dielectric supporting rods are simulated by one or several uniform cylinders having an effective dielectric constant, while metal vane loading and plasma effects are incorporated in the effective dielectric constant. The papers dealing with coupled cavities and folded or loaded wave guides describe equivalent circuit models, efficiency enhancement, and the prediction of instabilities for these structures. Equivalent circuit models of various structures are found using computer software programs SUPERFISH and TOUCHSTONE. Efficiency enhancement in tubes is achieved through dynamic velocity and phase adjusted tapers using computer techniques. The stability threshold of unwanted antisymmetric and higher order modes is predicted using SOS and MAGIC codes and the dependence of higher order modes on beam conductance, section length, and effective Q of a cavity is shown.
Li, Congcong; Zhang, Xi; Wang, Haiping; Li, Dongfeng
2018-01-01
Vehicular sensor networks have been widely applied in intelligent traffic systems in recent years. Because of the specificity of vehicular sensor networks, they require an enhanced, secure and efficient authentication scheme. Existing authentication protocols suffer from several problems, such as high computational overhead for certificate distribution and revocation, strong reliance on tamper-proof devices, limited scalability when building many secure channels, and an inability to detect hardware tampering attacks. In this paper, an improved authentication scheme using certificateless public key cryptography is proposed to address these problems. A security analysis of our scheme shows that our protocol provides enhanced secure anonymous authentication, which is resilient against major security threats. Furthermore, the proposed scheme reduces the incidence of node compromise and replication attacks. The scheme also provides a malicious-node detection and warning mechanism, which can quickly identify compromised static nodes and immediately alert the administrative department. Performance evaluations show that the scheme obtains better trade-offs between security and efficiency than the well-known available schemes. PMID:29324719
Mori, Takaharu; Miyashita, Naoyuki; Im, Wonpil; Feig, Michael; Sugita, Yuji
2016-07-01
This paper reviews various enhanced conformational sampling methods and explicit/implicit solvent/membrane models, as well as their recent applications to the exploration of the structure and dynamics of membranes and membrane proteins. Molecular dynamics simulations have become an essential tool to investigate biological problems, and their success relies on proper molecular models together with efficient conformational sampling methods. The implicit representation of solvent/membrane environments is a reasonable approximation to the explicit all-atom models, considering the balance between computational cost and simulation accuracy. Implicit models can be easily combined with replica-exchange molecular dynamics methods to explore a wider conformational space of a protein. Other molecular models and enhanced conformational sampling methods are also briefly discussed. As application examples, we introduce recent simulation studies of glycophorin A, phospholamban, amyloid precursor protein, and mixed lipid bilayers, and discuss the accuracy and efficiency of each simulation model and method. This article is part of a Special Issue entitled: Membrane Proteins edited by J.C. Gumbart and Sergei Noskov. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Some Thoughts Regarding Practical Quantum Computing
NASA Astrophysics Data System (ADS)
Ghoshal, Debabrata; Gomez, Richard; Lanzagorta, Marco; Uhlmann, Jeffrey
2006-03-01
Quantum computing has become an important area of research in computer science because of its potential to provide more efficient algorithmic solutions to certain problems than are possible with classical computing. The ability of performing parallel operations over an exponentially large computational space has proved to be the main advantage of the quantum computing model. In this regard, we are particularly interested in the potential applications of quantum computers to enhance real software systems of interest to the defense, industrial, scientific and financial communities. However, while much has been written in popular and scientific literature about the benefits of the quantum computational model, several of the problems associated to the practical implementation of real-life complex software systems in quantum computers are often ignored. In this presentation we will argue that practical quantum computation is not as straightforward as commonly advertised, even if the technological problems associated to the manufacturing and engineering of large-scale quantum registers were solved overnight. We will discuss some of the frequently overlooked difficulties that plague quantum computing in the areas of memories, I/O, addressing schemes, compilers, oracles, approximate information copying, logical debugging, error correction and fault-tolerant computing protocols.
NASA Astrophysics Data System (ADS)
Choi, Chu Hwan
2002-09-01
Ab initio chemistry has shown great promise in reproducing experimental results and in its predictive power. The many complicated computational models and methods seem impenetrable to an inexperienced scientist, and the reliability of the results is not easily interpreted. The application of midbond orbitals is used to determine a general method for use in calculating weak intermolecular interactions, especially those involving electron-deficient systems. Using the criteria of consistency, flexibility, accuracy and efficiency we propose a supermolecular method of calculation using the full counterpoise (CP) method of Boys and Bernardi, coupled with Moller-Plesset (MP) perturbation theory as an efficient electron-correlative method. We also advocate the use of the highly efficient and reliable correlation-consistent polarized valence basis sets of Dunning. To these basis sets, we add a general set of midbond orbitals and demonstrate greatly enhanced efficiency in the calculation. The H2-H2 dimer is taken as a benchmark test case for our method, and details of the computation are elaborated. Our method reproduces with great accuracy the dissociation energies of other previous theoretical studies. The added efficiency of extending the basis sets with conventional means is compared with the performance of our midbond-extended basis sets. The improvement found with midbond functions is notably superior in every case tested. Finally, a novel application of midbond functions to the BH5 complex is presented. The system is an unusual van der Waals complex. The interaction potential curves are presented for several standard basis sets and midbond-enhanced basis sets, as well as for two popular, alternative correlation methods. We report that MP theory appears to be superior to coupled-cluster (CC) in speed, while it is more stable than B3LYP, a widely-used density functional theory (DFT). Application of our general method yields excellent results for the midbond basis sets. Again they prove superior to conventional extended basis sets. Based on these results, we recommend our general approach as a highly efficient, accurate method for calculating weakly interacting systems.
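The full counterpoise scheme of Boys and Bernardi mentioned above computes the interaction energy with every fragment evaluated in the complete dimer basis (subscripts denote the system, superscripts the basis; in the midbond-extended calculations the ghost basis also carries the bond functions):

$$ \Delta E_{\mathrm{int}}^{\mathrm{CP}} = E_{AB}^{AB} - E_{A}^{AB} - E_{B}^{AB} $$

Evaluating the monomer energies in the full dimer basis removes the basis-set superposition error that would otherwise contaminate the weak van der Waals interaction energies discussed here.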
The TeraShake Computational Platform for Large-Scale Earthquake Simulations
NASA Astrophysics Data System (ADS)
Cui, Yifeng; Olsen, Kim; Chourasia, Amit; Moore, Reagan; Maechling, Philip; Jordan, Thomas
Geoscientific and computer science researchers with the Southern California Earthquake Center (SCEC) are conducting a large-scale, physics-based, computationally demanding earthquake system science research program with the goal of developing predictive models of earthquake processes. The computational demands of this program continue to increase rapidly as these researchers seek to perform physics-based numerical simulations of earthquake processes at ever larger scales. To meet the needs of this research program, a multiple-institution team coordinated by SCEC has integrated several scientific codes into a numerical modeling-based research tool we call the TeraShake computational platform (TSCP). A central component of the TSCP is a highly scalable earthquake wave propagation simulation program called the TeraShake anelastic wave propagation (TS-AWP) code. In this chapter, we describe how we extended an existing, stand-alone, well-validated, finite-difference, anelastic wave propagation modeling code into the highly scalable and widely used TS-AWP and then integrated this code into the TeraShake computational platform, which provides end-to-end (initialization to analysis) research capabilities. We also describe the techniques used to enhance the parallel performance of the TS-AWP on TeraGrid supercomputers, as well as the TeraShake simulation phases, including input preparation, run time, data archive management, and visualization. As a result of our efforts to improve its parallel efficiency, the TS-AWP has now shown highly efficient strong scaling on over 40K processors on IBM’s BlueGene/L Watson computer. In addition, the TSCP has developed into a computational system that is useful to many members of the SCEC community for performing large-scale earthquake simulations.
Xie, Shuangshuang; Liu, Chenhao; Yu, Zichuan; Ren, Tao; Hou, Jiancun; Chen, Lihua; Huang, Lixiang; Cheng, Yue; Ji, Qian; Yin, Jianzhong; Zhang, Longjiang; Shen, Wen
2015-12-01
To explore the efficiency, cost, and examination time of one-stop-shop gadoxetic acid disodium (Gd-EOB-DTPA)-enhanced magnetic resonance imaging (MRI) in the preoperative evaluation of parent donors, by comparison with multidetector computed tomography combined with conventional MR cholangiopancreatography (MDCT-MRCP). Forty parent donors were evaluated with MDCT-MRCP, and another 40 sex-, age-, and weight-matched donors with Gd-EOB-DTPA-enhanced MRI. Anatomical variations and graft volume determined by pre- and intra-operative findings, and the costs and time for imaging, were recorded. Image quality was ranked on a 4-point scale and compared between both groups. Gd-EOB-DTPA-enhanced MRI provided better image quality than MDCT-MRCP for the depiction of portal veins and bile ducts by both reviewers (p < 0.05) and of hepatic veins by one reviewer (p < 0.05), but not for hepatic arteries (both reviewers, p < 0.01). Sixty-nine living donors proceeded to liver donation, with all anatomical findings accurately confirmed by intra-operative findings. The "in-room" time of Gd-EOB-DTPA-enhanced MRI was 12 min longer than that of MDCT-MRCP. Gd-EOB-DTPA-enhanced MRI was cheaper than MDCT-MRCP (US$519.72 vs. US$631.85). One-stop-shop Gd-EOB-DTPA-enhanced MRI has similar diagnostic accuracy to MDCT-MRCP and can provide additional benefit in terms of cost and convenience in the preoperative evaluation of parent donors. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
NASA Technical Reports Server (NTRS)
Lin, Z.; Stamnes, S.; Jin, Z.; Laszlo, I.; Tsay, S. C.; Wiscombe, W. J.; Stamnes, K.
2015-01-01
A successor version 3 of DISORT (DISORT3) is presented with important upgrades that improve the accuracy, efficiency, and stability of the algorithm. Compared with version 2 (DISORT2 released in 2000) these upgrades include (a) a redesigned BRDF computation that improves both speed and accuracy, (b) a revised treatment of the single scattering correction, and (c) additional efficiency and stability upgrades for beam sources. In DISORT3 the BRDF computation is improved in the following three ways: (i) the Fourier decomposition is prepared "off-line", thus avoiding the repeated internal computations done in DISORT2; (ii) a large enough number of terms in the Fourier expansion of the BRDF is employed to guarantee accurate values of the expansion coefficients (default is 200 instead of 50 in DISORT2); (iii) in the post processing step the reflection of the direct attenuated beam from the lower boundary is included resulting in a more accurate single scattering correction. These improvements in the treatment of the BRDF have led to improved accuracy and a several-fold increase in speed. In addition, the stability of beam sources has been improved by removing a singularity occurring when the cosine of the incident beam angle is too close to the reciprocal of any of the eigenvalues. The efficiency for beam sources has been further improved from reducing by a factor of 2 (compared to DISORT2) the dimension of the linear system of equations that must be solved to obtain the particular solutions, and by replacing the LINPAK routines used in DISORT2 by LAPACK 3.5 in DISORT3. These beam source stability and efficiency upgrades bring enhanced stability and an additional 5-7% improvement in speed. Numerical results are provided to demonstrate and quantify the improvements in accuracy and efficiency of DISORT3 compared to DISORT2.
NASA Astrophysics Data System (ADS)
Wei, Xiaohui; Li, Weishan; Tian, Hailong; Li, Hongliang; Xu, Haixiao; Xu, Tianfu
2015-07-01
The numerical simulation of multiphase flow and reactive transport in porous media for complex subsurface problems is a computationally intensive application. To meet the increasing computational requirements, this paper presents a parallel computing method and architecture. Derived from TOUGHREACT, a well-established code for simulating subsurface multiphase flow and reactive transport problems, we developed the high-performance code THC-MP for massively parallel computers, which greatly extends the computational capability of the original code. The domain decomposition method was applied to the coupled numerical computing procedure in the THC-MP. We designed the distributed data structure, and implemented the data initialization and exchange between the computing nodes and the core solving module using a hybrid parallel iterative and direct solver. Numerical accuracy of the THC-MP was verified on a CO2 injection-induced reactive transport problem by comparing the results obtained from parallel computing with those from sequential computing (the original code). Execution efficiency and code scalability were examined through field-scale carbon sequestration applications on a multicore cluster. The results demonstrate the enhanced performance achieved using the THC-MP on parallel computing facilities.
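A minimal sketch of the ghost-cell (halo) exchange at the heart of such a domain decomposition, using mpi4py on a 1-D strip decomposition; the array names and the decomposition itself are illustrative, not THC-MP's actual data structures:

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank owns a strip of the grid plus one ghost cell on each side.
n_local = 100
field = np.zeros(n_local + 2)
field[1:-1] = rank                  # stand-in for pressure/concentration unknowns

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Exchange boundary values with neighbors before each solver iteration.
comm.Sendrecv(sendbuf=field[1:2], dest=left, recvbuf=field[-1:], source=right)
comm.Sendrecv(sendbuf=field[-2:-1], dest=right, recvbuf=field[0:1], source=left)
```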
Johnston, Jennifer M.
2014-01-01
The majority of biological processes mediated by G Protein-Coupled Receptors (GPCRs) take place on timescales that are not conveniently accessible to standard molecular dynamics (MD) approaches, notwithstanding the current availability of specialized parallel computer architectures, and efficient simulation algorithms. Enhanced MD-based methods have started to assume an important role in the study of the rugged energy landscape of GPCRs by providing mechanistic details of complex receptor processes such as ligand recognition, activation, and oligomerization. We provide here an overview of these methods in their most recent application to the field. PMID:24158803
Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing
Zhang, Fan; Li, Guojun; Li, Wei; Hu, Wei; Hu, Yuxin
2016-01-01
With the development of synthetic aperture radar (SAR) technologies in recent years, the huge amount of remote sensing data brings challenges for real-time imaging processing. Therefore, high performance computing (HPC) methods have been presented to accelerate SAR imaging, especially the GPU based methods. In the classical GPU based imaging algorithm, GPU is employed to accelerate image processing by massive parallel computing, and CPU is only used to perform the auxiliary work such as data input/output (IO). However, the computing capability of CPU is ignored and underestimated. In this work, a new deep collaborative SAR imaging method based on multiple CPU/GPU is proposed to achieve real-time SAR imaging. Through the proposed tasks partitioning and scheduling strategy, the whole image can be generated with deep collaborative multiple CPU/GPU computing. In the part of CPU parallel imaging, the advanced vector extension (AVX) method is firstly introduced into the multi-core CPU parallel method for higher efficiency. As for the GPU parallel imaging, not only the bottlenecks of memory limitation and frequent data transferring are broken, but also kinds of optimized strategies are applied, such as streaming, parallel pipeline and so on. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method enhances the efficiency of SAR imaging on single-core CPU by 270 times and realizes the real-time imaging in that the imaging rate outperforms the raw data generation rate. PMID:27070606
Heat-driven liquid metal cooling device for the thermal management of a computer chip
NASA Astrophysics Data System (ADS)
Ma, Kun-Quan; Liu, Jing
2007-08-01
The tremendous heat generated in a computer chip or very large scale integrated circuit raises many challenging issues to be solved. Recently, liquid metal with a low melting point was established as the most conductive coolant for efficiently cooling the computer chip. Here, by making full use of the double merits of the liquid metal, i.e. superior heat transfer performance and electromagnetically drivable ability, we demonstrate for the first time the liquid-cooling concept for the thermal management of a computer chip using waste heat to power the thermoelectric generator (TEG) and thus the flow of the liquid metal. Such a device consumes no external net energy, which warrants it a self-supporting and completely silent liquid-cooling module. Experiments on devices driven by one or two stage TEGs indicate that a dramatic temperature drop on the simulating chip has been realized without the aid of any fans. The higher the heat load, the larger will be the temperature decrease caused by the cooling device. Further, the two TEGs will generate a larger current if a copper plate is sandwiched between them to enhance heat dissipation there. This new method is expected to be significant in future thermal management of a desk or notebook computer, where both efficient cooling and extremely low energy consumption are of major concern.
Structural optimization with approximate sensitivities
NASA Technical Reports Server (NTRS)
Patnaik, S. N.; Hopkins, D. A.; Coroneos, R.
1994-01-01
Computational efficiency in structural optimization can be enhanced if the intensive computations associated with the calculation of the sensitivities, that is, the gradients of the behavior constraints, are reduced. An approximation to the gradients of the behavior constraints that can be generated with a small amount of numerical computation is proposed. Structural optimization with these approximate sensitivities produced correct optimum solutions. The approximate gradients performed well with different nonlinear programming methods, such as the sequence of unconstrained minimization technique, the method of feasible directions, sequential quadratic programming, and sequential linear programming. Structural optimization with approximate gradients can reduce by one third the CPU time that would otherwise be required to solve the problem with explicit closed-form gradients. The proposed gradient approximation shows potential to reduce the intensive computation that has been associated with traditional structural optimization.
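The trade-off the abstract describes can be made concrete with a generic example; note this forward-difference scheme illustrates approximate sensitivities in general, not the specific approximation proposed in the paper:

```python
import numpy as np

def approx_grad(g, x, h=1e-6):
    """Cheap forward-difference approximation to the gradient of a behavior
    constraint g at design point x: n+1 constraint evaluations instead of an
    exact closed-form derivation. (Generic illustration only; the cited work
    derives its own, cheaper approximation.)"""
    g0 = g(x)
    grad = np.empty_like(x)
    for i in range(x.size):
        xp = x.copy()
        xp[i] += h
        grad[i] = (g(xp) - g0) / h
    return grad

# e.g., a hypothetical stress-ratio constraint on a two-variable design
g = lambda x: x[0] ** 2 + 3 * x[1] - 1.0
print(approx_grad(g, np.array([1.0, 2.0])))   # approximately [2, 3]
```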
CFD Research, Parallel Computation and Aerodynamic Optimization
NASA Technical Reports Server (NTRS)
Ryan, James S.
1995-01-01
During the last five years, CFD has matured substantially. Pure CFD research remains to be done, but much of the focus has shifted to the integration of CFD into the design process. The work under these cooperative agreements reflects this trend. The recent work, and the work which is planned, is designed to enhance the competitiveness of the US aerospace industry. CFD and optimization approaches are being developed and tested, so that industry can better choose which methods to adopt in their design processes. The range of computer architectures has been dramatically broadened, as the assumption that only huge vector supercomputers could be useful has faded. Today, researchers and industry can trade off time, cost, and availability, choosing vector supercomputers, scalable parallel architectures, networked workstations, or heterogeneous combinations of these to complete required computations efficiently.
Computational Fact Checking from Knowledge Networks
Ciampaglia, Giovanni Luca; Shiralkar, Prashant; Rocha, Luis M.; Bollen, Johan; Menczer, Filippo; Flammini, Alessandro
2015-01-01
Traditional fact checking by expert journalists cannot keep up with the enormous volume of information that is now generated online. Computational fact checking may significantly enhance our ability to evaluate the veracity of dubious information. Here we show that the complexities of human fact checking can be approximated quite well by finding the shortest path between concept nodes under properly defined semantic proximity metrics on knowledge graphs. Framed as a network problem, this approach is feasible with efficient computational techniques. We evaluate this approach by examining tens of thousands of claims related to history, entertainment, geography, and biographical information using a public knowledge graph extracted from Wikipedia. Statements independently known to be true consistently receive higher support via our method than do false ones. These findings represent a significant step toward scalable computational fact-checking methods that may one day mitigate the spread of harmful misinformation. PMID:26083336
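The path-based scoring idea can be sketched compactly. The snippet below is an illustrative variant, not the paper's exact semantic proximity metric: edges through high-degree (generic) nodes are made expensive, and the support of a claimed relation decays with the cheapest path cost. The mini-graph is invented for the example.

```python
# Illustrative degree-penalized shortest-path scoring on a toy knowledge graph.
import math
import networkx as nx

G = nx.Graph()
G.add_edges_from([("Barack Obama", "Honolulu"), ("Honolulu", "Hawaii"),
                  ("Barack Obama", "United States"), ("Hawaii", "United States"),
                  ("United States", "Chicago")])

for u, v in G.edges():
    # paths through generic hubs accumulate more cost
    G[u][v]["w"] = math.log(1 + max(G.degree(u), G.degree(v)))

def support(subj, obj):
    cost = nx.shortest_path_length(G, subj, obj, weight="w")
    return 1.0 / (1.0 + cost)     # higher = better supported

print(support("Barack Obama", "Hawaii"))
```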
NASA Technical Reports Server (NTRS)
1991-01-01
The technical effort and computer code enhancements performed during the sixth year of the Probabilistic Structural Analysis Methods program are summarized. Various capabilities are described to probabilistically combine structural response and structural resistance to compute component reliability. A library of structural resistance models is implemented in the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) code that includes fatigue, fracture, creep, multi-factor interaction, and other important effects. In addition, a user interface was developed for user-defined resistance models. An accurate and efficient reliability method was developed and successfully implemented in the NESSUS code to compute component reliability based on user-selected response and resistance models. A risk module was developed to compute component risk with respect to cost, performance, or user-defined criteria. The new component risk assessment capabilities were validated and demonstrated using several examples. Various supporting methodologies were also developed in support of component risk assessment.
NASA Technical Reports Server (NTRS)
Mckee, James W.
1990-01-01
This volume (2 of 4) contains the specification, structured flow charts, and code listing for the protocol. The purpose of an autonomous power system on a spacecraft is to relieve humans from having to continuously monitor and control the generation, storage, and distribution of power in the craft. This implies that algorithms will have been developed to monitor and control the power system. The power system will contain computers on which the algorithms run. There should be one control computer system that makes the high-level decisions and sends commands to and receives data from the other distributed computers. This requires a communications network and an efficient protocol by which the computers will communicate. One of the major requirements on the protocol is that it be real time, because of the need to control the power elements.
Users' evaluation of the Navy Computer-Assisted Medical Diagnosis System.
Merrill, L L; Pearsall, D M; Gauker, E D
1996-01-01
U.S. Navy Independent Duty Corpsmen (IDCs) aboard small ships and submarines are responsible for all clinical and related health care duties while at sea. During deployment, life-threatening illnesses sometimes require evacuation to a shore-based treatment facility. At-sea evacuations are dangerous, expensive, and may compromise the mission of the vessel. Therefore, Group Medical Officers and IDCs were trained to use the Navy Computer-Assisted Medical Diagnosis (NCAMD) system during deployment. They were then surveyed to evaluate the NCAMD system. Their responses show that NCAMD is a cost-efficient, user-friendly package. It is easy to learn, and is especially valuable for training in the diagnosis of chest and abdominal complaints. However, the delivery of patient care at sea would significantly improve if computer hardware were upgraded to current industry standards. Also, adding various computer peripheral devices, structured forms, and reference materials to the at-sea clinician's resources could enhance shipboard patient care.
Efficient Reformulation of HOTFGM: Heat Conduction with Variable Thermal Conductivity
NASA Technical Reports Server (NTRS)
Zhong, Yi; Pindera, Marek-Jerzy; Arnold, Steven M. (Technical Monitor)
2002-01-01
Functionally graded materials (FGMs) have become one of the major research topics in the mechanics of materials community during the past fifteen years. FGMs are heterogeneous materials, characterized by spatially variable microstructure, and thus spatially variable macroscopic properties, introduced to enhance material or structural performance. The spatially variable material properties make FGMs challenging to analyze. The review of the various techniques employed to analyze the thermodynamical response of FGMs reveals two distinct and fundamentally different computational strategies, called uncoupled macromechanical and coupled micromechanical approaches by some investigators. The uncoupled macromechanical approaches ignore the effect of microstructural gradation by employing specific spatial variations of material properties, which are either assumed or obtained by local homogenization, thereby resulting in erroneous results under certain circumstances. In contrast, the coupled approaches explicitly account for the micro-macrostructural interaction, albeit at a significantly higher computational cost. The higher-order theory for functionally graded materials (HOTFGM) developed by Aboudi et al. is representative of the coupled approach. However, despite its demonstrated utility in applications where micro-macrostructural coupling effects are important, the theory's full potential is yet to be realized because the original formulation of HOTFGM is computationally intensive. This, in turn, limits the size of problems that can be solved due to the large number of equations required to mimic realistic material microstructures. Therefore, a basis for an efficient reformulation of HOTFGM, referred to as the user-friendly formulation, is developed herein, and subsequently employed in the construction of the efficient reformulation using the local/global conductivity matrix approach. In order to extend HOTFGM's range of applicability, spatially variable thermal conductivity capability at the local level is incorporated into the efficient reformulation. Analytical solutions to validate both the user-friendly and efficient reformulations are also developed. Volume discretization sensitivity and validation studies, as well as a practical application of the developed efficient reformulation, are subsequently carried out. The presented results illustrate the accuracy and implementability of both the user-friendly formulation and the efficient reformulation of HOTFGM.
Unequal Probability Marking Approach to Enhance Security of Traceback Scheme in Tree-Based WSNs.
Huang, Changqin; Ma, Ming; Liu, Xiao; Liu, Anfeng; Zuo, Zhengbang
2017-06-17
Fog (from core to edge) computing is a newly emerging computing platform that utilizes a large number of network devices at the edge of a network to provide ubiquitous computing, and thus has great development potential. However, the issue of security poses an important challenge for fog computing. In particular, for the Internet of Things (IoT) that constitutes the fog computing platform, it is crucial to preserve the security of the huge number of wireless sensors, which are vulnerable to attack. In this paper, a new unequal probability marking approach is proposed to enhance the security performance of logging and migration traceback (LM) schemes in tree-based wireless sensor networks (WSNs). The main contribution of this paper is to overcome the deficiencies of the LM scheme so as to achieve a longer network lifetime and lower storage consumption. In the unequal probability marking logging and migration (UPLM) scheme of this paper, different marking probabilities are adopted for different nodes according to their distances to the sink: a large marking probability is assigned to nodes in remote areas (at a long distance from the sink), while a small marking probability is applied to nodes in nearby areas (at a short distance from the sink). This reduces the consumption of storage and energy in addition to enhancing the security performance, lifetime, and storage capacity. Marking information is migrated to nodes at a longer distance from the sink to increase the amount of stored marking information, thus enhancing the security performance in the process of migration. Experimental simulations show that for general tree-based WSNs, the UPLM scheme proposed in this paper can store 1.12-1.28 times the amount of marking information that the equal probability marking approach achieves, and has 1.15-1.26 times the storage utilization efficiency of other schemes.
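The distance-dependent marking rule can be illustrated in a few lines. A minimal sketch follows, assuming a simple linear ramp between two probability bounds; the bounds, hop counts, and packet layout are illustrative choices, not the UPLM scheme's actual parameters.

```python
# Illustrative unequal-probability marking: far-from-sink nodes mark with high
# probability, near-sink nodes with low probability.
import random

def marking_probability(hops_to_sink, max_hops, p_min=0.05, p_max=0.6):
    frac = min(hops_to_sink / max_hops, 1.0)
    return p_min + (p_max - p_min) * frac

def maybe_mark(packet, node_id, hops_to_sink, max_hops=20):
    if random.random() < marking_probability(hops_to_sink, max_hops):
        packet.setdefault("marks", []).append(node_id)
    return packet

pkt = {"payload": "reading"}
for node, hops in [("n7", 18), ("n3", 9), ("n1", 2)]:   # path toward the sink
    pkt = maybe_mark(pkt, node, hops)
print(pkt.get("marks", []))
```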
Phase quality map based on local multi-unwrapped results for two-dimensional phase unwrapping.
Zhong, Heping; Tang, Jinsong; Zhang, Sen
2015-02-01
The efficiency of a phase unwrapping algorithm and the reliability of the corresponding unwrapped result are two key problems in reconstructing the digital elevation model of a scene from its interferometric synthetic aperture radar (InSAR) or interferometric synthetic aperture sonar (InSAS) data. In this paper, a new phase quality map is designed and implemented in a graphics processing unit (GPU) environment, which greatly accelerates the unwrapping process of the quality-guided algorithm and enhances the correctness of the unwrapped result. In a local wrapped-phase window, the center point is selected as the reference point, and two unwrapped results are then computed by integrating along two different simple paths. After the two local unwrapped results are computed, the total difference between them is taken as the phase quality value of the center point. In order to accelerate the computation of the proposed quality map, we have implemented it in a GPU environment: the wrapped phase data are first uploaded to device memory, and the kernel function is then called to compute the phase quality in parallel by blocks of threads. Unwrapping tests performed on simulated and real InSAS data confirm the accuracy and efficiency of the proposed method.
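The quality measure is straightforward to prototype on the CPU. Below is a simplified sketch of the two-path idea: each local window is unwrapped in two different integration orders, and the total disagreement serves as the (inverse) quality value. Unlike the paper's method, it references the window corner rather than the center, and it omits the GPU parallelization.

```python
# Simplified two-path phase quality: small disagreement between the two
# unwrapping orders means locally consistent (high-quality) phase.
import numpy as np

def two_path_unwrap(win):
    a = np.unwrap(np.unwrap(win, axis=0), axis=1)  # columns first, then rows
    b = np.unwrap(np.unwrap(win, axis=1), axis=0)  # rows first, then columns
    return a, b

def quality_map(wrapped, r=2):
    q = np.full(wrapped.shape, np.inf)
    for i in range(r, wrapped.shape[0] - r):
        for j in range(r, wrapped.shape[1] - r):
            a, b = two_path_unwrap(wrapped[i - r:i + r + 1, j - r:j + r + 1])
            q[i, j] = np.abs(a - b).sum()   # small value => reliable phase
    return q

phase = np.cumsum(np.random.randn(64, 64) * 0.4, axis=1)  # synthetic smooth phase
wrapped = np.angle(np.exp(1j * phase))
print(quality_map(wrapped).min())
```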
Spectral difference Lanczos method for efficient time propagation in quantum control theory
NASA Astrophysics Data System (ADS)
Farnum, John D.; Mazziotti, David A.
2004-04-01
Spectral difference methods represent the real-space Hamiltonian of a quantum system as a banded matrix which possesses the accuracy of the discrete variable representation (DVR) and the efficiency of finite differences. When applied to time-dependent quantum mechanics, spectral differences enhance the efficiency of propagation methods for evolving the Schrödinger equation. We develop a spectral difference Lanczos method which is computationally more economical than the sinc-DVR Lanczos method, the split-operator technique, and even the fast-Fourier-Transform Lanczos method. Application of fast propagation is made to quantum control theory where chirped laser pulses are designed to dissociate both diatomic and polyatomic molecules. The specificity of the chirped laser fields is also tested as a possible method for molecular identification and discrimination.
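The Lanczos propagation step itself is generic and can be sketched independently of the spectral-difference operator. The code below builds a small Krylov subspace and applies exp(-iH dt) through the tridiagonal projection; the finite-difference Hamiltonian with a harmonic potential stands in for the spectral-difference matrix and is an illustrative assumption.

```python
# Minimal Lanczos short-time propagator: psi(t+dt) ~ exp(-i H dt) psi.
import numpy as np
from scipy.linalg import expm

def lanczos_step(Hmul, psi, dt, m=12):
    n = len(psi)
    V = np.zeros((m, n), dtype=complex)   # Krylov basis vectors
    T = np.zeros((m, m))                  # tridiagonal projection of H
    beta = 0.0
    V[0] = psi / np.linalg.norm(psi)
    for j in range(m):
        w = Hmul(V[j]) - (beta * V[j - 1] if j > 0 else 0)
        alpha = np.real(np.vdot(V[j], w))
        w -= alpha * V[j]
        T[j, j] = alpha
        beta = np.linalg.norm(w)
        if j + 1 < m:
            T[j, j + 1] = T[j + 1, j] = beta
            V[j + 1] = w / beta
    c = expm(-1j * dt * T)[:, 0]          # action on e1 in the Krylov basis
    return np.linalg.norm(psi) * (c @ V)

# banded H: finite-difference kinetic term plus a harmonic potential (illustrative)
x = np.linspace(-10, 10, 256); dx = x[1] - x[0]
def Hmul(v):
    lap = (np.roll(v, 1) - 2 * v + np.roll(v, -1)) / dx**2
    return -0.5 * lap + 0.5 * x**2 * v

psi = np.exp(-(x - 1.0) ** 2).astype(complex)
psi /= np.linalg.norm(psi)
psi = lanczos_step(Hmul, psi, dt=0.05)
print(np.linalg.norm(psi))               # ~1: unitary to Krylov accuracy
```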
Costa, M G S; Silva, Y F; Batista, P R
2018-03-14
Microbial cellulosic degradation by cellulases has become a complementary approach for biofuel production. However, its efficiency is hindered by the recalcitrance of cellulose fibres. In this context, computational protein design methods may offer an efficient way to obtain variants with improved enzymatic activity. Cel9A-68 is a cellulase from Thermobifida fusca that is still active at high temperatures. In a previous work, we described a collective bending motion, which governs the overall cellulase dynamics. This movement promotes the approach of its CBM and CD structural domains (which are connected by a flexible linker). We have identified two residues (G460 and P461) located at the linker that act as a hinge point. Herein, we applied a new level of protein design, focusing on the modulation of this collective motion to obtain cellulase variants with enhanced functional dynamics. We probed whether specific linker mutations would affect Cel9A-68 dynamics through computational simulations. We assumed that the P461G and G460+ (with an extra glycine) constructs would present enhanced interdomain motions, while the G460P mutant would be rigid. From our results, the P461G mutation resulted in a broader exploration of the conformational space, as confirmed by clustering and free energy analyses. The WT enzyme was the most rigid system. However, G460P and G460+ explored distinct conformational states described by opposite directions of low-frequency normal modes; they sampled preferentially closed and open conformations, respectively. Overall, we highlight two significant findings: (i) all mutants explored larger conformational spaces than the WT; (ii) the selection of distinct conformational populations was intimately associated with the mutation considered. Thus, the engineering of Cel9A-68 motions through linker mutations may constitute an efficient way to improve cellulase activity, facilitating the disruption of cellulose fibres.
Progress Toward an Efficient and General CFD Tool for Propulsion Design/Analysis
NASA Technical Reports Server (NTRS)
Cox, C. F.; Cinnella, P.; Westmoreland, S.
1996-01-01
The simulation of propulsive flows inherently involves chemical activity. Recent years have seen substantial strides in the development of numerical schemes for reacting flowfields, in particular those involving finite-rate chemistry. However, finite-rate calculations are computationally intensive and require knowledge of the actual kinetics, which are not always known with sufficient accuracy. Alternatively, flow simulations based on the assumption of local chemical equilibrium are capable of obtaining physically reasonable results at far less computational cost. The present study summarizes the development of efficient numerical techniques for the simulation of flows in local chemical equilibrium, whereby a 'black box' chemical equilibrium solver is coupled to the usual gasdynamic equations. The generality of the methods enables the modelling of any arbitrary mixture of thermally perfect gases, including air, combustion mixtures, and plasmas. As a demonstration of the potential of the methodologies, several solutions involving reacting and perfect gas flows are presented, including a preliminary simulation of the SSME startup transient. Future enhancements to the proposed techniques are discussed, including more efficient finite-rate and hybrid (partial equilibrium) schemes. The algorithms that have been developed and are being optimized provide an efficient and general tool for the design and analysis of propulsion systems.
NASA Astrophysics Data System (ADS)
Kumar, Nitin; Singh, Udaybir; Kumar, Anil; Bhattacharya, Ranajoy; Singh, T. P.; Sinha, A. K.
2013-02-01
The design of a 120 GHz, 1 MW gyrotron for plasma fusion applications is presented in this paper. The mode selection is carried out with the aims of minimum mode competition, minimum cavity wall heating, etc. On the basis of the selected operating mode, the interaction cavity design and the beam-wave interaction computation are carried out using a PIC code. The design of a triode-type Magnetron Injection Gun (MIG) is also presented. The trajectory code EGUN, the synthesis code MIGSYN, and the data analysis code MIGANS are used in the MIG design. The MIG design is further validated using another trajectory code, TRAK. The design results for the beam dumping system (collector) and the RF window are also presented. A depressed collector is designed to enhance the overall tube efficiency. The design study confirms >1 MW output power with a tube efficiency of around 50% (with the depressed collector).
NASA Astrophysics Data System (ADS)
Garabito, German; Cruz, João Carlos Ribeiro; Oliva, Pedro Andrés Chira; Söllner, Walter
2017-01-01
The Common Reflection Surface stack is a robust method for simulating zero-offset and common-offset sections with high accuracy from multi-coverage seismic data. For simulating common-offset sections, the Common-Reflection-Surface stack method uses a hyperbolic traveltime approximation that depends on five kinematic parameters for each selected sample point of the common-offset section to be simulated. The main challenge of this method is to find a computationally efficient data-driven optimization strategy for accurately determining the five kinematic stacking parameters on which each sample of the stacked common-offset section depends. Several authors have applied multi-step strategies to obtain the optimal parameters by combining different pre-stack data configurations. Recently, other authors used one-step data-driven strategies based on a global optimization for estimating simultaneously the five parameters from multi-midpoint and multi-offset gathers. In order to increase the computational efficiency of the global optimization process, we use in this paper a reduced form of the Common-Reflection-Surface traveltime approximation that depends on only four parameters, the so-called Common Diffraction Surface traveltime approximation. By analyzing the convergence of both objective functions and the data enhancement effect after applying the two traveltime approximations to the Marmousi synthetic dataset and a real land dataset, we conclude that the Common-Diffraction-Surface approximation is more efficient within certain aperture limits and preserves at the same time a high image accuracy. The preserved image quality is also observed in a direct comparison after applying both approximations for simulating common-offset sections on noisy pre-stack data.
Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; ...
2015-06-30
The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class supercomputer.
Loughman, James; Davison, Peter; Flitcroft, Ian
2007-11-01
Preattentive visual search (PAVS) describes rapid and efficient retinal and neural processing capable of immediate target detection in the visual field. Damage to the nerve fibre layer or visual pathway might reduce the efficiency with which the visual system performs such analysis. The purpose of this study was to test the hypothesis that patients with glaucoma are impaired on parallel search tasks, and that this would serve to distinguish glaucoma in early cases. Three groups of observers (glaucoma patients, suspect, and normal individuals) were examined using computer-generated flicker, orientation, and vertical motion displacement targets to assess PAVS efficiency. The task required rapid and accurate localisation of a singularity embedded in a field of 119 homogeneous distractors on either the left- or right-hand side of a computer monitor. All subjects also completed a choice reaction time (CRT) task. Independent-sample t-tests revealed PAVS efficiency to be significantly impaired in the glaucoma group compared with both normal and suspect individuals. Performance was impaired in all types of glaucoma tested. Analysis between normal and suspect individuals revealed a significant difference only for motion displacement response times. Similar analysis using a PAVS/CRT index confirmed the glaucoma findings but also showed statistically significant differences between suspect and normal individuals across all target types. A test of PAVS efficiency appears capable of differentiating early glaucoma from both normal and suspect cases. Analysis incorporating a PAVS/CRT index enhances the diagnostic capacity to differentiate normal from suspect cases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Madduri, Kamesh; Im, Eun-Jin; Ibrahim, Khaled Z.
The next decade of high-performance computing (HPC) systems will see a rapid evolution and divergence of multi- and manycore architectures as power and cooling constraints limit increases in microprocessor clock speeds. Understanding efficient optimization methodologies on diverse multicore designs in the context of demanding numerical methods is one of the greatest challenges faced today by the HPC community. In this paper, we examine the efficient multicore optimization of GTC, a petascale gyrokinetic toroidal fusion code for studying plasma microturbulence in tokamak devices. For GTC's key computational components (charge deposition and particle push), we explore efficient parallelization strategies across a broad range of emerging multicore designs, including the recently released Intel Nehalem-EX, the AMD Opteron Istanbul, and the highly multithreaded Sun UltraSparc T2+. We also present the first study on tuning gyrokinetic particle-in-cell (PIC) algorithms for graphics processors, using the NVIDIA C2050 (Fermi). Our work discusses several novel optimization approaches for gyrokinetic PIC, including mixed-precision computation, particle binning and decomposition strategies, grid replication, SIMDized atomic floating-point operations, and effective GPU texture memory utilization. Overall, we achieve significant performance improvements of 1.3–4.7× on these complex PIC kernels, despite the inherent challenges of data dependency and locality. Finally, our work also points to several architectural and programming features that could significantly enhance PIC performance and productivity on next-generation architectures.
Computer Aided Battery Engineering Consortium
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pesaran, Ahmad
A multi-national lab collaborative team was assembled that includes experts from academia and industry to enhance the recently developed Computer-Aided Battery Engineering for Electric Drive Vehicles (CAEBAT)-II battery crush modeling tools and to develop microstructure models for electrode design, both computationally efficient. Task 1. The new Multi-Scale Multi-Domain model framework (GH-MSMD) provides 100x to 1,000x computation speed-up in battery electrochemical/thermal simulation while retaining modularity of particles and electrode-, cell-, and pack-level domains. The increased speed enables direct use of the full model in parameter identification. Task 2. Mechanical-electrochemical-thermal (MECT) models for mechanical abuse simulation were simultaneously coupled, enabling simultaneous modeling of electrochemical reactions during the short circuit, when necessary. The interactions between mechanical failure and battery cell performance were studied, and the flexibility of the model for various battery structures and loading conditions was improved. Model validation is ongoing to compare with test data from Sandia National Laboratories. The ABDT tool was established in ANSYS. Task 3. Microstructural modeling was conducted to enhance next-generation electrode designs. This 3-year project will validate models for a variety of electrodes, complementing Advanced Battery Research programs. Prototype tools have been developed for electrochemical simulation and geometric reconstruction.
NASA Technical Reports Server (NTRS)
1992-01-01
The PER-Force robotic handcontroller provides a sense of touch or "feel" to an operator manipulating robots. The force simulation and wide range of motion greatly enhances the efficiency of robotic and computer operations. The handcontroller was developed for the Space Station by Cybernet Systems Corporation under a Small Business Innovation Research (SBIR) contract. Commercial applications include underwater use, underground excavations, research laboratories, hazardous waste handling and in manufacturing operations in which it is unsafe or impractical for humans to work.
Silicon photonics for neuromorphic information processing
NASA Astrophysics Data System (ADS)
Bienstman, Peter; Dambre, Joni; Katumba, Andrew; Freiberger, Matthias; Laporte, Floris; Lugnan, Alessio
2018-02-01
We present our latest results on silicon photonics neuromorphic information processing based, among other things, on techniques like reservoir computing. We will discuss aspects like scalability, novel architectures for enhanced power efficiency, as well as all-optical readout. Additionally, we will touch upon new machine learning techniques to operate these integrated readouts. Finally, we will show how these systems can be used for high-speed, low-power information processing for applications like recognition of biological cells.
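The reservoir computing principle behind such photonic systems can be sketched in software. Below is a minimal echo-state-network analogue, assuming a random recurrent reservoir in place of the photonic chip and ridge regression for the trained readout; the delay-recall task and all sizes are illustrative.

```python
# Minimal software echo-state network: fixed random reservoir + trained linear readout.
import numpy as np

rng = np.random.default_rng(0)
N, T = 200, 2000
u = rng.uniform(-0.8, 0.8, T)                 # input stream
y_target = np.roll(u, 3)                      # toy task: recall input 3 steps back

Win = rng.uniform(-0.5, 0.5, N)
W = rng.normal(0, 1, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))     # scale spectral radius below 1

x = np.zeros(N); states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + Win * u[t])           # reservoir update
    states[t] = x

# ridge-regression readout, trained on the second half (washout discarded)
S, y = states[1000:], y_target[1000:]
Wout = np.linalg.solve(S.T @ S + 1e-6 * np.eye(N), S.T @ y)
print(np.corrcoef(S @ Wout, y)[0, 1])         # close to 1 on this easy task
```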
Alloy Design Workbench-Surface Modeling Package Developed
NASA Technical Reports Server (NTRS)
Abel, Phillip B.; Noebe, Ronald D.; Bozzolo, Guillermo H.; Good, Brian S.; Daugherty, Elaine S.
2003-01-01
NASA Glenn Research Center's Computational Materials Group has integrated a graphical user interface with in-house-developed surface modeling capabilities, with the goal of using computationally efficient atomistic simulations to aid the development of advanced aerospace materials through the modeling of alloy surfaces, surface alloys, and segregation. The software is also ideal for modeling nanomaterials, since surface and interfacial effects can dominate material behavior and properties at this level. Through the combination of an accurate atomistic surface modeling methodology and an efficient computational engine, it is now possible to directly model these types of surface phenomena and metallic nanostructures without a supercomputer. Fulfilling a High Operating Temperature Propulsion Components (HOTPC) project level-I milestone, a graphical user interface was created for a suite of quantum-approximate atomistic materials modeling Fortran programs developed at Glenn. The resulting "Alloy Design Workbench-Surface Modeling Package" (ADW-SMP) is the combination of proven quantum-approximate Bozzolo-Ferrante-Smith (BFS) algorithms (refs. 1 and 2) with a productivity-enhancing graphical front end. Written in the portable, platform-independent Java programming language, the graphical user interface calls on extensively tested Fortran programs running in the background for the detailed computational tasks. Designed to run on desktop computers, the package has been deployed on PC, Mac, and SGI computer systems. The graphical user interface integrates two modes of computational materials exploration. One mode uses Monte Carlo simulations to determine lowest-energy equilibrium configurations. The second approach is an interactive "what if" comparison of atomic configuration energies, designed to provide real-time insight into the underlying drivers of alloying processes.
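The first mode rests on standard Metropolis Monte Carlo, which is easy to sketch. The snippet below swaps atoms of a two-species alloy on a periodic lattice under a toy pair-energy table; the energies and temperature are invented for the illustration and are not BFS parameters.

```python
# Illustrative Metropolis Monte Carlo: swap atoms to seek low-energy configurations.
import numpy as np

rng = np.random.default_rng(42)
L = 16
lattice = rng.choice([0, 1], size=(L, L))     # two atomic species

eps = np.array([[0.0, -0.1], [-0.1, -0.3]])   # toy nearest-neighbor pair energies (eV)

def site_energy(s, i, j):
    nn = [s[(i + 1) % L, j], s[(i - 1) % L, j], s[i, (j + 1) % L], s[i, (j - 1) % L]]
    return sum(eps[s[i, j], n] for n in nn)

kT = 0.025
for _ in range(20000):
    (i1, j1), (i2, j2) = rng.integers(0, L, (2, 2))
    e_old = site_energy(lattice, i1, j1) + site_energy(lattice, i2, j2)
    lattice[i1, j1], lattice[i2, j2] = lattice[i2, j2], lattice[i1, j1]
    e_new = site_energy(lattice, i1, j1) + site_energy(lattice, i2, j2)
    if rng.random() >= np.exp(-(e_new - e_old) / kT):   # Metropolis reject: swap back
        lattice[i1, j1], lattice[i2, j2] = lattice[i2, j2], lattice[i1, j1]
print(lattice.mean())
```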
NASA Astrophysics Data System (ADS)
Di Dato, Mariaines; de Barros, Felipe P. J.; Fiori, Aldo; Bellin, Alberto
2018-03-01
Natural attenuation and in situ oxidation are commonly considered as low-cost alternatives to ex situ remediation. The efficiency of such remediation techniques is hindered by difficulties in obtaining good dilution and mixing of the contaminant, in particular if the plume deformation is physically constrained by an array of wells, which serves as a containment system. In that case, dilution may be enhanced by inducing an engineered sequence of injections and extractions from such pumping system, which also works as a hydraulic barrier. This way, the aquifer acts as a natural mixer, in a manner similar to the industrialized engineered mixers. Improving the efficiency of hydrogeological mixers is a challenging task, owing to the need to use a 3-D setup while relieving the computational burden. Analytical solutions, though approximated, are a suitable and efficient tool to seek the optimum solution among all possible flow configurations. Here we develop a novel physically based model to demonstrate how the combined spatiotemporal fluctuations of the water fluxes control solute trajectories and residence time distributions and therefore, the effectiveness of contaminant plume dilution and mixing. Our results show how external forcing configurations are capable of inducing distinct time-varying groundwater flow patterns which will yield different solute dilution rates.
Wan, Jianing; Zhu, Junda; Zhong, Ying; Liu, Haitao
2018-06-01
The electromagnetic enhancement by a metallic nanowire optical antenna on metallic substrate is investigated theoretically. By considering the excitation and multiple scattering of surface plasmon polaritons in the nanogap between the antenna and the substrate, we build up an intuitive and comprehensive model that provides semianalytical expressions for the electromagnetic field in the nanogap to achieve an understanding of the mechanism of electromagnetic enhancement. Our results show that antennas with short lengths that support the lowest order of resonance can achieve a high electric-field enhancement factor over a large range of incidence angles. Two phase-matching conditions are derived from the model for predicting the antenna lengths at resonance. Excitation of symmetric or antisymmetric localized surface plasmon resonance is further explained with the model. The model also shows superior computational efficiency compared to the full-wave numerical method when scanning the antenna length, the incidence angle, or the wavelength.
Enhanced Multiobjective Optimization Technique for Comprehensive Aerospace Design. Part A
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Rajadas, John N.
1997-01-01
A multidisciplinary design optimization procedure which couples formal multiobjective techniques and complex analysis procedures (such as computational fluid dynamics (CFD) codes) has been developed. The procedure has been demonstrated on a specific high speed flow application involving aerodynamics and acoustics (sonic boom minimization). In order to account for multiple design objectives arising from complex performance requirements, multiobjective formulation techniques are used to formulate the optimization problem. Techniques to enhance the existing Kreisselmeier-Steinhauser (K-S) function multiobjective formulation approach have been developed. The K-S function procedure used in the proposed work transforms a constrained multiple-objective-function problem into an unconstrained problem, which is then solved using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. Weight factors are introduced for each objective function during the transformation process. This enhanced procedure provides the designer the capability to emphasize specific design objectives during the optimization process. The demonstration of the procedure utilizes a computational fluid dynamics (CFD) code which solves the three-dimensional parabolized Navier-Stokes (PNS) equations for the flow field, along with an appropriate sonic boom evaluation procedure, thus introducing both aerodynamic performance and sonic boom as design objectives to be optimized simultaneously. Sensitivity analysis is performed using a discrete differentiation approach. An approximation technique is used within the optimizer to improve the overall computational efficiency of the procedure and make it suitable for design applications in an industrial setting.
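The K-S aggregation at the heart of the formulation is compact enough to show directly. Below is a minimal sketch with two invented quadratic objectives: the weighted objectives are folded into a single smooth envelope function, KS(x) = f_max + (1/rho) ln(sum_i exp(rho (f_i(x) - f_max))), which is then minimized without constraints using BFGS, as in the text.

```python
# Kreisselmeier-Steinhauser aggregation of multiple objectives into one smooth envelope.
import numpy as np
from scipy.optimize import minimize

rho = 50.0
w = np.array([1.0, 1.0])                      # designer-chosen emphasis weights

def objectives(x):
    return np.array([(x[0] - 1) ** 2 + x[1] ** 2,     # e.g. drag surrogate
                     x[0] ** 2 + (x[1] - 2) ** 2])    # e.g. boom surrogate

def ks(x):
    f = w * objectives(x)
    fmax = f.max()                            # shift for numerical stability
    return fmax + np.log(np.exp(rho * (f - fmax)).sum()) / rho

res = minimize(ks, x0=np.zeros(2), method="BFGS")
print(res.x)                                  # compromise between both objectives
```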
NASA Astrophysics Data System (ADS)
Qyyum, Muhammad Abdul; Wei, Feng; Hussain, Arif; Ali, Wahid; Sehee, Oh; Lee, Moonyong
2017-11-01
This research work presents a simple, safe, environment-friendly, and energy-efficient novel vortex-tube-based natural gas liquefaction (LNG) process. A vortex tube was introduced into the popular N2-expander liquefaction process to enhance the liquefaction efficiency. The process structure and conditions were modified and optimized to take potential advantage of the vortex tube in the natural gas liquefaction cycle. Two commercial simulators, ANSYS® and Aspen HYSYS®, were used to investigate the application of the vortex tube in the refrigeration cycle of the LNG process. A computational fluid dynamics (CFD) model was used to simulate the vortex tube with nitrogen (N2) as the working fluid. Subsequently, the results of the CFD model were embedded in Aspen HYSYS® to validate the proposed liquefaction process. The proposed process was optimized using the knowledge-based optimization (KBO) approach, with the overall energy consumption chosen as the objective function. The performance of the proposed liquefaction process was compared with that of the conventional N2-expander liquefaction process. The vortex-tube-based LNG process showed a significant energy-efficiency improvement of 20% over the conventional N2-expander liquefaction process, mainly due to the isentropic expansion in the vortex tube. It turned out that the high energy efficiency of the vortex-tube-based process depends strongly on the refrigerant cold fraction, the operating conditions, and the refrigerant cycle configuration.
An, Gao; Hong, Li; Zhou, Xiao-Bing; Yang, Qiong; Li, Mei-Qing; Tang, Xiang-Yang
2017-03-01
We investigated and compared the functionality of two 3D visualization software packages provided by a CT vendor and a third-party vendor, respectively. Using surgical anatomical measurement as a baseline, we evaluated the accuracy of 3D visualization and verified their utility in computer-aided anatomical analysis. The study cohort consisted of 50 adult cadavers fixed with the classical formaldehyde method. The computer-aided anatomical analysis was based on CT images (in DICOM format) acquired by helical scan with contrast enhancement, using a CT vendor-provided 3D visualization workstation (Syngo) and a third-party 3D visualization software package (Mimics) installed on a PC. Automated and semi-automated segmentation were utilized in the 3D visualization workstation and software, respectively. The functionality and efficiency of the automated and semi-automated segmentation methods were compared. Using surgical anatomical measurement as a baseline, the accuracy of 3D visualization based on automated and semi-automated segmentation was quantitatively compared. In semi-automated segmentation, the Mimics 3D visualization software outperformed the Syngo 3D visualization workstation. No significant difference was observed between anatomical data measured with the Syngo 3D visualization workstation and with the Mimics 3D visualization software (P>0.05). Both the Syngo 3D visualization workstation provided by a CT vendor and the Mimics 3D visualization software from a third-party vendor possessed the functionality, efficiency, and accuracy needed for computer-aided anatomical analysis. Copyright © 2016 Elsevier GmbH. All rights reserved.
Automation to improve efficiency of field expedient injury prediction screening.
Teyhen, Deydre S; Shaffer, Scott W; Umlauf, Jon A; Akerman, Raymond J; Canada, John B; Butler, Robert J; Goffar, Stephen L; Walker, Michael J; Kiesel, Kyle B; Plisky, Phillip J
2012-07-01
Musculoskeletal injuries are a primary source of disability in the U.S. Military. Physical training and sports-related activities account for up to 90% of all injuries, and 80% of these injuries are considered overuse in nature. As a result, there is a need to develop an evidence-based musculoskeletal screen that can assist with injury prevention. The purpose of this study was to assess the capability of an automated system to improve the efficiency of field expedient tests that may help predict injury risk and provide corrective strategies for deficits identified. The field expedient tests include survey questions and measures of movement quality, balance, trunk stability, power, mobility, and foot structure and mobility. Data entry for these tests was automated using handheld computers, barcode scanning, and netbook computers. An automated algorithm for injury risk stratification and mitigation techniques was run on a server computer. Without automation support, subjects were assessed in 84.5 ± 9.1 minutes per subject compared with 66.8 ± 6.1 minutes per subject with automation and 47.1 ± 5.2 minutes per subject with automation and process improvement measures (p < 0.001). The average time to manually enter the data was 22.2 ± 7.4 minutes per subject. An additional 11.5 ± 2.5 minutes per subject was required to manually assign an intervention strategy. Automation of this injury prevention screening protocol using handheld devices and netbook computers allowed for real-time data entry and enhanced the efficiency of injury screening, risk stratification, and prescription of a risk mitigation strategy.
Polydopamine-coated gold nanostars for CT imaging and enhanced photothermal therapy of tumors
NASA Astrophysics Data System (ADS)
Li, Du; Shi, Xiangyang; Jin, Dayong
2016-12-01
The development of biocompatible nanoplatforms with the dual functionalities of diagnosis and therapeutics has been in strong demand in biomedicine in recent years. In this work, we report the synthesis and characterization of polydopamine (pD)-coated gold nanostars (Au NSs) for computed tomography (CT) imaging and enhanced photothermal therapy (PTT) of tumors. Au NSs were first formed via a seed-mediated growth method and then stabilized with thiolated polyethyleneimine (PEI-SH), followed by deposition of pD on their surface. The formed pD-coated Au NSs (Au-PEI@pD NSs) were well characterized. We show that the Au-PEI@pD NSs are able to convert absorbed near-infrared laser light into heat and have a strong X-ray attenuation property. Due to the co-existence of the Au NSs and the pD, the light-to-heat conversion efficiency of the NSs can be significantly enhanced. These very interesting properties allow their use as a powerful theranostic nanoplatform for efficient CT imaging and enhanced photothermal therapy of cancer cells in vitro and of a xenografted tumor model in vivo. With the easy functionalization enabled by the coated pD shell, the developed pD-coated Au NSs may be developed as a versatile nanoplatform for targeted CT imaging and PTT of different types of cancer.
Gurkan-Cavusoglu, Evren; Avadhani, Sriya; Liu, Lili; Kinsella, Timothy J; Loparo, Kenneth A
2013-04-01
Base excision repair (BER) is a major DNA repair pathway involved in the processing of exogenous non-bulky base damages from certain classes of cancer chemotherapy drugs as well as ionising radiation (IR). Methoxyamine (MX) is a small molecule chemical inhibitor of BER that is shown to enhance chemotherapy and/or IR cytotoxicity in human cancers. In this study, the authors have analysed the inhibitory effect of MX on the BER pathway kinetics using a computational model of the repair pathway. The inhibitory effect of MX depends on the BER efficiency. The authors have generated variable efficiency groups using different sets of protein concentrations generated by Latin hypercube sampling, and they have clustered simulation results into high, medium and low efficiency repair groups. From analysis of the inhibitory effect of MX on each of the three groups, it is found that the inhibition is most effective for high efficiency BER, and least effective for low efficiency repair.
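The sampling-and-grouping step can be sketched with standard tools. The snippet below draws Latin hypercube samples of four BER protein concentrations, scores them with a stand-in efficiency function, and splits the results into low/medium/high efficiency groups; the protein list, bounds, and surrogate are illustrative assumptions, not the authors' model.

```python
# Latin hypercube sampling of protein concentrations + efficiency grouping (illustrative).
import numpy as np
from scipy.stats import qmc

proteins = ["glycosylase", "APE1", "pol-beta", "ligase"]
lo, hi = np.full(4, 0.1), np.full(4, 10.0)    # concentration bounds (a.u.)

sampler = qmc.LatinHypercube(d=4, seed=1)
conc = qmc.scale(sampler.random(n=500), lo, hi)

def repair_efficiency(c):
    # toy surrogate: pathway flux limited by the scarcest enzyme
    m = c.min(axis=1)
    return m / (1.0 + m)

eff = repair_efficiency(conc)
groups = np.digitize(eff, np.quantile(eff, [1 / 3, 2 / 3]))  # 0=low, 1=med, 2=high
print([(g, (groups == g).sum()) for g in range(3)])
```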
NASA Technical Reports Server (NTRS)
Taylor, R. C.; Hettrick, M. C.; Malina, R. F.
1983-01-01
High quantum efficiency and two-dimensional imaging capabilities make the microchannel plate (MCP) a suitable detector for a sky survey instrument. The Extreme Ultraviolet Explorer satellite, to be launched in 1987, will use MCP detectors. A feature which limits MCP efficiency is related to the walls of individual channels. The walls are of finite thickness and thus form an interchannel web. Under normal circumstances, this web does not contribute to the detector's quantum efficiency. Panitz and Foesch (1976) have found that in the case of a bombardment with ions, electrons were ejected from the electrode material coating the web. By applying a small electric field, the electrons were returned to the MCP surface where they were detected. The present investigation is concerned with the enhancement of quantum efficiencies in the case of extreme UV wavelengths. Attention is given to a model and a computer simulation which quantitatively reproduce the experimental results.
Solar Proton Transport Within an ICRU Sphere Surrounded by a Complex Shield: Ray-trace Geometry
NASA Technical Reports Server (NTRS)
Slaba, Tony C.; Wilson, John W.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.
2015-01-01
A computationally efficient 3DHZETRN code with enhanced neutron and light ion (Z ≤ 2) propagation was recently developed for complex, inhomogeneous shield geometry described by combinatorial objects. Comparisons were made between 3DHZETRN results and Monte Carlo (MC) simulations at locations within the combinatorial geometry, and it was shown that 3DHZETRN agrees with the MC codes to the extent they agree with each other. In the present report, the 3DHZETRN code is extended to enable analysis in ray-trace geometry. This latest extension enables the code to be used within current engineering design practices utilizing fully detailed vehicle and habitat geometries. Through convergence testing, it is shown that fidelity in an actual shield geometry can be maintained in the discrete ray-trace description by systematically increasing the number of discrete rays used. It is also shown that this fidelity is carried into transport procedures and resulting exposure quantities without sacrificing computational efficiency.
Solar proton exposure of an ICRU sphere within a complex structure part II: Ray-trace geometry.
Slaba, Tony C; Wilson, John W; Badavi, Francis F; Reddell, Brandon D; Bahadori, Amir A
2016-06-01
A computationally efficient 3DHZETRN code with enhanced neutron and light ion (Z ≤ 2) propagation was recently developed for complex, inhomogeneous shield geometry described by combinatorial objects. Comparisons were made between 3DHZETRN results and Monte Carlo (MC) simulations at locations within the combinatorial geometry, and it was shown that 3DHZETRN agrees with the MC codes to the extent they agree with each other. In the present report, the 3DHZETRN code is extended to enable analysis in ray-trace geometry. This latest extension enables the code to be used within current engineering design practices utilizing fully detailed vehicle and habitat geometries. Through convergence testing, it is shown that fidelity in an actual shield geometry can be maintained in the discrete ray-trace description by systematically increasing the number of discrete rays used. It is also shown that this fidelity is carried into transport procedures and resulting exposure quantities without sacrificing computational efficiency. Published by Elsevier Ltd.
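The convergence test described in both reports can be illustrated generically. In the sketch below, the number of sampled ray directions is doubled until a ray-averaged shielding quantity stabilizes; the smooth "shield thickness" function stands in for tracing rays through a real CAD geometry and is purely illustrative.

```python
# Double the number of discrete rays until the ray-averaged quantity converges.
import numpy as np

def random_directions(n, rng):
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def path_length(dirs):
    # toy shield: thickness varies smoothly with direction (g/cm^2)
    return 5.0 + 2.0 * dirs[:, 2] ** 2 + 0.5 * np.sin(4 * np.arctan2(dirs[:, 1], dirs[:, 0]))

rng = np.random.default_rng(7)
n, prev = 64, None
while True:
    mean_t = path_length(random_directions(n, rng)).mean()
    if prev is not None and abs(mean_t - prev) < 1e-3:
        break
    prev, n = mean_t, 2 * n
print(n, mean_t)   # ray count at which the averaged quantity has converged
```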
Retif, Paul; Reinhard, Aurélie; Paquot, Héna; Jouan-Hureaux, Valérie; Chateau, Alicia; Sancey, Lucie; Barberi-Heyob, Muriel; Pinel, Sophie; Bastogne, Thierry
2016-01-01
This article addresses the in silico–in vitro prediction issue of organometallic nanoparticles (NPs)-based radiosensitization enhancement. The goal was to carry out computational experiments to quickly identify efficient nanostructures and then to preferentially select the most promising ones for the subsequent in vivo studies. To this aim, this interdisciplinary article introduces a new theoretical Monte Carlo computational ranking method and tests it using 3 different organometallic NPs in terms of size and composition. While the ranking predicted in a classical theoretical scenario did not fit the reference results at all, in contrast, we showed for the first time how our accelerated in silico virtual screening method, based on basic in vitro experimental data (which takes into account the NPs cell biodistribution), was able to predict a relevant ranking in accordance with in vitro clonogenic efficiency. This corroborates the pertinence of such a prior ranking method that could speed up the preclinical development of NPs in radiation therapy. PMID:27920524
Vogel, Dayton J.; Kryjevski, Andrei; Inerbaev, Talgat; ...
2017-03-21
Methylammonium lead iodide perovskite (MAPbI3) is a promising material for photovoltaic devices. A modification of MAPbI3 into confined nanostructures is expected to further increase the efficiency of solar energy conversion. Photoexcited dynamic processes in a MAPbI3 quantum dot (QD) have been modeled by many-body perturbation theory and nonadiabatic dynamics. A photoexcitation is followed by either exciton cooling (EC), its radiative (RR) or nonradiative recombination (NRR), or multiexciton generation (MEG) processes. Computed times of these processes fall in the order MEG < EC < RR < NRR, where MEG is on the order of a few femtoseconds, EC is in the picosecond range, while RR and NRR are on the order of nanoseconds. Computed time scales indicate which electronic transition pathways can contribute to an increase in charge collection efficiency. Simulated mechanisms of relaxation and their rates show that quantum confinement promotes MEG in MAPbI3 QDs.
Near real-time, on-the-move software PED using VPEF
NASA Astrophysics Data System (ADS)
Green, Kevin; Geyer, Chris; Burnette, Chris; Agarwal, Sanjeev; Swett, Bruce; Phan, Chung; Deterline, Diane
2015-05-01
The scope of the Micro-Cloud for Operational, Vehicle-Based EO-IR Reconnaissance System (MOVERS) development effort, managed by the Night Vision and Electronic Sensors Directorate (NVESD), is to develop, integrate, and demonstrate new sensor technologies and algorithms that improve improvised device/mine detection using efficient and effective exploitation and fusion of sensor data and target cues from existing and future Route Clearance Package (RCP) sensor systems. Unfortunately, the majority of forward looking Full Motion Video (FMV) and computer vision processing, exploitation, and dissemination (PED) algorithms are often developed using proprietary, incompatible software. This makes the insertion of new algorithms difficult due to the lack of standardized processing chains. In order to overcome these limitations, EOIR developed the Government off-the-shelf (GOTS) Video Processing and Exploitation Framework (VPEF) to be able to provide standardized interfaces (e.g., input/output video formats, sensor metadata, and detected objects) for exploitation software and to rapidly integrate and test computer vision algorithms. EOIR developed a vehicle-based computing framework within the MOVERS and integrated it with VPEF. VPEF was further enhanced for automated processing, detection, and publishing of detections in near real-time, thus improving the efficiency and effectiveness of RCP sensor systems.
Computational Relativistic Astrophysics Using the Flowfield-Dependent Variation Theory
NASA Technical Reports Server (NTRS)
Richardson, G. A.; Chung, T. J.; Whitaker, Ann F. (Technical Monitor)
2001-01-01
Theoretical models, observations, and measurements have preoccupied astrophysicists for many centuries. Only in recent years has the theory of relativity as applied to astrophysical flows met the challenges of solving the governing equations numerically with accuracy and efficiency. Even without the effects of relativity, the physics of magnetohydrodynamic flow instability, turbulence, radiation, and enhanced transport in accretion disks has not been completely resolved. Relativistic effects become pronounced in such cases as jet formation from black hole magnetized accretion disks and also in the study of gamma-ray bursts (GRBs). Thus, our concern in this paper is to reexamine existing numerical simulation tools as to the accuracy and efficiency of computations and to introduce a new approach known as the flowfield-dependent variation (FDV) method. The main feature of the FDV method consists of accommodating discontinuities of shock waves and high gradients of flow variables such as occur in turbulence and unstable motions. In this paper, the physics involved in the solution of relativistic hydrodynamics and solution strategies of the FDV theory are elaborated. The general relativistic astrophysical flow and shock solver (GRAFSS) is introduced, and some simple example problems for Computational Relativistic Astrophysics (CRA) are demonstrated.
NASA Astrophysics Data System (ADS)
Moeferdt, Matthias; Kiel, Thomas; Sproll, Tobias; Intravaia, Francesco; Busch, Kurt
2018-02-01
A combined analytical and numerical study of the modes in two distinct plasmonic nanowire systems is presented. The computations are based on a discontinuous Galerkin time-domain approach, and a fully nonlinear and nonlocal hydrodynamic Drude model for the metal is utilized. In the linear regime, these computations demonstrate the strong influence of nonlocality on the field distributions as well as on the scattering and absorption spectra. Based on these results, second-harmonic-generation efficiencies are computed over a frequency range that covers all relevant modes of the linear spectra. In order to interpret the physical mechanisms that lead to corresponding field distributions, the associated linear quasielectrostatic problem is solved analytically via conformal transformation techniques. This provides an intuitive classification of the linear excitations of the systems that is then applied to the full Maxwell case. Based on this classification, group theory facilitates the determination of the selection rules for the efficient excitation of modes in both the linear and nonlinear regimes. This leads to significantly enhanced second-harmonic generation via judiciously exploiting the system symmetries. These results regarding the mode structure and second-harmonic generation are of direct relevance to other nanoantenna systems.
Nguyen, Tuan-Anh; Nakib, Amir; Nguyen, Huy-Nam
2016-06-01
The non-local means denoising filter has been established as a gold standard for the image denoising problem in general, and particularly in medical imaging, due to its efficiency. However, its computation time has limited its use in real-world applications, especially in medical imaging. In this paper, a distributed version on a parallel hybrid architecture is proposed to solve the computation time problem, and a new method to compute the filter's coefficients is also proposed, focused on the implementation and the enhancement of the filter's parameters by taking the neighborhood of the current voxel more accurately into account. In terms of implementation, our key contribution consists in reducing the number of shared memory accesses. Tests of the proposed method were performed on the BrainWeb database for different levels of noise. Performance and sensitivity were quantified in terms of speedup, peak signal-to-noise ratio, execution time, and the number of floating point operations. The obtained results demonstrate the efficiency of the proposed method. Moreover, the implementation is compared with that of other techniques recently published in the literature. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
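The filter's core weight computation is worth seeing in miniature. Below is a single-pixel non-local means sketch: the denoised value is a similarity-weighted average over a search window, with weights derived from patch distances. Patch size, search radius, and the smoothing parameter h are illustrative, and the distributed/GPU machinery of the paper is omitted.

```python
# Single-pixel non-local means: average pixels whose surrounding patches
# resemble the patch around the target pixel.
import numpy as np

def nlm_pixel(img, i, j, f=2, s=7, h=0.15):
    P = lambda a, b: img[a - f:a + f + 1, b - f:b + f + 1]
    ref = P(i, j)
    num = den = 0.0
    for a in range(max(f, i - s), min(img.shape[0] - f, i + s + 1)):
        for b in range(max(f, j - s), min(img.shape[1] - f, j + s + 1)):
            d2 = np.mean((P(a, b) - ref) ** 2)        # patch distance
            w = np.exp(-d2 / h ** 2)                  # similarity weight
            num += w * img[a, b]
            den += w
    return num / den

img = np.clip(0.5 + 0.1 * np.random.randn(64, 64), 0, 1)  # noisy flat image
print(nlm_pixel(img, 32, 32))
```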
GPU-Accelerated Voxelwise Hepatic Perfusion Quantification
Wang, H; Cao, Y
2012-01-01
Voxelwise quantification of hepatic perfusion parameters from dynamic contrast enhanced (DCE) imaging greatly contributes to assessment of liver function in response to radiation therapy. However, the efficiency of estimating hepatic perfusion parameters voxel-by-voxel in the whole liver using a dual-input single-compartment model requires substantial improvement for routine clinical applications. In this paper, we utilize the parallel computation power of a graphics processing unit (GPU) to accelerate the computation while maintaining the same accuracy as the conventional method. Using CUDA-GPU, the hepatic perfusion computations over multiple voxels are run across the GPU blocks concurrently but independently. At each voxel, non-linear least-squares fitting of the time series of the liver DCE data to the compartmental model is distributed to multiple threads in a block, and the computations of different time points are performed simultaneously and synchronously. An efficient fast Fourier transform in a block is also developed for the convolution computation in the model. The GPU computations of the voxel-by-voxel hepatic perfusion images are compared with those of the CPU using the simulated DCE data and the experimental DCE MR images from patients. The computation speed is improved by 30 times using an NVIDIA Tesla C2050 GPU compared to a 2.67 GHz Intel Xeon CPU processor. To obtain liver perfusion maps with 626400 voxels in a patient's liver, it takes 0.9 min with the GPU-accelerated voxelwise computation, compared to 110 min with the CPU, while the perfusion parameters from the two methods differ by less than 10^-6. The method will be useful for generating liver perfusion images in clinical settings. PMID:22892645
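The structure being exploited is that each voxel's fit is independent of all others. Below is a rough Python analogue of one voxel's non-linear least-squares fit, using a generic dual-input single-compartment form (arterial and portal input functions convolved with a monoexponential residue); the model details and parameter values are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_voxel(curve, t, ca, cp):
    """Fit one voxel's enhancement curve to a generic dual-input
    single-compartment model: arterial (ca) and portal (cp) inputs
    convolved with a monoexponential residue function exp(-k2*t)."""
    dt = t[1] - t[0]
    def model(t_, k1a, k1p, k2):
        resid = np.exp(-k2 * t_)
        return (k1a * np.convolve(ca, resid)[:t_.size]
                + k1p * np.convolve(cp, resid)[:t_.size]) * dt
    params, _ = curve_fit(model, t, curve, p0=[0.05, 0.05, 0.05],
                          bounds=(0.0, 5.0))
    return params   # (k1a, k1p, k2) for this voxel

# Synthetic test: voxels are mutually independent, so a whole-liver map is
# an embarrassingly parallel loop -- the structure mapped to GPU blocks.
t = np.linspace(0.0, 118.0, 60)
ca = np.exp(-((t - 20.0) / 8.0) ** 2)    # toy arterial input function
cp = np.exp(-((t - 35.0) / 12.0) ** 2)   # toy portal-venous input function
resid = np.exp(-0.08 * t)
dt = t[1] - t[0]
curve = (0.3 * np.convolve(ca, resid)[:t.size]
         + 0.6 * np.convolve(cp, resid)[:t.size]) * dt
print(fit_voxel(curve, t, ca, cp))       # ~ [0.3, 0.6, 0.08]
```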
Supporting Regularized Logistic Regression Privately and Efficiently.
Li, Wenfa; Liu, Hongzhe; Yang, Peng; Xie, Wei
2016-01-01
As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, the social sciences, information technology, and beyond. These domains often involve data on human subjects that are contingent upon strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work focuses on safeguarding regularized logistic regression, a widely used statistical model that had not previously been investigated from a data-security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as research consortia or networks as widely seen in genetics, epidemiology, the social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency, and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications in various disciplines, including genetic and biomedical studies, smart grids, network analysis, etc. PMID:27271738
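The paper's protocol is built from distributed computation plus strong cryptography; a standard building block in such settings is additive secret sharing, in which each institution releases only random-looking shares of its local gradient and only the across-site sum is ever reconstructed. A minimal illustration of that building block (not the authors' actual protocol):

```python
import numpy as np

rng = np.random.default_rng(42)

def share(vec, n_parties):
    """Additive secret sharing: split a vector into n random-looking
    shares that sum back to the original."""
    shares = [rng.normal(size=vec.shape) for _ in range(n_parties - 1)]
    shares.append(vec - sum(shares))
    return shares

# Three institutions each hold a local gradient of the regularized
# logistic-regression loss on their private cohort (faked here).
local_grads = [rng.normal(size=4) for _ in range(3)]
dealt = [share(g, 3) for g in local_grads]      # institution i sends share j to party j
partials = [sum(col) for col in zip(*dealt)]    # each party sums what it received
agg = sum(partials)                             # only the pooled gradient is revealed
assert np.allclose(agg, sum(local_grads))
```

No single party ever sees another institution's raw gradient, yet the aggregate used for the model update is exact.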
A memetic optimization algorithm for multi-constrained multicast routing in ad hoc networks
Ramadan, Rahab M; Gasser, Safa M; El-Mahallawy, Mohamed S; Hammad, Karim; El Bakly, Ahmed M
2018-01-01
A mobile ad hoc network is a conventional self-configuring network where the routing optimization problem—subject to various Quality-of-Service (QoS) constraints—represents a major challenge. Unlike previously proposed solutions, in this paper, we propose a memetic algorithm (MA) employing an adaptive mutation parameter, to solve the multicast routing problem with higher search ability and computational efficiency. The proposed algorithm utilizes an updated scheme, based on statistical analysis, to estimate the best values for all MA parameters and enhance MA performance. The numerical results show that the proposed MA improved the delay and jitter of the network, while reducing computational complexity as compared to existing algorithms. PMID:29509760
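As a concrete reference for the method's skeleton, below is a generic memetic algorithm (genetic search plus local refinement) with a crudely adaptive mutation parameter, run on a toy bit-string objective; the multicast-route encoding, QoS constraints, and the paper's statistics-based parameter updates are not reproduced here.

```python
import random

def fitness(x):                      # toy objective: maximize the number of 1s
    return sum(x)

def local_search(x):                 # memetic step: greedy one-bit improvement
    for i in range(len(x)):
        y = x[:]; y[i] ^= 1
        if fitness(y) > fitness(x):
            x = y
    return x

def memetic(n=30, bits=40, gens=100):
    pop = [[random.randint(0, 1) for _ in range(bits)] for _ in range(n)]
    mut = 0.10                       # adaptive mutation parameter
    best_prev = max(map(fitness, pop))
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: n // 2]
        kids = []
        while len(kids) < n - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, bits)
            kid = a[:cut] + b[cut:]                     # one-point crossover
            kid = [g ^ (random.random() < mut) for g in kid]
            kids.append(local_search(kid))              # Lamarckian refinement
        pop = parents + kids
        best = max(map(fitness, pop))
        # crude adaptation: shrink mutation while improving, grow when stuck
        mut = max(0.01, mut * 0.9) if best > best_prev else min(0.5, mut * 1.1)
        best_prev = best
    return max(pop, key=fitness)

print(fitness(memetic()))            # reaches the optimum (40) on this toy problem
```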
Aprà, E; Kowalski, K
2016-03-08
In this paper, we discuss the implementation of the multireference coupled-cluster formalism with singles, doubles, and noniterative triples (MRCCSD(T)), which is capable of taking advantage of the processing power of the Intel Xeon Phi coprocessor. We discuss the integration of the two levels of parallelism underlying the MRCCSD(T) implementation with computational kernels designed to offload the computationally intensive parts of the MRCCSD(T) formalism to Intel Xeon Phi coprocessors. Special attention is given to enhancing parallel performance through task reordering, which improved load balancing in the noniterative part of the MRCCSD(T) calculations. We also discuss efficient optimization and vectorization strategies.
Yan, Xin-Zhong
2011-07-01
The discrete Fourier transform is approximated by summing over a subset of the terms with corresponding weights. The approximation significantly reduces the requirement for computer memory storage and improves numerical efficiency by several orders of magnitude without losing accuracy. As an example, we apply the algorithm to study the three-dimensional interacting electron gas under the renormalized-ring-diagram approximation, in which the Green's function must be solved self-consistently. We present results for the chemical potential, compressibility, free energy, entropy, and specific heat of the system. The ground-state energy obtained by the present calculation is compared with existing results from Monte Carlo simulation and the random-phase approximation.
Extensions and improvements on XTRAN3S
NASA Technical Reports Server (NTRS)
Borland, C. J.
1989-01-01
Improvements to the XTRAN3S computer program are summarized. Work on this code, for steady and unsteady aerodynamic and aeroelastic analysis in the transonic flow regime has concentrated on the following areas: (1) Maintenance of the XTRAN3S code, including correction of errors, enhancement of operational capability, and installation on the Cray X-MP system; (2) Extension of the vectorization concepts in XTRAN3S to include additional areas of the code for improved execution speed; (3) Modification of the XTRAN3S algorithm for improved numerical stability for swept, tapered wing cases and improved computational efficiency; and (4) Extension of the wing-only version of XTRAN3S to include pylon and nacelle or external store capability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
D.M. McEligot; K. G. Condie; G. E. McCreery
2005-10-01
Background: The ultimate goal of the study is the improvement of predictive methods for safety analyses and design of Generation IV reactor systems such as supercritical water reactors (SCWR) for higher efficiency, improved performance and operation, design simplification, enhanced safety and reduced waste and cost. The objective of this Korean / US / laboratory / university collaboration of coupled fundamental computational and experimental studies is to develop the supporting knowledge needed for improved predictive techniques for use in the technology development of Generation IV reactor concepts and their passive safety systems. The present study emphasizes SCWR concepts in the Generation IV program.
NASA Astrophysics Data System (ADS)
Burnett, W.
2016-12-01
The Department of Defense's (DoD) High Performance Computing Modernization Program (HPCMP) provides high performance computing to address the most significant challenges in computational resources, software application support and nationwide research and engineering networks. Today, the HPCMP has a critical role in ensuring the National Earth System Prediction Capability (N-ESPC) achieves initial operational status in 2019. A 2015 study commissioned by the HPCMP found that N-ESPC computational requirements will exceed interconnect bandwidth capacity due to the additional load from data assimilation and passing connecting data between ensemble codes. Memory bandwidth and I/O bandwidth will continue to be significant bottlenecks for the Navy's Hybrid Coordinate Ocean Model (HYCOM) scalability - by far the major driver of computing resource requirements in the N-ESPC. The study also found that few of the N-ESPC model developers have detailed plans to ensure their respective codes scale through 2024. Three HPCMP initiatives are designed to directly address and support these issues: Productivity Enhancement, Technology, Transfer and Training (PETTT), the HPCMP Applications Software Initiative (HASI), and Frontier Projects. PETTT supports code conversion by providing assistance, expertise and training in scalable and high-end computing architectures. HASI addresses the continuing need for modern application software that executes effectively and efficiently on next-generation high-performance computers. Frontier Projects enable research and development that could not be achieved using typical HPCMP resources by providing multi-disciplinary teams access to exceptional amounts of high performance computing resources. Finally, the Navy's DoD Supercomputing Resource Center (DSRC) currently operates a 6 Petabyte system, of which Naval Oceanography receives 15% of operational computational system use, or approximately 1 Petabyte of the processing capability. The DSRC will provide the DoD with future computing assets to initially operate the N-ESPC in 2019. This talk will further describe how DoD's HPCMP will ensure N-ESPC becomes operational, efficiently and effectively, using next-generation high performance computing.
Reliability enhancement of Navier-Stokes codes through convergence enhancement
NASA Technical Reports Server (NTRS)
Choi, K.-Y.; Dulikravich, G. S.
1993-01-01
Reduction of the total computing time required by an iterative algorithm for solving Navier-Stokes equations is an important aspect of making existing and future analysis codes more cost effective. Several attempts have been made to accelerate the convergence of an explicit Runge-Kutta time-stepping algorithm. These acceleration methods are based on local time stepping, implicit residual smoothing, enthalpy damping, and multigrid techniques. Also, an extrapolation procedure based on the power method and the Minimal Residual Method (MRM) were applied to Jameson's multigrid algorithm. The MRM uses the same values of optimal weights for the corrections to every equation in a system and has not been shown to accelerate the scheme without multigriding. Our Distributed Minimal Residual (DMR) method, based on our General Nonlinear Minimal Residual (GNLMR) method, allows each component of the solution vector in a system of equations to have its own convergence speed. The DMR method was found capable of reducing the computation time by 10-75 percent depending on the test case and grid used. Recently, we have developed and tested a new method, termed Sensitivity-Based DMR (SBMR), that is easier to implement in different codes and is even more robust and computationally efficient than our DMR method.
Use of a wireless local area network in an orthodontic clinic.
Mupparapu, Muralidhar; Binder, Robert E; Cummins, John M
2005-06-01
Radiographic images and other patient records, including medical histories, demographics, and health insurance information, can now be stored digitally and accessed via patient management programs. However, digital image acquisition and diagnosis and treatment planning are independent tasks, and each is time consuming, especially when performed at different computer workstations. Networking or linking the computers in an office enhances access to imaging and treatment planning tools. Access can be further enhanced if the entire network is wireless. Thanks to wireless technology, stand-alone, desk-bound personal computers have been replaced with mobile, hand-held devices that can communicate with each other and the rest of the world via the Internet. As with any emerging technology, some issues should be kept in mind when adapting to the wireless environment. Foremost is network security. Second is the choice of mobile hardware devices that are used by the orthodontist, office staff, and patients. This article details the standards and choices in wireless technology that can be implemented in an orthodontic clinic and suggests how to select suitable mobile hardware for accessing or adding data to a preexisting network. The network security protocols discussed comply with HIPAA regulations and boost the efficiency of a modern orthodontic clinic.
An efficient ERP-based brain-computer interface using random set presentation and face familiarity.
Yeom, Seul-Ki; Fazli, Siamac; Müller, Klaus-Robert; Lee, Seong-Whan
2014-01-01
Event-related potential (ERP)-based P300 spellers are commonly used in the field of brain-computer interfaces as an alternative channel of communication for people with severe neuro-muscular diseases. This study introduces a novel P300-based brain-computer interface (BCI) stimulus paradigm using a random set presentation pattern and exploiting the effects of face familiarity. The effect of face familiarity is widely studied in the cognitive neurosciences and has recently been addressed for the purpose of BCI. In this study we compare P300-based BCI performances of a conventional row-column (RC)-based paradigm with our approach that combines a random set presentation paradigm with (non-) self-face stimuli. Our experimental results indicate stronger deflections of the ERPs in response to face stimuli, which are further enhanced when using the self-face images, thereby improving P300-based spelling performance. This led to a significant reduction in the number of stimulus sequences required for correct character classification. These findings demonstrate a promising new approach for improving the speed, and thus the fluency, of BCI-enhanced communication with the widely used P300-based BCI setup. PMID:25384045
Efficient Residue to Binary Conversion Based on a Modified Flexible Moduli Set
NASA Astrophysics Data System (ADS)
Molahosseini, Amir Sabbagh
2011-09-01
The Residue Number System (RNS) is a non-weighted number system which can perform addition (subtraction) and multiplication on residues without carry propagation, resulting in high-speed hardware implementations of computation systems. The problem of converting residue numbers to the equivalent binary weighted form has attracted a lot of research for many years. Recently, some researchers proposed using flexible moduli sets instead of the traditional moduli sets to enhance the performance of residue-to-binary converters. This paper introduces the modified flexible moduli set {2^(2p+k), 2^(2p)+1, 2^p+1, 2^p-1}, which is obtained from the flexible set {2^(p+k), 2^(2p)+1, 2^p+1, 2^p-1} by enlarging the modulo 2^(p+k). Next, the New Chinese Remainder Theorem I is used to design a simple and efficient residue-to-binary converter for this modified set, with better performance than the converter for the moduli set {2^(p+k), 2^(2p)+1, 2^p+1, 2^p-1}.
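To make the arithmetic concrete, the sketch below performs forward conversion into the moduli set and reverse (residue-to-binary) conversion with the classical Chinese Remainder Theorem; the paper's contribution is a cheaper New CRT-I hardware datapath, which reconstructs the same value. The exponent forms of the moduli are reconstructed from the garbled source text, and the worked values are illustrative.

```python
from math import prod

def to_rns(x, moduli):
    """Forward conversion: one residue per modulus; additions and
    multiplications then act channel-wise with no carries between channels."""
    return [x % m for m in moduli]

def from_rns(residues, moduli):
    """Reverse conversion via the classical CRT (requires Python 3.8+ for
    pow(a, -1, m)). New CRT-I gives a cheaper datapath, same result."""
    M = prod(moduli)
    return sum(r * (M // m) * pow(M // m, -1, m)
               for r, m in zip(residues, moduli)) % M

p, k = 4, 2
moduli = [2**(2*p + k), 2**(2*p) + 1, 2**p + 1, 2**p - 1]   # {1024, 257, 17, 15}
x = 12345678                                                # must be < prod(moduli)
assert from_rns(to_rns(x, moduli), moduli) == x
```

The moduli are pairwise coprime by construction (one power of two and three odd, mutually coprime values), which is what the CRT reconstruction relies on.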
1983-12-01
while at the same time improving its operational efficiency. Through their integration and use, System Program Managers have a comprehensive analytical... systems. The NRLA program is hosted on the CREATE Operating System and contains approximately 5500 lines of computer code. It consists of a main...associated with C alternative maintenance plans. As the technological complexity of weapons systems has increased, new and innovative logistical support
An efficient annealing in Boltzmann machine in Hopfield neural network
NASA Astrophysics Data System (ADS)
Kin, Teoh Yeong; Hasan, Suzanawati Abu; Bulot, Norhisam; Ismail, Mohammad Hafiz
2012-09-01
This paper proposes and implements a Boltzmann machine in a Hopfield neural network doing logic programming based on energy minimization. Temperature scheduling in the Boltzmann machine enhances the performance of logic programming in the Hopfield network. The best temperature is determined by observing the ratio of global solutions and the final Hamming distance in computer simulations. The study shows that the Boltzmann machine model is more stable and competent in terms of representing and solving difficult combinatorial problems.
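A minimal sketch of the underlying mechanism: stochastic (Boltzmann) unit updates of a Hopfield network under a geometric cooling schedule. The energy function and Metropolis acceptance rule are the textbook ones; the paper's specific temperature schedule and logic-programming encoding are not reproduced.

```python
import numpy as np

def anneal(W, b, T0=5.0, cooling=0.999, steps=5000, seed=0):
    """Simulated annealing on the Hopfield energy
    E(s) = -0.5 * s@W@s - b@s, assuming zero self-connections W[i,i] = 0."""
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=W.shape[0])
    T = T0
    for _ in range(steps):
        i = rng.integers(s.size)
        dE = 2 * s[i] * (W[i] @ s + b[i])        # energy change if unit i flips
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i] = -s[i]                          # Metropolis acceptance
        T = max(T * cooling, 1e-3)                # cool toward deterministic updates
    return s

# Hebbian storage of one pattern; annealing settles into it (or its mirror).
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)
print(anneal(W, b=np.zeros(8)))
```

At high temperature the network escapes local minima; as T falls, the updates become effectively deterministic and the state freezes into a low-energy configuration.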
Capacity planning in a transitional economy: What issues? Which models?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mubayi, V.; Leigh, R.W.; Bright, R.N.
1996-03-01
This paper is devoted to an exploration of the important issues facing the Russian power generation system and its evolution in the foreseeable future, and of the kinds of modeling approaches that capture those issues. These issues include, for example, (1) trade-offs between investments in upgrading and refurbishment of existing thermal (fossil-fired) capacity and safety enhancements in existing nuclear capacity versus investment in new capacity, (2) trade-offs between investment in completing unfinished (under construction) projects based on their original design versus investment in new capacity with improved design, (3) incorporation of demand-side management options (investments in enhancing end-use efficiency, for example) within the planning framework, (4) consideration of the spatial dimensions of system planning, including investments in upgrading electric transmission networks or fuel shipment networks and incorporating hydroelectric generation, (5) incorporation of environmental constraints, and (6) assessment of uncertainty and evaluation of downside risk. Models for exploring these issues range from low power shutdown (LPS) models, which are computationally very efficient, though approximate, and can be used to perform extensive sensitivity analyses, to more complex models which can provide more detailed answers but are computationally cumbersome and can only deal with limited issues. The paper discusses which models can usefully treat a wide range of issues within the priorities facing decision makers in the Russian power sector and integrate the results with investment decisions in the wider economy.
NASA Technical Reports Server (NTRS)
Wright, Jeffrey; Thakur, Siddharth
2006-01-01
Loci-STREAM is an evolving computational fluid dynamics (CFD) software tool for simulating possibly chemically reacting, possibly unsteady flows in diverse settings, including rocket engines, turbomachines, oil refineries, etc. Loci-STREAM implements a pressure- based flow-solving algorithm that utilizes unstructured grids. (The benefit of low memory usage by pressure-based algorithms is well recognized by experts in the field.) The algorithm is robust for flows at all speeds from zero to hypersonic. The flexibility of arbitrary polyhedral grids enables accurate, efficient simulation of flows in complex geometries, including those of plume-impingement problems. The present version - Loci-STREAM version 0.9 - includes an interface with the Portable, Extensible Toolkit for Scientific Computation (PETSc) library for access to enhanced linear-equation-solving programs therein that accelerate convergence toward a solution. The name "Loci" reflects the creation of this software within the Loci computational framework, which was developed at Mississippi State University for the primary purpose of simplifying the writing of complex multidisciplinary application programs to run in distributed-memory computing environments including clusters of personal computers. Loci has been designed to relieve application programmers of the details of programming for distributed-memory computers.
Three-dimensional propagation in near-field tomographic X-ray phase retrieval
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruhlandt, Aike, E-mail: aruhlan@gwdg.de; Salditt, Tim
This paper presents an extension of phase retrieval algorithms for near-field X-ray (propagation) imaging to three dimensions, enhancing the quality of the reconstruction by exploiting previously unused three-dimensional consistency constraints. The approach is based on a novel three-dimensional propagator and is derived for the case of optically weak objects. It can be easily implemented in current phase retrieval architectures, is computationally efficient and reduces the need for restrictive prior assumptions, resulting in superior reconstruction quality.
MetAlign 3.0: performance enhancement by efficient use of advances in computer hardware.
Lommen, Arjen; Kools, Harrie J
2012-08-01
A new, multi-threaded version of the GC-MS and LC-MS data processing software metAlign has been developed which is able to utilize multiple cores on one PC. This new version was tested on three different multi-core PCs with different operating systems. The performance of noise reduction, baseline correction, and peak-picking was 8-19-fold faster than that of the previous version on a single-core machine from 2008. The alignment was 5-10-fold faster. Factors influencing the performance enhancement are discussed. Our observations show that performance scales with the increase in processor core numbers that we currently see in consumer PC hardware development.
Computationally efficient multibody simulations
NASA Technical Reports Server (NTRS)
Ramakrishnan, Jayant; Kumar, Manoj
1994-01-01
Computationally efficient approaches to the solution of the dynamics of multibody systems are presented in this work. The computational efficiency is derived from both the algorithmic and the implementation standpoints. Order(n) approaches provide a new formulation of the equations of motion that eliminates the assembly and numerical inversion of a system mass matrix, as required by conventional algorithms. Computational efficiency is also gained in the implementation phase through symbolic processing and parallel implementation of these equations. Comparison of this algorithm with existing multibody simulation programs illustrates the increased computational efficiency.
Mickan, Sharon; Tilson, Julie K; Atherton, Helen; Roberts, Nia Wyn; Heneghan, Carl
2013-10-28
Handheld computers and mobile devices provide instant access to vast amounts and types of useful information for health care professionals. Their reduced size and increased processing speed have led to rapid adoption in health care. Thus, it is important to identify whether handheld computers are actually effective in clinical practice. A scoping review of systematic reviews was designed to provide a quick overview of the documented evidence of effectiveness for health care professionals using handheld computers in their clinical work. A detailed search, sensitive for systematic reviews, was applied to the Cochrane, Medline, EMBASE, PsycINFO, Allied and Complementary Medicine Database (AMED), Global Health, and Cumulative Index to Nursing and Allied Health Literature (CINAHL) databases. All outcomes that demonstrated effectiveness in clinical practice were included. Classroom learning and patient use of handheld computers were excluded. Quality was assessed using the Assessment of Multiple Systematic Reviews (AMSTAR) tool. A previously published conceptual framework was used as the basis for dual data extraction. Reported outcomes were summarized according to the primary function of the handheld computer. Five systematic reviews met the inclusion and quality criteria. Together, they reviewed 138 unique primary studies. Most reviewed descriptive intervention studies, where physicians, pharmacists, or medical students used personal digital assistants. Effectiveness was demonstrated across four distinct functions of handheld computers: patient documentation, patient care, information seeking, and professional work patterns. Within each of these functions, a range of positive outcomes was reported using both objective and self-report measures. The use of handheld computers improved patient documentation through more complete recording, fewer documentation errors, and increased efficiency. Handheld computers provided easy access to clinical decision support systems and patient management systems, which improved decision making for patient care. Handheld computers saved time and gave earlier access to new information. There were also reports that handheld computers enhanced work patterns and efficiency. This scoping review summarizes the secondary evidence for the effectiveness of handheld computers and mHealth. It provides a snapshot of effective use by health care professionals across four key functions. We identified evidence to suggest that handheld computers provide easy and timely access to information and enable accurate and complete documentation. Further, they can give health care professionals instant access to evidence-based decision support and patient management systems to improve clinical decision making. Finally, there is evidence that handheld computers allow health professionals to be more efficient in their work practices. It is anticipated that this evidence will guide clinicians and managers in implementing handheld computers in clinical practice and in designing future research.
Borazjani, Iman; Ge, Liang; Le, Trung; Sotiropoulos, Fotis
2013-01-01
We develop an overset-curvilinear immersed boundary (overset-CURVIB) method in a general non-inertial frame of reference to simulate a wide range of challenging biological flow problems. The method incorporates overset-curvilinear grids to efficiently handle multi-connected geometries and increase the resolution locally near immersed boundaries. Complex bodies undergoing arbitrarily large deformations may be embedded within the overset-curvilinear background grid and treated as sharp interfaces using the curvilinear immersed boundary (CURVIB) method (Ge and Sotiropoulos, Journal of Computational Physics, 2007). The incompressible flow equations are formulated in a general non-inertial frame of reference to enhance the overall versatility and efficiency of the numerical approach. Efficient search algorithms to identify areas requiring blanking, donor cells, and interpolation coefficients for constructing the boundary conditions at grid interfaces of the overset grid are developed and implemented using efficient parallel computing communication strategies to transfer information among sub-domains. The governing equations are discretized using a second-order accurate finite-volume approach and integrated in time via an efficient fractional-step method. Various strategies for ensuring globally conservative interpolation at grid interfaces suitable for incompressible flow fractional step methods are implemented and evaluated. The method is verified and validated against experimental data, and its capabilities are demonstrated by simulating the flow past multiple aquatic swimmers and the systolic flow in an anatomic left ventricle with a mechanical heart valve implanted in the aortic position. PMID:23833331
Enhanced Fuel-Optimal Trajectory-Generation Algorithm for Planetary Pinpoint Landing
NASA Technical Reports Server (NTRS)
Acikmese, Behcet; Blackmore, James C.; Scharf, Daniel P.
2011-01-01
An enhanced algorithm is developed that builds on a previous innovation of fuel-optimal powered-descent guidance (PDG) for planetary pinpoint landing. The PDG problem is to compute constrained, fuel-optimal trajectories to land a craft at a prescribed target on a planetary surface, starting from a parachute cut-off point and using a throttleable descent engine. The previous innovation showed that the minimal-fuel PDG problem can be posed as a convex optimization problem, in particular as a Second-Order Cone Program, which can be solved to global optimality with deterministic convergence properties, and hence is a candidate for onboard implementation. To increase the speed and robustness of this convex PDG algorithm for possible onboard implementation, the following enhancements are incorporated: 1) Fast detection of infeasibility (i.e., control authority is not sufficient for soft landing) for subsequent fault response. 2) The use of a piecewise-linear control parameterization, providing smooth solution trajectories and increasing computational efficiency. 3) An enhanced line-search algorithm for optimal time-of-flight, providing quicker convergence and bounding the number of path-planning iterations needed. 4) An additional constraint that analytically guarantees inter-sample satisfaction of glide-slope and non-sub-surface flight constraints, allowing larger discretizations and, hence, faster optimization. 5) Explicit incorporation of the Mars rotation rate into the trajectory computation for improved targeting accuracy. These enhancements allow faster convergence to the fuel-optimal solution and, more importantly, remove the need for a "human-in-the-loop," as constraints will be satisfied over the entire path-planning interval independent of step size (as opposed to just at the discrete time points) and infeasible initial conditions are immediately detected. Finally, while the PDG stage typically lasts only a few minutes, ignoring the rotation rate of Mars can introduce tens of meters of error. By incorporating it, the enhanced PDG algorithm becomes capable of pinpoint targeting.
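For a feel of the convex core of PDG, here is a toy fixed-final-time instance in cvxpy: a 3-DoF double integrator under gravity with a thrust-magnitude second-order-cone constraint and a minimum-fuel objective. All numbers are illustrative; the flight algorithm additionally handles mass depletion, glide-slope and pointing constraints, and the line search over time of flight described above.

```python
import numpy as np
import cvxpy as cp

N, dt = 60, 1.0
g = np.array([0.0, 0.0, -3.71])                 # Mars gravity, m/s^2
r = cp.Variable((N + 1, 3))                     # position
v = cp.Variable((N + 1, 3))                     # velocity
u = cp.Variable((N, 3))                         # thrust acceleration

cons = [r[0] == np.array([450.0, -330.0, 2400.0]),   # parachute cut-off state
        v[0] == np.array([-40.0, 10.0, -10.0]),
        r[N] == 0, v[N] == 0,                        # pinpoint soft landing
        r[:, 2] >= 0,                                # no sub-surface flight
        cp.norm(u, axis=1) <= 12.0]                  # thrust bound (SOC constraint)
for k in range(N):
    cons += [v[k + 1] == v[k] + dt * (u[k] + g),
             r[k + 1] == r[k] + dt * (v[k] + v[k + 1]) / 2]

# Fuel use is proportional to integrated thrust magnitude.
prob = cp.Problem(cp.Minimize(dt * cp.sum(cp.norm(u, axis=1))), cons)
prob.solve()
print(prob.status, prob.value)
```

Because the problem is an SOCP, an interior-point solver either returns the global optimum or certifies infeasibility, which is exactly the property enhancement 1) above exploits for fault response.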
LightWAVE: Waveform and Annotation Viewing and Editing in a Web Browser.
Moody, George B
2013-09-01
This paper describes LightWAVE, recently-developed open-source software for viewing ECGs and other physiologic waveforms and associated annotations (event markers). It supports efficient interactive creation and modification of annotations, capabilities that are essential for building new collections of physiologic signals and time series for research. LightWAVE is constructed of components that interact in simple ways, making it straightforward to enhance or replace any of them. The back end (server) is a common gateway interface (CGI) application written in C for speed and efficiency. It retrieves data from its data repository (PhysioNet's open-access PhysioBank archives by default, or any set of files or web pages structured as in PhysioBank) and delivers them in response to requests generated by the front end. The front end (client) is a web application written in JavaScript. It runs within any modern web browser and does not require installation on the user's computer, tablet, or phone. Finally, LightWAVE's scribe is a tiny CGI application written in Perl, which records the user's edits in annotation files. LightWAVE's data repository, back end, and front end can be located on the same computer or on separate computers. The data repository may be split across multiple computers. For compatibility with the standard browser security model, the front end and the scribe must be loaded from the same domain.
Big Data Clustering via Community Detection and Hyperbolic Network Embedding in IoT Applications.
Karyotis, Vasileios; Tsitseklis, Konstantinos; Sotiropoulos, Konstantinos; Papavassiliou, Symeon
2018-04-15
In this paper, we present a novel data clustering framework for big sensory data produced by IoT applications. Based on a network representation of the relations among multi-dimensional data, data clustering is mapped to node clustering over the produced data graphs. To address the potentially very large scale of such datasets/graphs, which tests the limits of state-of-the-art approaches, we map the problem of data clustering to a community detection one over the corresponding data graphs. Specifically, we propose a novel computational approach for enhancing the traditional Girvan-Newman (GN) community detection algorithm via hyperbolic network embedding. The data dependency graph is embedded in the hyperbolic space via Rigel embedding, allowing more efficient computation of the edge-betweenness centrality needed in the GN algorithm. This allows for more efficient clustering of the nodes of the data graph in terms of modularity, without sacrificing considerable accuracy. In order to study the operation of our approach with respect to enhancing GN community detection, we employ various representative types of artificial complex networks, such as scale-free, small-world and random geometric topologies, and frequently-employed benchmark datasets, demonstrating its efficacy in terms of data clustering via community detection. Furthermore, we provide a proof-of-concept evaluation by applying the proposed framework over multi-dimensional datasets obtained from an operational smart-city/building IoT infrastructure provided by the Federated Interoperable Semantic IoT/cloud Testbeds and Applications (FIESTA-IoT) testbed federation. It is shown that the proposed framework can indeed be used for community detection/data clustering and exploited in various other IoT applications, such as performing more energy-efficient smart-city/building sensing. PMID:29662043
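The outer loop being accelerated is the textbook Girvan-Newman procedure, shown below with networkx on a standard benchmark graph. networkx computes edge betweenness exactly, which is precisely the expensive step the paper replaces with estimates derived from hyperbolic (Rigel-style) coordinates.

```python
import networkx as nx
from networkx.algorithms.community import girvan_newman

# Classical GN: repeatedly remove the edge with the highest betweenness
# centrality; connected components after each removal are the communities.
# Exact betweenness recomputation is the O(|V||E|)-per-step bottleneck
# that the hyperbolic-embedding approximation is designed to avoid.
G = nx.karate_club_graph()
first_split = next(girvan_newman(G))   # communities after the first split
print([sorted(c) for c in first_split])
```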
Tahir, Muhammad; Jan, Bismillah; Hayat, Maqsood; Shah, Shakir Ullah; Amin, Muhammad
2018-04-01
Discriminative and informative feature extraction is the core requirement for accurate and efficient classification of protein subcellular localization images, so that drug development can be more effective. The objective of this paper is to propose a novel modification of the Threshold Adjacency Statistics (TAS) technique and enhance its discriminative power. In this work, we utilized Threshold Adjacency Statistics from a novel perspective to enhance its discrimination power and efficiency. In this connection, we utilized seven threshold ranges to produce seven distinct feature spaces, which are then used to train seven SVMs. The final prediction is obtained through a majority-voting scheme. The proposed ETAS-SubLoc system is tested on two benchmark datasets using 5-fold cross-validation. We observed that our novel utilization of the TAS technique improved the discriminative power of the classifier. The ETAS-SubLoc system achieved 99.2% accuracy, 99.3% sensitivity, and 99.1% specificity for the Endogenous dataset, outperforming the classical Threshold Adjacency Statistics technique. Similarly, 91.8% accuracy, 96.3% sensitivity, and 91.6% specificity were achieved for the Transfected dataset. Simulation results validated the effectiveness of ETAS-SubLoc, which provides superior prediction performance compared to the existing technique. The proposed methodology aims at providing support to the pharmaceutical industry as well as the research community towards better drug design and innovation in the fields of bioinformatics and computational biology. The implementation code for replicating the experiments presented in this paper is available at: https://drive.google.com/file/d/0B7IyGPObWbSqRTRMcXI2bG5CZWs/view?usp=sharing.
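The ensemble construction is straightforward to sketch: seven feature spaces (one per threshold range), seven SVMs, and a majority vote. The snippet below uses synthetic stand-in features, since the TAS extraction itself is the paper's contribution.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic stand-in for the seven TAS threshold ranges: each range yields
# its own feature space, simulated here as class-shifted random features.
rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, n)
feature_spaces = [rng.normal(size=(n, 9)) + y[:, None] * s
                  for s in np.linspace(0.5, 2.0, 7)]

models = [SVC().fit(X, y) for X in feature_spaces]          # one SVM per space
votes = np.stack([m.predict(X) for m, X in zip(models, feature_spaces)])
majority = (votes.sum(axis=0) >= 4).astype(int)             # majority of 7 votes
print("training accuracy:", (majority == y).mean())
```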
Gyrokinetic particle-in-cell optimization on emerging multi- and manycore platforms
Madduri, Kamesh; Im, Eun-Jin; Ibrahim, Khaled Z.; ...
2011-03-02
The next decade of high-performance computing (HPC) systems will see a rapid evolution and divergence of multi- and manycore architectures as power and cooling constraints limit increases in microprocessor clock speeds. Understanding efficient optimization methodologies on diverse multicore designs in the context of demanding numerical methods is one of the greatest challenges faced today by the HPC community. In this paper, we examine the efficient multicore optimization of GTC, a petascale gyrokinetic toroidal fusion code for studying plasma microturbulence in tokamak devices. For GTC's key computational components (charge deposition and particle push), we explore efficient parallelization strategies across a broad range of emerging multicore designs, including the recently-released Intel Nehalem-EX, the AMD Opteron Istanbul, and the highly multithreaded Sun UltraSparc T2+. We also present the first study on tuning gyrokinetic particle-in-cell (PIC) algorithms for graphics processors, using the NVIDIA C2050 (Fermi). Our work discusses several novel optimization approaches for gyrokinetic PIC, including mixed-precision computation, particle binning and decomposition strategies, grid replication, SIMDized atomic floating-point operations, and effective GPU texture memory utilization. Overall, we achieve significant performance improvements of 1.3–4.7× on these complex PIC kernels, despite the inherent challenges of data dependency and locality. Finally, our work also points to several architectural and programming features that could significantly enhance PIC performance and productivity on next-generation architectures.
Kim, Jeong Rye; Shim, Woo Hyun; Yoon, Hee Mang; Hong, Sang Hyup; Lee, Jin Seong; Cho, Young Ah; Kim, Sangki
2017-12-01
The purpose of this study is to evaluate the accuracy and efficiency of a new automatic software system for bone age assessment and to validate its feasibility in clinical practice. A Greulich-Pyle method-based deep-learning technique was used to develop the automatic software system for bone age determination. Using this software, bone age was estimated from left-hand radiographs of 200 patients (3-17 years old) using first-rank bone age (software only), computer-assisted bone age (two radiologists with software assistance), and Greulich-Pyle atlas-assisted bone age (two radiologists with Greulich-Pyle atlas assistance only). The reference bone age was determined by the consensus of two experienced radiologists. First-rank bone ages determined by the automatic software system showed a 69.5% concordance rate and significant correlation with the reference bone age (r = 0.992; p < 0.001). Concordance rates increased with the use of the automatic software system for both reviewer 1 (63.0% for Greulich-Pyle atlas-assisted bone age vs 72.5% for computer-assisted bone age) and reviewer 2 (49.5% for Greulich-Pyle atlas-assisted bone age vs 57.5% for computer-assisted bone age). Reading times were reduced by 18.0% and 40.0% for reviewers 1 and 2, respectively. The automatic software system produced reliably accurate bone age estimates and appeared to enhance efficiency by reducing reading times without compromising diagnostic accuracy.
Liu, Yanlan; Ji, Xiaoyuan; Liu, Jianhua; Tong, Winnie W L; Askhatova, Diana; Shi, Jinjun
2017-10-19
Near-infrared (NIR)-absorbing metal-based nanomaterials have shown tremendous potential for cancer therapy, given their facile and controllable synthesis, efficient photothermal conversion, capability of spatiotemporal-controlled drug delivery, and intrinsic imaging function. Tantalum (Ta) is among the most biocompatible metals and arouses negligible adverse biological responses in either oxidized or reduced forms, and thus Ta-derived nanomaterials represent promising candidates for biomedical applications. However, Ta-based nanomaterials by themselves have not been explored for NIR-mediated photothermal ablation therapy. In this work, we report an innovative Ta-based multifunctional nanoplatform composed of biocompatible tantalum sulfide (TaS2) nanosheets (NSs) for simultaneous NIR hyperthermia, drug delivery, and computed tomography (CT) imaging. The TaS2 NSs exhibit multiple unique features, including (i) efficient NIR light-to-heat conversion with a high photothermal conversion efficiency of 39%, (ii) high drug loading (177% by weight), (iii) controlled drug release triggered by NIR light and moderately acidic pH, (iv) high tumor accumulation via heat-enhanced tumor vascular permeability, (v) complete tumor ablation and negligible side effects, and (vi) CT imaging contrast efficiency comparable to that of the widely clinically used agent iobitridol. We expect that this multifunctional NS platform can serve as a promising candidate for imaging-guided cancer therapy and selection of cancer patients with high tumor accumulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iacono, Michael J.
The objective of this research has been to evaluate and implement enhancements to the computational performance of the RRTMG radiative transfer option in the Advanced Research version of the Weather Research and Forecasting (WRF) model. Efficiency is as essential as accuracy for effective numerical weather prediction, and radiative transfer is a relatively time-consuming component of dynamical models, taking up to 30-50 percent of the total model simulation time. To address this concern, this research has implemented and tested a version of RRTMG that utilizes graphics processing unit (GPU) technology (hereinafter RRTMGPU) to greatly improve its computational performance, thereby permitting either more frequent simulation of radiative effects or other model enhancements. During the early stages of this project the development of RRTMGPU was completed at AER under separate NASA funding to accelerate the code for use in the Goddard Space Flight Center (GSFC) Goddard Earth Observing System GEOS-5 global model. It should be noted that this final report describes results related to the funded portion of the originally proposed work concerning the acceleration of RRTMG with GPUs in WRF. As a k-distribution model, RRTMG is especially well suited to this modification due to its relatively large internal pseudo-spectral (g-point) dimension that, when combined with the horizontal grid vector in the dynamical model, can take great advantage of the GPU capability. Thorough testing under several model configurations has been performed to ensure that RRTMGPU improves WRF model run time while having no significant impact on calculated radiative fluxes and heating rates or on dynamical model fields relative to the RRTMG radiation. The RRTMGPU codes have been provided to NCAR for possible application to the next public release of the WRF forecast model.
Efficient Fingercode Classification
NASA Astrophysics Data System (ADS)
Sun, Hong-Wei; Law, Kwok-Yan; Gollmann, Dieter; Chung, Siu-Leung; Li, Jian-Bin; Sun, Jia-Guang
In this paper, we present an efficient fingerprint classification algorithm which is an essential component in many critical security application systems, e.g., systems in the e-government and e-finance domains. Fingerprint identification is one of the most important security requirements in homeland security systems such as personnel screening and anti-money laundering. The problem of fingerprint identification involves searching (matching) the fingerprint of a person against the fingerprints of all registered persons. To enhance performance and reliability, a common approach is to reduce the search space by first classifying the fingerprints and then performing the search within the respective class. Jain et al. proposed a fingerprint classification algorithm based on a two-stage classifier, which uses a K-nearest neighbor classifier in its first stage. The fingerprint classification algorithm is based on the fingercode representation, an encoding of fingerprints that has been demonstrated to be an effective fingerprint biometric scheme because of its ability to capture both local and global details in a fingerprint image. We enhance this approach by improving the efficiency of the K-nearest neighbor classifier for fingercode-based fingerprint classification. Our research first investigates various fast search algorithms in vector quantization (VQ) and their potential application to fingerprint classification, and then proposes two efficient algorithms based on the pyramid-based search algorithms in VQ. Experimental results on DB1 of FVC 2004 demonstrate that our algorithms can outperform the full search algorithm and the original pyramid-based search algorithms in terms of computational efficiency without sacrificing accuracy.
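One classic VQ fast-search idea of the kind the paper builds on is partial-distance search: abandon a codebook candidate as soon as its accumulating distance exceeds the best found so far. The pyramid-based algorithms add coarse-to-fine bounds on top of this; the sketch below shows only the basic early-exit trick, with illustrative dimensions.

```python
import numpy as np

def nearest_with_early_exit(query, codebook):
    """Partial-distance search: abandon a candidate as soon as its running
    squared distance exceeds the best so far. Returns the nearest index."""
    best, best_d = -1, np.inf
    for idx, cand in enumerate(codebook):
        d = 0.0
        for q, c in zip(query, cand):
            d += (q - c) ** 2
            if d >= best_d:                 # early exit: this candidate cannot win
                break
        else:                               # loop completed: new best match
            best, best_d = idx, d
    return best

# Feature vectors standing in for fingercodes (dimensions illustrative).
codebook = np.random.default_rng(1).normal(size=(1000, 64))
query = codebook[123] + 0.01
print(nearest_with_early_exit(query, codebook))   # -> 123
```

The average cost drops well below a full-distance scan because most candidates are rejected after only a few coordinates, while the result is identical to exhaustive search.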
NASA Astrophysics Data System (ADS)
Zhang, Wenkun; Zhang, Hanming; Li, Lei; Wang, Linyuan; Cai, Ailong; Li, Zhongguo; Yan, Bin
2016-08-01
X-ray computed tomography (CT) is a powerful and common inspection technique used for the industrial non-destructive testing. However, large-sized and heavily absorbing objects cause the formation of artifacts because of either the lack of specimen penetration in specific directions or the acquisition of data from only a limited angular range of views. Although the sparse optimization-based methods, such as the total variation (TV) minimization method, can suppress artifacts to some extent, reconstructing the images such that they converge to accurate values remains difficult because of the deficiency in continuous angular data and inconsistency in the projections. To address this problem, we use the idea of regional enhancement of the true values and suppression of the illusory artifacts outside the region to develop an efficient iterative algorithm. This algorithm is based on the combination of regional enhancement of the true values and TV minimization for the limited angular reconstruction. In this algorithm, the segmentation approach is introduced to distinguish the regions of different image knowledge and generate the support mask of the image. A new regularization term, which contains the support knowledge to enhance the true values of the image, is incorporated into the objective function. Then, the proposed optimization model is solved by variable splitting and the alternating direction method efficiently. A compensation approach is also designed to extract useful information from the initial projections and thus reduce false segmentation result and correct the segmentation support and the segmented image. The results obtained from comparing both simulation studies and real CT data set reconstructions indicate that the proposed algorithm generates a more accurate image than do the other reconstruction methods. The experimental results show that this algorithm can produce high-quality reconstructed images for the limited angular reconstruction and suppress the illusory artifacts caused by the deficiency in valid data.
IMPROVING TACONITE PROCESSING PLANT EFFICIENCY BY COMPUTER SIMULATION, Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
William M. Bond; Salih Ersayin
2007-03-30
This project involved industrial scale testing of a mineral processing simulator to improve the efficiency of a taconite processing plant, namely the Minorca mine. The Concentrator Modeling Center at the Coleraine Minerals Research Laboratory, University of Minnesota Duluth, enhanced the capabilities of available software, Usim Pac, by developing mathematical models needed for accurate simulation of taconite plants. This project provided funding for this technology to prove itself in the industrial environment. As the first step, data representing existing plant conditions were collected by sampling and sample analysis. Data were then balanced and provided a basis for assessing the efficiency of individual devices and the plant, and also for performing simulations aimed at improving plant efficiency. Performance evaluation served as a guide in developing alternative process strategies for more efficient production. A large number of computer simulations were then performed to quantify the benefits and effects of implementing these alternative schemes. Modification of makeup ball size was selected as the most feasible option for the target performance improvement. This was combined with replacement of existing hydrocyclones with more efficient ones. After plant implementation of these modifications, plant sampling surveys were carried out to validate findings of the simulation-based study. Plant data showed very good agreement with the simulated data, confirming results of simulation. After the implementation of modifications in the plant, several upstream bottlenecks became visible. Despite these bottlenecks limiting full capacity, concentrator energy improvement of 7% was obtained. Further improvements in energy efficiency are expected in the near future. The success of this project demonstrated the feasibility of a simulation-based approach. Currently, the Center provides simulation-based service to all the iron ore mining companies operating in northern Minnesota, and future proposals are pending with non-taconite mineral processing applications.
Enhanced Conformational Sampling of N-Glycans in Solution with Replica State Exchange Metadynamics.
Galvelis, Raimondas; Re, Suyong; Sugita, Yuji
2017-05-09
Molecular dynamics (MD) simulation of an N-glycan in solution is challenging because of the high energy barriers of the glycosidic linkages, functional-group rotational barriers, and numerous intra- and intermolecular hydrogen bonds. In this study, we apply different enhanced conformational sampling approaches, namely, metadynamics (MTD), replica-exchange MD (REMD), and the recently proposed replica state exchange MTD (RSE-MTD), to an N-glycan in solution and compare the conformational sampling efficiencies of the approaches. MTD helps to cross the high-energy barrier along the ω angle by utilizing a bias potential, but it cannot enhance sampling of the other degrees of freedom. REMD ensures moderate-energy barrier crossings by exchanging temperatures between replicas, while it hardly crosses the barriers along ω. In contrast, RSE-MTD succeeds in crossing the high-energy barrier along ω as well as in enhancing sampling of the other degrees of freedom. We tested two RSE-MTD schemes: in one scheme, 64 replicas were simulated with the bias potential along ω at different temperatures, while in the other, simulations of four replicas were performed with bias potentials for different CVs at 300 K. In both schemes, one unbiased replica at 300 K was included to compute conformational properties of the glycan. The conformational sampling of the former scheme is better than that of the other enhanced sampling methods, while the latter shows reasonable performance without requiring large computational resources. The latter scheme is likely to be useful when simulating an N-glycan-attached protein.
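For readers unfamiliar with the exchange step shared by REMD and RSE-MTD, the sketch below shows the standard Metropolis temperature-swap criterion, min(1, exp(Δβ·ΔE)), on toy energies. In RSE-MTD the exchanged quantities also involve bias-potential terms, which this simplified sketch omits; the unit choice and the energy values are assumptions.

```python
import math, random

kB = 0.0019872  # kcal/(mol*K), an assumed unit choice

def try_exchange(E_i, T_i, E_j, T_j):
    """Return True if replicas at (E_i, T_i) and (E_j, T_j) should swap."""
    delta = (1.0 / (kB * T_i) - 1.0 / (kB * T_j)) * (E_i - E_j)
    return delta >= 0 or random.random() < math.exp(delta)

# Usage: attempt swaps along a small temperature ladder.
temps = [300.0, 330.0, 363.0, 400.0]
energies = [-120.0, -115.0, -112.0, -104.0]   # toy potential energies
for k in range(len(temps) - 1):
    if try_exchange(energies[k], temps[k], energies[k + 1], temps[k + 1]):
        energies[k], energies[k + 1] = energies[k + 1], energies[k]
```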
Benchmark Lisp And Ada Programs
NASA Technical Reports Server (NTRS)
Davis, Gloria; Galant, David; Lim, Raymond; Stutz, John; Gibson, J.; Raghavan, B.; Cheesema, P.; Taylor, W.
1992-01-01
Suite of nonparallel benchmark programs, ELAPSE, designed for three tests: comparing the efficiency of computer processing in Lisp vs. Ada; comparing the efficiencies of several computers processing Lisp; or comparing several computers processing Ada. Tests the efficiency with which a computer executes routines in each language. Available for any computer equipped with a validated Ada compiler and/or Common Lisp system.
DAM package version 7807: Software fixes and enhancements
NASA Technical Reports Server (NTRS)
Schlosser, E.
1979-01-01
The Detection and Mapping (DAM) package is an integrated set of manual procedures, computer programs, and graphic devices designed for efficient production of precisely registered, formatted, and interpreted maps from digital LANDSAT multispectral scanner data. This report documents changes to the DAM package in support of its use by the Corps of Engineers for inventorying impounded surface water. Although these changes are presented in terms of their application to detecting and mapping surface water, they are equally relevant to other land surface materials.
NASA Technical Reports Server (NTRS)
Davis, G. J.
1994-01-01
One area of research of the Information Sciences Division at NASA Ames Research Center is devoted to the analysis and enhancement of processors and advanced computer architectures, specifically in support of automation and robotic systems. To compare systems' abilities to efficiently process Lisp and Ada, scientists at Ames Research Center have developed a suite of non-parallel benchmarks called ELAPSE. The benchmark suite was designed to test a single computer's efficiency as well as to compare alternate machines running Lisp and/or Ada. ELAPSE tests the efficiency with which a machine can execute the various routines in each environment. The sample routines are based on numeric and symbolic manipulations and include two-dimensional fast Fourier transformations, Cholesky decomposition and substitution, Gaussian elimination, high-level data processing, and symbol-list references. Also included is a routine based on a Bayesian classification program sorting data into optimized groups. The ELAPSE benchmarks are available for any computer with a validated Ada compiler and/or Common Lisp system. Of the 18 routines that comprise ELAPSE, 14 were developed or translated at Ames and are provided within this package; the others are readily available in the literature. The benchmark that requires the most memory is CHOLESKY.ADA. Under VAX/VMS, CHOLESKY.ADA requires 760K of main memory. ELAPSE is available on either two 5.25 inch 360K MS-DOS format diskettes (standard distribution) or a 9-track 1600 BPI ASCII CARD IMAGE format magnetic tape. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. The ELAPSE benchmarks were written in 1990. VAX and VMS are trademarks of Digital Equipment Corporation. MS-DOS is a registered trademark of Microsoft Corporation.
Sevink, G J A; Schmid, F; Kawakatsu, T; Milano, G
2017-02-22
We have extended an existing hybrid MD-SCF simulation technique that employs a coarsening step to enhance the computational efficiency of evaluating non-bonded particle interactions. This technique is conceptually equivalent to the single chain in mean-field (SCMF) method in polymer physics, in the sense that non-bonded interactions are derived from the non-ideal chemical potential in self-consistent field (SCF) theory, after a particle-to-field projection. In contrast to SCMF, however, MD-SCF evolves particle coordinates by the usual Newton's equation of motion. Since collisions are seriously affected by the softening of non-bonded interactions that originates from their evaluation at the coarser continuum level, we have devised a way to reinsert the effect of collisions on the structural evolution. Merging MD-SCF with multi-particle collision dynamics (MPCD), we mimic particle collisions at the level of computational cells and at the same time properly account for the momentum transfer that is important for a realistic system evolution. The resulting hybrid MD-SCF/MPCD method was validated for a particular coarse-grained model of phospholipids in aqueous solution, against reference full-particle simulations and the original MD-SCF model. We additionally implemented and tested an alternative and more isotropic finite difference gradient. Our results show that efficiency is improved by merging MD-SCF with MPCD, as properly accounting for hydrodynamic interactions considerably speeds up the phase separation dynamics, with negligible additional computational costs compared to efficient MD-SCF. This new method enables realistic simulations of large-scale systems that are needed to investigate the applications of self-assembled structures of lipids in nanotechnologies.
Efficient Numeric and Geometric Computations using Heterogeneous Shared Memory Architectures
2017-10-04
Report: Efficient Numeric and Geometric Computations using Heterogeneous Shared Memory Architectures (Chapel Hill). The project developed efficient parallel algorithms for scientific and geometric computing by exploiting the power and performance efficiency of heterogeneous shared memory architectures.
NASA Astrophysics Data System (ADS)
Cho, Baek Hwan; Chang, Chuho; Lee, Jong-Ha; Ko, Eun Young; Seong, Yeong Kyeong; Woo, Kyoung-Gu
2013-02-01
The existence of microcalcifications (MCs) is an important marker of malignancy in breast cancer. In spite of its benefits in mass detection for dense breasts, ultrasonography is believed to be unreliable for detecting MCs. For computer-aided diagnosis systems, however, accurate detection of MCs has the possibility of improving performance in both Breast Imaging-Reporting and Data System (BI-RADS) lexicon description for calcifications and malignancy classification. We propose a new efficient and effective method for MC detection using image enhancement and threshold adjacency statistics (TAS). The main idea of TAS is to threshold an image and to count the number of white pixels with a given number of adjacent white pixels. Our contribution is to adopt TAS features and apply image enhancement to facilitate MC detection in ultrasound images. We employed fuzzy logic, a tophat filter, and a texture filter to enhance images for MCs. Using a total of 591 images, the classification accuracy of the proposed method in MC detection was 82.75%, comparable to that of Haralick texture features (81.38%). When combined, the performance was as high as 85.11%. In addition, our method also showed ability in mass classification when combined with existing features. In conclusion, the proposed method exploiting image enhancement and TAS features has the potential to handle MC detection in ultrasound images efficiently and to extend to real-time localization and visualization of MCs.
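The TAS idea described above reduces to a few lines: threshold, count the white 8-neighbors of each white pixel, and histogram the counts. The sketch below is a minimal illustration; the threshold value and the enhancement steps (fuzzy logic, tophat, and texture filters) are assumptions left out here.

```python
import numpy as np
from scipy.ndimage import convolve

def tas_features(img, thresh):
    """9-bin TAS vector: fraction of white pixels with 0..8 white neighbors."""
    binary = (img > thresh).astype(np.uint8)
    kernel = np.ones((3, 3), dtype=np.uint8)
    kernel[1, 1] = 0                       # count the 8 neighbors only
    neighbors = convolve(binary, kernel, mode='constant')
    counts = np.bincount(neighbors[binary == 1], minlength=9)[:9]
    total = counts.sum()
    return counts / total if total else counts.astype(float)

# Usage on a toy "enhanced" ultrasound patch.
rng = np.random.default_rng(1)
patch = rng.random((64, 64))
print(tas_features(patch, thresh=0.8))     # 9-bin TAS feature vector
```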
Using 3D computer simulations to enhance ophthalmic training.
Glittenberg, C; Binder, S
2006-01-01
To develop more effective methods of demonstrating and teaching complex topics in ophthalmology with the use of computer-aided three-dimensional (3D) animation and interactive multimedia technologies. We created 3D animations and interactive computer programmes demonstrating the neuro-ophthalmological nature of the oculomotor system, including the anatomy, physiology and pathophysiology of the extra-ocular eye muscles and the oculomotor cranial nerves, as well as pupillary symptoms of neurological diseases. At the University of Vienna, we compared their teaching effectiveness to conventional teaching methods in a comparative study involving 100 medical students, a multiple-choice exam and a survey. The comparative study showed that our students achieved significantly better test results (80%) than the control group (63%) (diff. = 17 +/- 5%, p = 0.004). The survey showed a positive reaction to the software and a strong preference to have more subjects and techniques demonstrated in this fashion. Three-dimensional computer animation technology can significantly increase the quality and efficiency of the education and demonstration of complex topics in ophthalmology.
Speech Enhancement Using Gaussian Scale Mixture Models
Hao, Jiucang; Lee, Te-Won; Sejnowski, Terrence J.
2011-01-01
This paper presents a novel probabilistic approach to speech enhancement. Instead of a deterministic logarithmic relationship, we assume a probabilistic relationship between the frequency coefficients and the log-spectra. The speech model in the log-spectral domain is a Gaussian mixture model (GMM). The frequency coefficients obey a zero-mean Gaussian whose covariance equals the exponential of the log-spectra. This results in a Gaussian scale mixture model (GSMM) for the speech signal in the frequency domain, since the log-spectra can be regarded as scaling factors. The probabilistic relation between frequency coefficients and log-spectra allows both to be treated as random variables and estimated from the noisy signals. Expectation-maximization (EM) was used to train the GSMM, and Bayesian inference was used to compute the posterior signal distribution. Because exact inference in this full probabilistic model is computationally intractable, we developed two approaches to enhance efficiency: the Laplace method and a variational approximation. The proposed methods were applied to enhance speech corrupted by Gaussian noise and speech-shaped noise (SSN). For both approximations, signals reconstructed from the estimated frequency coefficients provided a higher signal-to-noise ratio (SNR), while those reconstructed from the estimated log-spectra produced a lower word recognition error rate because the log-spectra better fit the inputs to the recognizer. Our algorithms effectively reduced the SSN, which algorithms based on spectral analysis were not able to suppress. PMID:21359139
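In symbols, the generative model described above can be restated compactly; the notation (mixture weights π_k, means μ_k, covariances Σ_k) is an assumed shorthand rather than the paper's exact formulation.

```latex
% Log-spectra s follow a GMM; frequency coefficients x are zero-mean
% Gaussian with variance e^{s}, so x is a Gaussian scale mixture.
p(s) = \sum_{k} \pi_k \,\mathcal{N}\!\left(s;\,\mu_k,\Sigma_k\right),
\qquad
p(x \mid s) = \mathcal{N}\!\left(x;\,0,\operatorname{diag}(e^{s})\right),
\qquad
p(x) = \sum_{k} \pi_k \int
  \mathcal{N}\!\left(x;\,0,\operatorname{diag}(e^{s})\right)
  \mathcal{N}\!\left(s;\,\mu_k,\Sigma_k\right)\, ds .
```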
Feasibility of Homomorphic Encryption for Sharing I2B2 Aggregate-Level Data in the Cloud
Raisaro, Jean Louis; Klann, Jeffrey G; Wagholikar, Kavishwar B; Estiri, Hossein; Hubaux, Jean-Pierre; Murphy, Shawn N
2018-01-01
The biomedical community is lagging in the adoption of cloud computing for the management of medical data. The primary obstacles are concerns about privacy and security. In this paper, we explore the feasibility of using advanced privacy-enhancing technologies in order to enable the sharing of sensitive clinical data in a public cloud. Our goal is to facilitate sharing of clinical data in the cloud by minimizing the risk of unintended leakage of sensitive clinical information. In particular, we focus on homomorphic encryption, a specific type of encryption that offers the ability to run computation on the data while the data remains encrypted. This paper demonstrates that homomorphic encryption can be used efficiently to compute aggregating queries on the ciphertexts, along with providing end-to-end confidentiality of aggregate-level data from the i2b2 data model. PMID:29888067
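To make the aggregation idea concrete, here is a minimal sketch of summing encrypted site counts with an additively homomorphic scheme. The Paillier `phe` library stands in for the scheme used in the paper (an assumption), and the i2b2 query specifics are omitted.

```python
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Each site encrypts its local patient count before upload.
site_counts = [17, 42, 8]
ciphertexts = [public_key.encrypt(c) for c in site_counts]

# The cloud sums ciphertexts without ever decrypting them.
encrypted_total = ciphertexts[0]
for ct in ciphertexts[1:]:
    encrypted_total = encrypted_total + ct

# Only the key holder recovers the aggregate.
assert private_key.decrypt(encrypted_total) == sum(site_counts)
```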
A new parallel-vector finite element analysis software on distributed-memory computers
NASA Technical Reports Server (NTRS)
Qin, Jiangning; Nguyen, Duc T.
1993-01-01
A new parallel-vector finite element analysis software package, MPFEA (Massively Parallel-vector Finite Element Analysis), is developed for large-scale structural analysis on massively parallel computers with distributed memory. MPFEA is designed for parallel generation and assembly of the global finite element stiffness matrices as well as parallel solution of the simultaneous linear equations, since these are often the major time-consuming parts of a finite element analysis. A block-skyline storage scheme along with vector-unrolling techniques is used to enhance the vector performance. Communications among processors are carried out concurrently with arithmetic operations to reduce the total execution time. Numerical results on the Intel iPSC/860 computers (such as the Intel Gamma with 128 processors and the Intel Touchstone Delta with 512 processors) are presented, including an aircraft structure and some very large truss structures, to demonstrate the efficiency and accuracy of MPFEA.
Khavrutskii, Ilja V; Wallqvist, Anders
2010-11-09
This paper introduces an efficient single-topology variant of thermodynamic integration (TI) for computing relative transformation free energies in a series of molecules with respect to a single reference state. The presented TI variant, which we refer to as Single-Reference TI (SR-TI), combines well-established molecular simulation methodologies into a practical computational tool. Augmented with Hamiltonian Replica Exchange (HREX), the SR-TI variant can deliver enhanced sampling in select degrees of freedom. The utility of the SR-TI variant is demonstrated in calculations of relative solvation free energies for a series of benzene derivatives with increasing complexity. Notably, the SR-TI variant with the HREX option provides converged results in the challenging case of an amide molecule with a high (13-15 kcal/mol) barrier for internal cis/trans interconversion, using simulation times of only 1 to 4 ns.
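The TI identity underlying this approach is standard and worth stating; the linear lambda-coupling to the reference Hamiltonian shown here is an illustrative assumption, not necessarily the paper's exact coupling scheme.

```latex
% Free energy difference as an integral over the coupling parameter:
\Delta F_{0 \to 1}
  = \int_{0}^{1} \left\langle \frac{\partial H(\lambda)}{\partial \lambda}
    \right\rangle_{\lambda} d\lambda ,
\qquad
H(\lambda) = (1 - \lambda)\, H_{\mathrm{ref}} + \lambda\, H_{\mathrm{target}} .
```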
NASA Astrophysics Data System (ADS)
Liu, Xiao-Ming; Jiang, Jun; Hong, Ling; Tang, Dafeng
In this paper, a new method of Generalized Cell Mapping with Sampling-Adaptive Interpolation (GCMSAI) is presented in order to enhance the efficiency of computing the one-step probability transition matrix of the generalized cell mapping (GCM) method. Integrations over one mapping step are replaced by sampling-adaptive interpolations of third order. An explicit formula for the interpolation error is derived, enabling a sampling-adaptive control that switches on integrations to preserve the accuracy of GCMSAI computations. By applying the proposed method to a two-dimensional forced damped pendulum system, global bifurcations are investigated, with observations of boundary metamorphoses, including full-to-partial and partial-to-partial transitions as well as the birth of a fully Wada boundary. Moreover, GCMSAI requires a computational time of one-thirtieth to one-fiftieth of that of the previous GCM.
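For orientation, the sketch below builds a GCM one-step transition matrix by brute force: sample points in each cell, map them forward, and count the destination cells. The paper's contribution is replacing the per-sample integrations with sampling-adaptive third-order interpolation, which this sketch does not attempt; the 1-D logistic map is a toy stand-in for the integrated pendulum flow.

```python
import numpy as np

def transition_matrix(f, lo, hi, n_cells, samples_per_cell=100, seed=0):
    """Row-stochastic one-step transition matrix over a uniform cell partition."""
    rng = np.random.default_rng(seed)
    edges = np.linspace(lo, hi, n_cells + 1)
    P = np.zeros((n_cells, n_cells))
    for i in range(n_cells):
        pts = rng.uniform(edges[i], edges[i + 1], samples_per_cell)
        images = np.clip(f(pts), lo, hi - 1e-12)       # keep inside domain
        j = np.searchsorted(edges, images, side='right') - 1
        P[i] = np.bincount(j, minlength=n_cells) / samples_per_cell
    return P

# Usage with a 1-D logistic map as the point mapping.
P = transition_matrix(lambda x: 3.8 * x * (1.0 - x), 0.0, 1.0, n_cells=50)
assert np.allclose(P.sum(axis=1), 1.0)
```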
Increased Diels-Alderase activity through backbone remodeling guided by Foldit players.
Eiben, Christopher B; Siegel, Justin B; Bale, Jacob B; Cooper, Seth; Khatib, Firas; Shen, Betty W; Players, Foldit; Stoddard, Barry L; Popovic, Zoran; Baker, David
2012-01-22
Computational enzyme design holds promise for the production of renewable fuels, drugs and chemicals. De novo enzyme design has generated catalysts for several reactions, but with lower catalytic efficiencies than naturally occurring enzymes. Here we report the use of game-driven crowdsourcing to enhance the activity of a computationally designed enzyme through the functional remodeling of its structure. Players of the online game Foldit were challenged to remodel the backbone of a computationally designed bimolecular Diels-Alderase to enable additional interactions with substrates. Several iterations of design and characterization generated a 24-residue helix-turn-helix motif, including a 13-residue insertion, that increased enzyme activity >18-fold. X-ray crystallography showed that the large insertion adopts a helix-turn-helix structure positioned as in the Foldit model. These results demonstrate that human creativity can extend beyond the macroscopic challenges encountered in everyday life to molecular-scale design problems.
Fast data reconstructed method of Fourier transform imaging spectrometer based on multi-core CPU
NASA Astrophysics Data System (ADS)
Yu, Chunchao; Du, Debiao; Xia, Zongze; Song, Li; Zheng, Weijian; Yan, Min; Lei, Zhenggang
2017-10-01
An imaging spectrometer acquires a two-dimensional spatial image and a one-dimensional spectrum at the same time, which makes it highly useful in color and spectral measurements, true-color image synthesis, military reconnaissance, and so on. In order to realize fast reconstruction of Fourier transform imaging spectrometer data, we designed an optimized reconstruction algorithm using OpenMP parallel computing technology, which was further applied to the optimization process for the HyperSpectral Imager of the Chinese 'HJ-1' satellite. The results show that the method based on multi-core parallel computing technology can competently manage multi-core CPU hardware resources and significantly enhance the efficiency of spectrum reconstruction processing. If the technology is applied to a workstation with more cores for parallel computing, it will be possible to complete real-time data processing for a Fourier transform imaging spectrometer with a single computer.
Raster Scan Computer Image Generation (CIG) System Based On Refresh Memory
NASA Astrophysics Data System (ADS)
Dichter, W.; Doris, K.; Conkling, C.
1982-06-01
A full color, Computer Image Generation (CIG) raster visual system has been developed which provides a high level of training sophistication by utilizing advanced semiconductor technology and innovative hardware and firmware techniques. Double buffered refresh memory and efficient algorithms eliminate the problem of conventional raster line ordering by allowing the generated image to be stored in a random fashion. Modular design techniques and simplified architecture provide significant advantages in reduced system cost, standardization of parts, and high reliability. The major system components are a general purpose computer to perform interfacing and data base functions; a geometric processor to define the instantaneous scene image; a display generator to convert the image to a video signal; an illumination control unit which provides final image processing; and a CRT monitor for display of the completed image. Additional optional enhancements include texture generators, increased edge and occultation capability, curved surface shading, and data base extensions.
Statistical Methodologies to Integrate Experimental and Computational Research
NASA Technical Reports Server (NTRS)
Parker, P. A.; Johnson, R. T.; Montgomery, D. C.
2008-01-01
Development of advanced algorithms for simulating engine flow paths requires the integration of fundamental experiments with the validation of enhanced mathematical models. In this paper, we provide an overview of statistical methods to strategically and efficiently conduct experiments and computational model refinement. Moreover, the integration of experimental and computational research efforts is emphasized. With a statistical engineering perspective, scientific and engineering expertise is combined with statistical sciences to gain deeper insights into experimental phenomena and code development performance, supporting the overall research objectives. The particular statistical methods discussed are design of experiments, response surface methodology, and uncertainty analysis and planning. Their application is illustrated with a coaxial free jet experiment and a turbulence model refinement investigation. Our goal is to provide an overview, focusing on concepts rather than practice, to demonstrate the benefits of using statistical methods in research and development, thereby encouraging their broader and more systematic application.
NASA Astrophysics Data System (ADS)
Sizyuk, V.; Sizyuk, T.; Hassanein, A.; Johnson, K.
2018-01-01
We have developed comprehensive integrated models for detailed simulation of laser-produced plasma (LPP) and laser/target interaction, with potential recycling of the escaping laser and out-of-band plasma radiation. Recycling, i.e., returning the escaping laser and plasma radiation to the extreme ultraviolet (EUV) generation region using retroreflective mirrors, has the potential of increasing the EUV conversion efficiency (CE) by up to 60% according to our simulations. This would result in significantly reduced power consumption and/or increased EUV output. Based on our recently developed models, our High Energy Interaction with General Heterogeneous Target Systems (HEIGHTS) computer simulation package was upgraded for LPP devices to include various radiation recycling regimes and to estimate the potential CE enhancement. The upgraded HEIGHTS was used to study recycling of both laser and plasma-generated radiation and to predict possible gains in conversion efficiency compared to no-recycling LPP devices when using droplets of tin target. We considered three versions of the LPP system including a single CO2 laser, a single Nd:YAG laser, and a dual-pulse device combining both laser systems. The gains in generating EUV energy were predicted and compared for these systems. Overall, laser and radiation energy recycling showed the potential for significant enhancement in source efficiency of up to 60% for the dual-pulse system. Significantly higher CE gains might be possible with optimization of the pre-pulse and main pulse parameters and source size.
Algorithmic Mechanism Design of Evolutionary Computation.
Pei, Yan
2015-01-01
We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals or several groups of individuals can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolution behaviour correctly in order to achieve the desired and preset objective(s). As a case study, we propose a formal framework for parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results demonstrate the efficiency of the framework. This principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to treat evolutionary computation design as an algorithmic mechanism design problem and establish its fundamental aspects from this perspective. This paper is a first step towards that objective, implementing a strategy equilibrium solution (such as the Nash equilibrium) in an evolutionary computation algorithm.
NASA Technical Reports Server (NTRS)
Hou, Gene
1998-01-01
Sensitivity analysis is a technique for determining derivatives of system responses with respect to design parameters. Among the many methods available for sensitivity analysis, automatic differentiation has been proven through many applications in fluid dynamics and structural mechanics to be an accurate and easy method for obtaining derivatives. Nevertheless, the method can be computationally expensive and can require a large memory space. This project applies an automatic differentiation tool, ADIFOR, to a p-version finite element code to obtain first- and second-order thermal derivatives. The focus of the study is on the implementation process and the performance of the ADIFOR-enhanced codes for sensitivity analysis in terms of memory requirement, computational efficiency, and accuracy.
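To illustrate the principle behind tools like ADIFOR, here is a minimal forward-mode automatic differentiation sketch using dual numbers, which carry a (value, derivative) pair through arithmetic and yield exact first derivatives. This toy class is purely illustrative and is unrelated to ADIFOR's actual Fortran source transformation.

```python
import math

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.val * o.dot + self.dot * o.val)  # product rule
    __rmul__ = __mul__

def sin(x):
    # chain rule: d sin(u) = cos(u) du
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

# d/dx [x*sin(x) + 2x] at x = 1.3, seeded with dx/dx = 1.
x = Dual(1.3, 1.0)
y = x * sin(x) + 2 * x
print(y.val, y.dot)   # value and exact derivative sin(x) + x*cos(x) + 2
```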
Ghosh, Arindam; Lee, Jae-Won; Cho, Ho-Shin
2013-01-01
Due to their efficiency, reliability, and better channel and resource utilization, cooperative transmission technologies have been attractive options in underwater as well as terrestrial sensor networks. Their performance can be further improved if merged with forward error correction (FEC) techniques. In this paper, we propose and analyze a retransmission protocol named Cooperative-Hybrid Automatic Repeat reQuest (C-HARQ) for underwater acoustic sensor networks, which exploits both the reliability of cooperative ARQ (CARQ) and the efficiency of incremental redundancy-hybrid ARQ (IR-HARQ) using rate-compatible punctured convolutional (RCPC) codes. Extensive Monte Carlo simulations are performed to investigate the performance of the protocol, in terms of both throughput and energy efficiency. The results clearly reveal the enhancement in performance achieved by the C-HARQ protocol, which outperforms both CARQ and conventional stop-and-wait ARQ (S&W ARQ). Further, using computer simulations, optimum values of various network parameters are estimated so as to extract the best performance out of the C-HARQ protocol. PMID:24217359
Automating tasks in protein structure determination with the clipper python module
McNicholas, Stuart; Croll, Tristan; Burnley, Tom; Palmer, Colin M.; Hoh, Soon Wen; Jenkins, Huw T.; Dodson, Eleanor
2017-01-01
Scripting programming languages provide the fastest means of prototyping complex functionality. Those with a syntax and grammar resembling human language also greatly enhance the maintainability of the produced source code. Furthermore, the combination of a powerful, machine-independent scripting language with binary libraries tailored for each computer architecture allows programs to break free from the tight boundaries of efficiency traditionally associated with scripts. In the present work, we describe how an efficient C++ crystallographic library such as Clipper can be wrapped, adapted and generalized for use in both crystallographic and electron cryo-microscopy applications, scripted with the Python language. We shall also place an emphasis on best practices in automation, illustrating how this can be achieved with this new Python module. PMID:28901669
Exploring high dimensional free energy landscapes: Temperature accelerated sliced sampling
NASA Astrophysics Data System (ADS)
Awasthi, Shalini; Nair, Nisanth N.
2017-03-01
Biased sampling of collective variables is widely used to accelerate rare events in molecular simulations and to explore free energy surfaces. However, computational efficiency of these methods decreases with increasing number of collective variables, which severely limits the predictive power of the enhanced sampling approaches. Here we propose a method called Temperature Accelerated Sliced Sampling (TASS) that combines temperature accelerated molecular dynamics with umbrella sampling and metadynamics to sample the collective variable space in an efficient manner. The presented method can sample a large number of collective variables and is advantageous for controlled exploration of broad and unbound free energy basins. TASS is also shown to achieve quick free energy convergence and is practically usable with ab initio molecular dynamics techniques.
Building a generalized distributed system model
NASA Technical Reports Server (NTRS)
Mukkamala, R.
1992-01-01
The key elements in the second year (1991-92) of our project are: (1) implementation of the distributed system prototype; (2) successful passing of the candidacy examination and a PhD proposal acceptance by the funded student; (3) design of storage efficient schemes for replicated distributed systems; and (4) modeling of gracefully degrading reliable computing systems. In the third year of the project (1992-93), we propose to: (1) complete the testing of the prototype; (2) enhance the functionality of the modules by enabling the experimentation with more complex protocols; (3) use the prototype to verify the theoretically predicted performance of locking protocols, etc.; and (4) work on issues related to real-time distributed systems. This should result in efficient protocols for these systems.
Analysis, calculation and utilization of the k-balance attribute in interdependent networks
NASA Astrophysics Data System (ADS)
Liu, Zheng; Li, Qing; Wang, Dan; Xu, Mingwei
2018-05-01
Interdependent networks, where two networks depend on each other, are becoming more and more significant in modern systems. From previous work, it can be concluded that interdependent networks are more vulnerable than a single network. The robustness of interdependent networks therefore deserves special attention. In this paper, we propose a metric of robustness from a new perspective: balance. First, we define the balance-coefficient of the interdependent system. Based on precise analysis and derivation, we prove several theorems and provide an efficient algorithm to compute the balance-coefficient. Finally, we propose an optimal solution that reduces the balance-coefficient to enhance the robustness of the given system. Comprehensive experiments confirm the efficiency of our algorithms.
Three-dimensional self-adaptive grid method for complex flows
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Deiwert, George S.
1988-01-01
A self-adaptive grid procedure for efficient computation of three-dimensional complex flow fields is described. The method is based on variational principles to minimize the energy of a spring system analogy which redistributes the grid points. Grid control parameters are determined by specifying maximum and minimum grid spacing. Multidirectional adaptation is achieved by splitting the procedure into a sequence of successive applications of a unidirectional adaptation. One-sided, two-directional constraints for orthogonality and smoothness are used to enhance the efficiency of the method. Feasibility of the scheme is demonstrated by application to a multinozzle, afterbody, plume flow field. Application of the algorithm for initial grid generation is illustrated by constructing a three-dimensional grid about a bump-like geometry.
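The spring analogy reduces, in one dimension, to an equidistribution relaxation: each interval acts as a spring whose stiffness grows with a weight function (e.g., the solution gradient), so equilibrium clusters points where the weight is large. The sketch below is a 1-D illustration under assumed weight and relaxation choices, not the paper's 3-D variational method.

```python
import numpy as np

def adapt_grid(x, weight, sweeps=200, relax=0.5):
    """Relax interior nodes toward spring equilibrium:
       k[i-1]*(x[i-1]-x[i]) + k[i]*(x[i+1]-x[i]) = 0."""
    for _ in range(sweeps):
        w = weight(0.5 * (x[:-1] + x[1:]))          # stiffness per interval
        for i in range(1, len(x) - 1):              # Gauss-Seidel relaxation
            target = (w[i - 1] * x[i - 1] + w[i] * x[i + 1]) / (w[i - 1] + w[i])
            x[i] += relax * (target - x[i])
    return x

# Usage: cluster 21 points around a steep feature near x = 0.6.
x = np.linspace(0.0, 1.0, 21)
x = adapt_grid(x, lambda s: 1.0 + 50.0 * np.exp(-200.0 * (s - 0.6) ** 2))
```

Since equal spring tension implies spacing inversely proportional to stiffness, intervals with large weight end up short, which is exactly the clustering behavior desired near steep flow features.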
Radial Inflow Turboexpander Redesign
DOE Office of Scientific and Technical Information (OSTI.GOV)
William G. Price
2001-09-24
Steamboat Envirosystems, LLC (SELC) was awarded a grant in accordance with the DOE Enhanced Geothermal Systems Project Development. Atlas-Copco Rotoflow (ACR), a radial expansion turbine manufacturer, was responsible for the manufacturing of the turbine and the creation of the new computer program. SB Geo, Inc. (SBG), the facility operator, monitored and assisted ACR's activities as well as provided installation and startup assistance. The primary scope of the project is the redesign of an axial-flow turbine as a radial-inflow turboexpander to provide increased efficiency and reliability at an existing facility. In addition to the increased efficiency and reliability, the redesign includes an improved reduction gear design, an improved shaft seal design, an upgraded control system, and greater flexibility of application.
Content Based Image Retrieval based on Wavelet Transform coefficients distribution
Lamard, Mathieu; Cazuguel, Guy; Quellec, Gwénolé; Bekri, Lynda; Roux, Christian; Cochener, Béatrice
2007-01-01
In this paper we propose a content based image retrieval method for diagnosis aid in medical fields. We characterize images without extracting significant features by using distribution of coefficients obtained by building signatures from the distribution of wavelet transform. The research is carried out by computing signature distances between the query and database images. Several signatures are proposed; they use a model of wavelet coefficient distribution. To enhance results, a weighted distance between signatures is used and an adapted wavelet base is proposed. Retrieval efficiency is given for different databases including a diabetic retinopathy, a mammography and a face database. Results are promising: the retrieval efficiency is higher than 95% for some cases using an optimization process. PMID:18003013
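The signature idea described above can be sketched compactly: characterize an image by the distributions of its wavelet subband coefficients and compare signatures with a weighted L1 distance. The wavelet choice, histogram bins, and weights below are assumptions; the paper fits a coefficient-distribution model rather than using raw histograms.

```python
import numpy as np
import pywt

def signature(img, wavelet='db2', levels=2, bins=16):
    """Concatenated histograms of detail subband coefficients."""
    coeffs = pywt.wavedec2(img, wavelet, level=levels)
    sig = []
    for detail in coeffs[1:]:                  # (cH, cV, cD) per level
        for band in detail:
            h, _ = np.histogram(band, bins=bins, range=(-1.0, 1.0), density=True)
            sig.append(h)
    return np.concatenate(sig)

def distance(sig_a, sig_b, weights=None):
    """Weighted L1 distance between two signatures."""
    w = np.ones_like(sig_a) if weights is None else weights
    return float(np.sum(w * np.abs(sig_a - sig_b)))

# Usage: rank database images by distance to a query.
rng = np.random.default_rng(2)
query = rng.random((64, 64))
database = [rng.random((64, 64)) for _ in range(5)]
ranked = sorted(database, key=lambda im: distance(signature(query), signature(im)))
```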
Computationally guided discovery of thermoelectric materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorai, Prashun; Stevanović, Vladan; Toberer, Eric S.
The potential for advances in thermoelectric materials, and thus solid-state refrigeration and power generation, is immense. Progress so far has been limited by both the breadth and diversity of the chemical space and the serial nature of experimental work. In this Review, we discuss how recent computational advances are revolutionizing our ability to predict electron and phonon transport and scattering, as well as materials dopability, and we examine efficient approaches to calculating critical transport properties across large chemical spaces. When coupled with experimental feedback, these high-throughput approaches can stimulate the discovery of new classes of thermoelectric materials. Within smaller materials subsets, computations can guide the optimal chemical and structural tailoring to enhance materials performance and provide insight into the underlying transport physics. Beyond perfect materials, computations can be used for the rational design of structural and chemical modifications (such as defects, interfaces, dopants and alloys) to provide additional control on transport properties to optimize performance. Through computational predictions for both materials searches and design, a new paradigm in thermoelectric materials discovery is emerging.
NASA Astrophysics Data System (ADS)
Ethier, Stephane; Lin, Zhihong
2001-10-01
Earlier this year, the National Energy Research Scientific Computing Center (NERSC) took delivery of the second most powerful computer in the world. With its 2,528 processors, each running at a peak performance of 1.5 GFlops, this IBM SP machine has a theoretical peak of almost 3.8 TFlops. Efficiently harnessing such computing power in one single code is not an easy task and requires a good knowledge of the computer's architecture. Here we present the steps that we followed to improve our gyrokinetic micro-turbulence code GTC in order to take advantage of the new 16-way shared-memory nodes of the NERSC IBM SP. Performance results are shown, as well as details about the improved mixed-mode MPI-OpenMP model that we use. The enhancements to the code allowed us to tackle much bigger problem sizes, getting closer to our goal of simulating an ITER-size tokamak with both kinetic ions and electrons. (This work is supported by DOE Contract No. DE-AC02-76CH03073 (PPPL), and in part by the DOE Fusion SciDAC Project.)
Computational and Physical Analysis of Catalytic Compounds
NASA Astrophysics Data System (ADS)
Wu, Richard; Sohn, Jung Jae; Kyung, Richard
2015-03-01
Nanoparticles exhibit unique physical and chemical properties depending on their geometrical properties. For this reason, synthesis of nanoparticles with controlled shape and size is important for exploiting their unique properties. Catalyst supports are usually made of high-surface-area porous oxides or carbon nanomaterials. These support materials stabilize metal catalysts against sintering at high reaction temperatures. Many studies have demonstrated large enhancements of catalytic behavior due to the role of the oxide-metal interface. In this paper, the catalyzing ability of supported nano metal oxides, such as silicon oxide and titanium oxide compounds, has been analyzed using computational chemistry methods. Computational programs such as Gamess and Chemcraft have been used in an effort to compute the efficiencies of the catalytic compounds and the bonding energy changes during the optimization convergence. The results illustrate how the metal oxides stabilize and the steps this takes. The graph of energy (kcal/mol) versus computation step (N) shows that the energy of the titania converges faster, at the 7th iteration, whereas the silica converges at the 9th iteration.
Some practical universal noiseless coding techniques, part 2
NASA Technical Reports Server (NTRS)
Rice, R. F.; Lee, J. J.
1983-01-01
This report is an extension of earlier work (Part 1) which provided practical adaptive techniques for the efficient noiseless coding of a broad class of data sources characterized by only partially known and varying statistics (JPL Publication 79-22). The results here, while still claiming such general applicability, focus primarily on the noiseless coding of image data. A fairly complete and self-contained treatment is provided. Particular emphasis is given to the requirements of the forthcoming Voyager II encounters of Uranus and Neptune. Performance evaluations are supported both graphically and pictorially. Expanded definitions of the algorithms in Part 1 yield a computationally improved set of options for applications requiring efficient performance at entropies above 4 bits/sample. These expanded definitions include, as an important subset, a somewhat less efficient but extremely simple 'FAST' compressor which will be used at the Voyager Uranus encounter. Additionally, options are provided which enhance performance when atypical data spikes may be present.
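For context, the codes in this family are built on the Golomb-Rice construction: with parameter k, the quotient n >> k is written in unary and the remainder in k bits. The sketch below shows only this core; the adaptive part of the actual algorithms (choosing the best option or k per block) is omitted, and the bit-string representation is a convenience assumption.

```python
def rice_encode(n, k):
    """Encode one nonnegative integer as a bit string."""
    q, r = n >> k, n & ((1 << k) - 1)
    return '1' * q + '0' + format(r, f'0{k}b') if k else '1' * q + '0'

def rice_decode(bits, k):
    """Decode one codeword; returns (value, bits_consumed)."""
    q = bits.index('0')                       # unary quotient
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r, q + 1 + k

# Usage: small residuals (typical after prediction) code compactly.
for n in [0, 1, 5, 12]:
    code = rice_encode(n, k=2)
    assert rice_decode(code, k=2)[0] == n
```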
Vajuvalli, Nithin N; Nayak, Krupa N; Geethanath, Sairam
2014-01-01
Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) is widely used in the diagnosis of cancer and is also a promising tool for monitoring tumor response to treatment. The Tofts model has become a standard for the analysis of DCE-MRI. The process of curve fitting employed with the Tofts equation to obtain the pharmacokinetic (PK) parameters is time-consuming for high-resolution scans. The current work demonstrates a frequency-domain approach applied to the standard Tofts equation to speed up the curve fitting used to obtain the pharmacokinetic parameters. The results show that, using the frequency-domain approach, curve fitting is computationally more efficient than with the time-domain approach.
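The frequency-domain idea rests on the fact that the standard Tofts model is a convolution of the plasma input C_p with an exponential kernel, C_t(t) = Ktrans * (C_p ⊗ exp(-kep*t))(t), so it can be evaluated with FFTs inside the fitting loop. The sketch below uses a toy arterial input function and parameters as assumptions; the paper's exact fitting procedure may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0.0, 5.0, 256)          # minutes
dt = t[1] - t[0]
Cp = (t ** 2) * np.exp(-2.0 * t)        # toy arterial input function

def tofts_fft(t, Ktrans, kep):
    """Tissue curve via FFT convolution of Cp with exp(-kep*t)."""
    kernel = np.exp(-kep * t)
    n = 2 * len(t)                      # zero-pad to avoid circular wrap
    conv = np.fft.irfft(np.fft.rfft(Cp, n) * np.fft.rfft(kernel, n), n)
    return Ktrans * conv[:len(t)] * dt

# Fit noisy toy tissue data and recover the PK parameters.
truth = tofts_fft(t, 0.25, 0.8)
noisy = truth + 0.001 * np.random.default_rng(3).standard_normal(len(t))
(Ktrans, kep), _ = curve_fit(tofts_fft, t, noisy, p0=[0.1, 0.5])
```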
Nuclear spin cooling by electric dipole spin resonance and coherent population trapping
NASA Astrophysics Data System (ADS)
Li, Ai-Xian; Duan, Su-Qing; Zhang, Wei
2017-09-01
Nuclear spin fluctuation suppression is a key issue in preserving electron coherence for quantum information and computation. We propose an efficient way of cooling nuclear spins in semiconductor quantum dots (QDs) by coherent population trapping (CPT) and electric dipole spin resonance (EDSR) induced by optical fields and ac electric fields. The EDSR can enhance the spin flip-flop rate and may bring about bistability under certain conditions. By tuning the optical fields, we can avoid the EDSR-induced bistability and obtain a highly polarized nuclear spin state, which results in a long electron coherence time. With the help of CPT and EDSR, an enhancement of 1500 times in the electron coherence time can be obtained after a 500 ns preparation time.
Contrast enhancement in EIT imaging of the brain.
Nissinen, A; Kaipio, J P; Vauhkonen, M; Kolehmainen, V
2016-01-01
We consider electrical impedance tomography (EIT) imaging of the brain. The brain is surrounded by the skull, which is poorly conducting compared to the brain tissue. The skull layer causes a partial shielding effect which leads to weak sensitivity for imaging the brain tissue. In this paper we propose an approach, based on the Bayesian approximation error approach, to enhance the contrast in brain imaging. With this approach, both the (uninteresting) geometry and the conductivity of the skull are embedded in the approximation error statistics, which leads to a computationally efficient algorithm that is able to detect features such as internal haemorrhage with significantly increased sensitivity and specificity. We evaluate the approach with simulations and phantom data.
Multifunctional Gold Nanostars for Molecular Imaging and Cancer Therapy
NASA Astrophysics Data System (ADS)
Liu, Yang; Yuan, Hsiangkuo; Fales, Andrew; Register, Janna; Vo-Dinh, Tuan
2015-08-01
Plasmonics-active gold nanoparticles offer excellent potential in molecular imaging and cancer therapy. Among them, gold nanostars (AuNS) exhibit cross-platform flexibility as multimodal contrast agents for macroscopic X-ray computer tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), as well as nanoprobes for photoacoustic tomography (PAT), two-photon photoluminescence (TPL) and surface-enhanced Raman spectroscopy (SERS). Their surfactant-free surface enables versatile functionalization to enhance cancer targeting, and allow triggered drug release. AuNS can also be used as an efficient platform for drug carrying, photothermal therapy, and photodynamic therapy. This review paper presents the latest progress regarding AuNS as a promising nanoplatform for cancer nanotheranostics. Future research directions with AuNS for biomedical applications will also be discussed.
Polarization control of spontaneous emission for rapid quantum-state initialization
NASA Astrophysics Data System (ADS)
DiLoreto, C. S.; Rangan, C.
2017-04-01
We propose an efficient method to selectively enhance the spontaneous emission rate of a quantum system by changing the polarization of an incident control field, and exploiting the polarization dependence of the system's spontaneous emission rate. This differs from the usual Purcell enhancement of spontaneous emission rates as it can be selectively turned on and off. Using a three-level Λ system in a quantum dot placed in between two silver nanoparticles and a linearly polarized, monochromatic driving field, we present a protocol for rapid quantum state initialization, while maintaining long coherence times for control operations. This process increases the overall amount of time that a quantum system can be effectively utilized for quantum operations, and presents a key advance in quantum computing.
Computationally efficient finite-difference modal method for the solution of Maxwell's equations.
Semenikhin, Igor; Zanuccoli, Mauro
2013-12-01
In this work, a new implementation of the finite-difference (FD) modal method (FDMM) based on an iterative approach to calculate the eigenvalues and corresponding eigenfunctions of the Helmholtz equation is presented. Two relevant enhancements that significantly increase the speed and accuracy of the method are introduced. First of all, the solution of the complete eigenvalue problem is avoided in favor of finding only the meaningful part of eigenmodes by using iterative methods. Second, a multigrid algorithm and Richardson extrapolation are implemented. Simultaneous use of these techniques leads to an enhancement in terms of accuracy, which allows a simple method such as the FDMM with a typical three-point difference scheme to be significantly competitive with an analytical modal method.
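Richardson extrapolation, one of the two enhancements named above, combines a second-order result at spacings h and h/2 to cancel the leading error term. The derivative problem below is a simple stand-in for the FDMM eigenvalue computation, chosen only to make the error cancellation visible.

```python
import math

def second_order_estimate(f, x, h):
    return (f(x + h) - f(x - h)) / (2.0 * h)     # central difference, O(h^2)

def richardson(f, x, h):
    coarse = second_order_estimate(f, x, h)
    fine = second_order_estimate(f, x, h / 2.0)
    return (4.0 * fine - coarse) / 3.0           # cancels the h^2 term

x, h = 1.0, 0.1
exact = math.cos(x)
print(abs(second_order_estimate(math.sin, x, h) - exact))  # error ~ 9e-4
print(abs(richardson(math.sin, x, h) - exact))             # error ~ 3e-7
```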
The Effect of Computer Automation on Institutional Review Board (IRB) Office Efficiency
ERIC Educational Resources Information Center
Oder, Karl; Pittman, Stephanie
2015-01-01
Companies purchase computer systems to make their processes more efficient through automation. Some academic medical centers (AMC) have purchased computer systems for their institutional review boards (IRB) to increase efficiency and compliance with regulations. IRB computer systems are expensive to purchase, deploy, and maintain. An AMC should…
Photon-trapping microstructures enable high-speed high-efficiency silicon photodiodes
NASA Astrophysics Data System (ADS)
Gao, Yang; Cansizoglu, Hilal; Polat, Kazim G.; Ghandiparsi, Soroush; Kaya, Ahmet; Mamtaz, Hasina H.; Mayet, Ahmed S.; Wang, Yinan; Zhang, Xinzhi; Yamada, Toshishige; Devine, Ekaterina Ponizovskaya; Elrefaie, Aly F.; Wang, Shih-Yuan; Islam, M. Saif
2017-04-01
High-speed, high-efficiency photodetectors play an important role in optical communication links that are increasingly being used in data centres to handle higher volumes of data traffic and higher bandwidths, as big data and cloud computing continue to grow exponentially. Monolithic integration of optical components with signal-processing electronics on a single silicon chip is of paramount importance in the drive to reduce cost and improve performance. We report the first demonstration of micro- and nanoscale holes enabling light trapping in a silicon photodiode, which exhibits an ultrafast impulse response (full-width at half-maximum) of 30 ps and a high efficiency of more than 50%, for use in data-centre optical communications. The photodiode uses micro- and nanostructured holes to enhance, by an order of magnitude, the absorption efficiency of a thin intrinsic layer of less than 2 µm thickness and is designed for a data rate of 20 gigabits per second or higher at a wavelength of 850 nm. Further optimization can improve the efficiency to more than 70%.
Liu, Yan-Lin; Shih, Cheng-Ting; Chang, Yuan-Jen; Chang, Shu-Jun; Wu, Jay
2014-01-01
The rapid development of picture archiving and communication systems (PACSs) thoroughly changes the way of medical informatics communication and management. However, as the scale of a hospital's operations increases, the large amount of digital images transferred in the network inevitably decreases system efficiency. In this study, a server cluster consisting of two server nodes was constructed. Network load balancing (NLB), distributed file system (DFS), and structured query language (SQL) duplication services were installed. A total of 1 to 16 workstations were used to transfer computed radiography (CR), computed tomography (CT), and magnetic resonance (MR) images simultaneously to simulate the clinical situation. The average transmission rate (ATR) was analyzed between the cluster and noncluster servers. In the download scenario, the ATRs of CR, CT, and MR images increased by 44.3%, 56.6%, and 100.9%, respectively, when using the server cluster, whereas the ATRs increased by 23.0%, 39.2%, and 24.9% in the upload scenario. In the mix scenario, the transmission performance increased by 45.2% when using eight computer units. The fault tolerance mechanisms of the server cluster maintained the system availability and image integrity. The server cluster can improve the transmission efficiency while maintaining high reliability and continuous availability in a healthcare environment.
NASA Technical Reports Server (NTRS)
Deere, Karen A.; Flamm, Jeffrey D.; Berrier, Bobby L.; Johnson, Stuart K.
2007-01-01
A computational investigation of an axisymmetric Dual Throat Nozzle concept has been conducted. This fluidic thrust-vectoring nozzle was designed with a recessed cavity to enhance the throat shifting technique for improved thrust vectoring. The structured-grid, unsteady Reynolds-averaged Navier-Stokes flow solver PAB3D was used to guide the nozzle design and analyze performance. Nozzle design variables included extent of circumferential injection, cavity divergence angle, cavity length, and cavity convergence angle. Internal nozzle performance (wind-off conditions) and thrust vector angles were computed for several configurations over a range of nozzle pressure ratios from 1.89 to 10, with the fluidic injection flow rate equal to zero and up to 4 percent of the primary flow rate. The effect of a variable expansion ratio on nozzle performance over a range of freestream Mach numbers up to 2 was investigated. Results indicated that a 60 degree circumferential injection was a good compromise between large thrust vector angles and efficient internal nozzle performance. A cavity divergence angle greater than 10 degrees was detrimental to thrust vector angle. Shortening the cavity length improved internal nozzle performance with a small penalty to thrust vector angle. Contrary to expectations, a variable expansion ratio did not improve thrust efficiency at the flight conditions investigated.
Efficient Parallel Video Processing Techniques on GPU: From Framework to Implementation
Su, Huayou; Wen, Mei; Wu, Nan; Ren, Ju; Zhang, Chunyuan
2014-01-01
Through reorganizing the execution order and optimizing the data structure, we propose an efficient parallel framework for an H.264/AVC encoder based on a massively parallel architecture. We implemented the proposed framework with CUDA on NVIDIA's GPU. Not only are the compute-intensive components of the H.264 encoder parallelized, but the control-intensive components, such as CAVLC and the deblocking filter, are also realized effectively. In addition, we propose several optimization methods, including multiresolution multiwindow motion estimation, a multilevel parallel strategy to enhance the parallelism of intracoding as much as possible, component-based parallel CAVLC, and a direction-priority deblocking filter. More than 96% of the workload of the H.264 encoder is offloaded to the GPU. Experimental results show that the parallel implementation outperforms the serial program by a speedup ratio of 20 times and satisfies the requirement of real-time HD encoding at 30 fps. The loss of PSNR is from 0.14 dB to 0.77 dB when keeping the same bitrate. Through analysis of the kernels, we found that the speedup ratios of the compute-intensive algorithms are proportional to the computational power of the GPU. However, the performance of the control-intensive parts (CAVLC) is strongly related to memory bandwidth, which gives an insight for new architecture design. PMID:24757432
Edge enhancement and noise suppression for infrared image based on feature analysis
NASA Astrophysics Data System (ADS)
Jiang, Meng
2018-06-01
Infrared images often suffer from background noise, blurred edges, few details, and low signal-to-noise ratios. To improve infrared image quality, it is essential to suppress noise and enhance edges simultaneously. To realize this, we propose a novel algorithm based on feature analysis in the shearlet domain. First, we introduce the theory and advantages of the shearlet transform, a form of multi-scale geometric analysis (MGA). Second, after analyzing the defects of the traditional thresholding technique for noise suppression, we propose a novel feature extraction method that distinguishes image structures from noise well and use it to improve the traditional thresholding technique. Third, by computing the correlations between neighboring shearlet coefficients, feature attribute maps identifying weak details and strong edges are constructed to improve generalized unsharp masking (GUM). Finally, experimental results with infrared images captured in different scenes demonstrate that the proposed algorithm suppresses noise efficiently and enhances image edges adaptively.
Optimizing phase to enhance optical trap stiffness.
Taylor, Michael A
2017-04-03
Phase optimization offers promising capabilities in optical tweezers, allowing huge increases in the applied forces, trap stiffness, or measurement sensitivity. One key obstacle to potential applications is the lack of an efficient algorithm to compute an optimized phase profile, with enhanced trapping experiments relying on slow programs that would take up to a week to converge. Here we introduce an algorithm that reduces the wait from days to minutes. We characterize the achievable increase in trap stiffness and its dependence on particle size, refractive index, and optical polarization. We further show that phase-only control can achieve almost all of the enhancement possible with full wavefront shaping; for instance, phase control allows 62 times higher trap stiffness for 10 μm silica spheres in water, while amplitude control and non-trivial polarization further increase this by factors of 1.26 and 1.01, respectively. This algorithm will facilitate future applications in optical trapping, and more generally in wavefront optimization.
The special effort processing of FGGE data
NASA Technical Reports Server (NTRS)
1983-01-01
The basic FGGE Level IIb data set was enhanced. The effort focused on removing deficiencies in the objective methods of quality assurance, in certain types of operationally produced satellite soundings, and in certain types of operationally produced cloud-tracked winds. The Special Effort was a joint NASA-NOAA-University of Wisconsin effort. The University of Wisconsin installed an interactive McIDAS capability on the Amdahl computer at the Goddard Laboratory of Atmospheric Sciences (GLAS), with one interactive video terminal at Goddard and the other at the World Weather Building. With this interactive capability, a joint processing effort was undertaken to reprocess certain FGGE data sets. NOAA produced a specially edited data set for the special observing periods (SOPs) of FGGE. NASA produced an enhanced satellite sounding data set for the SOPs, while the University of Wisconsin produced an enhanced cloud-tracked wind set from the Japanese geostationary satellite images.
Degradation of brominated flame retardant in computer housing plastic by supercritical fluids.
Wang, Yanmin; Zhang, Fu-Shen
2012-02-29
The degradation process of brominated flame retardant (BFR) and BFR-containing waste computer housing plastic in various supercritical fluids (water, methanol, isopropanol and acetone) was investigated. The results showed that the debromination and degradation efficiencies, as well as the final products, were greatly affected by the solvent type. Among the four tested solvents, isopropanol was the most suitable solvent for the recovery of oil from BFR-containing plastic owing to its (1) excellent debromination effectiveness (debromination efficiency 95.7%), (2) high oil production (60.0%) and (3) mild temperature and pressure requirements. However, in this case, the removed bromine mostly remained in the oil. Introduction of KOH into the sc-isopropanol could capture almost all the inorganic bromine from the oil, so that bromine-free oil could be obtained. Furthermore, KOH could enhance the depolymerization of the plastic. The obtained oil mainly consisted of single- and double-ring aromatic compounds in a carbon range of C9-C17, which had alkyl substituents or aliphatic bridges, such as butyl-benzene, (3-methylbutyl)-benzene, and 1,1'-(1,3-propanediyl)bis benzene. Phenol, alkyl phenols and esters were the major oxygen-containing compounds in the oil. This study provides an efficient approach for debromination and simultaneously recovering valuable chemicals from BFR-containing plastic in e-waste. Copyright © 2011 Elsevier B.V. All rights reserved.
Mishra, Dheerendra
2015-03-01
Smart card based authentication and key agreement schemes for telecare medicine information systems (TMIS) enable doctors, nurses, patients and health visitors to use smart cards for secure login to medical information systems. In recent years, several authentication and key agreement schemes have been proposed to present a secure and efficient solution for TMIS. Most of the existing authentication schemes for TMIS have either higher computation overhead or are vulnerable to attacks. To reduce the computational overhead and enhance security, Lee recently proposed an authentication and key agreement scheme using chaotic maps for TMIS. Xu et al. also proposed a password based authentication and key agreement scheme for TMIS using elliptic curve cryptography. Both schemes provide better efficiency than conventional public key cryptography based schemes, and they are important as they present an efficient solution for TMIS. We analyze the security of both Lee's scheme and Xu et al.'s scheme. Unfortunately, we identify that both schemes are vulnerable to denial of service attack. To understand the security failures of these cryptographic schemes, which is key to patching existing schemes and designing future ones, we demonstrate the security loopholes of Lee's scheme and Xu et al.'s scheme in this paper.
Weighted Geometric Dilution of Precision Calculations with Matrix Multiplication
Chen, Chien-Sheng
2015-01-01
To enhance the performance of location estimation in wireless positioning systems, the geometric dilution of precision (GDOP) is widely used as a criterion for selecting measurement units. Since GDOP represents the geometric effect on the relationship between measurement error and positioning determination error, the measurement unit subset with the smallest GDOP is usually chosen for positioning. The conventional GDOP calculation using the matrix inversion method requires many operations. Because more and more measurement units can be chosen nowadays, an efficient calculation should be designed to decrease the complexity. Since the performance of each measurement unit differs, the weighted GDOP (WGDOP), instead of GDOP, is used to select the measurement units to improve the accuracy of location. To calculate WGDOP effectively and efficiently, a closed-form solution for the WGDOP calculation is proposed for the case when more than four measurements are available. In this paper, an efficient WGDOP calculation method applying matrix multiplication that is easy for hardware implementation is proposed. In addition, the proposed method can be used when more than four measurements are available, not just exactly four. Even when using the all-in-view method for positioning, the proposed method can still reduce the computational overhead. The proposed WGDOP methods with less computation are compatible with the global positioning system (GPS), wireless sensor networks (WSN) and cellular communication systems. PMID:25569755
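For reference, the conventional inversion-based WGDOP the paper improves on can be sketched in a few lines; the closed form replaces the O(n^3) inverse below with multiplications only. The geometry matrix H is built from unit line-of-sight vectors, and W holds per-measurement weights; the numeric values here are hypothetical.

```python
import numpy as np

def wgdop_inversion(H, W):
    """WGDOP = sqrt(trace((H^T W H)^-1)): the matrix-inversion baseline."""
    M = H.T @ W @ H
    return np.sqrt(np.trace(np.linalg.inv(M)))

# Four satellites: each row is (unit LOS x, y, z, 1) for a GPS-style solution.
H = np.array([[ 0.6,  0.4,  0.7, 1.0],
              [-0.5,  0.7,  0.5, 1.0],
              [ 0.3, -0.8,  0.5, 1.0],
              [-0.4, -0.3,  0.9, 1.0]])
W = np.diag([1.0, 0.8, 1.2, 0.9])   # weights, e.g. inverse error variances
print(wgdop_inversion(H, W))
```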
NASA Astrophysics Data System (ADS)
Valasek, Lukas; Glasa, Jan
2017-12-01
Current fire simulation systems are capable of utilizing the advantages of available high-performance computing (HPC) platforms and of modeling fires efficiently in parallel. In this paper, the efficiency of a corridor fire simulation on an HPC cluster is discussed. The parallel MPI version of the Fire Dynamics Simulator is used to test the efficiency of selected strategies for allocating the computational resources of the cluster when a greater number of computational cores is used. Simulation results indicate that, if the number of cores used is not a multiple of the number of cores per cluster node, there are allocation strategies which provide more efficient calculations.
Reliability enhancement of Navier-Stokes codes through convergence acceleration
NASA Technical Reports Server (NTRS)
Merkle, Charles L.; Dulikravich, George S.
1995-01-01
Methods for enhancing the reliability of Navier-Stokes computer codes through improved convergence characteristics are presented. Improving these characteristics decreases the likelihood of code unreliability and user interventions in a design environment. The problem referred to as 'stiffness' in the governing equations for propulsion-related flowfields is investigated, particularly in regard to common sources of equation stiffness that lead to convergence degradation of CFD algorithms. Von Neumann stability theory is employed as a tool to study the convergence difficulties involved. Based on the stability results, improved algorithms are devised to ensure efficient convergence in different situations. A number of test cases are considered to confirm a correlation between stability theory and numerical convergence. Examples of turbulent and reacting flows are presented, and a generalized form of the preconditioning matrix is derived to handle these problems, i.e., problems involving additional differential equations describing the transport of turbulent kinetic energy, dissipation rate and chemical species. Algorithms for unsteady computations are considered. The extension of the preconditioning techniques and algorithms derived for Navier-Stokes computations to three-dimensional flow problems is discussed. New methods to accelerate the convergence of iterative schemes for the numerical integration of systems of partial differential equations are developed, with a special emphasis on the acceleration of convergence on highly clustered grids.
Pourmehran, Oveis; Gorji, Tahereh B; Gorji-Bandpy, Mofid
2016-10-01
Magnetic drug targeting (MDT) is a local drug delivery system which aims to concentrate a pharmacological agent at its site of action in order to minimize undesired side effects due to systemic distribution in the organism. Using magnetic drug particles under the influence of an external magnetic field, the drug particles are navigated toward the target region. Herein, computational fluid dynamics was used to simulate the air flow and magnetic particle deposition in a realistic human airway geometry obtained from CT scan images. Using discrete phase modeling and one-way coupling of the particle and fluid phases, a Lagrangian approach for particle tracking in the presence of an external non-uniform magnetic field was applied. Polystyrene (PMS40) particles were utilized as the magnetic drug carrier. A parametric study was conducted, and the influence of particle diameter, magnetic source position, magnetic field strength and inhalation condition on the particle transport pattern and deposition efficiency (DE) was reported. Overall, the results show considerable promise of MDT in deposition enhancement at the target region (i.e., the left lung). However, the positive effect of increasing particle size on DE enhancement was evident only at smaller magnetic field strengths (Mn ≤ 1.5 T), whereas, at higher applied magnetic field strengths, increasing particle size has an inverse effect on DE. This implies that for efficient MDT in the human respiratory system, an optimal combination of magnetic drug carrier characteristics and magnetic field strength has to be achieved.
NASA Astrophysics Data System (ADS)
Baniamerian, Jamaledin; Liu, Shuang; Abbas, Mahmoud Ahmed
2018-04-01
The vertical gradient is an essential tool in interpretation algorithms. It is also the primary enhancement technique for improving the resolution of measured gravity and magnetic field data, since it is more sensitive to changes in the physical properties (density or susceptibility) of subsurface structures than the measured field itself. If the field derivatives are not directly measured with gradiometers, they can be calculated from the collected gravity or magnetic data using numerical methods such as those based on the fast Fourier transform technique. The gradients behave similarly to high-pass filters and enhance short-wavelength anomalies, which may be associated either with small, shallow sources or with high-frequency noise in the data, so their numerical computation is prone to amplifying noise. This behaviour can adversely affect the stability of the derivatives in the presence of even a small level of noise and consequently limit their application in interpretation methods. Adding a smoothing term to the conventional Fourier-domain formulation of the vertical gradient can improve the stability of numerical differentiation of the field. In this paper, we propose a strategy in which the overall efficiency of the classical Fourier-domain algorithm is improved by incorporating two different smoothing filters. For the smoothing term, a simple qualitative procedure based on the upward continuation of the field to a higher altitude is introduced to estimate the related parameters, namely the regularization parameter and the cut-off wavenumber of the corresponding filters. The efficiency of these new approaches is validated by computing the first- and second-order derivatives of noise-corrupted synthetic data sets and then comparing the results with the true ones. The filtered and unfiltered vertical gradients are incorporated into the extended Euler deconvolution to estimate the depth and structural index of a magnetic sphere, hence quantitatively evaluating the methods. In the real case, the described algorithms are used to enhance a portion of aeromagnetic data acquired in the Mackenzie Corridor, Northern Mainland, Canada.
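A minimal sketch of the classical Fourier-domain vertical derivative with an upward-continuation smoothing term, as described above: in the wavenumber domain the n-th vertical derivative multiplies the spectrum by |k|^n, and continuation to height h multiplies it by exp(-h|k|), so h plays the role of the smoothing/regularization parameter. The value of h is an assumption to be tuned as the paper discusses.

```python
import numpy as np

def vertical_derivative(field, dx, dy, order=1, h=0.0):
    """Fourier-domain vertical derivative of a gridded potential field."""
    ny, nx = field.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
    k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)   # radial wavenumber |k|
    spec = np.fft.fft2(field)
    # |k|^order is the derivative (a high-pass); exp(-h|k|) damps the short
    # wavelengths that would otherwise amplify noise.
    spec *= k ** order * np.exp(-h * k)
    return np.real(np.fft.ifft2(spec))
```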
2013-01-01
Background: Handheld computers and mobile devices provide instant access to vast amounts and types of useful information for health care professionals. Their reduced size and increased processing speed have led to rapid adoption in health care. Thus, it is important to identify whether handheld computers are actually effective in clinical practice. Objective: A scoping review of systematic reviews was designed to provide a quick overview of the documented evidence of effectiveness for health care professionals using handheld computers in their clinical work. Methods: A detailed search, sensitive for systematic reviews, was applied to the Cochrane, Medline, EMBASE, PsycINFO, Allied and Complementary Medicine Database (AMED), Global Health, and Cumulative Index to Nursing and Allied Health Literature (CINAHL) databases. All outcomes that demonstrated effectiveness in clinical practice were included. Classroom learning and patient use of handheld computers were excluded. Quality was assessed using the Assessment of Multiple Systematic Reviews (AMSTAR) tool. A previously published conceptual framework was used as the basis for dual data extraction. Reported outcomes were summarized according to the primary function of the handheld computer. Results: Five systematic reviews met the inclusion and quality criteria. Together, they reviewed 138 unique primary studies. Most covered descriptive intervention studies in which physicians, pharmacists, or medical students used personal digital assistants. Effectiveness was demonstrated across four distinct functions of handheld computers: patient documentation, patient care, information seeking, and professional work patterns. Within each of these functions, a range of positive outcomes was reported using both objective and self-report measures. The use of handheld computers improved patient documentation through more complete recording, fewer documentation errors, and increased efficiency. Handheld computers provided easy access to clinical decision support systems and patient management systems, which improved decision making for patient care. Handheld computers saved time and gave earlier access to new information. There were also reports that handheld computers enhanced work patterns and efficiency. Conclusions: This scoping review summarizes the secondary evidence for the effectiveness of handheld computers and mHealth. It provides a snapshot of effective use by health care professionals across four key functions. We identified evidence to suggest that handheld computers provide easy and timely access to information and enable accurate and complete documentation. Further, they can give health care professionals instant access to evidence-based decision support and patient management systems to improve clinical decision making. Finally, there is evidence that handheld computers allow health professionals to be more efficient in their work practices. It is anticipated that this evidence will guide clinicians and managers in implementing handheld computers in clinical practice and in designing future research. PMID:24165786
Cloud computing-based TagSNP selection algorithm for human genome data.
Hung, Che-Lun; Chen, Wen-Pei; Hua, Guan-Jie; Zheng, Huiru; Tsai, Suh-Jen Jane; Lin, Yaw-Ling
2015-01-05
Single nucleotide polymorphisms (SNPs) play a fundamental role in human genetic variation and are used in medical diagnostics, phylogeny construction, and drug design. They provide the highest-resolution genetic fingerprint for identifying disease associations and human features. Haplotypes are regions of linked genetic variants that are closely spaced on the genome and tend to be inherited together. Genetics research has revealed SNPs within certain haplotype blocks that introduce few distinct common haplotypes into most of the population. Haplotype block structures are used in association-based methods to map disease genes. In this paper, we propose an efficient algorithm for identifying haplotype blocks in the genome. In chromosomal haplotype data retrieved from the HapMap project website, the proposed algorithm identified longer haplotype blocks than an existing algorithm. To enhance its performance, we extended the proposed algorithm into a parallel algorithm that copies data in parallel via the Hadoop MapReduce framework. The proposed MapReduce-paralleled combinatorial algorithm performed well on real-world data obtained from the HapMap dataset; the improvement in computational efficiency was proportional to the number of processors used.
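An illustrative sketch of the parallelization idea (not the authors' Hadoop code): Python's multiprocessing stands in for MapReduce, mapping block-finding over chromosome chunks and then reducing by merging blocks that touch chunk boundaries. The `find_blocks` criterion and the toy SNP encoding are hypothetical placeholders.

```python
from multiprocessing import Pool

def find_blocks(chunk):
    """Map step: return haplotype blocks found inside one SNP chunk."""
    start, snps = chunk
    blocks, block_start = [], 0
    for i in range(1, len(snps)):
        if abs(snps[i] - snps[i - 1]) > 1:        # toy "low linkage" breakpoint
            blocks.append((start + block_start, start + i))
            block_start = i
    blocks.append((start + block_start, start + len(snps)))
    return blocks

def merge(per_chunk):
    """Reduce step: concatenate, joining blocks that span a chunk boundary."""
    merged = []
    for blocks in per_chunk:
        if merged and merged[-1][1] == blocks[0][0]:
            merged[-1] = (merged[-1][0], blocks[0][1])
            blocks = blocks[1:]
        merged.extend(blocks)
    return merged

if __name__ == "__main__":
    snps = [1, 2, 3, 7, 8, 9, 10, 14, 15, 16]     # toy encoded SNP sequence
    chunks = [(i, snps[i:i + 5]) for i in range(0, len(snps), 5)]
    with Pool() as pool:
        print(merge(pool.map(find_blocks, chunks)))
```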
Yarn-dyed fabric defect classification based on convolutional neural network
NASA Astrophysics Data System (ADS)
Jing, Junfeng; Dong, Amei; Li, Pengfei
2017-07-01
Considering that manual inspection of yarn-dyed fabric can be time consuming and inefficient, a convolutional neural network (CNN) solution based on a modified AlexNet structure is proposed for the classification of yarn-dyed fabric defects. A CNN has a powerful ability for feature extraction and feature fusion, which can simulate the learning mechanism of the human brain. In order to enhance computational efficiency and detection accuracy, the local response normalization (LRN) layers in AlexNet are replaced by batch normalization (BN) layers. In the process of network training, the characteristics of the image are extracted step by step through several convolution operations, and the essential features of the image can be obtained from the edge features. Max pooling layers, dropout layers, and fully connected layers are also employed in the classification model to reduce the computation cost and acquire more precise features of fabric defects. Finally, the defect classification results are predicted by the softmax function. The experimental results show the capability of defect classification via the modified AlexNet model and indicate its robustness.
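A minimal PyTorch sketch of the modification described above: an AlexNet-style convolutional stem in which the LRN layers are replaced by batch normalization. Channel counts and kernel sizes follow the classic AlexNet stem; the 10-class head is an assumption for illustration.

```python
import torch
import torch.nn as nn

class BNAlexStem(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 96, kernel_size=11, stride=4, padding=2),
            nn.BatchNorm2d(96),            # was nn.LocalResponseNorm(5) in AlexNet
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(96, 256, kernel_size=5, padding=2),
            nn.BatchNorm2d(256),           # second LRN also replaced by BN
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Dropout(0.5),
            nn.Linear(256, num_classes),   # softmax is applied inside the loss
        )

    def forward(self, x):
        return self.classifier(self.features(x))

logits = BNAlexStem()(torch.randn(4, 3, 224, 224))  # -> shape (4, 10)
```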
Atoche, Alejandro Castillo; Castillo, Javier Vázquez
2012-01-01
A high-speed dual super-systolic core for reconstructive signal processing (SP) operations consists of a double parallel systolic array (SA) machine in which each processing element of the array is itself conceptualized as another SA in a bit-level fashion. In this study, we addressed the design of a high-speed dual super-systolic array (SSA) core for the enhancement/reconstruction of remote sensing (RS) imagery from radar/synthetic aperture radar (SAR) sensor systems. The selected reconstructive SP algorithms are efficiently transformed into their parallel representation and then mapped onto an efficient high-performance embedded computing (HPEC) architecture on reconfigurable Xilinx field programmable gate array (FPGA) platforms. As an implementation test case, the proposed approach was integrated into a HW/SW co-design scheme in order to solve the nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) from a remotely sensed scene. We show how such a dual SSA core drastically reduces the computational load of complex RS regularization techniques, achieving the required real-time operational mode. PMID:22736964
Reinforced dynamics for enhanced sampling in large atomic and molecular systems
NASA Astrophysics Data System (ADS)
Zhang, Linfeng; Wang, Han; E, Weinan
2018-03-01
A new approach for efficiently exploring the configuration space and computing the free energy of large atomic and molecular systems is proposed, motivated by an analogy with reinforcement learning. There are two major components in this new approach. Like metadynamics, it allows for an efficient exploration of the configuration space by adding an adaptively computed biasing potential to the original dynamics. Like deep reinforcement learning, this biasing potential is trained on the fly using deep neural networks, with data collected judiciously from the exploration and an uncertainty indicator from the neural network model playing the role of the reward function. Parameterization using neural networks makes it feasible to handle cases with a large set of collective variables. This has the potential advantage that selecting precisely the right set of collective variables becomes less critical for capturing the structural transformations of the system. The method is illustrated by studying all-atom explicit-solvent models of alanine dipeptide and tripeptide, as well as a polyalanine-10 molecule with 20 collective variables.
Cost-effective forensic image enhancement
NASA Astrophysics Data System (ADS)
Dalrymple, Brian E.
1998-12-01
In 1977, a paper was presented at the SPIE conference in Reston, Virginia, detailing the computer enhancement of the Zapruder film. The forensic value of this examination in a major homicide investigation was apparent to the viewer. Equally clear was the potential for extracting evidence beyond the reach of conventional detection techniques. The cost of this technology in 1976, however, was prohibitive, and well beyond the means of most police agencies. Twenty-two years later, a highly efficient means of image enhancement is easily within the grasp of most police agencies, not only for homicides but for any case application. A PC workstation combined with an enhancement software package allows a forensic investigator to fully exploit digital technology. The goal of this approach is the optimization of the signal-to-noise ratio in images. Obstructive backgrounds may be diminished or eliminated while weak signals are optimized using algorithms including the Fast Fourier Transform, histogram equalization and image subtraction. An added benefit is the speed with which these processes are completed and the results known. The efficacy of forensic image enhancement is illustrated through case applications.
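Two of the operations named above are simple enough to sketch in a few lines of numpy; this is an illustration of the general techniques, not forensic-grade code, and it assumes 8-bit grayscale input arrays.

```python
import numpy as np

def equalize(img):
    """Classic histogram equalization via the cumulative distribution."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalize to [0, 1]
    return (cdf[img] * 255).astype(np.uint8)

def subtract_background(img, background):
    """Image subtraction: remove an obstructive background captured
    under the same lighting and registration conditions."""
    diff = img.astype(np.int16) - background.astype(np.int16)
    return np.clip(diff - diff.min(), 0, 255).astype(np.uint8)
```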
Plasmonically Enhanced Reflectance of Heat Radiation from Low-Bandgap Semiconductor Microinclusions.
Tang, Janika; Thakore, Vaibhav; Ala-Nissila, Tapio
2017-07-18
Increased reflectance from the inclusion of highly scattering particles at low volume fractions in an insulating dielectric offers a promising way to reduce radiative thermal losses at high temperatures. Here, we investigate plasmonic-resonance-driven enhanced scattering from microinclusions of low-bandgap semiconductors (InP, Si, Ge, PbS, InAs and Te) in an insulating composite to tailor its infrared reflectance for minimizing thermal losses from radiative transfer. To this end, we compute the spectral properties of the microcomposites using Monte Carlo modeling and compare them with results from the Fresnel equations. The roles of particle-size-dependent Mie scattering and absorption efficiencies, and of scattering anisotropy, are studied to identify the optimal microinclusion size and material parameters for maximizing the reflectance of thermal radiation. For composites with Si and Ge microinclusions we obtain reflectance efficiencies of 57-65% for incident blackbody radiation from sources at temperatures in the range 400-1600 °C. Furthermore, we observe a broadbanding of the reflectance spectra from the plasmonic resonances due to charge carriers generated from defect states within the semiconductor bandgap. Our results thus open up the possibility of developing efficient high-temperature thermal insulators through the use of low-bandgap semiconductor microinclusions in insulating dielectrics.
New opportunities for quality enhancing of images captured by passive THz camera
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Trofimov, Vladislav V.
2014-10-01
As is well known, a passive THz camera allows seeing concealed objects without contact with a person, and such a camera poses no danger to the person. Obviously, the efficiency of using a passive THz camera depends on its temperature resolution. This characteristic determines the detection capabilities for concealed objects: the minimal size of the object, the maximal detection distance, and the image quality. Computer processing of THz images can improve image quality many times over without any additional engineering effort; therefore, developing modern computer codes for application to THz images is an urgent problem. With appropriate new methods, one may expect a temperature resolution that allows seeing a banknote in a person's pocket without any physical contact. Modern algorithms for computer processing of THz images also allow seeing an object inside the human body using a temperature trace on the human skin. This circumstance substantially enhances the opportunities for applying passive THz cameras to counterterrorism problems. We demonstrate the capabilities achieved at present for the detection both of concealed objects and of clothing components by means of computer processing of images captured by passive THz cameras manufactured by various companies. Another important result discussed in the paper is the observation both of THz radiation emitted by an incandescent lamp and of an image reflected from a ceramic floor plate. We consider images produced by passive THz cameras manufactured by Microsemi Corp., ThruVision Corp., and Capital Normal University (Beijing, China). All algorithms for computer processing of the THz images considered in this paper were developed by the Russian part of the author list. Keywords: THz wave, passive imaging camera, computer processing, security screening, concealed and forbidden objects, reflected image, hand seeing, banknote seeing, ceramic floorplate, incandescent lamp.
Chang, Zheng; Xiang, Qing-San; Shen, Hao; Yin, Fang-Fang
2010-03-01
The aim was to accelerate non-contrast-enhanced MR angiography (MRA) with inflow inversion recovery (IFIR) using a fast imaging method, Skipped Phase Encoding and Edge Deghosting (SPEED). IFIR imaging uses a preparatory inversion pulse to reduce signals from static tissue while leaving inflowing arterial blood unaffected, resulting in sparse arterial vasculature on a modest tissue background. By taking advantage of vascular sparsity, SPEED can be simplified with a single-layer model to achieve higher efficiency in both scan time reduction and image reconstruction. SPEED can also make use of information available in multiple coils for further acceleration. The techniques are demonstrated with a three-dimensional renal non-contrast-enhanced IFIR MRA study. Images are reconstructed by SPEED based on a single-layer model to achieve an undersampling factor of up to 2.5 using one skipped phase-encoding direction. By making use of information available in multiple coils, SPEED can achieve an undersampling factor of up to 8.3 with four receiver coils. The reconstructed images generally have quality comparable to that of the reference images reconstructed from full k-space data. As demonstrated with a three-dimensional renal IFIR scan, SPEED based on a single-layer model is able to reduce scan time further and achieve higher computational efficiency than the original SPEED.
Younes, Ali H; Zhang, Lu; Clark, Ronald J; Davidson, Michael W; Zhu, Lei
2010-12-07
Two fluorescent heteroditopic ligands (2a and 2b) for the zinc ion were synthesized and studied. The efficiencies of two photophysical processes, intramolecular charge transfer (ICT) and photoinduced electron transfer (PET), determine the magnitudes of the emission bathochromic shift and enhancement, respectively, when a heteroditopic ligand forms mono- or dizinc complexes. The electron-rich 2b is characterized by a high degree of ICT in the excited state with little propensity for PET, which is manifested in a large bathochromic shift of emission upon Zn(2+) coordination without enhancement in fluorescence quantum yield. The electron-poor 2a displays the opposite photophysical consequence, where Zn(2+) binding results in greatly enhanced emission without a significant spectral shift. The electronic structural effects on the relative efficiencies of ICT and PET in 2a and 2b, as well as the impact of Zn(2+) coordination, are probed using experimental and computational approaches. This study reveals that the delicate balance between the various photophysical pathways (e.g., ICT and PET) engineered into a heteroditopic ligand depends sensitively on the electronic structure of the ligand, i.e., whether the fluorophore is electron-rich or electron-poor, whether it possesses a donor-acceptor type of structure, and where metal binding occurs.
Ji, Haoran; Wang, Chengshan; Li, Peng; ...
2017-09-20
The integration of distributed generators (DGs) exacerbates the feeder power flow fluctuation and load unbalanced condition in active distribution networks (ADNs). The unbalanced feeder load causes inefficient use of network assets and network congestion during system operation. The flexible interconnection based on the multi-terminal soft open point (SOP) significantly benefits the operation of ADNs. The multi-terminal SOP, which is a controllable power electronic device installed to replace the normally open point, provides accurate active and reactive power flow control to enable the flexible connection of feeders. An enhanced SOCP-based method for feeder load balancing using the multi-terminal SOP is proposed in this paper. Furthermore, by regulating the operation of the multi-terminal SOP, the proposed method can mitigate the unbalanced condition of feeder load and simultaneously reduce the power losses of ADNs. Then, the original non-convex model is converted into a second-order cone programming (SOCP) model using convex relaxation. In order to tighten the SOCP relaxation and improve the computation efficiency, an enhanced SOCP-based approach is developed to solve the proposed model. Finally, case studies are performed on the modified IEEE 33-node system to verify the effectiveness and efficiency of the proposed method.
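A small cvxpy sketch of the kind of second-order cone relaxation described above: the nonconvex branch-flow relation l*v = P^2 + Q^2 is relaxed to l*v >= P^2 + Q^2, written as a standard SOC constraint. The single-branch numbers are hypothetical; the paper's full model covers a whole network plus the SOP injections.

```python
import cvxpy as cp

P = cp.Variable()              # branch active power flow (p.u.)
Q = cp.Variable()              # branch reactive power flow (p.u.)
l = cp.Variable(nonneg=True)   # squared branch current
v = 1.0                        # squared sending-end voltage (fixed here)
r = 0.05                       # branch resistance (p.u.)

constraints = [
    # ||(2P, 2Q, l - v)||_2 <= l + v   <=>   P^2 + Q^2 <= l * v
    cp.SOC(l + v, cp.hstack([2 * P, 2 * Q, l - v])),
    P == 0.8, Q == 0.2,        # required power delivery on this branch
]
prob = cp.Problem(cp.Minimize(r * l), constraints)   # minimize branch loss
prob.solve()
print(l.value)   # relaxation is tight here: l is driven down to (P^2+Q^2)/v
```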
Defined Host–Guest Chemistry on Nanocarbon for Sustained Inhibition of Cancer
Ostadhossein, Fatemeh; Misra, Santosh K.; Mukherjee, Prabuddha; Ostadhossein, Alireza; Daza, Enrique; Tiwari, Saumya; Mittal, Shachi; Gryka, Mark C.; Bhargava, Rohit
2017-01-01
Signal transducer and activator of transcription factor 3 (STAT-3) is known to be overexpressed in cancer stem cells. Poor solubility and variable drug absorption are linked to low bioavailability and decreased efficacy. Many of the drugs regulating STAT-3 expression lack aqueous solubility, hence hindering efficient bioavailability. A theranostic nanoplatform based on luminescent carbon particles decorated with cucurbit[6]uril is introduced for enhancing the solubility of niclosamide, a STAT-3 inhibitor. The host–guest chemistry between cucurbit[6]uril and niclosamide makes delivery of the hydrophobic drug feasible, while the carbon nanoparticles enhance cellular internalization. Extensive physicochemical characterizations confirm successful synthesis. Subsequently, the host–guest chemistry of niclosamide and cucurbit[6]uril is studied experimentally and computationally. In vitro assessments in human breast cancer cells indicate an approximately twofold enhancement in the IC50 of the drug. Fourier transform infrared and fluorescence imaging demonstrate efficient cellular internalization. Furthermore, catalytic biodegradation of the nanoplatforms occurs within a short time upon exposure to human myeloperoxidase. In vivo studies on athymic mice with MCF-7 xenografts indicate that the tumor size in the treatment group is half that of the controls after 40 days. Immunohistochemistry corroborates the downregulation of STAT-3 phosphorylation. Overall, the host–guest chemistry on nanocarbon acts as a novel arsenal for STAT-3 inhibition. PMID:27545321
A fast method to emulate an iterative POCS image reconstruction algorithm.
Zeng, Gengsheng L
2017-10-01
Iterative image reconstruction algorithms are commonly used to optimize an objective function, especially when the objective function is nonquadratic. Generally speaking, iterative algorithms are computationally inefficient. This paper presents a fast algorithm that has one backprojection and no forward projection, and derives a new method to solve an optimization problem. The nonquadratic constraint, for example an edge-preserving denoising constraint, is implemented as a nonlinear filter. The algorithm is derived based on the POCS (projections onto convex sets) approach. A windowed FBP (filtered backprojection) algorithm enforces the data fidelity. An iterative procedure, divided into segments, enforces edge-enhancement denoising; each segment performs nonlinear filtering. The derived iterative algorithm is computationally efficient: it contains only one backprojection and no forward projection. Low-dose CT data are used for algorithm feasibility studies. The nonlinearity is implemented as an edge-enhancing, noise-smoothing filter. The patient study results demonstrate its effectiveness in processing low-dose x-ray CT data. This fast algorithm can be used to replace many iterative algorithms. © 2017 American Association of Physicists in Medicine.
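A schematic sketch of the structure described above: a single windowed-FBP reconstruction (the one backprojection), followed by iterative segments of nonlinear filtering, with no forward projection in the loop. The FBP call uses scikit-image; the median filter stands in for the paper's edge-enhancing smoother, and the blend weight and iteration counts are assumptions, not the paper's tuned values.

```python
from skimage.transform import iradon
from scipy.ndimage import median_filter

def pocs_emulation(sinogram, theta, segments=3, iters_per_segment=5, w=0.3):
    # One windowed FBP enforces data fidelity (Hamming-windowed ramp filter).
    x = iradon(sinogram, theta=theta, filter_name="hamming")
    anchor = x.copy()                       # the FBP image pins the data term
    for _ in range(segments):
        for _ in range(iters_per_segment):
            # Nonlinear filter as the denoising step of each segment.
            x = median_filter(x, size=3)
        x = (1 - w) * x + w * anchor        # pull back toward the FBP image
    return x
```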
NASA Astrophysics Data System (ADS)
Zaghi, S.
2014-07-01
OFF, an open source (free software) code for performing fluid dynamics simulations, is presented. The aim of OFF is to solve, numerically, the unsteady (and steady) compressible Navier-Stokes equations of fluid dynamics by means of finite volume techniques: the research background is mainly focused on high-order (WENO) schemes for multi-fluid, multi-phase flows over complex geometries. To this purpose a highly modular, object-oriented application program interface (API) has been developed. In particular, the concepts of data encapsulation and inheritance available within the Fortran language (from the 2003 standard) have been exploited to represent each fluid dynamics "entity" (e.g. the conservative variables of a finite volume, its geometry, etc.) by a single object, so that a large variety of computational libraries can be easily (and efficiently) developed upon these objects. The main features of OFF can be summarized as follows. Programming language: OFF is written in standard (compliant) Fortran 2003; its design is highly modular in order to enhance simplicity of use and maintenance without compromising efficiency. Parallel frameworks supported: the development of OFF has also been targeted at maximizing computational efficiency; the code is designed to run on shared-memory multi-core workstations and on distributed-memory clusters of shared-memory nodes (supercomputers), with parallelization based on the Open Multiprocessing (OpenMP) and Message Passing Interface (MPI) paradigms. Usability, maintenance and enhancement: the documentation has been carefully taken into account; it is built upon comprehensive comments placed directly in the source files (no external documentation files are needed), which are parsed by the doxygen free software to produce high-quality HTML and LaTeX documentation pages; the distributed version-control system git has been adopted to facilitate collaborative maintenance and improvement of the code. Copyright: OFF is free software that anyone can use, copy, distribute, study, change and improve under the GNU Public License version 3. The present paper is a manifesto of the OFF code and presents the currently implemented features and ongoing developments. This work is focused on the computational techniques adopted, and a detailed description of the main API characteristics is reported. OFF's capabilities are demonstrated by means of one- and two-dimensional examples and a three-dimensional real application.
Internal quantum efficiency enhancement of GaInN/GaN quantum-well structures using Ag nanoparticles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iida, Daisuke; Department of Photonics Engineering, Technical University of Denmark, 2800 Lyngby; Faculty of Science and Technology, Meijo University, 1-501 Shiogamaguchi Tempaku, 468-8502 Nagoya
2015-09-15
We report internal quantum efficiency enhancement of a thin p-GaN green quantum-well structure using self-assembled Ag nanoparticles. Temperature-dependent photoluminescence measurements are conducted to determine the internal quantum efficiency. The impact of excitation power density on the enhancement factor is investigated. We obtain an internal quantum efficiency enhancement by a factor of 2.3 at 756 W/cm², and a factor of 8.1 at 1 W/cm². A Purcell enhancement up to a factor of 26 is estimated by fitting the experimental results to a theoretical model for the efficiency enhancement factor.
Computational Methods for Configurational Entropy Using Internal and Cartesian Coordinates.
Hikiri, Simon; Yoshidome, Takashi; Ikeguchi, Mitsunori
2016-12-13
The configurational entropy of solute molecules is a crucially important quantity for studying various biophysical processes. Consequently, it is necessary to establish an efficient quantitative computational method to calculate the configurational entropy as accurately as possible. In the present paper, we investigate the quantitative performance of the quasi-harmonic and related computational methods, including widely used methods implemented in popular molecular dynamics (MD) software packages, compared with the Clausius method, which is capable of accurately computing the change of configurational entropy upon a temperature change. Notably, we focused on the choice of coordinate system (i.e., internal or Cartesian coordinates). The Boltzmann-quasi-harmonic (BQH) method using internal coordinates outperformed all six methods examined here. The introduction of improper torsions in the BQH method improves its performance, and the anharmonicity of proper torsions in proteins is identified as the origin of the superior performance of the BQH method. In contrast, widely used methods implemented in MD packages show rather poor performance. In addition, the enhanced sampling of replica-exchange MD simulations was found to be efficient for the convergent behavior of entropy calculations. Also, in folding/unfolding transitions of a small protein, Chignolin, the BQH method was reasonably accurate. However, the independent term without the correlation term in the BQH method was the most accurate for the folding entropy among the methods considered in this study, because the QH approximation of the correlation term in the BQH method was no longer valid for the divergent unfolded structures.
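As a concrete example of the covariance-based estimators this family contains, here is a sketch of Schlitter's classic quasi-harmonic-style entropy bound computed from a Cartesian MD trajectory (a standard literature formula, not code from this paper). The trajectory array and masses are placeholders, and superposition/alignment of the frames is assumed to be done beforehand.

```python
import numpy as np

KB = 1.380649e-23        # Boltzmann constant, J/K
HBAR = 1.054571817e-34   # reduced Planck constant, J*s

def schlitter_entropy(traj, masses, T=300.0):
    """traj: (n_frames, 3*n_atoms) in meters; masses: (n_atoms,) in kg.
    Returns Schlitter's upper bound S <= (kB/2) ln det[1 + kB*T*e^2/hbar^2 * M^1/2 C M^1/2]."""
    cov = np.cov(traj, rowvar=False)          # 3N x 3N positional covariance C
    m = np.repeat(masses, 3)                  # per-coordinate masses
    Mhalf = np.sqrt(np.outer(m, m))           # elementwise sqrt(m_i m_j)
    arg = np.eye(cov.shape[0]) + (KB * T * np.e ** 2 / HBAR ** 2) * Mhalf * cov
    sign, logdet = np.linalg.slogdet(arg)     # numerically stable log-determinant
    return 0.5 * KB * logdet                  # entropy in J/K
```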
Calculation of absolute protein-ligand binding free energy using distributed replica sampling.
Rodinger, Tomas; Howell, P Lynne; Pomès, Régis
2008-10-21
Distributed replica sampling [T. Rodinger et al., J. Chem. Theory Comput. 2, 725 (2006)] is a simple and general scheme for Boltzmann sampling of conformational space by computer simulation in which multiple replicas of the system undergo a random walk in reaction coordinate or temperature space. Individual replicas are linked through a generalized Hamiltonian containing an extra potential energy term or bias which depends on the distribution of all replicas, thus enforcing the desired sampling distribution along the coordinate or parameter of interest regardless of free energy barriers. In contrast to replica exchange methods, efficient implementation of the algorithm does not require synchronicity of the individual simulations. The algorithm is inherently suited for large-scale simulations using shared or heterogeneous computing platforms such as a distributed network. In this work, we build on our original algorithm by introducing Boltzmann-weighted jumping, which allows moves of a larger magnitude and thus enhances sampling efficiency along the reaction coordinate. The approach is demonstrated using a realistic and biologically relevant application; we calculate the standard binding free energy of benzene to the L99A mutant of T4 lysozyme. Distributed replica sampling is used in conjunction with thermodynamic integration to compute the potential of mean force for extracting the ligand from protein and solvent along a nonphysical spatial coordinate. Dynamic treatment of the reaction coordinate leads to faster statistical convergence of the potential of mean force than a conventional static coordinate, which suffers from slow transitions on a rugged potential energy surface.
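The thermodynamic-integration step mentioned above can be illustrated in a few lines: the potential of mean force (PMF) along the extraction coordinate is the running integral of the mean force, and the binding strength follows from the PMF difference between the bound well and the unbound plateau. The force profile below is a synthetic placeholder, not simulation output.

```python
import numpy as np

z = np.linspace(0.0, 2.0, 41)                    # reaction coordinate (nm)
mean_force = 40.0 * (z - 0.5) * np.exp(-4 * z)   # toy <dU/dz> at each window

# PMF(z) = integral_0^z <dU/dz'> dz', accumulated with the trapezoidal rule.
pmf = np.concatenate(([0.0], np.cumsum(
    0.5 * (mean_force[1:] + mean_force[:-1]) * np.diff(z))))

dG_extract = pmf[-1] - pmf.min()                 # plateau minus well depth
print(f"PMF well depth: {dG_extract:.2f} (toy units)")
```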
Analytical Cost Metrics : Days of Future Past
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prajapati, Nirmal; Rajopadhye, Sanjay; Djidjev, Hristo Nikolov
As we move towards the exascale era, new architectures must be capable of running massive computational problems efficiently. Scientists and researchers are continuously investing in tuning the performance of extreme-scale computational problems. These problems arise in almost all areas of computing, ranging from big data analytics, artificial intelligence, search, machine learning, virtual/augmented reality, computer vision, and image/signal processing to computational science and bioinformatics. With Moore's law driving the evolution of hardware platforms towards exascale, the dominant performance metric (time efficiency) has now expanded to also incorporate power/energy efficiency. Therefore, the major challenge that we face in computing systems research is: "how to solve massive-scale computational problems in the most time/power/energy-efficient manner?"
Tam, S F
2000-10-15
The aim of this controlled, quasi-experimental study was to evaluate the effects of a self-efficacy enhancement and social comparison training strategy on the computer skills learning and self-concept outcomes of trainees with physical disabilities. The self-efficacy enhancement group comprised 16 trainees, the tutorial training group comprised 15 trainees, and the control group comprised 25 subjects. Both the self-efficacy enhancement group and the tutorial training group received a 15-week computer skills training course, including generic Chinese computer operation, Chinese word processing and Chinese desktop publishing skills. The self-efficacy enhancement group received training with tutorial instructions that incorporated self-efficacy enhancement strategies and experienced self-enhancing social comparisons. The tutorial training group received behavioural learning-based tutorials only, and the control group did not receive any training. The following measurements were employed to evaluate the outcomes: the Self-Concept Questionnaire for the Physically Disabled Hong Kong Chinese (SCQPD), the computer self-efficacy rating scale and the computer performance rating scale. The self-efficacy enhancement group showed significantly better computer skills learning outcome, total self-concept, and social self-concept than the tutorial training group. The self-efficacy enhancement group did not show significant changes in their computer self-efficacy; however, the tutorial training group showed a significant lowering of their computer self-efficacy. The training strategy that incorporated self-efficacy enhancement and positive social comparison experiences maintained the computer self-efficacy of trainees with physical disabilities. This strategy was more effective in improving the learning outcome (p = 0.01) and self-concept (p = 0.05) of the trainees than the conventional tutorial-based training strategy.
Computational Approaches to Simulation and Optimization of Global Aircraft Trajectories
NASA Technical Reports Server (NTRS)
Ng, Hok Kwan; Sridhar, Banavar
2016-01-01
This study examines three possible approaches to improving the speed of generating wind-optimal routes for air traffic at the national or global level. They are: (a) using the resources of a supercomputer, (b) running the computations on multiple commercially available computers, and (c) implementing the same algorithms in NASA's Future ATM Concepts Evaluation Tool (FACET); these are compared with a standard implementation run on a single CPU. Wind-optimal aircraft trajectories are computed using global air traffic schedules. The run time and wait time on the supercomputer for trajectory optimization, using numbers of CPUs ranging from 80 to 10,240 units, are compared with the total computational time for running the same computation on a single desktop computer and on multiple commercially available computers, to assess the potential computational enhancement through parallel processing on computer clusters. This study also re-implements the trajectory optimization algorithm to further reduce computational time through algorithm modifications, and integrates it with FACET to facilitate the use of the new features, which calculate time-optimal routes between worldwide airport pairs in a wind field for use with existing FACET applications. The implementations of the trajectory optimization algorithms use the MATLAB, Python, and Java programming languages. The performance evaluations are done by comparing their computational efficiencies and are based on the potential application of optimized trajectories. The paper shows that, in the absence of special privileges on a supercomputer, a cluster of commercially available computers provides a feasible approach for national and global air traffic system studies.
NASA Astrophysics Data System (ADS)
Moortgat, Joachim
2018-04-01
This work presents an efficient reservoir simulation framework for multicomponent, multiphase, compressible flow, based on the cubic-plus-association (CPA) equation of state (EOS). CPA is an accurate EOS for mixtures that contain non-polar hydrocarbons, self-associating polar water, and cross-associating molecules like methane, ethane, unsaturated hydrocarbons, CO2, and H2S. While CPA is accurate, its mathematical formulation is highly non-linear, resulting in excessive computational costs that have made the EOS infeasible for large-scale reservoir simulations. This work presents algorithms that overcome these bottlenecks and achieve an efficiency comparable to the much simpler cubic EOS approach. The main applications that require such accurate phase behavior modeling are (1) the study of methane leakage from high-pressure production wells and its potential impact on groundwater resources, (2) modeling of geological CO2 sequestration in brine aquifers when one is interested in more than the CO2 and H2O components, e.g. methane, other light hydrocarbons, and various tracers, and (3) enhanced oil recovery by CO2 injection in reservoirs that have previously been waterflooded or contain connate water. We present numerical examples of all those scenarios, extensive validation of the CPA EOS with experimental data, and analyses of the efficiency of our proposed numerical schemes. The accuracy, efficiency, and robustness of the presented phase split computations pave the way to more widespread adoption of CPA in reservoir simulators.
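For reference, a commonly cited pressure-explicit form of the CPA EOS (the simplified CPA of Kontogeorgis and co-workers, reproduced here from the general literature rather than from this paper) combines an SRK cubic term with a Wertheim association term:

```latex
P = \frac{RT}{V_m - b} - \frac{a(T)}{V_m\,(V_m + b)}
    - \frac{1}{2}\,\frac{RT}{V_m}
      \left(1 + \rho\,\frac{\partial \ln g}{\partial \rho}\right)
      \sum_i x_i \sum_{A_i} \left(1 - X_{A_i}\right)
```

where X_{A_i} is the fraction of molecules of component i not bonded at association site A_i and g is the contact value of the radial distribution function; the implicit equations for the X_{A_i} are the main source of the non-linearity discussed above.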
An online semi-supervised brain-computer interface.
Gu, Zhenghui; Yu, Zhuliang; Shen, Zhifang; Li, Yuanqing
2013-09-01
Practical brain-computer interface (BCI) systems should require only a low training effort from the user, and the algorithms used to classify the user's intent should be computationally efficient. However, due to inter- and intra-subject variations in EEG signals, intermittent training/calibration is often unavoidable. In this paper, we present an online semi-supervised P300 BCI speller system. After a short initial training (around or less than 1 min in our experiments), the system is switched to a mode where the user can input characters through selective attention. In this mode, a self-training least squares support vector machine (LS-SVM) classifier is updated in the back end with the unlabeled EEG data collected online after every character input. In this way, the classifier is gradually enhanced. Even though the user may experience some input errors at the beginning due to the small initial training dataset, the accuracy approaches that of the fully supervised method within a few minutes. The algorithm based on the LS-SVM and its sequential update has low computational complexity; thus, it is suitable for online applications. The effectiveness of the algorithm has been validated through data analysis on BCI Competition III dataset II (P300 speller BCI data). The performance of the online system was evaluated through experimental results on eight healthy subjects, all of whom achieved a spelling accuracy of 85% or above within an average online semi-supervised learning time of around 3 min.
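A minimal sketch of the LS-SVM classifier at the core of such a system: training reduces to a single linear solve (the Suykens formulation), so a self-training scheme can cheaply append pseudo-labeled trials and re-solve after every character. The kernel width, regularization gamma, and random data are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def rbf(A, B, s=1.0):
    """Gaussian kernel matrix between row-vector sets A and B."""
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d / (2 * s * s))

def lssvm_fit(X, y, gamma=10.0):
    """Solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                     # bias b, dual weights alpha

def lssvm_predict(X, alpha, b, Xnew):
    return rbf(Xnew, X) @ alpha + b            # sign(...) gives the class

X = np.random.randn(40, 8)                    # 40 EEG feature vectors (toy)
y = np.sign(np.random.randn(40))              # P300 / non-P300 labels (+/-1)
b, alpha = lssvm_fit(X, y)
print(np.sign(lssvm_predict(X, alpha, b, X[:5])))
```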
Quantum-Enhanced Cyber Security: Experimental Computation on Quantum-Encrypted Data
2017-03-02
AFRL-AFOSR-UK-TR-2017-0020. Final report (15 Oct 2015 to 31 Dec 2016) for grant FA9550-1-6-1-0004, "Quantum-Enhanced Cyber Security: Experimental Computation on Quantum-Encrypted Data," Philip Walther, Universität Wien.
Protein Hydration Thermodynamics: The Influence of Flexibility and Salt on Hydrophobin II Hydration.
Remsing, Richard C; Xi, Erte; Patel, Amish J
2018-04-05
The solubility of proteins and other macromolecular solutes plays an important role in numerous biological, chemical, and medicinal processes. An important determinant of protein solubility is the solvation free energy of the protein, which quantifies the overall strength of the interactions between the protein and the aqueous solution that surrounds it. Here we present an all-atom explicit-solvent computational framework for the rapid estimation of protein solvation free energies. Using this framework, we estimate the hydration free energy of hydrophobin II, an amphiphilic fungal protein, in a computationally efficient manner. We further explore how the protein hydration free energy is influenced by enhancing flexibility and by the addition of sodium chloride, and find that it increases in both cases, making protein hydration less favorable.
Study on validation method for femur finite element model under multiple loading conditions
NASA Astrophysics Data System (ADS)
Guan, Fengjiao; Zhang, Guanjun; Liu, Jie; Wang, Shujing; Luo, Xu
2018-03-01
Acquisition of accurate and reliable constitutive parameters for bio-tissue materials is beneficial for improving the biological fidelity of a finite element (FE) model and predicting impact damage more effectively. In this paper, a femur FE model was established under multiple loading conditions with diverse impact positions. Then, based on the sequential response surface method and genetic algorithms, the identification of material parameters was transformed into a multi-response optimization problem. Finally, the simulation results successfully coincided with force-displacement curves obtained from numerous experiments. Thus, the computational accuracy and efficiency of the entire inverse calculation process were enhanced. This method effectively reduces the computation time of the inverse identification of material parameters.
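A compact sketch of the inverse-identification idea above: choose material parameters so a simulated force-displacement curve matches the measured one. Here scipy's differential evolution stands in for the genetic algorithm, and the closed-form `simulate` function is a toy surrogate for the femur FE model; all parameter names and values are hypothetical.

```python
import numpy as np
from scipy.optimize import differential_evolution

disp = np.linspace(0.0, 5.0, 20)                   # displacement (mm)
measured = 1200.0 * (1 - np.exp(-0.6 * disp))      # toy "experimental" force (N)

def simulate(params, d):
    stiffness, saturation = params                 # hypothetical material parameters
    return stiffness * (1 - np.exp(-saturation * d))

def multi_response_cost(params):
    # Sum of squared residuals; with several load cases this would sum the
    # residuals over all loading conditions (the "multi-response" part).
    return np.sum((simulate(params, disp) - measured) ** 2)

result = differential_evolution(multi_response_cost,
                                bounds=[(100.0, 5000.0), (0.01, 2.0)])
print(result.x)    # recovered parameters, approximately (1200, 0.6)
```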
Enhanced Sampling in the Well-Tempered Ensemble
NASA Astrophysics Data System (ADS)
Bonomi, M.; Parrinello, M.
2010-05-01
We introduce the well-tempered ensemble (WTE) which is the biased ensemble sampled by well-tempered metadynamics when the energy is used as collective variable. WTE can be designed so as to have approximately the same average energy as the canonical ensemble but much larger fluctuations. These two properties lead to an extremely fast exploration of phase space. An even greater efficiency is obtained when WTE is combined with parallel tempering. Unbiased Boltzmann averages are computed on the fly by a recently developed reweighting method [M. Bonomi, J. Comput. Chem. 30, 1615 (2009)]. We apply WTE and its parallel tempering variant to the 2d Ising model and to a Gō model of HIV protease, demonstrating in these two representative cases that convergence is accelerated by orders of magnitude.
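A toy illustration of the WTE construction, assuming nothing beyond the abstract: a 1D particle in a double well is sampled by Metropolis Monte Carlo while a well-tempered bias is deposited on the potential energy used as the collective variable; the Gaussian height decays as exp(-V/ΔT), which is the well-tempered rule. All constants, the grid, and the deposition stride are illustrative assumptions (kB = 1 units).

```python
import numpy as np

U = lambda x: (x**2 - 1.0)**2            # double-well potential
grid = np.linspace(0.0, 5.0, 500)        # grid over the energy CV (covers this toy's range)
bias = np.zeros_like(grid)
w0, sigma, dT, T = 0.05, 0.1, 5.0, 0.3   # initial height, width, Delta T, temperature

def V(e):                                 # interpolate the bias at energy e
    return np.interp(e, grid, bias)

rng = np.random.default_rng(1)
x = 0.0
for step in range(20000):
    # Metropolis move on the biased potential U(x) + V(U(x))
    xn = x + rng.normal(scale=0.2)
    dE = (U(xn) + V(U(xn))) - (U(x) + V(U(x)))
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        x = xn
    if step % 50 == 0:                    # deposit a Gaussian on the energy CV;
        e = U(x)                          # the height decays as exp(-V/dT):
        h = w0 * np.exp(-V(e) / dT)       # this is the well-tempered rule
        bias += h * np.exp(-(grid - e)**2 / (2 * sigma**2))
```

Because the bias broadens the energy distribution without shifting its mean much, the biased walker crosses the barrier far more often than an unbiased one at the same temperature.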
Opportunities and choice in a new vector era
NASA Astrophysics Data System (ADS)
Nowak, A.
2014-06-01
This work discusses the significant changes in computing landscape related to the progression of Moore's Law, and the implications on scientific computing. Particular attention is devoted to the High Energy Physics domain (HEP), which has always made good use of threading, but levels of parallelism closer to the hardware were often left underutilized. Findings of the CERN openlab Platform Competence Center are reported in the context of expanding "performance dimensions", and especially the resurgence of vectors. These suggest that data oriented designs are feasible in HEP and have considerable potential for performance improvements on multiple levels, but will rarely trump algorithmic enhancements. Finally, an analysis of upcoming hardware and software technologies identifies heterogeneity as a major challenge for software, which will require more emphasis on scalable, efficient design.
Zhang, Wei; Ding, Dong-Sheng; Dong, Ming-Xin; Shi, Shuai; Wang, Kai; Liu, Shi-Long; Li, Yan; Zhou, Zhi-Yuan; Shi, Bao-Sen; Guo, Guang-Can
2016-11-14
Entanglement in multiple degrees of freedom has many benefits over entanglement in a single one. The former enables quantum communication with higher channel capacity and more efficient quantum information processing and is compatible with diverse quantum networks. Establishing multi-degree-of-freedom entangled memories is not only vital for high-capacity quantum communication and computing, but also promising for enhanced violations of nonlocality in quantum systems. However, there have as yet been no reports of the experimental realization of multi-degree-of-freedom entangled memories. Here we experimentally established hyper- and hybrid entanglement in multiple degrees of freedom, including path (K-vector) and orbital angular momentum, between two separated atomic ensembles by using quantum storage. The results are promising for achieving quantum communication and computing with many degrees of freedom.
Most recent common ancestor probability distributions in gene genealogies under selection.
Slade, P F
2000-12-01
A computational study is made of the conditional probability distribution for the allelic type of the most recent common ancestor in genealogies of samples of n genes drawn from a population under selection, given the initial sample configuration. Comparisons with the corresponding unconditional cases are presented. Such unconditional distributions differ from samples drawn from the unique stationary distribution of population allelic frequencies, known as Wright's formula, and these differences are quantified. Biallelic haploid and diploid models are considered. A simplified structure for the ancestral selection graph of S. M. Krone and C. Neuhauser (1997, Theor. Popul. Biol. 51, 210-237) is enhanced further, reducing the effective branching rate in the graph. This improves the efficiency of this nonneutral analogue of the coalescent for use with computational likelihood-inference techniques.
Experimental Realization of High-Efficiency Counterfactual Computation.
Kong, Fei; Ju, Chenyong; Huang, Pu; Wang, Pengfei; Kong, Xi; Shi, Fazhan; Jiang, Liang; Du, Jiangfeng
2015-08-21
Counterfactual computation (CFC) exemplifies the fascinating quantum process by which the result of a computation may be learned without actually running the computer. In previous experimental studies, the counterfactual efficiency was limited to below 50%. Here we report an experimental realization of the generalized CFC protocol, in which the counterfactual efficiency can break the 50% limit and even approach unity in principle. The experiment is performed with the spins of a negatively charged nitrogen-vacancy color center in diamond. Taking advantage of the quantum Zeno effect, the computer can remain in the not-running subspace due to the frequent projection by the environment, while the computation result can be revealed by the final detection. A counterfactual efficiency of up to 85% has been demonstrated in our experiment, which opens the possibility of many exciting applications of CFC, such as high-efficiency quantum interrogation and imaging.
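The quantum Zeno mechanism invoked above can be illustrated numerically with a textbook two-level toy model (not the authors' NV-center protocol): the state is rotated toward the "running" subspace in N small steps, each followed by a projection back onto the "not running" subspace, and the survival probability cos²(π/2N)·... repeated N times approaches 1 as N grows.

```python
import numpy as np

# Survival probability of the "not running" state after N interrogation
# steps, each a rotation by pi/(2N) followed by a projective measurement.
for N in (1, 5, 25, 125, 625):
    theta = np.pi / (2 * N)
    p_survive = np.cos(theta) ** (2 * N)
    print(f"N = {N:4d}   P(stay in not-running subspace) = {p_survive:.4f}")
```

As N increases the survival probability tends to unity, which is the sense in which frequent projection can keep the computer "not running" while still letting the result leak out through the final detection.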
Technology in the teaching of neuroscience: enhanced student learning.
Griffin, John D
2003-12-01
The primary motivation for integrating any form of education technology into a particular course or curriculum should always be to enhance student learning. However, it can be difficult to determine which technologies will be the most appropriate and effective teaching tools. Through the alignment of technology-enhanced learning experiences with a clear set of learning objectives, teaching becomes more efficient and effective and learning is truly enhanced. In this article, I describe how I have made extensive use of technology in two neuroscience courses that differ in structure and content. Course websites function as resource centers and provide a forum for student interaction. PowerPoint presentations enhance formal lectures and provide an organized outline of presented material. Some lectures are also supplemented with interactive CD-ROMs, used in the presentation of difficult physiological concepts. In addition, a computer-based physiological recording system is used in laboratory sessions, improving the hands-on experience of group learning while reinforcing the concepts of the research method. Although technology can provide powerful teaching tools, the enhancement of the learning environment is still dependent on the instructor. It is the skill and enthusiasm of the instructor that determines whether technology will be used effectively.
Chaotic dynamics in accelerator physics
NASA Astrophysics Data System (ADS)
Cary, J. R.
1992-11-01
Substantial progress was made in several areas of accelerator dynamics. We have completed a design of an FEL wiggler with adiabatic trapping and detrapping sections to develop an understanding of longitudinal adiabatic dynamics and to create efficiency enhancements for recirculating free-electron lasers. We developed a computer code for analyzing the critical KAM tori that bound the dynamic aperture in circular machines. Studies of modes that arise due to the interaction of coasting beams with a narrow-spectrum impedance have begun. During this research, educational and research ties with the accelerator community at large have been strengthened.
Software For Fault-Tree Diagnosis Of A System
NASA Technical Reports Server (NTRS)
Iverson, Dave; Patterson-Hine, Ann; Liao, Jack
1993-01-01
Fault Tree Diagnosis System (FTDS) computer program is automated-diagnostic-system program identifying likely causes of specified failure on basis of information represented in system-reliability mathematical models known as fault trees. Is modified implementation of failure-cause-identification phase of Narayanan's and Viswanadham's methodology for acquisition of knowledge and reasoning in analyzing failures of systems. Knowledge base of if/then rules replaced with object-oriented fault-tree representation. Enhancement yields more-efficient identification of causes of failures and enables dynamic updating of knowledge base. Written in C language, C++, and Common LISP.
Substructured multibody molecular dynamics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grest, Gary Stephen; Stevens, Mark Jackson; Plimpton, Steven James
2006-11-01
We have enhanced our parallel molecular dynamics (MD) simulation software LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator, lammps.sandia.gov) to include many new features for accelerated simulation including articulated rigid body dynamics via coupling to the Rensselaer Polytechnic Institute code POEMS (Parallelizable Open-source Efficient Multibody Software). We use new features of the LAMMPS software package to investigate rhodopsin photoisomerization, and water model surface tension and capillary waves at the vapor-liquid interface. Finally, we motivate the recipes of MD for practitioners and researchers in numerical analysis and computational mechanics.
An object oriented Python interface for atomistic simulations
NASA Astrophysics Data System (ADS)
Hynninen, T.; Himanen, L.; Parkkinen, V.; Musso, T.; Corander, J.; Foster, A. S.
2016-01-01
Programmable simulation environments allow one to monitor and control calculations efficiently and automatically before, during, and after runtime. Environments directly accessible in a programming environment can be interfaced with powerful external analysis tools and extensions to enhance the functionality of the core program, and by incorporating a flexible object based structure, the environments make building and analysing computational setups intuitive. In this work, we present a classical atomistic force field with an interface written in Python language. The program is an extension for an existing object based atomistic simulation environment.
A multistage time-stepping scheme for the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Swanson, R. C.; Turkel, E.
1985-01-01
A class of explicit multistage time-stepping schemes is used to construct an algorithm for solving the compressible Navier-Stokes equations. Flexibility in treating arbitrary geometries is obtained with a finite-volume formulation. Numerical efficiency is achieved by employing techniques for accelerating convergence to steady state. Computer processing is enhanced through vectorization of the algorithm. The scheme is evaluated by solving laminar and turbulent flows over a flat plate and an NACA 0012 airfoil. Numerical results are compared with theoretical solutions or other numerical solutions and/or experimental data.
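A sketch of an explicit m-stage time-stepping scheme of the type described above, applied to a toy 1D linear advection problem on a periodic grid; the stage coefficients are the classic Jameson-style values for a four-stage scheme, and the grid, upwind residual, and CFL setting are illustrative assumptions rather than the paper's Navier-Stokes discretization.

```python
import numpy as np

def residual(u, dx, a=1.0):
    # First-order upwind residual R(u) = -a du/dx for a > 0, periodic grid
    return -a * (u - np.roll(u, 1)) / dx

def multistage_step(u, dt, dx, alphas=(0.25, 1/3, 0.5, 1.0)):
    # Explicit m-stage scheme: u^(k) = u^n + alpha_k * dt * R(u^(k-1))
    u0, uk = u, u
    for a_k in alphas:
        uk = u0 + a_k * dt * residual(uk, dx)
    return uk

nx = 200
dx = 1.0 / nx
u = np.exp(-200 * (np.linspace(0, 1, nx, endpoint=False) - 0.3) ** 2)
dt = 0.5 * dx                      # CFL-limited time step
for _ in range(100):
    u = multistage_step(u, dt, dx)
```

Each stage reuses the same residual routine, which is what makes this family of schemes easy to vectorize: the cost per step is m residual evaluations applied uniformly across the grid.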
Lin, Ying-Tsong; Collis, Jon M; Duda, Timothy F
2012-11-01
An alternating direction implicit (ADI) three-dimensional fluid parabolic equation solution method with enhanced accuracy is presented. The method uses a square-root Helmholtz operator splitting algorithm that retains cross-multiplied operator terms that have been previously neglected. With these higher-order cross terms, the valid angular range of the parabolic equation solution is improved. The method is tested for accuracy against an image solution in an idealized wedge problem. Computational efficiency improvements resulting from the ADI discretization are also discussed.
NASA Astrophysics Data System (ADS)
Wu, Lianglong; Fu, Xiquan; Guo, Xing
2013-03-01
In this paper, we propose a modified adaptive algorithm (MAA) for dealing with high chirp, in order to efficiently simulate the propagation of chirped pulses along an optical fiber for propagation distances shorter than the "temporal focal length". The basis of the MAA is that the chirp term of the initial pulse is treated as the rapidly varying part, in the spirit of the slowly varying envelope approximation (SVEA). Numerical simulations validate the performance of the MAA and show that the proposed method can decrease the number of sampling points by orders of magnitude. In addition, the computational efficiency of the MAA relative to the time-domain beam propagation method (BPM) improves as the chirp of the initial pulse increases.
Mining Quality Phrases from Massive Text Corpora
Liu, Jialu; Shang, Jingbo; Wang, Chi; Ren, Xiang; Han, Jiawei
2015-01-01
Text data are ubiquitous and play an essential role in big data applications. However, text data are mostly unstructured. Transforming unstructured text into structured units (e.g., semantically meaningful phrases) will substantially reduce semantic ambiguity and enhance the power and efficiency at manipulating such data using database technology. Thus mining quality phrases is a critical research problem in the field of databases. In this paper, we propose a new framework that extracts quality phrases from text corpora integrated with phrasal segmentation. The framework requires only limited training but the quality of phrases so generated is close to human judgment. Moreover, the method is scalable: both computation time and required space grow linearly as corpus size increases. Our experiments on large text corpora demonstrate the quality and efficiency of the new method.
Accounting for quality in the measurement of hospital performance: evidence from Costa Rica.
Arocena, Pablo; García-Prado, Ariadna
2007-07-01
This paper provides insights into how Costa Rican public hospitals responded to the pressure for increased efficiency and quality introduced by the reforms carried out over the period 1997-2001. To that purpose we compute a generalized output distance function by means of non-parametric mathematical programming to construct a productivity index, which accounts for productivity changes while controlling for quality of care. Our results show an improvement in hospital performance mainly driven by quality increases. The adoption of management contracts seems to have contributed to such enhancement, more notably for small hospitals. Further, productivity growth is primarily due to technical and scale efficiency change rather than technological change. A number of policy implications are drawn from these results.
McCarty, James; Parrinello, Michele
2017-11-28
In this paper, we combine two powerful computational techniques, well-tempered metadynamics and time-lagged independent component analysis. The aim is to develop a new tool for studying rare events and exploring complex free energy landscapes. Metadynamics is a well-established and widely used enhanced sampling method whose efficiency depends on an appropriate choice of collective variables. Often the initial choice is not optimal leading to slow convergence. However by analyzing the dynamics generated in one such run with a time-lagged independent component analysis and the techniques recently developed in the area of conformational dynamics, we obtain much more efficient collective variables that are also better capable of illuminating the physics of the system. We demonstrate the power of this approach in two paradigmatic examples.
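A compact sketch of the time-lagged independent component analysis step described above: from a trajectory of candidate collective variables, build the instantaneous and time-lagged covariance matrices and solve the generalized eigenvalue problem; the leading eigenvectors define slower, better collective variables. The toy AR(1) trajectory and lag are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def tica(X, lag=10):
    # X: (T, d) trajectory of candidate collective variables
    X = X - X.mean(axis=0)
    A, B = X[:-lag], X[lag:]
    C0 = (A.T @ A) / len(A)                    # instantaneous covariance
    Ct = (A.T @ B) / len(A)                    # time-lagged covariance
    Ct = 0.5 * (Ct + Ct.T)                     # symmetrize
    evals, evecs = eigh(Ct, C0)                # generalized eigenproblem Ct v = l C0 v
    order = np.argsort(evals)[::-1]            # slowest modes have largest eigenvalues
    return evals[order], evecs[:, order]

# Toy trajectory: a slow AR(1) process hidden among fast noise dimensions
rng = np.random.default_rng(2)
T = 5000
slow = np.zeros(T)
for t in range(1, T):
    slow[t] = 0.99 * slow[t - 1] + rng.normal()
X = np.column_stack([slow, rng.normal(size=(T, 3))])
evals, evecs = tica(X)
print("leading TICA eigenvalue:", evals[0])    # ~0.99**lag for the slow mode
```

The eigenvector attached to the leading eigenvalue recovers the slow direction, which is the sense in which TICA can suggest more efficient metadynamics collective variables after an initial, suboptimal run.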
Prediction-based Dynamic Energy Management in Wireless Sensor Networks
Wang, Xue; Ma, Jun-Jie; Wang, Sheng; Bi, Dao-Wei
2007-01-01
Energy consumption is a critical constraint in wireless sensor networks. Focusing on the energy efficiency problem of wireless sensor networks, this paper proposes a method of prediction-based dynamic energy management. A particle filter was introduced to predict a target state, which was adopted to awaken wireless sensor nodes so that their sleep time was prolonged. With the distributed computing capability of nodes, an optimization approach of distributed genetic algorithm and simulated annealing was proposed to minimize the energy consumption of measurement. Considering the application of target tracking, we implemented target position prediction, node sleep scheduling and optimal sensing node selection. Moreover, a routing scheme of forwarding nodes was presented to achieve extra energy conservation. Experimental results of target tracking verified that energy-efficiency is enhanced by prediction-based dynamic energy management.
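A minimal bootstrap particle filter of the kind invoked above for target-state prediction: propagate particles through a motion model, weight them by the measurement likelihood, resample, and use the predicted state to decide which nodes to wake. The 1D constant-velocity model, noise levels, and measurement sequence are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_p = 500
particles = rng.normal(0.0, 1.0, size=(n_p, 2))   # state: [position, velocity]
weights = np.ones(n_p) / n_p

def pf_step(particles, weights, z, dt=1.0, q=0.1, r=0.5):
    # Predict: constant-velocity motion model with process noise q
    particles[:, 0] += particles[:, 1] * dt
    particles += rng.normal(0.0, q, particles.shape)
    # Update: weight by Gaussian likelihood of position measurement z
    weights = weights * np.exp(-0.5 * ((z - particles[:, 0]) / r) ** 2)
    weights /= weights.sum()
    # Resample (systematic resampling keeps the particle count fixed)
    idx = np.searchsorted(np.cumsum(weights),
                          (rng.random() + np.arange(len(weights))) / len(weights))
    return particles[idx], np.ones(len(weights)) / len(weights)

for z in [0.9, 2.1, 2.8, 4.2]:                     # incoming position measurements
    particles, weights = pf_step(particles, weights, z)
    predicted = particles[:, 0].mean() + particles[:, 1].mean()  # one-step-ahead position
    # A node would be awakened only if `predicted` falls inside its sensing range,
    # which is how prediction prolongs the sleep time of the remaining nodes.
```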
Fuel-air mixing and distribution in a direct-injection stratified-charge rotary engine
NASA Technical Reports Server (NTRS)
Abraham, J.; Bracco, F. V.
1989-01-01
A three-dimensional model for flows and combustion in reciprocating and rotary engines is applied to a direct-injection stratified-charge rotary engine to identify the main parameters that control its burning rate. It is concluded that the orientation of the six sprays of the main injector with respect to the air stream is important to enhance vaporization and the production of flammable mixture. In particular, no spray should be in the wake of any other spray. It was predicted that if such a condition is respected, the indicated efficiency would increase by some 6 percent at higher loads and 2 percent at lower loads. The computations led to the design of a new injector tip that has since yielded slightly better efficiency gains than predicted.
Bioadsorber efficiency, design, and performance forecasting for alachlor removal.
Badriyha, Badri N; Ravindran, Varadarajan; Den, Walter; Pirbazari, Massoud
2003-10-01
This study discusses a mathematical modeling and design protocol for bioactive granular activated carbon (GAC) adsorbers employed for purification of drinking water contaminated by chlorinated pesticides, exemplified by alachlor. A thin biofilm model is discussed that incorporates the following phenomenological aspects: film transfer from the bulk fluid to the adsorbent particles, diffusion through the biofilm immobilized on adsorbent surface, adsorption of the contaminant into the adsorbent particle. The modeling approach involved independent laboratory-scale experiments to determine the model input parameters. These experiments included adsorption isotherm studies, adsorption rate studies, and biokinetic studies. Bioactive expanded-bed adsorber experiments were conducted to obtain realistic experimental data for determining the ability of the model for predicting adsorber dynamics under different operating conditions. The model equations were solved using a computationally efficient hybrid numerical technique combining orthogonal collocation and finite difference methods. The model provided accurate predictions of adsorber dynamics for bioactive and non-bioactive scenarios. Sensitivity analyses demonstrated the significance of various model parameters, and focussed on enhancement in certain key parameters to improve the overall process efficiency. Scale-up simulation studies for bioactive and non-bioactive adsorbers provided comparisons between their performances, and illustrated the advantages of bioregeneration for enhancing their effective service life spans. Isolation of microbial species revealed that fungal strains were more efficient than bacterial strains in metabolizing alachlor. Microbial degradation pathways for alachlor were proposed and confirmed by the detection of biotransformation metabolites and byproducts using gas chromatography/mass spectrometry.
High quantum efficiency photocathode simulation for the investigation of novel structured designs
MacPhee, A. G.; Nagel, S. R.; Bell, P. M.; ...
2014-09-02
A computer model in CST Studio Suite has been developed to evaluate several novel geometrically enhanced photocathode designs. This work was aimed at identifying a structure that would increase the total electron yield by a factor of two or greater in the 1–30 keV range. The modeling software was used to simulate the electric field and generate particle tracking for several potential structures. The final photocathode structure has been tailored to meet a set of detector performance requirements, namely, a spatial resolution of <40 μm and a temporal spread of 1–10 ps. As a result, we present the details of the geometrically enhanced photocathode model and the resulting static field and electron emission characteristics.
NASA Technical Reports Server (NTRS)
1994-01-01
Charge Coupled Devices (CCDs) are high technology silicon chips that convert light directly into electronic or digital images, which can be manipulated or enhanced by computers. When Goddard Space Flight Center (GSFC) scientists realized that existing CCD technology could not meet scientific requirements for the Hubble Space Telescope Imaging Spectrograph, GSFC contracted with Scientific Imaging Technologies, Inc. (SITe) to develop an advanced CCD. SITe then applied many of the NASA-driven enhancements to the manufacture of CCDs for digital mammography. The resulting device images breast tissue more clearly and efficiently. The LORAD Stereo Guide Breast Biopsy system incorporates SITe's CCD as part of a digital camera system that is replacing surgical biopsy in many cases. Known as stereotactic needle biopsy, it is performed under local anesthesia with a needle and saves women time, pain, scarring, radiation exposure and money.
The emergence of spatial cyberinfrastructure.
Wright, Dawn J; Wang, Shaowen
2011-04-05
Cyberinfrastructure integrates advanced computer, information, and communication technologies to empower computation-based and data-driven scientific practice and improve the synthesis and analysis of scientific data in a collaborative and shared fashion. As such, it now represents a paradigm shift in scientific research that has facilitated easy access to computational utilities and streamlined collaboration across distance and disciplines, thereby enabling scientific breakthroughs to be reached more quickly and efficiently. Spatial cyberinfrastructure seeks to resolve longstanding complex problems of handling and analyzing massive and heterogeneous spatial datasets as well as the necessity and benefits of sharing spatial data flexibly and securely. This article provides an overview and potential future directions of spatial cyberinfrastructure. The remaining four articles of the special feature are introduced and situated in the context of providing empirical examples of how spatial cyberinfrastructure is extending and enhancing scientific practice for improved synthesis and analysis of both physical and social science data. The primary focus of the articles is spatial analyses using distributed and high-performance computing, sensor networks, and other advanced information technology capabilities to transform massive spatial datasets into insights and knowledge.
Addressing the challenges of standalone multi-core simulations in molecular dynamics
NASA Astrophysics Data System (ADS)
Ocaya, R. O.; Terblans, J. J.
2017-07-01
Computational modelling in material science involves mathematical abstractions of force fields between particles with the aim to postulate, develop and understand materials by simulation. The aggregated pairwise interactions of the material's particles lead to a deduction of its macroscopic behaviours. For practically meaningful macroscopic scales, a large amount of data is generated, leading to vast execution times. Simulation times of hours, days or weeks for moderately sized problems are not uncommon. The reduction of simulation times, improved result accuracy and the associated software and hardware engineering challenges are the main motivations for much of the ongoing research in the computational sciences. This contribution is concerned mainly with simulations that can be done on a "standalone" computer using Message Passing Interface (MPI) parallel code running on hardware platforms with wide specifications, such as single/multi-processor, multi-core machines with minimal reconfiguration for upward scaling of computational power. The widely available, documented and standardized MPI library provides this functionality through the MPI_Comm_size(), MPI_Comm_rank() and MPI_Reduce() functions. A survey of the literature shows that relatively little has been written on the efficient extraction of the inherent computational power in a cluster. In this work, we discuss the main avenues available to tap into this extra power without compromising computational accuracy. We also present methods to overcome the high inertia encountered in single-node-based computational molecular dynamics. We begin by surveying the current state of the art and discuss what it takes to achieve parallelism, efficiency and enhanced computational accuracy through program threads and message passing interfaces. Several code illustrations are given. The pros and cons of writing raw code as opposed to using heuristic, third-party code are also discussed. The growing trend towards graphical processing units and virtual computing clouds for high-performance computing is also discussed. Finally, we present the comparative results of vacancy formation energy calculations using our own parallelized standalone code called Verlet-Stormer velocity (VSV), operating on 30,000 copper atoms. The code is based on the Sutton-Chen implementation of the Finnis-Sinclair pairwise embedded atom potential. A link to the code is also given.
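As a concrete counterpart to the MPI functions named above, here is a minimal mpi4py sketch (assuming mpi4py and an MPI runtime are installed) that splits a pairwise-energy loop across ranks and combines the partial sums with a reduction; the Lennard-Jones form below is an illustrative stand-in for the Sutton-Chen potential used in the VSV code.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
size = comm.Get_size()              # MPI_Comm_size
rank = comm.Get_rank()              # MPI_Comm_rank

rng = np.random.default_rng(42)     # same seed => same atoms on every rank
pos = rng.uniform(0.0, 10.0, size=(1000, 3))

def pair_energy(r2):
    # Lennard-Jones stand-in for the embedded-atom potential (illustrative)
    inv6 = 1.0 / r2 ** 3
    return 4.0 * (inv6 ** 2 - inv6)

# Each rank handles a strided slice of the outer i-loop of the pairwise sum
local = 0.0
for i in range(rank, len(pos) - 1, size):
    d = pos[i + 1:] - pos[i]
    local += pair_energy((d * d).sum(axis=1)).sum()

total = comm.reduce(local, op=MPI.SUM, root=0)   # MPI_Reduce
if rank == 0:
    print("total potential energy:", total)
```

Run with, for example, `mpirun -np 4 python pair_energy.py`; the strided decomposition keeps the per-rank work roughly balanced even though later rows of the pair loop are shorter.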
A computable expression of closure to efficient causation.
Mossio, Matteo; Longo, Giuseppe; Stewart, John
2009-04-07
In this paper, we propose a mathematical expression of closure to efficient causation in terms of lambda-calculus; we argue that this opens up the perspective of developing principled computer simulations of systems closed to efficient causation in an appropriate programming language. An important implication of our formulation is that, by exhibiting an expression in lambda-calculus, which is a paradigmatic formalism for computability and programming, we show that there are no conceptual or principled problems in realizing a computer simulation or model of closure to efficient causation. We conclude with a brief discussion of the question whether closure to efficient causation captures all relevant properties of living systems. We suggest that it might not be the case, and that more complex definitions could indeed create some crucial obstacles to computability.
Numerical Simulation of Flow Through an Artificial Heart
NASA Technical Reports Server (NTRS)
Rogers, Stuart E.; Kutler, Paul; Kwak, Dochan; Kiris, Cetin
1989-01-01
A solution procedure was developed that solves the unsteady, incompressible Navier-Stokes equations, and was used to numerically simulate viscous incompressible flow through a model of the Pennsylvania State artificial heart. The solution algorithm is based on the artificial compressibility method, and uses flux-difference splitting to upwind the convective terms; a line-relaxation scheme is used to solve the equations. The time-accuracy of the method is obtained by iteratively solving the equations at each physical time step. The artificial heart geometry involves a piston-type action with a moving solid wall. A single H-grid is fit inside the heart chamber. The grid is continuously compressed and expanded with a constant number of grid points to accommodate the moving piston. The computational domain ends at the valve openings where nonreflective boundary conditions based on the method of characteristics are applied. Although a number of simplifying assumptions were made regarding the geometry, the computational results agreed reasonably well with an experimental picture. The computer time requirements for this flow simulation, however, are quite extensive. Computational study of this type of geometry would benefit greatly from improvements in computer hardware speed and algorithm efficiency enhancements.
Mpeg2 codec HD improvements with medical and robotic imaging benefits
NASA Astrophysics Data System (ADS)
Picard, Wayne F. J.
2010-02-01
In this report, we propose an efficient scheme to use High Definition Television (HDTV) in a console or notebook format as a computer terminal in addition to its role as a TV display unit. In the proposed scheme, we assume that the main computer is situated at a remote location. The computer raster in the remote server is compressed using an HD E->Mpeg2 encoder and transmitted to the terminal at home. The built-in E->Mpeg2 decoder in the terminal decompresses the compressed bit stream, and displays the raster. The terminal will be fitted with a mouse and keyboard, through which the interaction with the remote computer server can be performed via a communications back channel. The terminal in a notebook format can thus be used as a high resolution computer and multimedia device. We will consider developments such as the required HD enhanced Mpeg2 resolution (E->Mpeg2) and its medical ramifications due to improvements in compressed image quality with 2D to 3D conversion (Mpeg3) and using the compressed Discrete Cosine Transform coefficients in the reality compression of vision and control of medical robotic surgeons.
Chaotic dynamics in nanoscale NbO2 Mott memristors for analogue computing
NASA Astrophysics Data System (ADS)
Kumar, Suhas; Strachan, John Paul; Williams, R. Stanley
2017-08-01
At present, machine learning systems use simplified neuron models that lack the rich nonlinear phenomena observed in biological systems, which display spatio-temporal cooperative dynamics. There is evidence that neurons operate in a regime called the edge of chaos that may be central to complexity, learning efficiency, adaptability and analogue (non-Boolean) computation in brains. Neural networks have exhibited enhanced computational complexity when operated at the edge of chaos, and networks of chaotic elements have been proposed for solving combinatorial or global optimization problems. Thus, a source of controllable chaotic behaviour that can be incorporated into a neural-inspired circuit may be an essential component of future computational systems. Such chaotic elements have been simulated using elaborate transistor circuits that simulate known equations of chaos, but an experimental realization of chaotic dynamics from a single scalable electronic device has been lacking. Here we describe niobium dioxide (NbO2) Mott memristors each less than 100 nanometres across that exhibit both a nonlinear-transport-driven current-controlled negative differential resistance and a Mott-transition-driven temperature-controlled negative differential resistance. Mott materials have a temperature-dependent metal-insulator transition that acts as an electronic switch, which introduces a history-dependent resistance into the device. We incorporate these memristors into a relaxation oscillator and observe a tunable range of periodic and chaotic self-oscillations. We show that the nonlinear current transport coupled with thermal fluctuations at the nanoscale generates chaotic oscillations. Such memristors could be useful in certain types of neural-inspired computation by introducing a pseudo-random signal that prevents global synchronization and could also assist in finding a global minimum during a constrained search. We specifically demonstrate that incorporating such memristors into the hardware of a Hopfield computing network can greatly improve the efficiency and accuracy of converging to a solution for computationally difficult problems.
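A toy illustration of the final claim above: a Hopfield network minimizing its energy by asynchronous updates, with a decaying pseudo-random perturbation standing in for the chaotic memristor signal that prevents the network from locking into a poor local minimum. Network size, stored patterns, and the annealing schedule are illustrative assumptions, not the authors' hardware.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 64
patterns = np.sign(rng.normal(size=(3, n)))          # stored memories
W = (patterns.T @ patterns) / n                      # Hebbian weights
np.fill_diagonal(W, 0.0)

def energy(s):
    return -0.5 * s @ W @ s

s = np.sign(rng.normal(size=n))                      # random start state
noise = 0.5                                          # initial perturbation amplitude
for sweep in range(60):
    for i in rng.permutation(n):                     # asynchronous updates
        h = W[i] @ s + noise * rng.normal()          # injected pseudo-random signal
        s[i] = 1.0 if h >= 0 else -1.0
    noise *= 0.9                                     # anneal the perturbation away
print("final energy:", energy(s))
print("overlap with best-matching memory:", np.abs(patterns @ s).max() / n)
```

With the perturbation annealed to zero, the dynamics reduce to the plain Hopfield descent; the early noisy sweeps play the role the abstract assigns to the chaotic signal, helping the search escape shallow minima before settling.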
Wei, Qichao; Zhao, Weilong; Yang, Yang; Cui, Beiliang; Xu, Zhijun; Yang, Xiaoning
2018-03-19
Considerable interest in characterizing protein/peptide-surface interactions has prompted extensive computational studies on calculations of adsorption free energy. However, in many cases, each individual study has focused on the application of free energy calculations to a specific system; therefore, it is difficult to combine the results into a general picture for choosing an appropriate strategy for the system of interest. Herein, three well-established computational algorithms are systematically compared and evaluated to compute the adsorption free energy of small molecules on two representative surfaces. The results clearly demonstrate that the characteristics of the studied interfacial systems have crucial effects on the accuracy and efficiency of the adsorption free energy calculations. For the hydrophobic surface, steered molecular dynamics exhibits the highest efficiency, which makes it a favorable method of choice for enhanced sampling simulations. However, for the charged surface, only the umbrella sampling method has the ability to accurately explore the adsorption free energy surface. The affinity of the water layer to the surface significantly affects the performance of free energy calculation methods, especially in the region close to the surface. Therefore, a general principle of how to discriminate between methodological and sampling issues based on the interfacial characteristics of the system under investigation is proposed.
Mollica, Luca; Conti, Gianluca; Pollegioni, Loredano; Cavalli, Andrea; Rosini, Elena
2015-10-26
The industrial production of higher-generation semisynthetic cephalosporins starts from 7-aminocephalosporanic acid (7-ACA), which is obtained by deacylation of the naturally occurring antibiotic cephalosporin C (CephC). The enzymatic process in which CephC is directly converted into 7-ACA by a cephalosporin C acylase has attracted industrial interest because of the prospects of simplifying the process and reducing costs. We recently enhanced the catalytic efficiency on CephC of a glutaryl acylase from Pseudomonas N176 (named VAC) by a protein engineering approach and solved the crystal structures of wild-type VAC and the H57βS-H70βS VAC double variant. In the present work, experimental measurements on several CephC derivatives and six VAC variants were carried out, and the binding of ligands into the VAC active site was investigated at an atomistic level by means of molecular docking and molecular dynamics simulations and analyzed on the basis of the molecular geometry of encounter complex formation and protein-ligand potential of mean force profiles. The observed significant correlation between the experimental data and estimated binding energies highlights the predictive power of our computational method to identify the ligand binding mode. The present experimental-computational study is well-suited both to provide deep insight into the reaction mechanism of cephalosporin C acylase and to improve the efficiency of the corresponding industrial process.
Accurate and efficient seismic data interpolation in the principal frequency wavenumber domain
NASA Astrophysics Data System (ADS)
Wang, Benfeng; Lu, Wenkai
2017-12-01
Seismic data irregularity caused by economic limitations, acquisition environmental constraints or bad-trace elimination can degrade the performance of multi-channel algorithms such as surface-related multiple elimination (SRME), though some algorithms can overcome the irregularity defects. Therefore, accurate interpolation to provide the necessary complete data is a pre-requisite, but its wide application is constrained by the large computational burden for huge data volumes, especially in 3D exploration. For accurate and efficient interpolation, the curvelet transform (CT) based projection onto convex sets (POCS) method in the principal frequency wavenumber (PFK) domain is introduced. The complex-valued PF components can characterize their original signal with high accuracy, but are at least half the size, which can help provide a reasonable efficiency improvement. The irregularity of the observed data is transformed into incoherent noise in the PFK domain, and curvelet coefficients may be sparser when the CT is performed on the PFK domain data, enhancing the interpolation accuracy. The performance of the POCS-based algorithms using complex-valued CT in the time space (TX), principal frequency space, and PFK domains is compared. Numerical examples on synthetic and field data demonstrate the validity and effectiveness of the proposed method. With less computational burden, the proposed method achieves a better interpolation result, and it can be easily extended to higher dimensions.
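A bare-bones POCS interpolation loop in the spirit of the method above, using a 2D FFT with hard thresholding in place of the curvelet transform and ordinary frequency rather than the principal-frequency construction; both substitutions, and the synthetic plane-wave test, are simplifications for illustration.

```python
import numpy as np

def pocs_interpolate(data, mask, n_iter=50):
    # data: observed 2D gather with zeros at dead traces; mask: 1 = observed
    x = data.copy()
    for k in range(n_iter):
        coeffs = np.fft.fft2(x)
        # Hard threshold, relaxed from strong to weak over the iterations
        lam = np.abs(coeffs).max() * (1.0 - k / n_iter) * 0.1
        coeffs[np.abs(coeffs) < lam] = 0.0
        x = np.real(np.fft.ifft2(coeffs))
        x = mask * data + (1.0 - mask) * x       # reinsert the observed traces
    return x

# Synthetic example: a dipping plane-wave event with 40% of traces removed
nt, nx = 128, 64
t = np.arange(nt)[:, None]
section = np.sin(2 * np.pi * (t - 0.5 * np.arange(nx)) / 25.0)
rng = np.random.default_rng(4)
mask = (rng.random(nx) > 0.4).astype(float)[None, :] * np.ones((nt, nx))
recon = pocs_interpolate(section * mask, mask)
print("relative error:", np.linalg.norm(recon - section) / np.linalg.norm(section))
```

The loop alternates a sparsity projection (threshold in the transform domain) with a data-consistency projection (reinsert observed traces), which is exactly the POCS structure the abstract builds on.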
Efficient iterative image reconstruction algorithm for dedicated breast CT
NASA Astrophysics Data System (ADS)
Antropova, Natalia; Sanchez, Adrian; Reiser, Ingrid S.; Sidky, Emil Y.; Boone, John; Pan, Xiaochuan
2016-03-01
Dedicated breast computed tomography (bCT) is currently being studied as a potential screening method for breast cancer. The X-ray exposure is set low to achieve an average glandular dose comparable to that of mammography, yielding projection data that contains high levels of noise. Iterative image reconstruction (IIR) algorithms may be well-suited for the system since they potentially reduce the effects of noise in the reconstructed images. However, IIR outcomes can be difficult to control since the algorithm parameters do not directly correspond to the image properties. Also, IIR algorithms are computationally demanding and have optimal parameter settings that depend on the size and shape of the breast and positioning of the patient. In this work, we design an efficient IIR algorithm with meaningful parameter specifications and that can be used on a large, diverse sample of bCT cases. The flexibility and efficiency of this method comes from having the final image produced by a linear combination of two separately reconstructed images - one containing gray level information and the other with enhanced high frequency components. Both of the images result from few iterations of separate IIR algorithms. The proposed algorithm depends on two parameters both of which have a well-defined impact on image quality. The algorithm is applied to numerous bCT cases from a dedicated bCT prototype system developed at University of California, Davis.
NASA Technical Reports Server (NTRS)
Ibrahim, Mounir B.; Gedeon, David; Wood, Gary; McLean, Jeffrey
2009-01-01
Under Phase III of NASA Research Announcement contract NAS3-03124, a prototype nickel segmented-involute-foil regenerator was microfabricated and tested in a Sunpower Frequency-Test-Bed (FTB) Stirling convertor. The team for this effort consisted of Cleveland State University, Gedeon Associates, Sunpower Inc. and International Mezzo Technologies. Testing in the FTB convertor produced about the same efficiency as testing with the original random-fiber regenerator. But the high thermal conductivity of the prototype nickel regenerator was responsible for a significant performance degradation. An efficiency improvement (by a 1.04 factor, according to computer predictions) could have been achieved if the regenerator was made from a low-conductivity material. Also, the FTB convertor was not reoptimized to take full advantage of the microfabricated regenerator's low flow resistance; thus, the efficiency would likely have been even higher had the FTB been completely reoptimized. This report discusses the regenerator microfabrication process, testing of the regenerator in the Stirling FTB convertor, and the supporting analysis. Results of the pre-test computational fluid dynamics (CFD) modeling of the effects of the regenerator-test-configuration diffusers (located at each end of the regenerator) are included. The report also includes recommendations for further development of involute-foil regenerators from a higher-temperature material than nickel.
NASA Astrophysics Data System (ADS)
Hoang, Tuan L.; Marian, Jaime; Bulatov, Vasily V.; Hosemann, Peter
2015-11-01
An improved version of a recently developed stochastic cluster dynamics (SCD) method (Marian and Bulatov, 2012) [6] is introduced as an alternative to rate theory (RT) methods for solving coupled ordinary differential equation (ODE) systems for irradiation damage simulations. SCD circumvents by design the curse of dimensionality of the variable space that renders traditional ODE-based RT approaches inefficient when handling complex defect population comprised of multiple (more than two) defect species. Several improvements introduced here enable efficient and accurate simulations of irradiated materials up to realistic (high) damage doses characteristic of next-generation nuclear systems. The first improvement is a procedure for efficiently updating the defect reaction-network and event selection in the context of a dynamically expanding reaction-network. Next is a novel implementation of the τ-leaping method that speeds up SCD simulations by advancing the state of the reaction network in large time increments when appropriate. Lastly, a volume rescaling procedure is introduced to control the computational complexity of the expanding reaction-network through occasional reductions of the defect population while maintaining accurate statistics. The enhanced SCD method is then applied to model defect cluster accumulation in iron thin films subjected to triple ion-beam (Fe3+, He+ and H+) irradiations, for which standard RT or spatially-resolved kinetic Monte Carlo simulations are prohibitively expensive.
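The τ-leaping ingredient mentioned above can be sketched generically: instead of executing reaction events one at a time (as in the standard stochastic simulation algorithm), the number of firings of each reaction channel over a time increment τ is drawn from a Poisson distribution with mean a_j(x)·τ. The two-channel dimerization system below is an illustrative assumption, not the irradiation-damage reaction network.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy network: A + A -> B (rate k1), B -> A + A (rate k2)
k1, k2 = 1e-4, 5e-3
x = np.array([10000, 0])                  # counts of [A, B]
stoich = np.array([[-2, +1],              # channel 1 changes (A, B) by (-2, +1)
                   [+2, -1]])             # channel 2 changes (A, B) by (+2, -1)

def propensities(x):
    return np.array([k1 * x[0] * (x[0] - 1) / 2.0, k2 * x[1]])

t, tau, t_end = 0.0, 0.01, 10.0
while t < t_end:
    a = propensities(x)
    # tau-leap: Poisson number of firings per channel in [t, t + tau)
    k = rng.poisson(a * tau)
    x = np.maximum(x + stoich.T @ k, 0)   # guard against negative populations
    t += tau
print("final counts (A, B):", x)
```

The speedup comes from advancing the state by many events per step; production codes additionally adapt τ so that propensities change little within a leap, a safeguard this toy omits.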
Liu, Yu; Hong, Yang; Lin, Chun-Yuan; Hung, Che-Lun
2015-01-01
The Smith-Waterman (SW) algorithm has been widely utilized for searching biological sequence databases in bioinformatics. Recently, several works have adopted graphics cards with Graphics Processing Units (GPUs) and the associated CUDA model to enhance the performance of SW computations. However, these works mainly focused on the protein database search by using the intertask parallelization technique, using the GPU only to do the SW computations one by one. Hence, in this paper, we propose an efficient SW alignment method, called CUDA-SWfr, for the protein database search that uses the intratask parallelization technique based on a CPU-GPU collaborative system. Before doing the SW computations on the GPU, a procedure is applied on the CPU to eliminate unnecessary alignments using the frequency distance filtration scheme (FDFS). The experimental results indicate that CUDA-SWfr runs 9.6 times and 96 times faster than the CPU-based SW method without and with FDFS, respectively.
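For reference, the Smith-Waterman recurrence that CUDA-SWfr parallelizes, in a plain CPU form with a linear gap penalty; the scoring values are illustrative assumptions, and the paper's FDFS filter is not reproduced here.

```python
import numpy as np

def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    # H[i, j] = best score of a local alignment ending at a[i-1], b[j-1]
    H = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i, j] = max(0,                      # local alignment may restart
                          H[i - 1, j - 1] + s,    # (mis)match
                          H[i - 1, j] + gap,      # gap in b
                          H[i, j - 1] + gap)      # gap in a
    return H.max()

print(smith_waterman("HEAGAWGHEE", "PAWHEAE"))
```

The intratask parallelization exploits the fact that all cells on one anti-diagonal of H depend only on the two previous anti-diagonals, so a GPU can fill each anti-diagonal in parallel.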
Linearized radiative transfer models for retrieval of cloud parameters from EPIC/DSCOVR measurements
NASA Astrophysics Data System (ADS)
Molina García, Víctor; Sasi, Sruthy; Efremenko, Dmitry S.; Doicu, Adrian; Loyola, Diego
2018-07-01
In this paper, we describe several linearized radiative transfer models which can be used for the retrieval of cloud parameters from EPIC (Earth Polychromatic Imaging Camera) measurements. The approaches under examination are (1) the linearized forward approach, represented in this paper by the linearized discrete ordinate and matrix operator methods with matrix exponential, and (2) the forward-adjoint approach based on the discrete ordinate method with matrix exponential. To enhance the performance of the radiative transfer computations, the correlated k-distribution method and the Principal Component Analysis (PCA) technique are used. We provide a compact description of the proposed methods, as well as a numerical analysis of their accuracy and efficiency when simulating EPIC measurements in the oxygen A-band channel at 764 nm. We found that the computation time of the forward-adjoint approach using the correlated k-distribution method in conjunction with PCA is approximately 13 s for simultaneously computing the derivatives with respect to cloud optical thickness and cloud top height.
NASA Astrophysics Data System (ADS)
Cassan, Arnaud
2017-07-01
The exoplanet detection rate from gravitational microlensing has grown significantly in recent years thanks to a great enhancement of resources and improved observational strategy. Current observatories include ground-based wide-field and/or robotic world-wide networks of telescopes, as well as space-based observatories such as the Spitzer and Kepler/K2 satellites. This results in a large quantity of data to be processed and analysed, which is a challenge for modelling codes because of the complexity of the parameter space to be explored and the intensive computations required to evaluate the models. In this work, I present a method that computes the quadrupole and hexadecapole approximations of the finite-source magnification more efficiently than previously available codes, with routines about six times and four times faster, respectively. The quadrupole takes just about twice the time of a point-source evaluation, which advocates for generalizing its use to large portions of the light curves. The corresponding routines are available as open-source Python codes.
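For context, here is the point-source point-lens magnification together with a brute-force finite-source average over a uniform source disk, which is the quantity that quadrupole and hexadecapole expansions approximate with only a handful of point-source evaluations; the sampling density and test parameters below are illustrative assumptions, not the paper's routines.

```python
import numpy as np

def A_point(u):
    # Point-source point-lens magnification
    return (u**2 + 2) / (u * np.sqrt(u**2 + 4))

def A_finite(u0, rho, n=200):
    # Brute-force average of A_point over a uniform source disk of radius rho,
    # using equal-area radial sampling so a plain mean approximates the integral
    r = rho * np.sqrt(np.linspace(0.0, 1.0, n, endpoint=False) + 0.5 / n)
    phi = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    R, PHI = np.meshgrid(r, phi)
    u = np.sqrt((u0 + R * np.cos(PHI))**2 + (R * np.sin(PHI))**2)
    return A_point(u).mean()

u0, rho = 0.1, 0.05
print("point-source A :", A_point(u0))
print("finite-source A:", A_finite(u0, rho))
```

The n² evaluations of the brute-force average are what make multipole-style approximations attractive: they recover the finite-source correction from only a few point-source calls per epoch.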
An Efficient Mutual Authentication Framework for Healthcare System in Cloud Computing.
Kumar, Vinod; Jangirala, Srinivas; Ahmad, Musheer
2018-06-28
The increasing role of Telecare Medicine Information Systems (TMIS) makes it possible for patients to explore medical treatment and to accumulate and access medical data over an internet connection. Security and privacy preservation are necessary for a patient's medical data in TMIS because of its highly sensitive nature. Recently, Mohit et al. proposed a mutual authentication protocol for TMIS in the cloud computing environment. In this work, we reviewed their protocol and found that it is not secure against the stolen-verifier attack, the many-logged-in-patients attack, and the impersonation attack, and that it fails to preserve patient anonymity and to protect the session key. To enhance the security level, we propose a new mutual authentication protocol for the same environment. The presented framework is also more capable in terms of computation cost. In addition, the security evaluation shows that the protocol resists all the security attacks considered, and we also performed a formal security evaluation based on the random oracle model. The performance of the proposed protocol is much better in comparison to the existing protocol.
Elimination sequence optimization for SPAR
NASA Technical Reports Server (NTRS)
Hogan, Harry A.
1986-01-01
SPAR is a large-scale computer program for finite element structural analysis. The program allows user specification of the order in which the joints of a structure are to be eliminated, since this order can have significant influence on solution performance, in terms of both storage requirements and computer time. An efficient elimination sequence can improve performance by over 50% for some problems. Obtaining such sequences, however, requires the expertise of an experienced user and can take hours of tedious effort to effect. Thus, an automatic elimination sequence optimizer would enhance productivity by reducing the analysts' problem definition time and by lowering computer costs. Two possible methods for automating the elimination sequence specification were examined. Several algorithms based on the graph theory representations of sparse matrices were studied, with mixed results. Significant improvement in the program performance was achieved, but sequencing by an experienced user still yields substantially better results. The initial results provide encouraging evidence that the potential benefits of such an automatic sequencer would be well worth the effort.
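One classical graph-theory heuristic of the kind alluded to above is minimum-degree ordering: repeatedly eliminate the joint with the fewest connections, adding fill edges among its neighbours. A compact sketch follows, with an illustrative adjacency graph; SPAR's actual algorithms and data structures are not specified in the abstract and are not reproduced here.

```python
from itertools import combinations

def minimum_degree_order(adj):
    # adj: dict mapping joint -> set of connected joints (symmetric graph)
    adj = {v: set(n) for v, n in adj.items()}
    order = []
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))    # joint of minimum degree
        nbrs = adj.pop(v)
        for a, b in combinations(nbrs, 2):         # eliminating v fills in
            adj[a].add(b)                          # edges among its neighbours
            adj[b].add(a)
        for u in nbrs:
            adj[u].discard(v)
        order.append(v)
    return order

# Small frame-like connectivity graph (illustrative)
graph = {1: {2, 4}, 2: {1, 3, 5}, 3: {2, 6}, 4: {1, 5}, 5: {2, 4, 6}, 6: {3, 5}}
print(minimum_degree_order(graph))
```

Orderings of this type reduce fill-in during factorization, which is precisely why the elimination sequence affects both storage and computer time.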
Efficient segmentation of 3D fluoroscopic datasets from mobile C-arm
NASA Astrophysics Data System (ADS)
Styner, Martin A.; Talib, Haydar; Singh, Digvijay; Nolte, Lutz-Peter
2004-05-01
The emerging mobile fluoroscopic 3D technology linked with a navigation system combines the advantages of CT-based and C-arm-based navigation. The intra-operative, automatic segmentation of 3D fluoroscopy datasets enables the combined visualization of surgical instruments and anatomical structures for enhanced planning, surgical eye-navigation and landmark digitization. We performed a thorough evaluation of several segmentation algorithms using a large set of data from different anatomical regions and man-made phantom objects. The analyzed segmentation methods include automatic thresholding, morphological operations, an adapted region growing method and an implicit 3D geodesic snake method. In regard to computational efficiency, all methods performed within acceptable limits on a standard Desktop PC (30sec-5min). In general, the best results were obtained with datasets from long bones, followed by extremities. The segmentations of spine, pelvis and shoulder datasets were generally of poorer quality. As expected, the threshold-based methods produced the worst results. The combined thresholding and morphological operations methods were considered appropriate for a smaller set of clean images. The region growing method performed generally much better in regard to computational efficiency and segmentation correctness, especially for datasets of joints, and lumbar and cervical spine regions. The less efficient implicit snake method was able to additionally remove wrongly segmented skin tissue regions. This study presents a step towards efficient intra-operative segmentation of 3D fluoroscopy datasets, but there is room for improvement. Next, we plan to study model-based approaches for datasets from the knee and hip joint region, which would be thenceforth applied to all anatomical regions in our continuing development of an ideal segmentation procedure for 3D fluoroscopic images.
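A minimal version of the region-growing step that performed best above: breadth-first flooding from a seed voxel, accepting neighbours whose intensity stays within a tolerance of the running region mean. The tolerance, 6-connectivity, and synthetic volume are illustrative assumptions, not the adapted method evaluated in the study.

```python
import numpy as np
from collections import deque

def region_grow(vol, seed, tol=0.1):
    # vol: 3D intensity array; seed: (z, y, x); tol: max deviation from region mean
    seg = np.zeros(vol.shape, dtype=bool)
    seg[seed] = True
    total, count = float(vol[seed]), 1
    queue = deque([seed])
    offsets = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]  # 6-connectivity
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            p = (z + dz, y + dy, x + dx)
            if all(0 <= p[i] < vol.shape[i] for i in range(3)) and not seg[p]:
                if abs(vol[p] - total / count) <= tol:   # compare to running mean
                    seg[p] = True
                    total, count = total + float(vol[p]), count + 1
                    queue.append(p)
    return seg

# Synthetic test: a bright cube embedded in low-amplitude noise
rng = np.random.default_rng(6)
vol = rng.normal(0.0, 0.02, size=(40, 40, 40))
vol[10:30, 10:30, 10:30] += 1.0
print(region_grow(vol, (20, 20, 20), tol=0.15).sum(), "voxels segmented")
```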
Efficient model reduction of parametrized systems by matrix discrete empirical interpolation
NASA Astrophysics Data System (ADS)
Negri, Federico; Manzoni, Andrea; Amsallem, David
2015-12-01
In this work, we apply a Matrix version of the so-called Discrete Empirical Interpolation (MDEIM) for the efficient reduction of nonaffine parametrized systems arising from the discretization of linear partial differential equations. Dealing with affinely parametrized operators is crucial in order to enhance the online solution of reduced-order models (ROMs). However, in many cases such an affine decomposition is not readily available, and must be recovered through (often) intrusive procedures, such as the empirical interpolation method (EIM) and its discrete variant DEIM. In this paper we show that MDEIM represents a very efficient approach to deal with complex physical and geometrical parametrizations in a non-intrusive, efficient and purely algebraic way. We propose different strategies to combine MDEIM with a state approximation resulting either from a reduced basis greedy approach or Proper Orthogonal Decomposition. A posteriori error estimates accounting for the MDEIM error are also developed in the case of parametrized elliptic and parabolic equations. Finally, the capability of MDEIM to generate accurate and efficient ROMs is demonstrated on the solution of two computationally-intensive classes of problems occurring in engineering contexts, namely PDE-constrained shape optimization and parametrized coupled problems.
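The discrete variant underlying MDEIM can be sketched directly: given POD basis vectors of the nonaffine term, DEIM greedily selects interpolation indices where the current residual is largest. A compact numpy version under a synthetic snapshot set follows; the parametrized function and basis size are illustrative assumptions.

```python
import numpy as np

def deim_indices(U):
    # U: (n, m) matrix of POD basis vectors (columns)
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for l in range(1, U.shape[1]):
        # Interpolate the new basis vector at the current points ...
        c = np.linalg.solve(U[idx, :l], U[idx, l])
        r = U[:, l] - U[:, :l] @ c          # ... and take the worst-fit row next
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)

# Synthetic snapshot matrix of a parametrized nonaffine term, then POD + DEIM
x = np.linspace(0, 1, 400)
snaps = np.column_stack([np.exp(-mu * x) * np.sin(5 * mu * x)
                         for mu in np.linspace(1, 9, 30)])
U, _, _ = np.linalg.svd(snaps, full_matrices=False)
P = deim_indices(U[:, :6])                  # 6 interpolation points
print("DEIM points:", np.sort(P))
```

In the matrix version, the same greedy selection is applied to vectorized system matrices, which is what lets a nonaffine operator be assembled online from a few sampled entries.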
A practical approach to superresolution
NASA Astrophysics Data System (ADS)
Farsiu, Sina; Elad, Michael; Milanfar, Peyman
2006-01-01
Theoretical and practical limitations usually constrain the achievable resolution of any imaging device. Super-resolution (SR) methods have been developed over the years to go beyond this limit by acquiring and fusing several low-resolution (LR) images of the same scene, producing a high-resolution (HR) image. The early works on SR, although occasionally mathematically optimal for particular models of data and noise, produced poor results when applied to real images. In this paper, we discuss two of the main issues related to designing a practical SR system, namely reconstruction accuracy and computational efficiency. Reconstruction accuracy refers to the problem of designing a robust SR method applicable to images from different imaging systems. We study a general framework for optimal reconstruction of images from grayscale, color, or color-filtered (CFA) cameras. The performance of our proposed method is boosted by using powerful priors and is robust to both measurement noise (e.g., CCD readout noise) and system noise (e.g., motion estimation error). Noting that motion estimation is often considered a bottleneck in terms of SR performance, we introduce the concept of "constrained motions" for enhancing the quality of super-resolved images. We show that using such constraints will enhance the quality of the motion estimation and therefore result in more accurate reconstruction of the HR images. We also justify some practical assumptions that greatly reduce the computational complexity and memory requirements of the proposed methods. We use an efficient approximation of the Kalman filter (KF) and adopt a dynamic point of view of the SR problem. Novel methods for addressing these issues are accompanied by experimental results on real data.
NASA Astrophysics Data System (ADS)
Feng, Bing
Electron cloud instabilities have been observed in many circular accelerators around the world and have raised concerns for future accelerators and possible upgrades. In this thesis, the electron cloud instabilities are studied with the quasi-static particle-in-cell (PIC) code QuickPIC. Modeling in three dimensions the long-timescale propagation of a beam through electron clouds in circular accelerators requires faster and more efficient simulation codes. Thousands of processors are easily available for parallel computations. However, it is not straightforward to increase the effective speed of the simulation by running the same problem size on an increasing number of processors, because there is a limit to the domain size in the decomposition of the two-dimensional part of the code. A pipelining algorithm applied to the fully parallelized particle-in-cell code QuickPIC is implemented to overcome this limit. The pipelining algorithm uses multiple groups of processors and optimizes the job allocation on the processors in parallel computing. With this novel algorithm, it is possible to use on the order of 10^2 processors, and to expand the scale and the speed of the simulation with QuickPIC by a similar factor. In addition to the efficiency improvement with the pipelining algorithm, the fidelity of QuickPIC is enhanced by adding two physics models, the beam space charge effect and the dispersion effect. Simulation of two specific circular machines is performed with the enhanced QuickPIC. First, the proposed upgrade to the Fermilab Main Injector is studied with an eye toward guiding the design of the upgrade and code validation. Moderate emittance growth is observed for the upgrade of increasing the bunch population by 5 times. But the simulation also shows that increasing the beam energy from 8 GeV to 20 GeV or above can effectively limit the emittance growth. Then the enhanced QuickPIC is used to simulate the electron cloud effect on the electron beam in the Cornell Energy Recovery Linac (ERL), due to the extremely small emittance and high peak currents anticipated in the machine. A tune shift is discovered from the simulation; however, emittance growth of the electron beam in the electron cloud is not observed for ERL parameters.
Bornik, Alexander; Urschler, Martin; Schmalstieg, Dieter; Bischof, Horst; Krauskopf, Astrid; Schwark, Thorsten; Scheurer, Eva; Yen, Kathrin
2018-06-01
Three-dimensional (3D) crime scene documentation using 3D scanners and medical imaging modalities like computed tomography (CT) and magnetic resonance imaging (MRI) is increasingly applied in forensic casework. Together with digital photography, these modalities enable comprehensive and non-invasive recording of forensically relevant information regarding injuries/pathologies inside the body and on its surface. Furthermore, it is possible to capture traces and items at crime scenes. Such digitally secured evidence has the potential to similarly increase case understanding by forensic experts and non-experts in court. Unlike photographs and 3D surface models, images from CT and MRI are not self-explanatory; their interpretation and understanding require radiological knowledge. Findings in tomography data must not only be revealed, but should also be studied jointly with all the 2D and 3D data available in order to clarify spatial interrelations and to optimally exploit the data at hand. This is technically challenging due to the heterogeneous data representations, including volumetric data, polygonal 3D models, and images. This paper presents a novel computer-aided forensic toolbox providing tools to support the analysis, documentation, annotation, and illustration of forensic cases using heterogeneous digital data. Conjoint visualization of data from different modalities in their native form and efficient tools to visually extract and emphasize findings help experts reveal unrecognized correlations and thereby enhance their case understanding. Moreover, the 3D case illustrations created for case analysis represent an efficient means to convey the insights gained from case analysis to forensic non-experts involved in court proceedings, such as jurists and laymen. The capability of the presented approach in the context of case analysis, and its potential to speed up legal procedures and ultimately enhance legal certainty, is demonstrated with a number of representative forensic cases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiang, Patrick
2014-01-31
The research goal of this CAREER proposal is to develop energy-efficient VLSI interconnect circuits and systems that will facilitate future massively parallel, high-performance computing. Extreme-scale computing will exhibit massive parallelism on multiple vertical levels, from thousands of computational units on a single processor to thousands of processors in a single data center. Unfortunately, the energy required to communicate between these units at every level (on-chip, off-chip, off-rack) will be the critical limitation to energy efficiency. Therefore, the PI's career goal is to become a leading researcher in the design of energy-efficient VLSI interconnect for future computing systems.
Iyer, Rajiv R; Wu, Adela; Macmillan, Alexandra; Musavi, Leila; Cho, Regina; Lopez, Joseph; Jallo, George I; Dorafshar, Amir H; Ahn, Edward S
2018-01-01
Cranial vault remodeling surgery for craniosynostosis carries the potential risk of dural venous sinus injury given the extensive bony exposure. Identification of the dural venous sinuses can be challenging in patients with craniosynostosis given the lack of accurate surface-localizing landmarks. Computer-aided design and manufacturing (CAD/CAM) has allowed surgeons to pre-operatively plan these complex procedures in an effort to increase reconstructive efficiency. An added benefit of this technology is the ability to intraoperatively map the dural venous sinuses based on pre-operative imaging. We utilized CAD/CAM technology to intraoperatively map the dural venous sinuses for patients undergoing reconstructive surgery for craniosynostosis in an effort to prevent sinus injury, increase operative efficiency, and enhance patient safety. Here, we describe our experience utilizing this intraoperative technology in pediatric patients with craniosynostosis. We retrospectively reviewed the charts of children undergoing reconstructive surgery for craniosynostosis using CAD/CAM surgical planning guides at our institution between 2012 and 2016. Data collected included the following: age, gender, type of craniosynostosis, estimated blood loss, sagittal sinus deviation from the sagittal suture, peri-operative outcomes, and hospital length of stay. Thirty-two patients underwent reconstructive cranial surgery for craniosynostosis, with a median age of 11 months (range, 7-160). Types of synostosis included metopic (6), unicoronal (6), sagittal (15), lambdoid (1), and multiple suture (4). Sagittal sinus deviation from the sagittal suture was maximal in unicoronal synostosis patients (10.2 ± 0.9 mm). All patients tolerated surgery well, and there were no occurrences of sagittal sinus, transverse sinus, or torcular injury. The use of CAD/CAM technology allows for accurate intraoperative dural venous sinus localization during reconstructive surgery for craniosynostosis and enhances operative efficiency and surgeon confidence while minimizing the risk of patient morbidity.
A numerical study of adaptive space and time discretisations for Gross–Pitaevskii equations
Thalhammer, Mechthild; Abhau, Jochen
2012-01-01
As a basic principle, the benefits of adaptive discretisations are an improved balance between required accuracy and efficiency as well as an enhancement of the reliability of numerical computations. In this work, the capacity of locally adaptive space and time discretisations for the numerical solution of low-dimensional nonlinear Schrödinger equations is investigated. The considered model equation is related to the time-dependent Gross–Pitaevskii equation arising in the description of Bose–Einstein condensates in dilute gases. The performance of the Fourier pseudo-spectral method constrained to uniform meshes versus the locally adaptive finite element method, and of higher-order exponential operator splitting methods with variable time stepsizes, is studied. Numerical experiments confirm that a local time stepsize control based on a posteriori local error estimators or embedded splitting pairs, respectively, is effective in different situations, with an enhancement either in efficiency or reliability. As expected, adaptive time-splitting schemes combined with fast Fourier transform techniques are favourable regarding accuracy and efficiency when applied to Gross–Pitaevskii equations with a defocusing nonlinearity and a mildly varying regular solution. However, the numerical solution of nonlinear Schrödinger equations in the semi-classical regime becomes a demanding task. Due to the highly oscillatory and nonlinear nature of the problem, the spatial mesh size and the time increments need to be of the size of the decisive parameter 0<ε≪1, especially when it is desired to capture correctly the quantitative behaviour of the wave function itself. The required high resolution in space constricts the feasibility of numerical computations for both the Fourier pseudo-spectral and the finite element method. Nevertheless, for smaller parameter values, locally adaptive time discretisations make it feasible to choose the time stepsizes sufficiently small that the numerical approximation correctly captures the behaviour of the analytical solution. Further illustrations for Gross–Pitaevskii equations with a focusing nonlinearity or a sharp Gaussian as initial condition, respectively, complement the numerical study. PMID:25550676
NASA Astrophysics Data System (ADS)
Chen, Jing; Qiu, Xiaojie; Yin, Cunyi; Jiang, Hao
2018-02-01
An efficient method for designing broadband gain-flattened Raman fiber amplifiers with multiple pumps is proposed, based on least-squares support vector regression (LS-SVR). A multi-input multi-output LS-SVR model is introduced to replace the complicated process of solving the nonlinear coupled Raman amplification equations. The proposed approach contains two stages: an offline training stage and an online optimization stage. During the offline stage, the LS-SVR model is trained. Owing to the good generalization capability of LS-SVR, the net gain spectrum can be obtained directly and accurately by feeding any combination of pump wavelengths and powers to the well-trained model. During the online stage, we incorporate the LS-SVR model into the particle swarm optimization algorithm to find the optimal pump configuration. The design results demonstrate that the proposed method greatly shortens the computation time and enhances the efficiency of pump parameter optimization for Raman fiber amplifier design.
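A minimal sketch of the surrogate stage, assuming the standard LS-SVR dual formulation with an RBF kernel and fitting a toy 1D function instead of Raman gain spectra (the kernel width, regularization, and training set are illustrative assumptions):

```python
import numpy as np

def rbf(X, Y, sigma):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

class LSSVR:
    """Least-squares SVR (standard dual form) with an RBF kernel."""
    def __init__(self, gamma=1e4, sigma=0.1):
        self.gamma, self.sigma = gamma, sigma   # regularization, kernel width

    def fit(self, X, y):
        n = len(X)
        K = rbf(X, X, self.sigma) + np.eye(n) / self.gamma
        A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                      [np.ones((n, 1)), K]])
        sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
        self.b, self.alpha, self.X = sol[0], sol[1:], X
        return self

    def predict(self, X):
        return rbf(X, self.X, self.sigma) @ self.alpha + self.b

# Toy stand-in for a "pump configuration -> gain" regression.
X = np.linspace(0.0, 1.0, 40)[:, None]
y = np.sin(6.0 * X[:, 0])
model = LSSVR().fit(X, y)
print(np.max(np.abs(model.predict(X) - y)))  # small training residual
```

In the paper's setting, one such regressor per output wavelength maps pump wavelengths and powers to the net gain spectrum, and the trained surrogate is evaluated inside the PSO loop instead of integrating the coupled amplification equations.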
Effective Energy Simulation and Optimal Design of Side-lit Buildings with Venetian Blinds
NASA Astrophysics Data System (ADS)
Cheng, Tian
Venetian blinds are popularly used in buildings to control the amount of incoming daylight for improving visual comfort and reducing heat gains in air-conditioning systems. Studies have shown that the proper design and operation of window systems could result in significant energy savings in both lighting and cooling. However, no convenient computer tool currently allows effective and efficient optimization of the envelope of side-lit buildings with blinds. Three computer tools widely used for this purpose, Adeline, DOE2 and EnergyPlus, have been experimentally examined in this study. Results indicate that the former two give unacceptable accuracy due to unrealistic assumptions, while the last may generate large errors in certain conditions. Moreover, current computer tools have to conduct hourly energy simulations, which are not necessary for life-cycle energy analysis and optimal design, to provide annual cooling loads. This is not computationally efficient, and is particularly unsuitable for the optimal design of a building at the initial stage, because the impacts of many design variations and optional features have to be evaluated. A methodology is therefore developed for efficient and effective thermal and daylighting simulations and optimal design of buildings with blinds. Based on geometric optics and the radiosity method, a mathematical model is developed to reasonably simulate the daylighting behaviors of venetian blinds. Indoor illuminance at any reference point can be computed directly and efficiently. The models have been validated with both experiments and simulations with Radiance. Validation results show that indoor illuminances computed by the new models agree well with the measured data, and the accuracy provided is equivalent to that of Radiance. The computational efficiency of the new models is much higher than that of Radiance as well as EnergyPlus. Two new methods are developed for the thermal simulation of buildings. A fast Fourier transform (FFT) method is presented to avoid the root-searching process in the inverse Laplace transform of multilayered walls. Generalized explicit FFT formulae for calculating the discrete Fourier transform (DFT) are developed for the first time; they can greatly facilitate the implementation of the FFT. The new method also provides a basis for generating the symbolic response factors. Validation simulations show that it can generate response factors as accurate as the analytical solutions. The second method is for direct estimation of annual or seasonal cooling loads without the need for tedious hourly energy simulations; it is validated against hourly simulation results with DOE2. The symbolic long-term cooling load can then be created by combining the two methods with thermal network analysis. The symbolic long-term cooling load keeps the design parameters of interest as symbols, which is particularly useful for optimal design and sensitivity analysis. The methodology is applied to an office building in Hong Kong for the optimal design of the building envelope. Design variables such as window-to-wall ratio, building orientation, and glazing optical and thermal properties are included in the study. Results show that the selected design values could significantly impact the energy performance of windows, and that the optimal design of side-lit buildings could greatly enhance energy savings. The application example also demonstrates that the developed methodology significantly facilitates optimal building design and sensitivity analysis, and leads to high computational efficiency.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Lijian, E-mail: ljjiang@hnu.edu.cn; Li, Xinping, E-mail: exping@126.com
Stochastic multiscale modeling has become a necessary approach to quantify uncertainty and characterize multiscale phenomena for many practical problems such as flows in stochastic porous media. The numerical treatment of stochastic multiscale models can be very challenging because of the complex uncertainty and multiple physical scales in the models. To address this difficulty efficiently, we construct a reduced computational model. To this end, we propose a multi-element least-square high-dimensional model representation (HDMR) method, through which the random domain is adaptively decomposed into a few subdomains, and a local least-square HDMR is constructed in each subdomain. These local HDMRs are represented by a finite number of orthogonal basis functions defined in low-dimensional random spaces. The coefficients in the local HDMRs are determined using least-square methods. We paste all the local HDMR approximations together to form a global HDMR approximation. To further reduce computational cost, we present a multi-element reduced least-square HDMR, which improves both efficiency and approximation accuracy in certain conditions. To effectively treat heterogeneity properties and multiscale features in the models, we integrate multiscale finite element methods with multi-element least-square HDMR for stochastic multiscale model reduction. This approach significantly reduces the original model's complexity in both the resolution of the physical space and the high-dimensional stochastic space. We analyze the proposed approach, and provide a set of numerical experiments to demonstrate the performance of the presented model reduction techniques. Highlights: • Multi-element least-square HDMR is proposed to treat stochastic models. • The random domain is adaptively decomposed into subdomains to obtain an adaptive multi-element HDMR. • Least-square reduced HDMR is proposed to enhance computational efficiency and approximation accuracy in certain conditions. • Integrating MsFEM and multi-element least-square HDMR can significantly reduce computational complexity.
Computer-assisted photogrammetric mapping systems for geologic studies-A progress report
Pillmore, C.L.; Dueholm, K.S.; Jepsen, H.S.; Schuch, C.H.
1981-01-01
Photogrammetry has played an important role in geologic mapping for many years; however, only recently have attempts been made to automate mapping functions for geology. Computer-assisted photogrammetric mapping systems for geologic studies have been developed and are currently in use in offices of the Geological Survey of Greenland at Copenhagen, Denmark, and the U.S. Geological Survey at Denver, Colorado. Though differing somewhat, the systems are similar in that they integrate Kern PG-2 photogrammetric plotting instruments and small desk-top computers that are programmed to perform special geologic functions and operate flat-bed plotters by means of specially designed hardware and software. A z-drive capability, in which stepping motors control the z-motions of the PG-2 plotters, is an integral part of both systems. This feature enables the computer to automatically position the floating mark on computer-calculated, previously defined geologic planes, such as contacts or the base of coal beds, throughout the stereoscopic model in order to improve the mapping capabilities of the instrument and to aid in correlation and tracing of geologic units. The common goal is to enhance the capabilities of the PG-2 plotter and provide a means by which geologists can make conventional geologic maps more efficiently and explore ways to apply computer technology to geologic studies.
Chi, Chia-Fen; Tseng, Li-Kai; Jang, Yuh
2012-07-01
Many disabled individuals lack extensive knowledge about assistive technology, which could help them use computers. In 1997, Denis Anson developed a decision tree of 49 evaluative questions designed to evaluate the functional capabilities of the disabled user and choose an appropriate combination of assistive devices, from a selection of 26, that enables the individual to use a computer. In general, occupational therapists guide disabled users through this process, and they often have to go over repetitive questions in order to find an appropriate device. A disabled user may require an alphanumeric entry device, a pointing device, an output device, a performance enhancement device, or some combination of these. Therefore, the current research eliminates redundant questions and divides Anson's decision tree into multiple independent subtrees to meet the actual demands of computer users with disabilities. The modified decision tree was tested by six disabled users to verify that it can determine a complete set of assistive devices with a smaller number of evaluative questions. A means to insert new categories of computer-related assistive devices was included to ensure that the decision tree can be expanded and updated. The current decision tree can help disabled users and assistive technology practitioners find appropriate computer-related assistive devices that meet clients' individual needs in an efficient manner.
The thermodynamic efficiency of computations made in cells across the range of life
NASA Astrophysics Data System (ADS)
Kempes, Christopher P.; Wolpert, David; Cohen, Zachary; Pérez-Mercader, Juan
2017-11-01
Biological organisms must perform computation as they grow, reproduce and evolve. Moreover, ever since Landauer's bound was proposed, it has been known that all computation has some thermodynamic cost, and that the same computation can be achieved with greater or smaller thermodynamic cost depending on how it is implemented. Accordingly, an important issue concerning the evolution of life is assessing the thermodynamic efficiency of the computations performed by organisms. This issue is interesting both from the perspective of how close life has come to maximally efficient computation (presumably under the pressure of natural selection), and from the practical perspective of what efficiencies we might hope that engineered biological computers might achieve, especially in comparison with current computational systems. Here we show that the computational efficiency of translation, defined as the free energy expended per amino acid operation, outperforms the best supercomputers by several orders of magnitude, and is only about an order of magnitude worse than the Landauer bound. However, this efficiency depends strongly on the size and architecture of the cell in question. In particular, we show that the useful efficiency of an amino acid operation, defined as the bulk energy per amino acid polymerization, decreases with increasing bacterial size and converges to the polymerization cost of the ribosome. This cost for the largest bacteria does not change in cells as we progress through the major evolutionary shifts to both single- and multicellular eukaryotes. However, the rates of total computation per unit mass are non-monotonic in bacteria with increasing cell size, and also change across different biological architectures, including the shift from unicellular to multicellular eukaryotes. This article is part of the themed issue 'Reconceptualizing the origins of life'.
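A back-of-the-envelope version of this comparison, using crude textbook figures (roughly 4 ATP-equivalents per peptide bond and about 30.5 kJ/mol per ATP hydrolysis; these are assumptions for illustration, not the paper's careful accounting) lands within one to two orders of magnitude of the Landauer bound:

```python
import math

k_B = 1.380649e-23                  # Boltzmann constant, J/K
T = 300.0                           # assumed temperature, K
landauer = k_B * T * math.log(2)    # minimum cost of erasing one bit, J

# Illustrative translation cost: ~4 ATP-equivalents per peptide bond,
# ~30.5 kJ/mol of free energy per ATP (textbook figures, not the paper's).
atp = 30.5e3 / 6.02214076e23        # J per ATP hydrolysis
per_amino_acid = 4.0 * atp

print(f"Landauer bound : {landauer:.2e} J/bit")
print(f"Per amino acid : {per_amino_acid:.2e} J")
print(f"Ratio          : {per_amino_acid / landauer:.0f}x")  # roughly 70x
```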
Enhanced Thermostability of Glucose Oxidase through Computer-Aided Molecular Design.
Ning, Xiaoyan; Zhang, Yanli; Yuan, Tiantian; Li, Qingbin; Tian, Jian; Guan, Weishi; Liu, Bo; Zhang, Wei; Xu, Xinxin; Zhang, Yuhong
2018-01-31
Glucose oxidase (GOD, EC 1.1.3.4) specifically catalyzes the reaction of β-d-glucose to gluconic acid and hydrogen peroxide in the presence of oxygen, and has become widely used in the food industry, gluconic acid production and the feed industry. However, the poor thermostability of the current commercial GOD is a key limiting factor preventing its widespread application. In the present study, amino acids closely related to the thermostability of glucose oxidase from Penicillium notatum were predicted with a computer-aided molecular simulation analysis, and mutant libraries were established following a saturation mutagenesis strategy. Two mutants with significantly improved thermostabilities, S100A and D408W, were subsequently obtained. Their protein denaturing temperatures were enhanced by about 4.4 °C and 1.2 °C, respectively, compared with the wild-type enzyme. When treated at 55 °C for 3 h, the residual activities of the mutants were greater than 72%, while that of the wild-type enzyme was only 20%. The half-lives of S100A and D408W were 5.13- and 4.41-fold greater, respectively, than that of the wild-type enzyme at the same temperature. This work provides novel and efficient approaches for enhancing the thermostability of GOD by reducing the protein unfolding free energy or increasing the interaction of amino acids with the coenzyme.
Enhanced Software for Scheduling Space-Shuttle Processing
NASA Technical Reports Server (NTRS)
Barretta, Joseph A.; Johnson, Earl P.; Bierman, Rocky R.; Blanco, Juan; Boaz, Kathleen; Stotz, Lisa A.; Clark, Michael; Lebovitz, George; Lotti, Kenneth J.; Moody, James M.;
2004-01-01
The Ground Processing Scheduling System (GPSS) computer program is used to develop streamlined schedules for the inspection, repair, and refurbishment of space shuttles at Kennedy Space Center. A scheduling computer program is needed because space-shuttle processing is complex and it is frequently necessary to modify schedules to accommodate unanticipated events, unavailability of specialized personnel, unexpected delays, and the need to repair newly discovered defects. GPSS implements constraint-based scheduling algorithms and provides an interactive scheduling software environment. In response to inputs, GPSS produces schedules that are optimized in the sense that they contain minimal violations of constraints while supporting the most effective and efficient utilization of space-shuttle ground processing resources. The present version of GPSS is a product of re-engineering of a prototype version. While the prototype version proved to be valuable and versatile as a scheduling software tool during the first five years, it was characterized by design and algorithmic deficiencies that affected schedule revisions, query capability, task movement, report capability, and overall interface complexity. In addition, the lack of documentation gave rise to difficulties in maintenance and limited both enhanceability and portability. The goal of the GPSS re-engineering project was to upgrade the prototype into a flexible system that supports multiple-flow, multiple-site scheduling and that retains the strengths of the prototype while incorporating improvements in maintainability, enhanceability, and portability.
NASA Astrophysics Data System (ADS)
Li, W.; Shao, H.
2017-12-01
For geospatial cyberinfrastructure-enabled web services, the ability to rapidly transmit and share spatial data over the Internet plays a critical role in meeting the demands of real-time change detection, response and decision-making. Especially for vector datasets, which serve as irreplaceable and concrete material in data-driven geospatial applications, their rich geometry and property information facilitates the development of interactive, efficient and intelligent data analysis and visualization applications. However, big-data issues have hindered the wide adoption of vector datasets in web services. In this research, we propose a comprehensive optimization strategy to enhance the performance of vector data transmission and processing. This strategy combines: 1) pre- and on-the-fly generalization, which automatically determines the proper simplification level through the introduction of an appropriate distance tolerance (ADT) to meet various visualization requirements while speeding up simplification; 2) a progressive attribute transmission method to reduce data size and therefore the service response time; 3) compressed data transmission and dynamic selection of a compression method to maximize service efficiency under different computing and network environments. A cyberinfrastructure web portal was developed to implement the proposed technologies. After applying our optimization strategies, substantial performance enhancement is achieved. We expect this work to widen the use of web services providing vector data to support real-time spatial feature sharing, visual analytics and decision-making.
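For a flavor of tolerance-driven generalization, the classic Douglas-Peucker routine below simplifies a polyline under a distance tolerance; it is a standard algorithm offered only as an illustration of the simplification step, and it does not reproduce the paper's ADT selection or progressive transmission.

```python
import math

def point_seg_dist(p, a, b):
    """Distance from point p to segment ab."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def simplify(points, tol):
    """Douglas-Peucker simplification with distance tolerance `tol`."""
    if len(points) < 3:
        return list(points)
    dmax, imax = 0.0, 0
    for i in range(1, len(points) - 1):
        d = point_seg_dist(points[i], points[0], points[-1])
        if d > dmax:
            dmax, imax = d, i
    if dmax <= tol:
        return [points[0], points[-1]]   # everything within tolerance
    left = simplify(points[:imax + 1], tol)
    return left[:-1] + simplify(points[imax:], tol)

# A wiggly line: coarse zoom levels tolerate aggressive simplification.
coast = [(x * 0.1, math.sin(x * 0.1) + 0.01 * (-1) ** x) for x in range(200)]
print(len(simplify(coast, 0.05)))   # far fewer vertices than 200
```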
Wang, Xue; Bi, Dao-wei; Ding, Liang; Wang, Sheng
2007-01-01
The recent availability of low-cost and miniaturized hardware has allowed wireless sensor networks (WSNs) to retrieve audio and video data in real-world applications, which has fostered the development of wireless multimedia sensor networks (WMSNs). Resource constraints and challenging multimedia data volumes make the development of efficient algorithms for in-network processing of multimedia content imperative. This paper proposes solving problems in the domain of WMSNs from the perspective of multi-agent systems. The multi-agent framework enables flexible network configuration and efficient collaborative in-network processing. The focus is placed on target classification in WMSNs where audio information is retrieved by microphones. To deal with the uncertainties related to audio information retrieval, the statistical approaches of power spectral density estimates, principal component analysis and Gaussian process classification are employed. A multi-agent negotiation mechanism is specially developed to efficiently utilize limited resources and simultaneously enhance classification accuracy and reliability. The negotiation is composed of two phases, where an auction-based approach is first exploited to allocate the classification task among the agents and then individual agent decisions are combined by the committee decision mechanism. Simulation experiments with real-world data are conducted and the results show that the proposed statistical approaches and negotiation mechanism not only reduce memory and computation requirements in WMSNs but also significantly enhance classification accuracy and reliability. PMID:28903223
NASA Technical Reports Server (NTRS)
1979-01-01
The objective of the current program was to modify a discrete vortex wake method to efficiently compute the aerodynamic forces and moments on high-fineness-ratio bodies (f approximately 10.0). The approach is to increase computational efficiency by structuring the program to take advantage of new computer vector software and by developing new algorithms when vector software cannot be used efficiently. An efficient program was written and substantial savings achieved. Several test cases were run for fineness ratios up to f = 16.0 and angles of attack up to 50 degrees.
NASA Astrophysics Data System (ADS)
Chen, X.; Qin, G.; Ai, Z.; Ji, Y.
2017-08-01
As an effective and economical method for flow range enhancement, circumferential groove casing treatment (CGCT) is widely used to increase the stall margin of compressors. In contrast to traditional grooved casing treatments, in which the grooves are located over the rotor in both axial and radial compressors, in this paper one or several circumferential grooves are located along the shroud side of the diffuser passage. Numerical investigations were conducted to predict the performance of a low-flow-rate centrifugal compressor with CGCT in the diffuser. Computational fluid dynamics (CFD) analysis is performed in a stage environment in order to find the optimum location of the circumferential casing groove with respect to stall margin enhancement and efficiency gain at the design point, and the impact of the number of grooves on the effectiveness of this grooved casing treatment configuration in enhancing the stall margin of the compressor stage is studied. The results indicate that a centrifugal compressor with a circumferential groove in the vaned diffuser can obtain an obvious improvement in stall margin while sacrificing only a little design-point efficiency. Efforts were made to study blade-level flow mechanisms to determine how the CGCT impacts the compressor's stall margin (SM) and performance. The flow structures in the passage, the tip gap, and the grooves, as well as their mutual interactions, were plotted and analysed.
Defined Host-Guest Chemistry on Nanocarbon for Sustained Inhibition of Cancer.
Ostadhossein, Fatemeh; Misra, Santosh K; Mukherjee, Prabuddha; Ostadhossein, Alireza; Daza, Enrique; Tiwari, Saumya; Mittal, Shachi; Gryka, Mark C; Bhargava, Rohit; Pan, Dipanjan
2016-08-22
Signal transducer and activator of transcription factor 3 (STAT-3) is known to be overexpressed in cancer stem cells. Poor solubility and variable drug absorption are linked to low bioavailability and decreased efficacy. Many of the drugs regulating STAT-3 expression lack aqueous solubility, hindering efficient bioavailability. A theranostic nanoplatform based on luminescent carbon particles decorated with cucurbit[6]uril is introduced for enhancing the solubility of niclosamide, a STAT-3 inhibitor. The host-guest chemistry between cucurbit[6]uril and niclosamide makes the delivery of the hydrophobic drug feasible, while carbon nanoparticles enhance cellular internalization. Extensive physicochemical characterizations confirm successful synthesis. Subsequently, the host-guest chemistry of niclosamide and cucurbit[6]uril is studied experimentally and computationally. In vitro assessments in human breast cancer cells indicate an approximately twofold enhancement in the IC50 of the drug. Fourier transform infrared and fluorescence imaging demonstrate efficient cellular internalization. Furthermore, catalytic biodegradation of the nanoplatforms occurs within a short time upon exposure to human myeloperoxidase. In vivo studies on athymic mice with MCF-7 xenografts indicate that tumors in the treatment group are half the size of the controls after 40 days. Immunohistochemistry corroborates the downregulation of STAT-3 phosphorylation. Overall, the host-guest chemistry on nanocarbon acts as a novel arsenal for STAT-3 inhibition.
NASA Astrophysics Data System (ADS)
Tian, Rui; Yan, Dongpeng; Li, Chunyang; Xu, Simin; Liang, Ruizheng; Guo, Lingyan; Wei, Min; Evans, David G.; Duan, Xue
2016-05-01
Gold nanoclusters (Au NCs) as ultrasmall fluorescent nanomaterials possess discrete electronic energy and unique physicochemical properties, but suffer from relatively low quantum yield (QY), which severely affects their application in displays and imaging. To solve this conundrum and obtain highly efficient fluorescent emission, 2D exfoliated layered double hydroxide (ELDH) nanosheets were employed to localize Au NCs with a density as high as 5.44 × 10^13 cm^-2, by virtue of the surface confinement effect of ELDH. Both experimental studies and computational simulations testify that the excited electrons of Au NCs are strongly confined by MgAl-ELDH nanosheets, which results in a largely promoted QY as well as a prolonged fluorescence lifetime (both ~7 times enhancement). In addition, the as-fabricated Au NC/ELDH hybrid material exhibits excellent imaging properties with good stability and biocompatibility in the intracellular environment. Therefore, this work provides a facile strategy to achieve highly luminescent Au NCs via surface-confined emission enhancement imposed by ultrathin inorganic nanosheets, which can be potentially used in bio-imaging and cell labelling.
Scalable quantum information processing with atomic ensembles and flying photons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mei Feng; Yu Yafei; Feng Mang
2009-10-15
We present a scheme for scalable quantum information processing with atomic ensembles and flying photons. Using the Rydberg blockade, we encode the qubits in collective atomic states, which can be manipulated quickly and easily owing to the enhanced interaction in comparison to the single-atom case. We demonstrate that our proposed gating could be applied to the generation of two-dimensional cluster states for measurement-based quantum computation. Moreover, the atomic ensembles also function as quantum repeaters useful for long-distance quantum state transfer. We show the possibility of our scheme working in the bad-cavity or weak-coupling regime, which could greatly relax the experimental requirements. The efficient coherent operations on the ensemble qubits enable our scheme to be switchable between quantum computation and quantum communication using atomic ensembles.
Basciano, C.; Kleinstreuer, C.; Hyun, S.; Finol, E. A.
2014-01-01
A novel computational particle-hemodynamics analysis of key criteria for the onset of an intraluminal thrombus (ILT) in a patient-specific abdominal aortic aneurysm (AAA) is presented. The focus is on enhanced platelet and white blood cell residence times as well as their elevated surface-shear loads in near-wall regions of the AAA sac. The generalized results support the hypothesis that a patient's AAA geometry and associated particle-hemodynamics have the potential to entrap activated blood particles, which will play a role in the onset of ILT. Although the ILT history of only a single patient was considered, the modeling and simulation methodology provided allows for the development of an efficient computational tool to predict the onset of ILT formation in complex patient-specific cases. PMID:21373952
A 3DHZETRN Code in a Spherical Uniform Sphere with Monte Carlo Verification
NASA Technical Reports Server (NTRS)
Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.
2014-01-01
The computationally efficient HZETRN code has been used in recent trade studies for lunar and Martian exploration and is currently being used in the engineering development of the next generation of space vehicles, habitats, and extravehicular activity equipment. A new version (3DHZETRN), capable of transporting high charge (Z) and energy (HZE) ions and light ions (including neutrons) under space-like boundary conditions with enhanced neutron and light ion propagation, is under development. In the present report, new algorithms for light ion and neutron propagation with well-defined convergence criteria in 3D objects are developed and tested against Monte Carlo simulations to verify the solution methodology. The code will be available through the software system OLTARIS for shield design and validation, and provides a basis for personal computer software capable of space shield analysis and optimization.
Probabilistic Analysis of Gas Turbine Field Performance
NASA Technical Reports Server (NTRS)
Gorla, Rama S. R.; Pai, Shantaram S.; Rusick, Jeffrey J.
2002-01-01
A gas turbine thermodynamic cycle was computationally simulated and probabilistically evaluated in view of the several uncertainties in the performance parameters, which are indices of gas turbine health. Cumulative distribution functions and sensitivity factors were computed for the overall thermal efficiency and net specific power output due to the thermodynamic random variables. These results can be used to quickly identify the most critical design variables in order to optimize the design, enhance performance, increase system availability and make it cost effective. The analysis leads to the selection of the appropriate measurements to be used in the gas turbine health determination and to the identification of both the most critical measurements and parameters. Probabilistic analysis aims at unifying and improving the control and health monitoring of gas turbine aero-engines by increasing the quality and quantity of information available about the engine's health and performance.
Modeling of heterogeneous elastic materials by the multiscale hp-adaptive finite element method
NASA Astrophysics Data System (ADS)
Klimczak, Marek; Cecot, Witold
2018-01-01
We present an enhancement of the multiscale finite element method (MsFEM) obtained by combining it with the hp-adaptive FEM. Such a discretization-based homogenization technique is a versatile tool for modeling heterogeneous materials with fast-oscillating elasticity coefficients. No assumption on periodicity of the domain is required. In order to avoid direct, so-called overkill mesh computations, a coarse mesh with effective stiffness matrices is used, and special shape functions are constructed to account for the local heterogeneities at the micro resolution. The automatic adaptivity (hp-type at the macro resolution and h-type at the micro resolution) increases the efficiency of the computation. In this paper, details of the modified MsFEM are presented, and a numerical test on a Fichera corner domain validates the proposed approach.
NASA Astrophysics Data System (ADS)
Guang, Chen; Qibo, Feng; Keqin, Ding; Zhan, Gao
2017-10-01
A subpixel displacement measurement method combining particle swarm optimization (PSO) with a gradient algorithm (GA) is proposed to improve the accuracy and speed of the GA, making subpixel displacement measurement more readily applicable in engineering practice. An initial integer-pixel value is obtained using the global searching ability of PSO, and gradient operators are then adopted for the subpixel displacement search. A comparison was made between this method and the GA alone using simulated speckle images and rigid-body displacement of metal specimens. The results showed that the computational accuracy of the combined PSO and GA method reached 0.1 pixel in the simulated speckle images, and even 0.01 pixel in the metal specimens. The computational efficiency and noise robustness of the improved method were also markedly enhanced.
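A schematic of the two-stage search, using a smooth synthetic stand-in for the (negated) correlation surface rather than real speckle images; the swarm parameters and the "true" displacement are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def cost(d):
    """Smooth stand-in for a negated image-correlation surface whose
    optimum sits at the true displacement (3.37, -1.62) px."""
    return np.sum((d - np.array([3.37, -1.62])) ** 2)

# Stage 1: PSO over a +/-10 px window yields an integer-pixel start.
n, w, c1, c2 = 20, 0.7, 1.5, 1.5
x = rng.uniform(-10, 10, (n, 2))
v = np.zeros((n, 2))
pbest, pcost = x.copy(), np.array([cost(p) for p in x])
for _ in range(60):
    gbest = pbest[np.argmin(pcost)]
    v = w * v + c1 * rng.random((n, 2)) * (pbest - x) \
              + c2 * rng.random((n, 2)) * (gbest - x)
    x += v
    cx = np.array([cost(p) for p in x])
    better = cx < pcost
    pbest[better], pcost[better] = x[better], cx[better]
start = np.rint(pbest[np.argmin(pcost)])   # integer-pixel estimate

# Stage 2: gradient refinement to subpixel accuracy (finite differences).
d, h, lr = start.astype(float), 1e-4, 0.2
for _ in range(200):
    g = np.array([(cost(d + h * e) - cost(d - h * e)) / (2 * h)
                  for e in np.eye(2)])
    d -= lr * g
print(start, d)   # e.g. [3., -2.], then ~[3.37, -1.62]
```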
NASA Astrophysics Data System (ADS)
Das, A.; Bang, H. S.; Bang, H. S.
2018-05-01
Multi-material combinations of aluminium alloy and carbon-fiber-reinforced plastics (CFRP) have gained attention in the automotive and aerospace industries to enhance fuel efficiency and the strength-to-weight ratio of components. Various limitations of laser beam welding, adhesive bonding and mechanical fasteners make these processes inefficient for joining metal and CFRP sheets. Friction lap joining is an alternative choice for the same. Comprehensive studies of friction lap joining of aluminium to CFRP sheets are essential and scarce in the literature. The present work reports a combined theoretical and experimental study of the joining of AA5052 and CFRP sheets using the friction lap joining process. A three-dimensional finite element based heat transfer model is developed to compute the temperature fields and thermal cycles. The computed results are validated extensively against the corresponding experimentally measured results.
Experimental realization of entanglement in multiple degrees of freedom between two quantum memories
Zhang, Wei; Ding, Dong-Sheng; Dong, Ming-Xin; Shi, Shuai; Wang, Kai; Liu, Shi-Long; Li, Yan; Zhou, Zhi-Yuan; Shi, Bao-Sen; Guo, Guang-Can
2016-01-01
Entanglement in multiple degrees of freedom has many benefits over entanglement in a single one. The former enables quantum communication with higher channel capacity and more efficient quantum information processing and is compatible with diverse quantum networks. Establishing multi-degree-of-freedom entangled memories is not only vital for high-capacity quantum communication and computing, but also promising for enhanced violations of nonlocality in quantum systems. However, there have as yet been no reports of the experimental realization of multi-degree-of-freedom entangled memories. Here we experimentally established hyper- and hybrid entanglement in multiple degrees of freedom, including path (K-vector) and orbital angular momentum, between two separated atomic ensembles by using quantum storage. The results are promising for achieving quantum communication and computing with many degrees of freedom. PMID:27841274
Global Contrast Based Salient Region Detection.
Cheng, Ming-Ming; Mitra, Niloy J; Huang, Xiaolei; Torr, Philip H S; Hu, Shi-Min
2015-03-01
Automatic estimation of salient object regions across images, without any prior assumption or knowledge of the contents of the corresponding scenes, enhances many computer vision and computer graphics applications. We introduce a regional contrast based salient object detection algorithm, which simultaneously evaluates global contrast differences and spatially weighted coherence scores. The proposed algorithm is simple, efficient, naturally multi-scale, and produces full-resolution, high-quality saliency maps. These saliency maps are further used to initialize a novel iterative version of GrabCut, namely SaliencyCut, for high-quality unsupervised salient object segmentation. We extensively evaluated our algorithm using traditional salient object detection datasets, as well as a more challenging Internet image dataset. Our experimental results demonstrate that our algorithm consistently outperforms 15 existing salient object detection and segmentation methods, yielding higher precision and better recall rates. We also show that our algorithm can be used to efficiently extract salient object masks from Internet images, enabling effective sketch-based image retrieval (SBIR) via simple shape comparisons. Despite such noisy Internet images, in which the salient regions are ambiguous, our saliency-guided image retrieval achieves a superior retrieval rate compared with state-of-the-art SBIR methods, and additionally provides important target object region information.
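A grayscale, histogram-based simplification of the global contrast idea (the paper works in Lab color and adds regional and spatially weighted terms; this sketch conveys only the frequency-weighted contrast step):

```python
import numpy as np

def histogram_contrast_saliency(gray, bins=64):
    """A pixel is salient in proportion to the frequency-weighted
    distance of its intensity bin from all other bins."""
    q = np.clip((gray * bins).astype(int), 0, bins - 1)
    freq = np.bincount(q.ravel(), minlength=bins) / q.size
    centers = (np.arange(bins) + 0.5) / bins
    dist = np.abs(centers[:, None] - centers[None, :])  # bin-to-bin distance
    sal_per_bin = dist @ freq                           # expected contrast
    sal = sal_per_bin[q]
    return (sal - sal.min()) / (sal.ptp() + 1e-12)

# Toy usage: a bright square on a dark background is highly salient.
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0
print(histogram_contrast_saliency(img)[32, 32])  # ~1.0 inside the square
```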
Universal Linear Motor Driven Leg Press Dynamometer and Concept of Serial Stretch Loading.
Hamar, Dušan
2015-08-24
This paper deals with the background and principles of a universal linear-motor-driven leg press dynamometer and the concept of serial stretch loading. The device is based on two computer-controlled linear motors mounted on horizontal rails. Because the motors can maintain either a constant resistance force at a selected position or a constant velocity in both directions, the system allows simulation of any mode of muscle contraction. In addition, it can also generate defined serial stretch stimuli in the form of repeated force peaks. This is achieved by short segments of reversed velocity (in the concentric phase) or acceleration (in the eccentric phase). Such stimuli, generated at a rate of 10 Hz, have proven to be a more efficient means for the improvement of the rate of force development. This capability not only affects performance in many sports, but also plays a substantial role in the prevention of falls and their consequences. The universal linear-motor-driven and computer-controlled dynamometer, with its unique ability to generate serial stretch stimuli, seems to be an efficient and useful tool for enhancing the effects of strength training on neuromuscular function, not only in athletes but also in the senior population and rehabilitation patients.
Minimalist Design of Allosterically Regulated Protein Catalysts.
Makhlynets, O V; Korendovych, I V
2016-01-01
Nature facilitates chemical transformations with exceptional selectivity and efficiency. Despite tremendous progress in understanding and predicting protein function, the overall problem of designing a protein catalyst for a given chemical transformation is far from solved. Over the years, many design techniques with various degrees of complexity and rational input have been developed. The minimalist approach to protein design, which focuses on the bare minimum requirements to achieve activity, presents several important advantages. By focusing on basic physicochemical properties and the strategic placement of only a few highly active residues, one can feasibly evaluate in silico a very large variety of possible catalysts. In more general terms, the minimalist approach looks for the mere possibility of catalysis, rather than trying to identify the most active catalyst possible. Even very basic designs that utilize a single residue introduced into nonenzymatic proteins or peptide bundles are surprisingly active. Because of the inherent simplicity of the minimalist approach, computational tools greatly enhance its efficiency. No complex calculations need to be set up, and even a beginner can master this technique in a very short time. Here, we present a step-by-step protocol for minimalist design of functional proteins using basic, easily available, and free computational tools.
CCOMP: An efficient algorithm for complex roots computation of determinantal equations
NASA Astrophysics Data System (ADS)
Zouros, Grigorios P.
2018-01-01
In this paper a free Python algorithm, entitled CCOMP (Complex roots COMPutation), is developed for the efficient computation of complex roots of determinantal equations inside a prescribed complex domain. The key to the method presented is the efficient determination of candidate points inside the domain in whose close neighborhoods a complex root may lie. Once these points are detected, the algorithm proceeds to a two-dimensional minimization problem with respect to the minimum-modulus eigenvalue of the system matrix. At the core of CCOMP are three sub-algorithms whose tasks are the efficient estimation of the minimum-modulus eigenvalues of the system matrix inside the prescribed domain, the efficient computation of candidate points which guarantee the existence of minima, and finally, the computation of minima via bound-constrained minimization algorithms. Theoretical results and heuristics support the development and the performance of the algorithm, which is discussed in detail. CCOMP supports general complex matrices, and its efficiency, applicability and validity are demonstrated on a variety of microwave applications.
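A minimal sketch of the candidate-detection stage, scanning the minimum-modulus eigenvalue of a toy 2x2 determinantal system over a complex grid; the grid, matrix, and number of retained candidates are illustrative assumptions, and the full algorithm would follow up with bound-constrained local minimization around each candidate:

```python
import numpy as np

def min_modulus_eig(A):
    """Modulus of the eigenvalue of A closest to zero."""
    return np.min(np.abs(np.linalg.eigvals(A)))

def scan_candidates(M, xs, ys, keep=5):
    """Evaluate the min-modulus eigenvalue of M(z) over the rectangle
    xs x ys and return the `keep` most promising grid points, i.e.
    candidate neighborhoods for det M(z) = 0."""
    Z = xs[None, :] + 1j * ys[:, None]
    vals = np.array([[min_modulus_eig(M(z)) for z in row] for row in Z])
    flat = np.argsort(vals, axis=None)[:keep]
    return [Z.ravel()[k] for k in flat], vals

# Toy determinantal system: det M(z) = (z - (1+1j)) * (z + 2).
M = lambda z: np.array([[z - (1 + 1j), 0.0], [0.0, z + 2]])
cands, _ = scan_candidates(M, np.linspace(-3, 3, 61), np.linspace(-2, 2, 41))
print(cands[:2])   # grid points clustering near the roots 1+1j and -2
```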
Dynamic video encryption algorithm for H.264/AVC based on a spatiotemporal chaos system.
Xu, Hui; Tong, Xiao-Jun; Zhang, Miao; Wang, Zhu; Li, Ling-Hao
2016-06-01
Video encryption schemes mostly employ selective encryption, encrypting only the important and sensitive parts of the video information, in order to ensure real-time performance and encryption efficiency. The classic block cipher is not applicable to video encryption due to its high computational overhead. In this paper, we propose an encryption selection control module that dynamically encrypts video syntax elements under the control of a chaotic pseudorandom sequence. A novel spatiotemporal chaos system and binarization method are used to generate a key stream for encrypting the chosen syntax elements. The proposed scheme enhances resistance against attacks through the dynamic encryption process and a high-security stream cipher. Experimental results show that the proposed method exhibits high security and high efficiency with little effect on the compression ratio and time cost.
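An illustrative toy of chaos-driven selective encryption, with a plain logistic map standing in for the paper's spatiotemporal chaos system and binarization step; real H.264/AVC selective encryption must additionally keep the bitstream format-compliant:

```python
def logistic_keystream(x0, r=3.99, n=16, burn=100):
    """Byte keystream from a logistic map (a simple chaotic stand-in
    for the paper's spatiotemporal system)."""
    x, out = x0, []
    for i in range(burn + n):
        x = r * x * (1 - x)
        if i >= burn:
            out.append(int(x * 256) & 0xFF)
    return bytes(out)

def xor_selected(payload, selected, key):
    """Encrypt only the chosen byte positions (selective encryption)."""
    buf = bytearray(payload)
    for j, pos in enumerate(selected):
        buf[pos] ^= key[j % len(key)]
    return bytes(buf)

frame = bytes(range(32))         # stand-in for coded syntax elements
sensitive = [0, 1, 4, 5, 8, 9]   # e.g. positions of sign/MV elements
key = logistic_keystream(0.61803)
enc = xor_selected(frame, sensitive, key)
dec = xor_selected(enc, sensitive, key)  # XOR is its own inverse
assert dec == frame
```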
Automating tasks in protein structure determination with the clipper python module.
McNicholas, Stuart; Croll, Tristan; Burnley, Tom; Palmer, Colin M; Hoh, Soon Wen; Jenkins, Huw T; Dodson, Eleanor; Cowtan, Kevin; Agirre, Jon
2018-01-01
Scripting programming languages provide the fastest means of prototyping complex functionality. Those with a syntax and grammar resembling human language also greatly enhance the maintainability of the produced source code. Furthermore, the combination of a powerful, machine-independent scripting language with binary libraries tailored for each computer architecture allows programs to break free from the tight boundaries of efficiency traditionally associated with scripts. In the present work, we describe how an efficient C++ crystallographic library such as Clipper can be wrapped, adapted and generalized for use in both crystallographic and electron cryo-microscopy applications, scripted with the Python language. We shall also place an emphasis on best practices in automation, illustrating how this can be achieved with this new Python module. © 2017 The Authors Protein Science published by Wiley Periodicals, Inc. on behalf of The Protein Society.
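The scripting-plus-binary division of labor described here can be illustrated, without touching the Clipper API itself, by driving compiled C code from Python via the standard ctypes module (the actual Clipper bindings are generated differently); the C math library stands in for the crystallographic binaries, Unix-like systems assumed.

```python
import ctypes
import ctypes.util

# Load the compiled C math library; it stands in for a performance-critical
# binary library such as Clipper's compiled core.
libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

# The heavy numerics execute in compiled code; Python merely orchestrates.
print(libm.cos(0.0))   # 1.0
```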
An efficient Cellular Potts Model algorithm that forbids cell fragmentation
NASA Astrophysics Data System (ADS)
Durand, Marc; Guesnet, Etienne
2016-11-01
The Cellular Potts Model (CPM) is a lattice-based modeling technique widely used for simulating cellular patterns such as foams or biological tissues. Despite its realism and generality, the standard Monte Carlo algorithm used in the scientific literature to evolve this model preserves the connectivity of cells only over a limited range of simulation temperatures. We present a new algorithm in which cell fragmentation is forbidden at all simulation temperatures. This significantly enhances the realism of the simulated patterns. It also increases computational efficiency compared with the standard CPM algorithm, even at the same simulation temperature, thanks to the time saved by not attempting unrealistic moves. Moreover, our algorithm restores the detailed balance equation, ensuring that the long-term state is independent of the chosen acceptance rate and the chosen path in temperature space.
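A minimal sketch of the idea follows, assuming a much-simplified local connectivity test in place of the authors' exact criterion (and, unlike a full CPM, applying it to the medium as well): a copy attempt that would locally split the losing cell is rejected before the Metropolis step ever runs.

```python
import numpy as np

RING = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]

def locally_connected(labels, i, j):
    """Reject flips that would locally split the cell at (i, j): its
    like-labeled 8-neighbours must form a single contiguous arc."""
    L = labels[i, j]
    ring = [1 if labels[i+di, j+dj] == L else 0 for di, dj in RING]
    transitions = sum(ring[k] != ring[k-1] for k in range(8))
    return sum(ring) > 0 and transitions <= 2

def attempt_flip(labels, beta, rng):
    i = rng.integers(1, labels.shape[0] - 1)
    j = rng.integers(1, labels.shape[1] - 1)
    di, dj = RING[rng.integers(8)]
    new = labels[i + di, j + dj]
    if new == labels[i, j] or not locally_connected(labels, i, j):
        return                               # forbidden: would fragment
    # Boundary-energy change for the proposed copy (8-neighbour coupling)
    def E(lab):
        return sum(lab != labels[i + a, j + b] for a, b in RING)
    dE = E(new) - E(labels[i, j])
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        labels[i, j] = new

rng = np.random.default_rng(0)
labels = np.zeros((20, 20), dtype=int)
labels[5:15, 5:15] = 1                       # one square cell in medium
for _ in range(5000):
    attempt_flip(labels, beta=2.0, rng=rng)
```

Because forbidden moves are screened out before the energy evaluation, no Monte Carlo time is spent on flips that would have to be undone, which is the source of the efficiency gain the abstract describes.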
Privacy Preserving Association Rule Mining Revisited: Privacy Enhancement and Resources Efficiency
NASA Astrophysics Data System (ADS)
Mohaisen, Abedelaziz; Jho, Nam-Su; Hong, Dowon; Nyang, Daehun
Privacy-preserving association rule mining algorithms are designed to discover relations between variables in data while maintaining data privacy. In this article we revisit one of the recently introduced schemes for association rule mining based on fake transactions (FS). In particular, our analysis shows that the FS scheme has excessive storage and high computation requirements for guaranteeing a reasonable level of privacy. We introduce a realistic definition of privacy that builds on average-case privacy and motivates the study of a weakness in the structure of FS via fake-transaction filtering. To overcome this problem, we improve the FS scheme with a hybrid scheme that treats both privacy and resources as concurrent design guidelines. Analytical and empirical results show the efficiency and applicability of our proposed scheme.
Multithreaded implicitly dealiased convolutions
NASA Astrophysics Data System (ADS)
Roberts, Malcolm; Bowman, John C.
2018-03-01
Implicit dealiasing is a method for computing in-place linear convolutions via fast Fourier transforms that decouples work memory from input data. It offers easier memory management and, for long one-dimensional input sequences, greater efficiency than conventional zero-padding. Furthermore, for convolutions of multidimensional data, the segregation of data and work buffers can be exploited to reduce memory usage and execution time significantly. This is accomplished by processing and discarding data as it is generated, allowing work memory to be reused, for greater data locality and performance. A multithreaded implementation of implicit dealiasing that accepts an arbitrary number of input and output vectors and a general multiplication operator is presented, along with an improved one-dimensional Hermitian convolution that avoids the loop dependency inherent in previous work. An alternate data format that can accommodate a Nyquist mode and enhance cache efficiency is also proposed.
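For contrast, the conventional explicit zero-padding that implicit dealiasing replaces looks like the NumPy sketch below; the padded size-2m transforms are exactly the work buffers that, in the conventional approach, remain coupled to copies of the input data.

```python
import numpy as np

def padded_linear_convolution(f, g):
    """Dealiased linear convolution via explicit zero padding to 2m."""
    m = len(f)
    n = 2 * m                              # padded transform size
    F = np.fft.fft(f, n)                   # fft pads the input with zeros
    G = np.fft.fft(g, n)
    return np.fft.ifft(F * G)[:m].real     # first m terms of f * g

f = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.5, -1.0, 0.25, 2.0])
print(padded_linear_convolution(f, g))
print(np.convolve(f, g)[:4])               # agreement check
```

Since a length-2m transform cannot wrap a (2m-1)-term linear convolution, the first m outputs are alias-free; implicit dealiasing achieves the same result while decoupling the work arrays from the inputs, which is what enables the memory reuse described above.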
MRT letter: Guided filtering of image focus volume for 3D shape recovery of microscopic objects.
Mahmood, Muhammad Tariq
2014-12-01
In this letter, a shape from focus (SFF) method is proposed that utilizes guided image filtering to efficiently enhance the image focus volume. First, the image focus volume is computed using a conventional focus measure. Then each layer of the focus volume is filtered using guided filtering, with the all-in-focus image, which can be obtained from the initial focus volume, serving as the guidance image. Finally, an improved depth map is obtained from the filtered focus volume by maximizing the focus measure along the optical axis. The proposed SFF method is efficient and provides better depth maps. The improved performance is highlighted through several experiments using image sequences of simulated and real microscopic objects, and the comparative analysis demonstrates the effectiveness of the proposed SFF method. © 2014 Wiley Periodicals, Inc.
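A hedged sketch of that pipeline follows, with an illustrative modified-Laplacian focus measure and OpenCV's contrib-module guided filter standing in for the paper's exact choices; the image stack is a random placeholder.

```python
import numpy as np
import cv2  # requires opencv-contrib-python for cv2.ximgproc

def focus_measure(img):
    # Modified Laplacian: |d2I/dx2| + |d2I/dy2|
    kx = np.array([[0, 0, 0], [1, -2, 1], [0, 0, 0]], np.float32)
    return (np.abs(cv2.filter2D(img, -1, kx)) +
            np.abs(cv2.filter2D(img, -1, kx.T)))

def sff_guided(stack):                       # stack: (n, H, W) float32
    fv = np.stack([focus_measure(s) for s in stack])
    # Crude all-in-focus image from the raw focus volume, used as guide
    aif = stack[np.argmax(fv, axis=0),
                np.arange(stack.shape[1])[:, None],
                np.arange(stack.shape[2])]
    fv = np.stack([cv2.ximgproc.guidedFilter(aif, layer, 4, 1e-4)
                   for layer in fv])          # radius, eps are illustrative
    return np.argmax(fv, axis=0)              # depth = best-focused layer

stack = np.random.rand(10, 64, 64).astype(np.float32)  # placeholder sequence
depth = sff_guided(stack)
```

Because the guide is the all-in-focus image, the filter smooths each focus layer while respecting object edges, which is what sharpens the final argmax depth map.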
NASA Astrophysics Data System (ADS)
Jha, B.; Juanes, R.
2015-12-01
Coupled processes of flow, transport, and deformation are important during production of hydrocarbons from oil and gas reservoirs. Effective design and implementation of enhanced recovery techniques such as miscible gas flooding and hydraulic fracturing require modeling and simulation of these coupled processes in geologic porous media. We develop a computational framework to model the coupled processes of flow, transport, and deformation in heterogeneous fractured rock. We show that the hydrocarbon recovery efficiency during unstable displacement of a more viscous oil by a less viscous fluid in a fractured medium depends on the mechanical state of the medium, which evolves due to permeability alteration within and around fractures. We show that fully accounting for the coupling between the physical processes yields estimates of the recovery efficiency in agreement with observations from field and laboratory experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shir, Daniel J.; Yoon, Jongseung; Chanda, Debashis
2010-08-11
Recently developed classes of monocrystalline silicon solar microcells can be assembled into modules with characteristics (i.e., mechanically flexible forms, compact concentrator designs, and high-voltage outputs) that would be impossible to achieve using conventional, wafer-based approaches. This paper presents experimental and computational studies of the optics of light absorption in ultrathin microcells that include nanoscale features of relief on their surfaces, formed by soft imprint lithography. Measurements on working devices with designs optimized for broadband trapping of incident light indicate good efficiencies in energy production even at thicknesses of just a few micrometers. These outcomes are relevant not only to the microcell technology described here but also to other photovoltaic systems that benefit from thin construction and efficient materials utilization.
A new method for enhancer prediction based on deep belief network.
Bu, Hongda; Gan, Yanglan; Wang, Yang; Zhou, Shuigeng; Guan, Jihong
2017-10-16
Studies have shown that enhancers are significant regulatory elements that play crucial roles in the regulation of gene expression. Since enhancers are unrelated in orientation and distance to their target genes, accurately predicting distal enhancers remains a challenging task. In past years, with the development of high-throughput ChIP-seq technologies, several computational techniques have emerged to predict enhancers using epigenetic or genomic features. Nevertheless, the inconsistency of computational models across different cell lines and the unsatisfactory prediction performance call for further research in this area. Here, we propose a new Deep Belief Network (DBN) based computational method for enhancer prediction, called EnhancerDBN. This method combines diverse features, comprising DNA sequence compositional features, DNA methylation, and histone modifications. Our computational results indicate that (1) EnhancerDBN outperforms 13 existing methods in prediction and (2) GC content and DNA methylation can serve as relevant features for enhancer prediction. Deep learning is effective in boosting the performance of enhancer prediction.
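As a rough stand-in for the DBN (whose exact architecture is not reproduced here), stacked Bernoulli RBMs feeding a logistic classifier in scikit-learn approximate DBN-style layer-wise pretraining; the features and labels below are random placeholders for the sequence-composition, methylation, and histone-mark vectors.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

X = np.random.rand(200, 50)          # placeholder feature vectors in [0, 1]
y = np.random.randint(0, 2, 200)     # enhancer / non-enhancer labels

# Two RBM layers learn successively abstract features; the logistic
# regression on top performs the final enhancer classification.
model = Pipeline([
    ("rbm1", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20)),
    ("rbm2", BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=20)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X, y)
print(model.score(X, y))             # training accuracy on toy data
```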
Schmitz, Ulf; Lai, Xin; Winter, Felix; Wolkenhauer, Olaf; Vera, Julio; Gupta, Shailendra K.
2014-01-01
MicroRNAs (miRNAs) are an integral part of gene regulation at the post-transcriptional level. Recently, it has been shown that pairs of miRNAs can repress the translation of a target mRNA in a cooperative manner, which leads to enhanced effectiveness and specificity of target repression. However, it remains unclear which miRNA pairs can synergize and which genes are targets of cooperative miRNA regulation. In this paper, we present a computational workflow for the prediction and analysis of cooperating miRNAs and their mutual target genes, which we refer to as RNA triplexes. The workflow integrates methods of miRNA target prediction, triplex structure analysis, molecular dynamics simulations, and mathematical modeling for a reliable prediction of functional RNA triplexes and target repression efficiency. In a case study we analyzed the human genome and identified several thousand targets of cooperative gene regulation. Our results suggest that miRNA cooperativity is a frequent mechanism for enhanced target repression by pairs of miRNAs, facilitating distinctive and fine-tuned target gene expression patterns. Human RNA triplexes predicted and characterized in this study are organized in a web resource at www.sbi.uni-rostock.de/triplexrna/. PMID:24875477
Numerical Methods of Computational Electromagnetics for Complex Inhomogeneous Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Wei
Understanding electromagnetic phenomena is key to many scientific investigations and engineering designs, such as solar cell design, the study of biological ion channels implicated in disease, and the pursuit of clean fusion energy. The objectives of the project were to develop high-order numerical methods to simulate evanescent electromagnetic waves occurring in plasmonic solar cells and biological ion channels, where local field enhancement within random media in the former and long-range electrostatic interactions in the latter pose major challenges for accurate and efficient numerical computation. We accomplished these objectives by developing high-order numerical methods for solving the Maxwell equations, including high-order finite element bases for discontinuous Galerkin methods, a well-conditioned Nedelec edge element method, a divergence-free finite element basis for MHD, and fast integral equation methods for layered media. These methods can be used to model the complex local field enhancement in plasmonic solar cells. To treat long-range electrostatic interactions in ion channels, we developed an image-charge-based method for a hybrid model combining atomistic electrostatics and continuum Poisson-Boltzmann electrostatics. Such a hybrid model speeds up molecular dynamics simulations of transport in biological ion channels.
A Structured Grid Based Solution-Adaptive Technique for Complex Separated Flows
NASA Technical Reports Server (NTRS)
Thornburg, Hugh; Soni, Bharat K.; Kishore, Boyalakuntla; Yu, Robert
1996-01-01
The objective of this work was to enhance the predictive capability of widely used computational fluid dynamics (CFD) codes through the use of solution-adaptive gridding. Most problems of engineering interest involve multi-block grids and widely disparate length scales. Hence, it is desirable that the adaptive-grid feature-detection algorithm recognize flow structures of different types as well as differing intensities, and adequately address scaling and normalization across blocks. In order to study the accuracy and efficiency improvements due to grid adaptation, it is necessary to quantify grid size and distribution requirements as well as computational times of non-adapted solutions. Flow fields about launch vehicles of practical interest often involve supersonic freestream conditions at angle of attack, exhibiting large-scale separated vortical flow, vortex-vortex and vortex-surface interactions, separated shear layers, and multiple shocks of different intensities. In this work, a weight function and an associated mesh redistribution procedure are presented that detect and resolve these features without user intervention. Particular emphasis has been placed upon accurate resolution of expansion regions and boundary layers. Flow past a wedge at Mach 2.0 is used to illustrate the enhanced detection capabilities of this newly developed weight function.
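The flavor of weight-function-driven redistribution can be conveyed by a one-dimensional equidistribution sketch; the paper's multi-block, multi-feature weight function is considerably more elaborate, and the constants below are illustrative.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 101)
u = np.tanh(25 * (x - 0.5))                  # solution with a sharp layer
w = 1.0 + 5.0 * np.abs(np.gradient(u, x))    # weight flags high gradients

# Equidistribution: choose new nodes so each cell carries equal weight
W = np.concatenate([[0.0],
                    np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))])
x_new = np.interp(np.linspace(0.0, W[-1], len(x)), W, x)
# x_new clusters grid points inside the layer near x = 0.5
```

Applied along each coordinate of a structured block, the same principle pulls grid lines toward the regions the weight function flags, which is how the redistribution resolves shocks and shear layers without user intervention.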
Iannaccone, Reto; Brem, Silvia; Walitza, Susanne
2017-01-01
Patients with obsessive-compulsive disorder (OCD) can be described as cautious and hesitant, manifesting an excessive indecisiveness that hinders efficient decision making. However, excess caution in decision making may also lead to better performance in specific situations where the cost of extended deliberation is small. We compared 16 juvenile OCD patients with 16 matched healthy controls whilst they performed a sequential information gathering task under different external cost conditions. We found that patients with OCD outperformed healthy controls, winning significantly more points. The groups also differed in the number of draws required prior to committing to a decision, but not in decision accuracy. A novel Bayesian computational model revealed that subjective sampling costs arose as a non-linear function of sampling, closely resembling an escalating urgency signal. Group difference in performance was best explained by a later emergence of these subjective costs in the OCD group, also evident in an increased decision threshold. Our findings present a novel computational model and suggest that enhanced information gathering in OCD can be accounted for by a higher decision threshold arising out of an altered perception of costs that, in some specific contexts, may be advantageous. PMID:28403139
The technological obsolescence of the Brazilian electronic ballot box.
Camargo, Carlos Rogério; Faust, Richard; Merino, Eugênio; Stefani, Clarissa
2012-01-01
The electronic ballot box has played a significant role in the consolidation of the Brazilian political process. It has enabled the extinction of the paper ballot as the support for the elector's vote as well as for vote-counting processes. It is also widely acknowledged that election automation has decisively contributed to the legitimization of Brazilian democracy, dispelling doubts about the winning candidates. In 1995, when the project was conceived, it represented a compromise solution balancing technical efficiency against cost. However, this architecture now limits ergonomic enhancements to the device's operation, transportation, maintenance, and storage. Devices of reduced dimensions based on a novel computational architecture, namely tablet computers, are now available on the market; these emphasize usability, autonomy, portability, security, and low power consumption. Therefore, the proposal under discussion is the replacement of the current electronic ballot boxes with tablet-based devices to improve the ergonomic aspects of the Brazilian voting process. These devices offer a plethora of integrated features (e.g., capacitive touchscreen, speakers, microphone) that enable highly usable and simple user interfaces, in addition to enhancing the security mechanisms of the voting process. Finally, their operating system features allow for the development of highly secure applications suitable to the requirements of a voting process.
Modeling RF-induced Plasma-Surface Interactions with VSim
NASA Astrophysics Data System (ADS)
Jenkins, Thomas G.; Smithe, David N.; Pankin, Alexei Y.; Roark, Christine M.; Stoltz, Peter H.; Zhou, Sean C.-D.; Kruger, Scott E.
2014-10-01
An overview of ongoing enhancements to the Plasma Discharge (PD) module of Tech-X's VSim software tool is presented. A sub-grid kinetic sheath model, developed for the accurate computation of sheath potentials near metal and dielectric-coated walls, enables the physical effects of DC and RF sheath dynamics to be included in macroscopic-scale plasma simulations that need not explicitly resolve sheath scale lengths. Sheath potential evolution, together with particle behavior near the sheath (e.g. sputtering), can thus be simulated in complex, experimentally relevant geometries. Simulations of RF sheath-enhanced impurity production near surfaces of the C-Mod field-aligned ICRF antenna are presented to illustrate the model; impurity mitigation techniques are also explored. Model extensions to capture the physics of secondary electron emission and of multispecies plasmas are summarized, together with a discussion of improved tools for plasma chemistry and IEDF/EEDF visualization and modeling. The latter tools are also highly relevant for commercial plasma processing applications. Ultimately, we aim to establish VSimPD as a robust, efficient computational tool for modeling fusion and industrial plasma processes. Supported by U.S. DoE SBIR Phase I/II Award DE-SC0009501.
Memristive Mixed-Signal Neuromorphic Systems: Energy-Efficient Learning at the Circuit-Level
Chakma, Gangotree; Adnan, Md Musabbir; Wyer, Austin R.; ...
2017-11-23
Neuromorphic computing is a non-von Neumann computer architecture for the post-Moore's-law era of computing. Since a main focus of this era is energy-efficient computing with fewer resources and less area, neuromorphic computing contributes effectively to this research. In this paper, we present a memristive neuromorphic system with improved power and area efficiency. Our mixed-signal approach implements neural networks with spiking events in a synchronous way, and the use of nanoscale memristive devices saves both area and power in the system. We also provide device-level considerations that make the system more energy efficient. The proposed system additionally includes synchronous digital long-term plasticity, an online learning methodology that helps the system train the neural networks during the operation phase and improves the efficiency of learning with respect to power consumption and area overhead.
Harnessing the genetics of the modern dairy cow to continue improvements in feed efficiency.
VandeHaar, M J; Armentano, L E; Weigel, K; Spurlock, D M; Tempelman, R J; Veerkamp, R
2016-06-01
Feed efficiency, as defined by the fraction of feed energy or dry matter captured in products, has more than doubled for the US dairy industry in the past 100 yr. This increased feed efficiency was the result of increased milk production per cow achieved through genetic selection, nutrition, and management with the desired goal being greater profitability. With increased milk production per cow, more feed is consumed per cow, but a greater portion of the feed is partitioned toward milk instead of maintenance and body growth. This dilution of maintenance has been the overwhelming driver of enhanced feed efficiency in the past, but its effect diminishes with each successive increment in production relative to body size and therefore will be less important in the future. Instead, we must also focus on new ways to enhance digestive and metabolic efficiency. One way to examine variation in efficiency among animals is residual feed intake (RFI), a measure of efficiency that is independent of the dilution of maintenance. Cows that convert feed gross energy to net energy more efficiently or have lower maintenance requirements than expected based on body weight use less feed than expected and thus have negative RFI. Cows with low RFI likely digest and metabolize nutrients more efficiently and should have overall greater efficiency and profitability if they are also healthy, fertile, and produce at a high multiple of maintenance. Genomic technologies will help to identify these animals for selection programs. Nutrition and management also will continue to play a major role in farm-level feed efficiency. Management practices such as grouping and total mixed ration feeding have improved rumen function and therefore efficiency, but they have also decreased our attention on individual cow needs. Nutritional grouping is key to helping each cow reach its genetic potential. Perhaps new computer-driven technologies, combined with genomics, will enable us to optimize management for each individual cow within a herd, or to optimize animal selection to match management environments. In the future, availability of feed resources may shift as competition for land increases. New approaches combining genetic, nutrition, and other management practices will help optimize feed efficiency, profitability, and environmental sustainability. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
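For readers unfamiliar with RFI, a toy calculation makes the definition concrete: RFI is the residual of actual intake regressed on energy sinks, so a negative value flags a cow eating less than predicted for her production and size. The numbers and the choice of regressors below are fabricated for illustration only.

```python
import numpy as np

dmi  = np.array([22.0, 24.5, 23.1, 26.0, 21.2])      # kg/d, actual dry matter intake
milk = np.array([30.0, 36.0, 33.0, 40.0, 28.0])      # kg/d, energy-corrected milk
mbw  = np.array([650, 700, 670, 720, 640]) ** 0.75   # metabolic body weight, kg^0.75
dbw  = np.array([0.1, -0.2, 0.0, 0.3, -0.1])         # kg/d, body-weight change

# Expected intake: least-squares regression of intake on the energy sinks
X = np.column_stack([np.ones_like(dmi), milk, mbw, dbw])
beta, *_ = np.linalg.lstsq(X, dmi, rcond=None)

rfi = dmi - X @ beta        # negative RFI = eats less than expected
print(np.round(rfi, 2))
```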
Enhanced Particle Swarm Optimization Algorithm: Efficient Training of ReaxFF Reactive Force Fields.
Furman, David; Carmeli, Benny; Zeiri, Yehuda; Kosloff, Ronnie
2018-06-12
Particle swarm optimization (PSO) is a powerful metaheuristic population-based global optimization algorithm. However, when applied to nonseparable objective functions, its performance on multimodal landscapes is significantly degraded. Here we show that a significant improvement in search quality and efficiency on multimodal functions can be achieved by enhancing the basic rotation-invariant PSO algorithm with isotropic Gaussian mutation operators. The new algorithm demonstrates superior performance across several nonlinear, multimodal benchmark functions compared with the rotation-invariant PSO algorithm and the well-established simulated annealing and sequential one-parameter parabolic interpolation methods. A search for the optimal set of parameters for the dispersion interaction model in the ReaxFF-lg reactive force field was carried out with respect to accurate DFT-TS calculations. The resulting optimized force field accurately describes the equations of state of several high-energy molecular crystals where such interactions are of crucial importance. The improved algorithm also outperforms a genetic algorithm in optimizing the parameters of a ReaxFF-lg correction model. The computational framework is implemented in a stand-alone C++ code that allows the straightforward development of ReaxFF reactive force fields.
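A minimal sketch of PSO augmented with isotropic Gaussian mutation on the multimodal Rastrigin benchmark follows; the inertia and acceleration constants and the mutation schedule are illustrative, not the paper's tuned values, and the basic update here is the standard (not rotation-invariant) form.

```python
import numpy as np

def rastrigin(x):
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

rng = np.random.default_rng(1)
dim, n, iters = 5, 30, 500
lo, hi = -5.12, 5.12
pos = rng.uniform(lo, hi, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_f = pos.copy(), np.array([rastrigin(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, 1)), rng.random((n, 1))
    vel = 0.72 * vel + 1.49 * r1 * (pbest - pos) + 1.49 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    # Isotropic Gaussian mutation: perturb a random subset of particles
    mask = rng.random(n) < 0.1
    pos[mask] += rng.normal(0.0, 0.1 * (hi - lo), (mask.sum(), dim))
    pos = np.clip(pos, lo, hi)
    f = np.array([rastrigin(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[np.argmin(pbest_f)].copy()

print(pbest_f.min())   # should approach 0, the global minimum
```

The mutation injects isotropic jumps that let particles escape the local wells of the Rastrigin landscape, which is precisely the failure mode of plain PSO on multimodal functions.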
A new OTDR based on probe frequency multiplexing
NASA Astrophysics Data System (ADS)
Lu, Lidong; Liang, Yun; Li, Binglin; Guo, Jinghong; Zhang, Xuping
2013-12-01
Two signal multiplexing methods are proposed and experimentally demonstrated in optical time domain reflectometry (OTDR) for fault location along optical fiber transmission lines, with the goal of high measurement efficiency. Probe signal multiplexing is obtained by phase modulation, generating multi-frequency and time-sequential-frequency probe pulses. The backscattered Rayleigh light of the multiplexed probe signals is transferred to the corresponding heterodyne intermediate frequencies (IFs) by heterodyning with a single-frequency local oscillator (LO). The IFs are simultaneously acquired with a data acquisition card (DAQ) at a sampling rate of 100 Msps, and the acquired data are processed by digital band-pass filtering (BPF), digital down-conversion (DDC), and digital low-pass filtering (LPF). For each probe frequency of the detected signals, the extraction of the time-domain reflected signal power is performed by parallel computing. For a comprehensive performance comparison with conventional coherent OTDR, the potential of the probe frequency multiplexing methods for enhancement of dynamic range, spatial resolution, and measurement time is analyzed and discussed. Experimental results show that, by use of the probe frequency multiplexing method, the measurement efficiency of coherent OTDR can be enhanced by nearly 40 times.
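The DDC stage for a single probe frequency reduces, in essence, to complex mixing followed by low-pass filtering; a hedged NumPy/SciPy sketch with illustrative IF, filter, and decay values (only the 100 Msps rate is taken from the abstract):

```python
import numpy as np
from scipy.signal import firwin, lfilter

fs = 100e6                            # 100 Msps, as in the experiment
t = np.arange(200000) / fs
f_if = 20e6                           # one heterodyne IF in the multiplex (illustrative)
sig = np.cos(2 * np.pi * f_if * t) * np.exp(-t / 1e-3)   # mock decaying backscatter

mixed = sig * np.exp(-2j * np.pi * f_if * t)   # DDC: shift the IF to DC
taps = firwin(128, 2e6, fs=fs)                 # 2 MHz low-pass FIR
baseband = lfilter(taps, 1.0, mixed)
power = np.abs(baseband) ** 2                  # time-domain reflectometry trace
```

Repeating this block once per probe frequency is an embarrassingly parallel workload, which is why the abstract's per-frequency power extraction maps naturally onto parallel computing.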