Analytical Cost Metrics: Days of Future Past
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prajapati, Nirmal; Rajopadhye, Sanjay; Djidjev, Hristo Nikolov
As we move towards the exascale era, the new architectures must be capable of running massive computational problems efficiently. Scientists and researchers are continuously investing in tuning the performance of extreme-scale computational problems. These problems arise in almost all areas of computing, ranging from big data analytics, artificial intelligence, search, machine learning, virtual/augmented reality, computer vision, image/signal processing to computational science and bioinformatics. With Moore’s law driving the evolution of hardware platforms towards exascale, the dominant performance metric (time efficiency) has now expanded to also incorporate power/energy efficiency. Therefore, the major challenge that we face in computing systems research is: “how to solve massive-scale computational problems in the most time/power/energy efficient manner?”
Function-Based Algorithms for Biological Sequences
ERIC Educational Resources Information Center
Mohanty, Pragyan Sheela P.
2015-01-01
Two problems at two different abstraction levels of computational biology are studied. At the molecular level, efficient pattern matching algorithms in DNA sequences are presented. For gene order data, an efficient data structure is presented capable of storing all gene re-orderings in a systematic manner. A common characteristic of presented…
43 CFR 418.28 - Conditions of delivery.
Code of Federal Regulations, 2010 CFR
2010-10-01
... particulars including the known or estimated location and amounts; (3) The amount will not be included as a valid headgate delivery for purposes of computing the Project efficiency and resultant incentive credit... treated directly as a debit to Lahontan storage in the same manner as an efficiency debit. (b) District...
Transportation planning and ITS : putting the pieces together
DOT National Transportation Integrated Search
1998-01-01
Intelligent Transportation Systems (ITS) include the application of computer, electronics, and communications technologies and management strategies -- in an integrated manner -- providing traveler information to increase the safety and efficiency of...
Trajectory Segmentation Map-Matching Approach for Large-Scale, High-Resolution GPS Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Lei; Holden, Jacob R.; Gonder, Jeffrey D.
2017-01-01
With the development of smartphones and portable GPS devices, large-scale, high-resolution GPS data can be collected. Map matching is a critical step in studying vehicle driving activity and recognizing network traffic conditions from the data. A new trajectory segmentation map-matching algorithm is proposed to deal accurately and efficiently with large-scale, high-resolution GPS trajectory data. The new algorithm separated the GPS trajectory into segments. It found the shortest path for each segment in a scientific manner and ultimately generated a best-matched path for the entire trajectory. The similarity of a trajectory segment and its matched path is described by a similarity score system based on the longest common subsequence. The numerical experiment indicated that the proposed map-matching algorithm was very promising in relation to accuracy and computational efficiency. Large-scale data set applications verified that the proposed method is robust and capable of dealing with real-world, large-scale GPS data in a computationally efficient and accurate manner.
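The abstract does not give the scoring details, but the longest-common-subsequence idea can be illustrated with a short sketch; the point format, tolerance value and normalisation below are assumptions, not the authors' implementation.

```python
# Hedged sketch: LCS-based similarity between a GPS trajectory segment and a
# candidate matched path. Two points "match" when they lie within `tol` metres.
def lcs_similarity(segment, path, tol=15.0):
    """segment, path: lists of (x, y) coordinates in metres (assumed format)."""
    def close(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5 <= tol

    m, n = len(segment), len(path)
    dp = [[0] * (n + 1) for _ in range(m + 1)]   # classic O(m*n) LCS table
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if close(segment[i - 1], path[j - 1]):
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n] / max(1, min(m, n))          # 1.0 means every point matched
```

A segment would then be assigned to the candidate path with the highest score.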
NASA Technical Reports Server (NTRS)
Knoell, A. C.
1972-01-01
Computer program has been specifically developed to handle, in an efficient and cost effective manner, planar wound pressure vessels fabricated of either boron-epoxy or graphite-epoxy advanced composite materials.
LIVING SHORES GALLERY MX964015
An interactive computer kiosk will allow the Texas State Aquarium to deliver a considerable amount of information in an efficient and highly effective manner. Touch screen interactives have proven to be excellent teaching tools in the Aquarium's Jellies: Floating Phantoms galler...
2010-03-01
uses all available resources in some optimized manner. By further exploiting the design flexibility and computational efficiency of Orthogonal Frequency...in the following sections. 3.2.1 Estimation of PU Signal Statistics. The Estimate PU Signal Statistics function of Fig 3.4 is used to compute the...consecutive PU transmissions, and 4) the probability of transitioning from one transmission state to another. These statistics are then used to compute the
Coalescent: an open-science framework for importance sampling in coalescent theory.
Tewari, Susanta; Spouge, John L
2015-01-01
Background. In coalescent theory, computer programs often use importance sampling to calculate likelihoods and other statistical quantities. An importance sampling scheme can exploit human intuition to improve statistical efficiency of computations, but unfortunately, in the absence of general computer frameworks on importance sampling, researchers often struggle to translate new sampling schemes computationally or benchmark against different schemes, in a manner that is reliable and maintainable. Moreover, most studies use computer programs lacking a convenient user interface or the flexibility to meet the current demands of open science. In particular, current computer frameworks can only evaluate the efficiency of a single importance sampling scheme or compare the efficiencies of different schemes in an ad hoc manner. Results. We have designed a general framework (http://coalescent.sourceforge.net; language: Java; License: GPLv3) for importance sampling that computes likelihoods under the standard neutral coalescent model of a single, well-mixed population of constant size over time following the infinite sites model of mutation. The framework models the necessary core concepts, comes integrated with several data sets of varying size, implements the standard competing proposals, and integrates tightly with our previous framework for calculating exact probabilities. For a given dataset, it computes the likelihood and provides the maximum likelihood estimate of the mutation parameter. Well-known benchmarks in the coalescent literature validate the accuracy of the framework. The framework provides an intuitive user interface with minimal clutter. For performance, the framework switches automatically to modern multicore hardware, if available. It runs on three major platforms (Windows, Mac and Linux). Extensive tests and coverage make the framework reliable and maintainable. Conclusions. In coalescent theory, many studies of computational efficiency consider only effective sample size. Here, we evaluate proposals in the coalescent literature, to discover that the order of efficiency among the three importance sampling schemes changes when one considers running time as well as effective sample size. We also describe a computational technique called "just-in-time delegation" available to improve the trade-off between running time and precision by constructing improved importance sampling schemes from existing ones. Thus, our systems approach is a potential solution to the "2⁸ programs problem" highlighted by Felsenstein, because it provides the flexibility to include or exclude various features of similar coalescent models or importance sampling schemes.
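As a minimal illustration of the two quantities being compared (a likelihood estimate and its effective sample size), the generic importance-sampling skeleton below can be used; the proposal and weight functions are placeholders and are not part of the Coalescent framework's API.

```python
import numpy as np

def importance_sample(draw_from_q, weight_fn, n_samples=10_000, seed=0):
    """Generic IS sketch: weight_fn(x) should return p(data, x) / q(x)."""
    rng = np.random.default_rng(seed)
    draws = [draw_from_q(rng) for _ in range(n_samples)]
    w = np.array([weight_fn(x) for x in draws])
    likelihood = w.mean()                  # IS estimate of the likelihood
    ess = w.sum() ** 2 / (w ** 2).sum()    # effective sample size of the weights
    return likelihood, ess
```

Comparing proposals by effective sample size alone ignores the cost of each draw, which is why the study also folds running time into the efficiency comparison.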
The use of self-organising maps for anomalous behaviour detection in a digital investigation.
Fei, B K L; Eloff, J H P; Olivier, M S; Venter, H S
2006-10-16
The dramatic increase in crime relating to the Internet and computers has caused a growing need for digital forensics. Digital forensic tools have been developed to assist investigators in conducting a proper investigation into digital crimes. In general, the bulk of the digital forensic tools available on the market permit investigators to analyse data that has been gathered from a computer system. However, current state-of-the-art digital forensic tools simply cannot handle large volumes of data in an efficient manner. With the advent of the Internet, many employees have been given access to new and more interesting possibilities via their desktop. Consequently, excessive Internet usage for non-job purposes and even blatant misuse of the Internet have become a problem in many organisations. Since storage media are steadily growing in size, the process of analysing multiple computer systems during a digital investigation can easily consume an enormous amount of time. Identifying a single suspicious computer from a set of candidates can therefore reduce human processing time and monetary costs involved in gathering evidence. The focus of this paper is to demonstrate how, in a digital investigation, digital forensic tools and the self-organising map (SOM)--an unsupervised neural network model--can aid investigators to determine anomalous behaviours (or activities) among employees (or computer systems) in a far more efficient manner. By analysing the different SOMs (one for each computer system), anomalous behaviours are identified and investigators are assisted to conduct the analysis more efficiently. The paper will demonstrate how the easy visualisation of the SOM enhances the ability of the investigators to interpret and explore the data generated by digital forensic tools so as to determine anomalous behaviours.
Robust, Efficient Depth Reconstruction With Hierarchical Confidence-Based Matching.
Sun, Li; Chen, Ke; Song, Mingli; Tao, Dacheng; Chen, Gang; Chen, Chun
2017-07-01
In recent years, taking photos and capturing videos with mobile devices have become increasingly popular. Emerging applications based on the depth reconstruction technique have been developed, such as Google lens blur. However, depth reconstruction is difficult due to occlusions, non-diffuse surfaces, repetitive patterns, and textureless surfaces, and it has become more difficult due to the unstable image quality and uncontrolled scene condition in the mobile setting. In this paper, we present a novel hierarchical framework with multi-view confidence-based matching for robust, efficient depth reconstruction in uncontrolled scenes. Particularly, the proposed framework combines local cost aggregation with global cost optimization in a complementary manner that increases efficiency and accuracy. A depth map is efficiently obtained in a coarse-to-fine manner by using an image pyramid. Moreover, confidence maps are computed to robustly fuse multi-view matching cues, and to constrain the stereo matching on a finer scale. The proposed framework has been evaluated with challenging indoor and outdoor scenes, and has achieved robust and efficient depth reconstruction.
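The coarse-to-fine control flow described above can be sketched as follows; the naive pyramid construction and the `match_at_scale` callback are placeholders standing in for the paper's confidence-based matcher.

```python
import numpy as np

def build_pyramid(img, levels=4):
    """Naive 2x decimation pyramid; pyr[0] is the finest level."""
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(pyr[-1][::2, ::2])
    return pyr

def coarse_to_fine_depth(left, right, match_at_scale, levels=4):
    pl, pr = build_pyramid(left, levels), build_pyramid(right, levels)
    depth = None
    for l, r in zip(reversed(pl), reversed(pr)):     # coarsest level first
        if depth is not None:                        # upsample the coarser estimate
            depth = np.kron(depth, np.ones((2, 2)))[: l.shape[0], : l.shape[1]]
        depth = match_at_scale(l, r, prior=depth)    # matching constrained by the prior
    return depth
```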
Analysis of Algorithms: Coping with Hard Problems
ERIC Educational Resources Information Center
Kolata, Gina Bari
1974-01-01
Although today's computers can perform as many as one million operations per second, there are many problems that are still too large to be solved in a straightforward manner. Recent work indicates that many approximate solutions are useful and more efficient than exact solutions. (Author/RH)
Parallelisation study of a three-dimensional environmental flow model
NASA Astrophysics Data System (ADS)
O'Donncha, Fearghal; Ragnoli, Emanuele; Suits, Frank
2014-03-01
There are many simulation codes in the geosciences that are serial and cannot take advantage of the parallel computational resources commonly available today. One model important for our work in coastal ocean current modelling is EFDC, a Fortran 77 code configured for optimal deployment on vector computers. In order to take advantage of our cache-based, blade computing system we restructured EFDC from serial to parallel, thereby allowing us to run existing models more quickly, and to simulate larger and more detailed models that were previously impractical. Since the source code for EFDC is extensive and involves detailed computation, it is important to do such a port in a manner that limits changes to the files, while achieving the desired speedup. We describe a parallelisation strategy involving surgical changes to the source files to minimise error-prone alteration of the underlying computations, while allowing load-balanced domain decomposition for efficient execution on a commodity cluster. The use of conjugate gradient posed particular challenges, since its implicit non-local communication hinders standard domain partitioning schemes; a number of techniques are discussed to address this in a feasible, computationally efficient manner. The parallel implementation demonstrates good scalability in combination with a novel domain partitioning scheme that specifically handles mixed water/land regions commonly found in coastal simulations. The approach presented here represents a practical methodology to rejuvenate legacy code on a commodity blade cluster with reasonable effort; our solution has direct application to other similar codes in the geosciences.
A singularity free analytical solution of artificial satellite motion with drag
NASA Technical Reports Server (NTRS)
Mueller, A.
1978-01-01
An analytical satellite theory based on the regular, canonical Poincare-Similar (PS phi) elements is described along with an accurate density model which can be implemented into the drag theory. A computationally efficient manner in which to expand the equations of motion into a Fourier series is discussed.
Enhancing Privacy in Participatory Sensing Applications with Multidimensional Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forrest, Stephanie; He, Wenbo; Groat, Michael
2013-01-01
Participatory sensing applications rely on individuals to share personal data to produce aggregated models and knowledge. In this setting, privacy concerns can discourage widespread adoption of new applications. We present a privacy-preserving participatory sensing scheme based on negative surveys for both continuous and multivariate categorical data. Without relying on encryption, our algorithms enhance the privacy of sensed data in an energy and computation efficient manner. Simulations and implementation on Android smart phones illustrate how multidimensional data can be aggregated in a useful and privacy-enhancing manner.
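The core negative-survey idea can be sketched as follows (an illustration of the general technique, not the authors' exact protocol): each participant transmits one category it does not belong to, chosen uniformly at random, and the aggregator recovers the category histogram without ever seeing the true values.

```python
import numpy as np

def negative_report(true_category, n_categories, rng):
    others = [c for c in range(n_categories) if c != true_category]
    return rng.choice(others)                 # only a false category leaves the device

def reconstruct_counts(negative_reports, n_categories):
    r = np.bincount(np.asarray(negative_reports), minlength=n_categories)
    n = len(negative_reports)
    return n - (n_categories - 1) * r         # unbiased estimate of the true counts

rng = np.random.default_rng(1)
truth = rng.integers(0, 4, size=5000)                     # hidden sensed values
reports = [negative_report(t, 4, rng) for t in truth]     # what is actually transmitted
print(np.bincount(truth, minlength=4))
print(reconstruct_counts(reports, 4))
```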
Fast SS-ILM: A Computationally Efficient Algorithm to Discover Socially Important Locations
NASA Astrophysics Data System (ADS)
Dokuz, A. S.; Celik, M.
2017-11-01
Socially important locations are places which are frequently visited by social media users in their social media lifetime. Discovering socially important locations provides valuable information about user behaviours on social media networking sites. However, discovering socially important locations is challenging due to data volume and dimensions, spatial and temporal calculations, location sparseness in social media datasets, and the inefficiency of current algorithms. In the literature, several studies have been conducted to discover important locations; however, the proposed approaches do not work in a computationally efficient manner. In this study, we propose the Fast SS-ILM algorithm by modifying the SS-ILM algorithm to mine socially important locations efficiently. Experimental results show that the proposed Fast SS-ILM algorithm decreases the execution time of the socially important locations discovery process by up to 20%.
Efficient LIDAR Point Cloud Data Managing and Processing in a Hadoop-Based Distributed Framework
NASA Astrophysics Data System (ADS)
Wang, C.; Hu, F.; Sha, D.; Han, X.
2017-10-01
Light Detection and Ranging (LiDAR) is one of the most promising technologies in surveying and mapping, city management, forestry, object recognition, computer vision engineering and others. However, it is challenging to efficiently store, query and analyze the high-resolution 3D LiDAR data due to its volume and complexity. In order to improve the productivity of LiDAR data processing, this study proposes a Hadoop-based framework to efficiently manage and process LiDAR data in a distributed and parallel manner, which takes advantage of Hadoop's storage and computing ability. At the same time, the Point Cloud Library (PCL), an open-source project for 2D/3D image and point cloud processing, is integrated with HDFS and MapReduce to conduct the LiDAR data analysis algorithms provided by PCL in a parallel fashion. The experiment results show that the proposed framework can efficiently manage and process big LiDAR data.
NASA Technical Reports Server (NTRS)
Teske, M. E.
1984-01-01
This is a user manual for the computer code AGDISP (AGricultural DISPersal), which has been developed to predict the deposition of material released from fixed and rotary wing aircraft in a single-pass, computationally efficient manner. The formulation of the code is novel in that the mean particle trajectory and the variance about the mean resulting from turbulent fluid fluctuations are simultaneously predicted. The code presently includes the capability of assessing the influence of neutral atmospheric conditions, inviscid wake vortices, particle evaporation, plant canopy and terrain on the deposition pattern.
NASA Technical Reports Server (NTRS)
Schoen, A. H.; Rosenstein, H.; Stanzione, K.; Wisniewski, J. S.
1980-01-01
This report describes the use of the V/STOL Aircraft Sizing and Performance Computer Program (VASCOMP II). The program is useful in performing aircraft parametric studies in a quick and cost efficient manner. Problem formulation and data development were performed by the Boeing Vertol Company and reflect the present preliminary design technology. The computer program, written in FORTRAN IV, has a broad range of input parameters, to enable investigation of a wide variety of aircraft. User oriented features of the program include minimized input requirements, diagnostic capabilities, and various options for program flexibility.
An imperialist competitive algorithm for virtual machine placement in cloud computing
NASA Astrophysics Data System (ADS)
Jamali, Shahram; Malektaji, Sepideh; Analoui, Morteza
2017-05-01
Cloud computing, the recently emerged revolution in the IT industry, is empowered by virtualisation technology. In this paradigm, the user's applications run over some virtual machines (VMs). The process of selecting proper physical machines to host these virtual machines is called virtual machine placement. It plays an important role in the resource utilisation and power efficiency of a cloud computing environment. In this paper, we propose an imperialist competitive-based algorithm for the virtual machine placement problem called ICA-VMPLC. The base optimisation algorithm is chosen to be ICA because of its ease in neighbourhood movement, good convergence rate and suitable terminology. The proposed algorithm investigates the search space in a unique manner to efficiently obtain an optimal placement solution that simultaneously minimises power consumption and total resource wastage. Its final solution performance is compared with several existing methods such as grouping genetic and ant colony-based algorithms as well as a bin packing heuristic. The simulation results show that the proposed method is superior to other tested algorithms in terms of power consumption, resource wastage, CPU usage efficiency and memory usage efficiency.
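A placement fitness of the kind such metaheuristics minimise can be sketched as below; the linear power model and the wastage term are common simplifications, not necessarily the paper's exact formulation, and all names and constants are illustrative.

```python
def placement_fitness(placement, vm_req, host_cap, p_idle=160.0, p_full=250.0, alpha=1.0):
    """placement: {vm: host}; vm_req/host_cap: {name: (cpu, mem)} in consistent units."""
    used = {h: [0.0, 0.0] for h in host_cap}
    for vm, h in placement.items():
        used[h][0] += vm_req[vm][0]
        used[h][1] += vm_req[vm][1]

    power = wastage = 0.0
    for h, (cpu, mem) in used.items():
        if cpu == 0 and mem == 0:
            continue                                   # empty hosts are switched off
        u_cpu, u_mem = cpu / host_cap[h][0], mem / host_cap[h][1]
        power += p_idle + (p_full - p_idle) * u_cpu    # linear CPU-utilisation power model
        wastage += abs((1 - u_cpu) - (1 - u_mem))      # imbalance of leftover resources
    return power + alpha * wastage                     # ICA (or any heuristic) minimises this
```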
DNS of Flow in a Low-Pressure Turbine Cascade Using a Discontinuous-Galerkin Spectral-Element Method
NASA Technical Reports Server (NTRS)
Garai, Anirban; Diosady, Laslo Tibor; Murman, Scott; Madavan, Nateri
2015-01-01
A new computational capability under development for accurate and efficient high-fidelity direct numerical simulation (DNS) and large eddy simulation (LES) of turbomachinery is described. This capability is based on an entropy-stable Discontinuous-Galerkin spectral-element approach that extends to arbitrarily high orders of spatial and temporal accuracy and is implemented in a computationally efficient manner on a modern high performance computer architecture. A validation study using this method to perform DNS of flow in a low-pressure turbine airfoil cascade is presented. Preliminary results indicate that the method captures the main features of the flow. Discrepancies between the predicted results and the experiments are likely due to the effects of freestream turbulence not being included in the simulation and will be addressed in the final paper.
Gartner, Thomas E; Epps, Thomas H; Jayaraman, Arthi
2016-11-08
We describe an extension of the Gibbs ensemble molecular dynamics (GEMD) method for studying phase equilibria. Our modifications to GEMD allow for direct control over particle transfer between phases and improve the method's numerical stability. Additionally, we found that the modified GEMD approach had advantages in computational efficiency in comparison to a hybrid Monte Carlo (MC)/MD Gibbs ensemble scheme in the context of the single component Lennard-Jones fluid. We note that this increase in computational efficiency does not compromise the close agreement of phase equilibrium results between the two methods. However, numerical instabilities in the GEMD scheme hamper GEMD's use near the critical point. We propose that the computationally efficient GEMD simulations can be used to map out the majority of the phase window, with hybrid MC/MD used as a follow up for conditions under which GEMD may be unstable (e.g., near-critical behavior). In this manner, we can capitalize on the contrasting strengths of these two methods to enable the efficient study of phase equilibria for systems that present challenges for a purely stochastic GEMC method, such as dense or low temperature systems, and/or those with complex molecular topologies.
Efficient least angle regression for identification of linear-in-the-parameters models
Beach, Thomas H.; Rezgui, Yacine
2017-01-01
Least angle regression, as a promising model selection method, differentiates itself from conventional stepwise and stagewise methods, in that it is neither too greedy nor too slow. It is closely related to L1 norm optimization, which has the advantage of low prediction variance through sacrificing part of model bias property in order to enhance model generalization capability. In this paper, we propose an efficient least angle regression algorithm for model selection for a large class of linear-in-the-parameters models with the purpose of accelerating the model selection process. The entire algorithm works completely in a recursive manner, where the correlations between model terms and residuals, the evolving directions and other pertinent variables are derived explicitly and updated successively at every subset selection step. The model coefficients are only computed when the algorithm finishes. The direct involvement of matrix inversions is thereby relieved. A detailed computational complexity analysis indicates that the proposed algorithm possesses significant computational efficiency, compared with the original approach where the well-known efficient Cholesky decomposition is involved in solving least angle regression. Three artificial and real-world examples are employed to demonstrate the effectiveness, efficiency and numerical stability of the proposed algorithm. PMID:28293140
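For readers who want a baseline to compare against, least angle regression on a linear-in-the-parameters model can be run with scikit-learn's reference implementation; the polynomial term dictionary below is a made-up example, and the paper's contribution is a recursive formulation that avoids repeated matrix work rather than this off-the-shelf call.

```python
import numpy as np
from sklearn.linear_model import Lars

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(200, 1))
# candidate model terms (hypothetical dictionary of basis functions)
terms = np.hstack([x, x**2, x**3, np.sin(3 * x), np.cos(3 * x)])
y = 2.0 * x[:, 0] - 0.5 * x[:, 0] ** 3 + 0.05 * rng.standard_normal(200)

model = Lars(n_nonzero_coefs=3).fit(terms, y)       # stop after 3 selected terms
print("selected terms:", np.nonzero(model.coef_)[0])
print("coefficients:", model.coef_)
```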
Framework to trade optimality for local processing in large-scale wavefront reconstruction problems.
Haber, Aleksandar; Verhaegen, Michel
2016-11-15
We show that the minimum variance wavefront estimation problems permit localized approximate solutions, in the sense that the wavefront value at a point (excluding unobservable modes, such as the piston mode) can be approximated by a linear combination of the wavefront slope measurements in the point's neighborhood. This enables us to efficiently compute a wavefront estimate by performing a single sparse matrix-vector multiplication. Moreover, our results open the possibility for the development of wavefront estimators that can be easily implemented in a decentralized/distributed manner, and in which the estimate optimality can be easily traded for computational efficiency. We numerically validate our approach on Hudgin wavefront sensor geometries, and the results can be easily generalized to Fried geometries.
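The resulting computational pattern is simple: once a localized reconstruction matrix has been precomputed (each row touching only the slopes in one point's neighbourhood), an estimate is a single sparse matrix-vector product. The sketch below uses a random sparse stand-in rather than a real Hudgin-geometry reconstructor.

```python
import numpy as np
from scipy.sparse import random as sparse_random

n_phase, n_slopes = 1024, 2048
R = sparse_random(n_phase, n_slopes, density=0.01, format="csr", random_state=0)
slopes = np.random.default_rng(0).standard_normal(n_slopes)   # measured wavefront slopes
wavefront_estimate = R @ slopes                                # O(nnz) work per frame
print(wavefront_estimate.shape)
```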
ERIC Educational Resources Information Center
Calvo-Ferrer, José Ramón
2017-01-01
According to different authors, computer games not only teach contents and skills, but also do so in a more efficient manner, allowing long-lasting learning. However, there is still little consensus on this matter as different studies put their educational benefits into question, especially when used without instructional support. An empirical…
Design and Development of ChemInfoCloud: An Integrated Cloud Enabled Platform for Virtual Screening.
Karthikeyan, Muthukumarasamy; Pandit, Deepak; Bhavasar, Arvind; Vyas, Renu
2015-01-01
The power of cloud computing and distributed computing has been harnessed to handle the vast and heterogeneous data required to be processed in any virtual screening protocol. A cloud computing platform, ChemInfoCloud, was built and integrated with several chemoinformatics and bioinformatics tools. The robust engine performs the core chemoinformatics tasks of lead generation, lead optimisation and property prediction in a fast and efficient manner. It has also been provided with some of the bioinformatics functionalities including sequence alignment, active site pose prediction and protein ligand docking. Text mining, NMR chemical shift (¹H, ¹³C) prediction and reaction fingerprint generation modules for efficient lead discovery are also implemented in this platform. We have developed an integrated problem solving cloud environment for virtual screening studies that also provides workflow management, better usability and interaction with end users using container based virtualization, OpenVz.
Practical experimental certification of computational quantum gates using a twirling procedure.
Moussa, Osama; da Silva, Marcus P; Ryan, Colm A; Laflamme, Raymond
2012-08-17
Because of the technical difficulty of building large quantum computers, it is important to be able to estimate how faithful a given implementation is to an ideal quantum computer. The common approach of completely characterizing the computation process via quantum process tomography requires an exponential amount of resources, and thus is not practical even for relatively small devices. We solve this problem by demonstrating that twirling experiments previously used to characterize the average fidelity of quantum memories efficiently can be easily adapted to estimate the average fidelity of the experimental implementation of important quantum computation processes, such as unitaries in the Clifford group, in a practical and efficient manner with applicability in current quantum devices. Using this procedure, we demonstrate state-of-the-art coherent control of an ensemble of magnetic moments of nuclear spins in a single crystal solid by implementing the encoding operation for a 3-qubit code with only a 1% degradation in average fidelity discounting preparation and measurement errors. We also highlight one of the advances that was instrumental in achieving such high fidelity control.
Password Cracking Using Sony Playstations
NASA Astrophysics Data System (ADS)
Kleinhans, Hugo; Butts, Jonathan; Shenoi, Sujeet
Law enforcement agencies frequently encounter encrypted digital evidence for which the cryptographic keys are unknown or unavailable. Password cracking - whether it employs brute force or sophisticated cryptanalytic techniques - requires massive computational resources. This paper evaluates the benefits of using the Sony PlayStation 3 (PS3) to crack passwords. The PS3 offers massive computational power at relatively low cost. Moreover, multiple PS3 systems can be introduced easily to expand parallel processing when additional power is needed. This paper also describes a distributed framework designed to enable law enforcement agents to crack encrypted archives and applications in an efficient and cost-effective manner.
Clinical applications of cone beam computed tomography in endodontics: A comprehensive review.
Cohenca, Nestor; Shemesh, Hagay
2015-06-01
Cone beam computed tomography (CBCT) is a new technology that produces three-dimensional (3D) digital imaging at reduced cost and less radiation for the patient than traditional CT scans. It also delivers faster and easier image acquisition. By providing a 3D representation of the maxillofacial tissues in a cost- and dose-efficient manner, a better preoperative assessment can be obtained for diagnosis and treatment. This comprehensive review presents current applications of CBCT in endodontics. Specific case examples illustrate the difference in treatment planning with traditional periapical radiography versus CBCT technology.
Machine learning properties of binary wurtzite superlattices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pilania, G.; Liu, X. -Y.
2018-01-12
The burgeoning paradigm of high-throughput computations and materials informatics brings new opportunities in terms of targeted materials design and discovery. The discovery process can be significantly accelerated and streamlined if one can learn effectively from available knowledge and past data to predict materials properties efficiently. Indeed, a very active area in materials science research is to develop machine learning based methods that can deliver automated and cross-validated predictive models using either already available materials data or new data generated in a targeted manner. In the present paper, we show that fast and accurate predictions of a wide range of properties of binary wurtzite superlattices, formed by a diverse set of chemistries, can be made by employing state-of-the-art statistical learning methods trained on quantum mechanical computations in combination with a judiciously chosen numerical representation to encode materials’ similarity. These surrogate learning models then allow for efficient screening of vast chemical spaces by providing instant predictions of the targeted properties. Moreover, the models can be systematically improved in an adaptive manner, incorporate properties computed at different levels of fidelities and are naturally amenable to inverse materials design strategies. Finally, while the learning approach to make predictions for a wide range of properties (including structural, elastic and electronic properties) is demonstrated here for a specific example set containing more than 1200 binary wurtzite superlattices, the adopted framework is equally applicable to other classes of materials as well.
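The generic surrogate-learning loop the abstract describes can be sketched with cross-validated kernel ridge regression; the fingerprints and property values below are random stand-ins, not the paper's descriptor or quantum-mechanical dataset.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.uniform(size=(1200, 8))                            # stand-in numerical fingerprints
y = X @ rng.normal(size=8) + 0.1 * np.sin(5 * X[:, 0])     # stand-in property values

search = GridSearchCV(
    KernelRidge(kernel="rbf"),
    {"alpha": [1e-3, 1e-2, 1e-1], "gamma": [0.1, 1.0, 10.0]},
    cv=5,
)
search.fit(X, y)
print("cross-validated R^2:", search.best_score_, "best params:", search.best_params_)
# search.predict(new_fingerprints) would then screen unseen chemistries instantly
```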
NASA Technical Reports Server (NTRS)
Anderson, M. S.; Warnaar, D. B.; Ling, B. J.; Aeherstrom, C. L.; Kennedy, D.
1986-01-01
A computer program is described which is especially suited for making vibration and buckling calculations for prestressed lattice structures that might be used for space application. Structures having repetitive geometry are treated in a very efficient manner. Detailed instructions for data input are given along with several example problems illustrating the use and capability of the program.
Kepper, Nick; Ettig, Ramona; Dickmann, Frank; Stehr, Rene; Grosveld, Frank G; Wedemann, Gero; Knoch, Tobias A
2010-01-01
Especially in the life-science and the health-care sectors the huge IT requirements are imminent due to the large and complex systems to be analysed and simulated. Grid infrastructures play here a rapidly increasing role for research, diagnostics, and treatment, since they provide the necessary large-scale resources efficiently. Whereas grids were first used for huge number crunching of trivially parallelizable problems, increasingly parallel high-performance computing is required. Here, we show for the prime example of molecular dynamic simulations how the presence of large grid clusters including very fast network interconnects within grid infrastructures now allows efficient parallel high-performance grid computing and thus combines the benefits of dedicated super-computing centres and grid infrastructures. The demands for this service class are the highest since the user group has very heterogeneous requirements: i) two to many thousands of CPUs, ii) different memory architectures, iii) huge storage capabilities, and iv) fast communication via network interconnects, are all needed in different combinations and must be considered in a highly dedicated manner to reach highest performance efficiency. Beyond, advanced and dedicated i) interaction with users, ii) the management of jobs, iii) accounting, and iv) billing, not only combines classic with parallel high-performance grid usage, but more importantly is also able to increase the efficiency of IT resource providers. Consequently, the mere "yes-we-can" becomes a huge opportunity for areas such as the life-science and health-care sectors, as well as for grid infrastructures, by reaching a higher level of resource efficiency.
Sensor network based vehicle classification and license plate identification system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frigo, Janette Rose; Brennan, Sean M; Rosten, Edward J
Typically, for energy efficiency and scalability purposes, sensor networks have been used in the context of environmental and traffic monitoring applications in which operations at the sensor level are not computationally intensive. But increasingly, sensor network applications require data- and compute-intensive sensors such as video cameras and microphones. In this paper, we describe the design and implementation of two such systems: a vehicle classifier based on acoustic signals and a license plate identification system using a camera. The systems are implemented in an energy-efficient manner to the extent possible using commercially available hardware, the Mica motes and the Stargate platform. Our experience in designing these systems leads us to consider an alternate, more flexible, modular, low-power mote architecture that uses a combination of FPGAs, specialized embedded processing units and sensor data acquisition systems.
Simulation of 3-D Nonequilibrium Seeded Air Flow in the NASA-Ames MHD Channel
NASA Technical Reports Server (NTRS)
Gupta, Sumeet; Tannehill, John C.; Mehta, Unmeel B.
2004-01-01
The 3-D nonequilibrium seeded air flow in the NASA-Ames experimental MHD channel has been numerically simulated. The channel contains a nozzle section, a center section, and an accelerator section where magnetic and electric fields can be imposed on the flow. In recent tests, velocity increases of up to 40% have been achieved in the accelerator section. The flow in the channel is numerically computed using a 3-D parabolized Navier-Stokes (PNS) algorithm that has been developed to efficiently compute MHD flows in the low magnetic Reynolds number regime. The MHD effects are modeled by introducing source terms into the PNS equations which can then be solved in a very efficient manner. The algorithm has been extended in the present study to account for nonequilibrium seeded air flows. The electrical conductivity of the flow is determined using the program of Park. The new algorithm has been used to compute two test cases that match the experimental conditions. In both cases, magnetic and electric fields are applied to the seeded flow. The computed results are in good agreement with the experimental data.
Computational Fragment-Based Drug Design: Current Trends, Strategies, and Applications.
Bian, Yuemin; Xie, Xiang-Qun Sean
2018-04-09
Fragment-based drug design (FBDD) has been an effective methodology for drug development for decades. Successful applications of this strategy have brought both opportunities and challenges to the field of pharmaceutical science. Recent progress in computational fragment-based drug design provides an additional approach for future research in a time- and labor-efficient manner. Combining multiple in silico methodologies, computational FBDD offers flexibility in fragment library selection, protein model generation, and fragment/compound docking mode prediction. These characteristics give computational FBDD an advantage in designing novel and potential compounds for a certain target. The purpose of this review is to discuss the latest advances, ranging from commonly used strategies to novel concepts and technologies in computational fragment-based drug design. In particular, this review compares the specifications and advantages of experimental and computational FBDD, and discusses limitations and future prospects.
Efficient detection of a CW signal with a linear frequency drift
NASA Technical Reports Server (NTRS)
Swarztrauber, Paul N.; Bailey, David H.
1989-01-01
An efficient method is presented for the detection of a continuous wave (CW) signal with a frequency drift that is linear in time. Signals of this type occur in transmissions between any two locations that are accelerating relative to one another, e.g., transmissions from the Voyager spacecraft. We assume that both the frequency and the drift are unknown. We also assume that the signal is weak compared to the Gaussian noise. The signal is partitioned into subsequences whose discrete Fourier transforms provide a sequence of instantaneous spectra at equal time intervals. These spectra are then accumulated with a shift that is proportional to time. When the shift is equal to the frequency drift, the signal to noise ratio increases and detection occurs. Here, we show how to compute these accumulations for many shifts in an efficient manner using a variety of fast Fourier transforms (FFTs). Computing time is proportional to L log L, where L is the length of the time series.
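The accumulate-with-shift step can be sketched directly in a few lines; the paper's contribution is evaluating all drift hypotheses efficiently with FFTs, whereas the sketch below applies each candidate shift explicitly.

```python
import numpy as np

def drifting_cw_power(signal, n_blocks, drifts):
    """Accumulate block power spectra with a shift proportional to time.

    signal : 1-D samples containing a weak drifting tone plus noise.
    drifts : candidate drifts, in FFT bins per block.
    Returns accum[drift_index, frequency_bin]; a peak marks (drift, start frequency).
    """
    block_len = len(signal) // n_blocks
    blocks = signal[: n_blocks * block_len].reshape(n_blocks, block_len)
    spectra = np.abs(np.fft.fft(blocks, axis=1)) ** 2      # instantaneous spectra
    accum = np.zeros((len(drifts), block_len))
    for d_idx, d in enumerate(drifts):
        for b in range(n_blocks):
            accum[d_idx] += np.roll(spectra[b], -int(round(d * b)))
    return accum
```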
A Hadoop-Based Distributed Framework for Efficient Managing and Processing Big Remote Sensing Images
NASA Astrophysics Data System (ADS)
Wang, C.; Hu, F.; Hu, X.; Zhao, S.; Wen, W.; Yang, C.
2015-07-01
Various sensors from airborne and satellite platforms are producing large volumes of remote sensing images for mapping, environmental monitoring, disaster management, military intelligence, and others. However, it is challenging to efficiently store, query and process such big data due to the data- and computing-intensive issues. In this paper, a Hadoop-based framework is proposed to manage and process the big remote sensing data in a distributed and parallel manner. Especially, remote sensing data can be directly fetched from other data platforms into the Hadoop Distributed File System (HDFS). The Orfeo toolbox, a ready-to-use tool for large image processing, is integrated into MapReduce to provide affluent image processing operations. With the integration of HDFS, Orfeo toolbox and MapReduce, these remote sensing images can be directly processed in parallel in a scalable computing environment. The experiment results show that the proposed framework can efficiently manage and process such big remote sensing data.
NASA Astrophysics Data System (ADS)
Boucharin, Alexis; Oguz, Ipek; Vachet, Clement; Shi, Yundi; Sanchez, Mar; Styner, Martin
2011-03-01
The use of regional connectivity measurements derived from diffusion imaging datasets has become of considerable interest in the neuroimaging community in order to better understand cortical and subcortical white matter connectivity. Current connectivity assessment methods are based on streamline fiber tractography, usually applied in a Monte-Carlo fashion. In this work we present a novel, graph-based method that performs a fully deterministic, efficient and stable connectivity computation. The method handles crossing fibers and deals well with multiple seed regions. The computation is based on a multi-directional graph propagation method applied to sampled orientation distribution functions (ODF), which can be computed directly from the original diffusion imaging data. We show early results of our method on synthetic and real datasets. The results illustrate the potential of our method towards subject-specific connectivity measurements that are performed in an efficient, stable and reproducible manner. Such individual connectivity measurements would be well suited for application in population studies of neuropathology, such as Autism, Huntington's Disease, Multiple Sclerosis or leukodystrophies. The proposed method is generic and could easily be applied to non-diffusion data as long as local directional data can be derived.
High-efficiency wavefunction updates for large scale Quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Kent, Paul; McDaniel, Tyler; Li, Ying Wai; D'Azevedo, Ed
Within ab initio Quantum Monte Carlo (QMC) simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunctions. The evaluation of each Monte Carlo move requires finding the determinant of a dense matrix, which is traditionally iteratively evaluated using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. For calculations with thousands of electrons, this operation dominates the execution profile. We propose a novel rank-k delayed update scheme. This strategy enables probability evaluation for multiple successive Monte Carlo moves, with application of accepted moves to the matrices delayed until after a predetermined number of moves, k. Accepted events grouped in this manner are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency. This procedure does not change the underlying Monte Carlo sampling or the sampling efficiency. For large systems and algorithms such as diffusion Monte Carlo where the acceptance ratio is high, order of magnitude speedups can be obtained on both multi-core CPU and on GPUs, making this algorithm highly advantageous for current petascale and future exascale computations.
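The contrast between per-move rank-1 updates and delayed rank-k updates can be sketched with plain dense linear algebra; the matrices below are placeholders rather than Slater matrices from a QMC code, and the blocked form is what recovers matrix-matrix (BLAS-3) arithmetic intensity.

```python
import numpy as np

def sherman_morrison_row_update(Ainv, row, new_row, old_row):
    """Rank-1 update of A_inv after replacing one row of A."""
    v = new_row - old_row
    Au = Ainv[:, row]                    # equals A_inv @ e_row
    vA = v @ Ainv
    return Ainv - np.outer(Au, vA) / (1.0 + vA[row])

def delayed_rank_k_update(Ainv, rows, new_rows, old_rows):
    """Apply k accepted row replacements at once via the Woodbury identity."""
    k, n = len(rows), Ainv.shape[0]
    U = np.zeros((n, k))
    U[rows, np.arange(k)] = 1.0
    V = np.asarray(new_rows) - np.asarray(old_rows)   # k x n row deltas
    AinvU = Ainv @ U
    S = np.eye(k) + V @ AinvU                         # small k x k system
    return Ainv - AinvU @ np.linalg.solve(S, V @ Ainv)
```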
Rice, Linda Marie; Wall, Carla Anne; Fogel, Adam; Shic, Frederick
2015-07-01
This study examined the extent to which a computer-based social skills intervention called FaceSay was associated with improvements in affect recognition, mentalizing, and social skills of school-aged children with Autism Spectrum Disorder (ASD). FaceSay offers students simulated practice with eye gaze, joint attention, and facial recognition skills. This randomized control trial included school-aged children meeting educational criteria for autism (N = 31). Results demonstrated that participants who received the intervention improved their affect recognition and mentalizing skills, as well as their social skills. These findings suggest that, by targeting face-processing skills, computer-based interventions may produce changes in broader cognitive and social-skills domains in a cost- and time-efficient manner.
Scaling up to address data science challenges
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wendelberger, Joanne R.
2017-04-27
Statistics and Data Science provide a variety of perspectives and technical approaches for exploring and understanding Big Data. Partnerships between scientists from different fields such as statistics, machine learning, computer science, and applied mathematics can lead to innovative approaches for addressing problems involving increasingly large amounts of data in a rigorous and effective manner that takes advantage of advances in computing. Here, this article will explore various challenges in Data Science and will highlight statistical approaches that can facilitate analysis of large-scale data, including sampling and data reduction methods, techniques for effective analysis and visualization of large-scale simulations, and algorithms and procedures for efficient processing.
Self-Organized Service Negotiation for Collaborative Decision Making
Zhang, Bo; Huang, Zhenhua; Zheng, Ziming
2014-01-01
This paper proposes a self-organized service negotiation method for CDM in intelligent and automatic manners. It mainly includes three phases: semantic-based capacity evaluation for the CDM sponsor, trust computation of the CDM organization, and negotiation selection of the decision-making service provider (DMSP). In the first phase, the CDM sponsor produces the formal semantic description of the complex decision task for DMSP and computes the capacity evaluation values according to participator instructions from different DMSPs. In the second phase, a novel trust computation approach is presented to compute the subjective belief value, the objective reputation value, and the recommended trust value. And in the third phase, based on the capacity evaluation and trust computation, a negotiation mechanism is given to efficiently implement the service selection. The simulation experiment results show that our self-organized service negotiation method is feasible and effective for CDM. PMID:25243228
An efficient algorithm for generating random number pairs drawn from a bivariate normal distribution
NASA Technical Reports Server (NTRS)
Campbell, C. W.
1983-01-01
An efficient algorithm for generating random number pairs from a bivariate normal distribution was developed. Any desired value of the two means, two standard deviations, and correlation coefficient can be selected. Theoretically the technique is exact and in practice its accuracy is limited only by the quality of the uniform distribution random number generator, inaccuracies in computer function evaluation, and arithmetic. A FORTRAN routine was written to check the algorithm and good accuracy was obtained. Some small errors in the correlation coefficient were observed to vary in a surprisingly regular manner. A simple model was developed which explained the qualitative aspects of the errors.
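The standard construction behind such a generator mixes two independent standard normal draws; the routine below is a sketch of that technique in Python, not the report's FORTRAN routine.

```python
import numpy as np

def bivariate_normal_pairs(n, mu1, mu2, sigma1, sigma2, rho, seed=0):
    rng = np.random.default_rng(seed)
    z1 = rng.standard_normal(n)
    z2 = rng.standard_normal(n)
    x = mu1 + sigma1 * z1
    y = mu2 + sigma2 * (rho * z1 + np.sqrt(1.0 - rho ** 2) * z2)   # correlation rho by construction
    return x, y

x, y = bivariate_normal_pairs(100_000, 1.0, -2.0, 0.5, 2.0, rho=0.7)
print(np.corrcoef(x, y)[0, 1])   # close to 0.7, limited only by the uniform generator
```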
Standish, Kristopher A; Carland, Tristan M; Lockwood, Glenn K; Pfeiffer, Wayne; Tatineni, Mahidhar; Huang, C Chris; Lamberth, Sarah; Cherkas, Yauheniya; Brodmerkel, Carrie; Jaeger, Ed; Smith, Lance; Rajagopal, Gunaretnam; Curran, Mark E; Schork, Nicholas J
2015-09-22
Next-generation sequencing (NGS) technologies have become much more efficient, allowing whole human genomes to be sequenced faster and cheaper than ever before. However, processing the raw sequence reads associated with NGS technologies requires care and sophistication in order to draw compelling inferences about phenotypic consequences of variation in human genomes. It has been shown that different approaches to variant calling from NGS data can lead to different conclusions. Ensuring appropriate accuracy and quality in variant calling can come at a computational cost. We describe our experience implementing and evaluating a group-based approach to calling variants on large numbers of whole human genomes. We explore the influence of many factors that may impact the accuracy and efficiency of group-based variant calling, including group size, the biogeographical backgrounds of the individuals who have been sequenced, and the computing environment used. We make efficient use of the Gordon supercomputer cluster at the San Diego Supercomputer Center by incorporating job-packing and parallelization considerations into our workflow while calling variants on 437 whole human genomes generated as part of a large association study. We ultimately find that our workflow resulted in high-quality variant calls in a computationally efficient manner. We argue that studies like ours should motivate further investigations combining hardware-oriented advances in computing systems with algorithmic developments to tackle emerging 'big data' problems in biomedical research brought on by the expansion of NGS technologies.
Study of shock-induced combustion using an implicit TVD scheme
NASA Technical Reports Server (NTRS)
Yungster, Shayne
1992-01-01
The supersonic combustion flowfields associated with various hypersonic propulsion systems, such as the ram accelerator, the oblique detonation wave engine, and the scramjet, are being investigated using a new computational fluid dynamics (CFD) code. The code solves the fully coupled Reynolds-averaged Navier-Stokes equations and species continuity equations in an efficient manner. It employs an iterative method and a second order differencing scheme to improve computational efficiency. The code is currently being applied to study shock wave/boundary layer interactions in premixed combustible gases, and to investigate the ram accelerator concept. Results obtained for a ram accelerator configuration indicate a new combustion mechanism in which a shock wave induces combustion in the boundary layer, which then propagates outward and downstream. The combustion process creates a high pressure region over the back of the projectile resulting in a net positive thrust forward.
Systems Research and Development Service Report of R&D Activity,
1980-05-01
…failure and with the ARTCC and ARTS computers via a patch panel. When the ATC system is… retain the data on the displays. The most critical position is… necessary delays, if any, and provides the necessary commands to satisfy the established schedules… in the most efficient manner… M&S also performs… most unpredictable segment since current ATC practices differ widely and are dependent upon load. Thus, if… Large: 12,500 to 300,000 lbs.; Heavy: more than 300,000 lbs…
A GPU-accelerated implicit meshless method for compressible flows
NASA Astrophysics Data System (ADS)
Zhang, Jia-Le; Ma, Zhi-Hua; Chen, Hong-Quan; Cao, Cheng
2018-05-01
This paper develops a recently proposed GPU based two-dimensional explicit meshless method (Ma et al., 2014) by devising and implementing an efficient parallel LU-SGS implicit algorithm to further improve the computational efficiency. The capability of the original 2D meshless code is extended to deal with 3D complex compressible flow problems. To resolve the inherent data dependency of the standard LU-SGS method, which causes thread-racing conditions destabilizing numerical computation, a generic rainbow coloring method is presented and applied to organize the computational points into different groups by painting neighboring points with different colors. The original LU-SGS method is modified and parallelized accordingly to perform calculations in a color-by-color manner. The CUDA Fortran programming model is employed to develop the key kernel functions to apply boundary conditions, calculate time steps, evaluate residuals as well as advance and update the solution in the temporal space. A series of two- and three-dimensional test cases including compressible flows over single- and multi-element airfoils and a M6 wing are carried out to verify the developed code. The obtained solutions agree well with experimental data and other computational results reported in the literature. Detailed analysis on the performance of the developed code reveals that the developed CPU based implicit meshless method is at least four to eight times faster than its explicit counterpart. The computational efficiency of the implicit method could be further improved by ten to fifteen times on the GPU.
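The rainbow-colouring step can be illustrated with a simple greedy scheme over the point-adjacency graph (a sketch of the idea, not the authors' specific method): no two neighbouring points share a colour, so each colour group can be swept in parallel without thread races.

```python
def greedy_coloring(neighbors):
    """neighbors: dict mapping each point to an iterable of adjacent points."""
    color = {}
    for p in neighbors:                                   # any deterministic order
        used = {color[q] for q in neighbors[p] if q in color}
        c = 0
        while c in used:
            c += 1
        color[p] = c
    return color

def color_groups(color):
    groups = {}
    for p, c in color.items():
        groups.setdefault(c, []).append(p)
    return [groups[c] for c in sorted(groups)]   # one group per parallel LU-SGS sweep
```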
Tutorial: Asteroseismic Stellar Modelling with AIMS
NASA Astrophysics Data System (ADS)
Lund, Mikkel N.; Reese, Daniel R.
The goal of AIMS (Asteroseismic Inference on a Massive Scale) is to estimate stellar parameters and credible intervals/error bars in a Bayesian manner from a set of asteroseismic frequency data and so-called classical constraints. To achieve reliable parameter estimates and computational efficiency, it searches through a grid of pre-computed models using an MCMC algorithm—interpolation within the grid of models is performed by first tessellating the grid using a Delaunay triangulation and then doing a linear barycentric interpolation on matching simplexes. Inputs for the modelling consist of individual frequencies from peak-bagging, which can be complemented with classical spectroscopic constraints. AIMS is mostly written in Python with a modular structure to facilitate contributions from the community. Only a few computationally intensive parts have been rewritten in Fortran in order to speed up calculations.
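The interpolation step described above (Delaunay tessellation followed by linear barycentric interpolation within the enclosing simplex) corresponds to SciPy's LinearNDInterpolator; the grid below is a random placeholder rather than a real stellar-model grid.

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

grid_params = np.random.default_rng(0).uniform(size=(500, 2))     # e.g. rescaled (mass, age)
grid_output = np.sin(grid_params[:, 0]) + grid_params[:, 1] ** 2  # stand-in model quantity

tess = Delaunay(grid_params)                       # tessellate the model grid once
interp = LinearNDInterpolator(tess, grid_output)   # linear barycentric interpolation
print(interp([[0.3, 0.7], [0.5, 0.5]]))            # predictions at off-grid parameters
```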
Extending Moore's Law via Computationally Error Tolerant Computing.
Deng, Bobin; Srikanth, Sriseshan; Hein, Eric R.; ...
2018-03-01
Dennard scaling has ended. Lowering the voltage supply (Vdd) to sub-volt levels causes intermittent losses in signal integrity, rendering further scaling (down) no longer acceptable as a means to lower the power required by a processor core. However, it is possible to correct the occasional errors caused by lower Vdd in an efficient manner and effectively lower power. By deploying the right amount and kind of redundancy, we can strike a balance between the overhead incurred in achieving reliability and the energy savings realized by permitting lower Vdd. One promising approach is the Redundant Residue Number System (RRNS) representation. Unlike other error correcting codes, RRNS has the important property of being closed under addition, subtraction and multiplication, thus enabling computational error correction at a fraction of the overhead of conventional approaches. We use the RRNS scheme to design a Computationally-Redundant, Energy-Efficient core, including the microarchitecture, Instruction Set Architecture (ISA) and RRNS-centered algorithms. Simulation results show that this RRNS system can reduce the energy-delay product by about 3× for multiplication-intensive workloads and by about 2× in general, when compared to a non-error-correcting binary core.
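The closure property that makes RRNS attractive can be illustrated with a short, self-contained Python sketch: arithmetic proceeds independently on each residue, and a redundant modulus lets a corrupted residue be detected because the decoded value falls outside the legitimate dynamic range. The moduli below are arbitrary small primes chosen for illustration and do not reflect the paper's core design.

# Illustrative sketch of redundant residue arithmetic: addition and multiplication
# act component-wise on residues, and a redundant modulus enables error detection.

MODULI = (7, 11, 13, 17)      # last modulus is treated as the redundant one
LEGIT_RANGE = 7 * 11 * 13     # legitimate values must decode below this product

def encode(x):
    return tuple(x % m for m in MODULI)

def add(a, b):
    return tuple((ai + bi) % m for ai, bi, m in zip(a, b, MODULI))

def mul(a, b):
    return tuple((ai * bi) % m for ai, bi, m in zip(a, b, MODULI))

def decode(res):
    """Chinese Remainder Theorem reconstruction over all moduli."""
    M = 1
    for m in MODULI:
        M *= m
    x = 0
    for r, m in zip(res, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)
    return x % M

def check(res):
    return decode(res) < LEGIT_RANGE   # False indicates a detected error

a, b = encode(45), encode(19)
s = add(a, b)
print(decode(s), check(s))            # 64 True
corrupted = (s[0],) + ((s[1] + 3) % 11,) + s[2:]   # flip one residue
print(check(corrupted))               # False -> single-residue error detected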
Efficient simulation of incompressible viscous flow over multi-element airfoils
NASA Technical Reports Server (NTRS)
Rogers, Stuart E.; Wiltberger, N. Lyn; Kwak, Dochan
1992-01-01
The incompressible, viscous, turbulent flow over single and multi-element airfoils is numerically simulated in an efficient manner by solving the incompressible Navier-Stokes equations. The computer code uses the method of pseudo-compressibility with an upwind-differencing scheme for the convective fluxes and an implicit line-relaxation solution algorithm. The motivation for this work includes interest in studying the high-lift take-off and landing configurations of various aircraft. In particular, accurate computation of lift and drag at various angles of attack, up to stall, is desired. Two different turbulence models are tested in computing the flow over an NACA 4412 airfoil; an accurate prediction of stall is obtained. The approach used for multi-element airfoils involves the use of multiple zones of structured grids fitted to each element. Two different approaches are compared: a patched system of grids, and an overlaid Chimera system of grids. Computational results are presented for two-element, three-element, and four-element airfoil configurations. Excellent agreement with experimental surface pressure coefficients is seen. The code converges in less than 200 iterations, requiring on the order of one minute of CPU time (on a CRAY YMP) per element in the airfoil configuration.
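For context, the pseudo-compressibility (artificial compressibility) idea referenced above is commonly written in the following form, where β is the artificial compressibility parameter and τ a pseudo-time variable; this is the standard textbook statement rather than the paper's exact discretized scheme.

% Standard artificial-compressibility (pseudo-compressibility) form of the
% incompressible equations; \beta drives \nabla\cdot\mathbf{u}\to 0 as the
% solution converges in pseudo-time \tau.
\begin{align}
  \frac{\partial p}{\partial \tau} + \beta\,\nabla\!\cdot\mathbf{u} &= 0,\\
  \frac{\partial \mathbf{u}}{\partial \tau}
    + (\mathbf{u}\cdot\nabla)\mathbf{u}
    &= -\nabla p + \nu\,\nabla^{2}\mathbf{u}.
\end{align}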
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, Jitendra; Hargrove Jr., William Walter; Norman, Steven P
Vegetation canopy structure is a critically important habitat characteristic for many threatened and endangered birds and other animal species, and it is key information needed by forest and wildlife managers for monitoring and managing forest resources, conservation planning and fostering biodiversity. Advances in Light Detection and Ranging (LiDAR) technologies have enabled remote sensing-based studies of vegetation canopies by capturing three-dimensional structures, yielding information not available in two-dimensional images of the landscape provided by traditional multi-spectral remote sensing platforms. However, the large volume data sets produced by airborne LiDAR instruments pose a significant computational challenge, requiring algorithms to identify and analyze patterns of interest buried within LiDAR point clouds in a computationally efficient manner, utilizing state-of-the-art computing infrastructure. We developed and applied a computationally efficient approach to analyze a large volume of LiDAR data and to characterize and map the vegetation canopy structures for 139,859 hectares (540 sq. miles) in the Great Smoky Mountains National Park. This study helps improve our understanding of the distribution of vegetation and animal habitats in this extremely diverse ecosystem.
Biyikli, Emre; To, Albert C.
2015-01-01
A new topology optimization method called Proportional Topology Optimization (PTO) is presented. As a non-sensitivity method, PTO is simple to understand, easy to implement, and at the same time efficient and accurate. It is implemented in two MATLAB programs to solve the stress-constrained and minimum compliance problems. Descriptions of the algorithm and computer programs are provided in detail. The method is applied to solve three numerical examples for both types of problems. The method shows comparable efficiency and accuracy with an existing optimality criteria method which computes sensitivities. Also, the PTO stress-constrained algorithm and minimum compliance algorithm are compared by feeding the output of one algorithm to the other in an alternating manner, where the former yields lower maximum stress and volume fraction but higher compliance compared to the latter. Advantages and disadvantages of the proposed method and future work are discussed. The computer programs are self-contained and publicly shared at the website www.ptomethod.org. PMID:26678849
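To make the "proportional" idea concrete, the following Python sketch shows one density-update step in which material is distributed among elements in proportion to a response measure and then clipped to bounds. The element stresses, exponent, and bounds are illustrative assumptions; the published MATLAB programs at www.ptomethod.org remain the authoritative implementation.

# Sketch of the proportional idea: the target amount of material is distributed
# among elements in proportion to a response quantity (here, element stress),
# then clipped to the density bounds. A real loop would recompute the stresses
# with a finite element solve at every iteration.
import numpy as np

def pto_density_update(density, elem_stress, target_volume_frac, q=2.0,
                       rho_min=0.0, rho_max=1.0):
    weights = elem_stress ** q                    # proportionality measure
    target_material = target_volume_frac * density.size
    new_density = target_material * weights / weights.sum()
    return np.clip(new_density, rho_min, rho_max)

rng = np.random.default_rng(1)
rho = np.full(100, 0.5)                           # 100 elements, uniform start
stress = rng.uniform(0.1, 1.0, size=100)          # placeholder stress field
rho = pto_density_update(rho, stress, target_volume_frac=0.4)
print(rho.mean())                                 # close to 0.4 unless clipping binds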
A Novel Shape Parameterization Approach
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
1999-01-01
This paper presents a novel parameterization approach for complex shapes suitable for a multidisciplinary design optimization application. The approach consists of two basic concepts: (1) parameterizing the shape perturbations rather than the geometry itself and (2) performing the shape deformation by means of the soft objects animation algorithms used in computer graphics. Because the formulation presented in this paper is independent of grid topology, we can treat computational fluid dynamics and finite element grids in a similar manner. The proposed approach is simple, compact, and efficient. Also, the analytical sensitivity derivatives are easily computed for use in a gradient-based optimization. This algorithm is suitable for low-fidelity (e.g., linear aerodynamics and equivalent laminated plate structures) and high-fidelity analysis tools (e.g., nonlinear computational fluid dynamics and detailed finite element modeling). This paper contains the implementation details of parameterizing for planform, twist, dihedral, thickness, and camber. The results are presented for a multidisciplinary design optimization application consisting of nonlinear computational fluid dynamics, detailed computational structural mechanics, performance, and a simple propulsion module.
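The first concept, parameterizing shape perturbations rather than the geometry itself, can be sketched in a few lines: a deformed grid is the baseline grid plus a design-variable-weighted sum of basis perturbation fields, and the analytical sensitivity with respect to each design variable is simply the corresponding basis field. The basis modes below are invented for illustration and are not the paper's planform/twist/camber formulation.

# Minimal sketch: the same basis perturbation fields are applied to any set of
# grid points (CFD or FEM), so the treatment is independent of grid topology.
import numpy as np

def deform(baseline_xyz, basis_fields, design_vars):
    """baseline_xyz: (n,3); basis_fields: (m,n,3); design_vars: (m,)"""
    return baseline_xyz + np.tensordot(design_vars, basis_fields, axes=1)

n = 50
x = np.linspace(0.0, 1.0, n)
baseline = np.stack([x, np.zeros(n), np.zeros(n)], axis=1)       # a flat "wing section"
camber_mode = np.stack([np.zeros(n), np.zeros(n), x * (1 - x)], axis=1)
twist_mode = np.stack([np.zeros(n), np.zeros(n), 0.1 * (x - 0.25)], axis=1)
basis = np.stack([camber_mode, twist_mode])

d = np.array([0.2, -0.5])                                        # design variables
deformed = deform(baseline, basis, d)
# Analytical sensitivity: d(deformed)/d(d_i) = basis[i], independent of grid topology.
print(deformed.shape, np.allclose(deformed - baseline, d[0] * camber_mode + d[1] * twist_mode))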
Multidisciplinary Aerodynamic-Structural Shape Optimization Using Deformation (MASSOUD)
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
2000-01-01
This paper presents a multidisciplinary shape parameterization approach. The approach consists of two basic concepts: (1) parameterizing the shape perturbations rather than the geometry itself and (2) performing the shape deformation by means of the soft object animation algorithms used in computer graphics. Because the formulation presented in this paper is independent of grid topology, we can treat computational fluid dynamics and finite element grids in the same manner. The proposed approach is simple, compact, and efficient. Also, the analytical sensitivity derivatives are easily computed for use in a gradient-based optimization. This algorithm is suitable for low-fidelity (e.g., linear aerodynamics and equivalent laminate plate structures) and high-fidelity (e.g., nonlinear computational fluid dynamics and detailed finite element modeling) analysis tools. This paper contains the implementation details of parameterizing for planform, twist, dihedral, thickness, camber, and free-form surface. Results are presented for a multidisciplinary application consisting of nonlinear computational fluid dynamics, detailed computational structural mechanics, and a simple performance module.
Multidisciplinary Aerodynamic-Structural Shape Optimization Using Deformation (MASSOUD)
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
2000-01-01
This paper presents a multidisciplinary shape parameterization approach. The approach consists of two basic concepts: (1) parameterizing the shape perturbations rather than the geometry itself and (2) performing the shape deformation by means of the soft object animation algorithms used in computer graphics. Because the formulation presented in this paper is independent of grid topology, we can treat computational fluid dynamics and finite element grids in a similar manner. The proposed approach is simple, compact, and efficient. Also, the analytical sensitivity derivatives are easily computed for use in a gradient-based optimization. This algorithm is suitable for low-fidelity (e.g., linear aerodynamics and equivalent laminated plate structures) and high-fidelity (e.g., nonlinear computational fluid dynamics and detailed finite element modeling) analysis tools. This paper contains the implementation details of parameterizing for planform, twist, dihedral, thickness, camber, and free-form surface. Results are presented for a multidisciplinary design optimization application consisting of nonlinear computational fluid dynamics, detailed computational structural mechanics, and a simple performance module.
Solving Constraint Satisfaction Problems with Networks of Spiking Neurons
Jonke, Zeno; Habenschuss, Stefan; Maass, Wolfgang
2016-01-01
Networks of neurons in the brain apply, unlike processors in our current generation of computer hardware, an event-based processing strategy, where short pulses (spikes) are emitted sparsely by neurons to signal the occurrence of an event at a particular point in time. Such spike-based computations promise to be substantially more power-efficient than traditional clocked processing schemes. However, it turns out to be surprisingly difficult to design networks of spiking neurons that can solve difficult computational problems on the level of single spikes, rather than rates of spikes. We present here a new method for designing networks of spiking neurons via an energy function. Furthermore, we show how the energy function of a network of stochastically firing neurons can be shaped in a transparent manner by composing the network out of simple stereotypical network motifs. We show that this design approach enables networks of spiking neurons to produce approximate solutions to difficult (NP-hard) constraint satisfaction problems from the domains of planning/optimization and verification/logical inference. The resulting networks employ noise as a computational resource. Nevertheless, the timing of spikes plays an essential role in their computations. Furthermore, for the Traveling Salesman Problem, networks of spiking neurons carry out a more efficient stochastic search for good solutions than stochastic artificial neural networks (Boltzmann machines) and Gibbs sampling. PMID:27065785
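As a deliberately simplified, non-spiking analogy of the energy-function idea above, the sketch below encodes a small constraint satisfaction problem (2-coloring a 6-node ring) in an energy that counts violated constraints and lets noisy Metropolis updates search for a zero-energy state; it illustrates "shaping an energy function and using noise as a computational resource" but not the spike-based construction of the paper.

# Toy analogy: stochastic search over an energy function that encodes a CSP.
import math
import random

EDGES = [(i, (i + 1) % 6) for i in range(6)]      # a 6-node ring, which is 2-colorable

def energy(state):
    # Number of violated constraints: adjacent nodes sharing a color.
    return sum(state[u] == state[v] for u, v in EDGES)

random.seed(0)
state = [random.randint(0, 1) for _ in range(6)]
T = 0.5                                           # "noise" level (temperature)
for _ in range(2000):
    i = random.randrange(6)
    proposal = state.copy()
    proposal[i] ^= 1                              # flip one binary variable
    dE = energy(proposal) - energy(state)
    if dE <= 0 or random.random() < math.exp(-dE / T):
        state = proposal
    if energy(state) == 0:
        break

print(state, energy(state))                       # a valid 2-coloring has energy 0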
Enhancements in medicine by integrating content based image retrieval in computer-aided diagnosis
NASA Astrophysics Data System (ADS)
Aggarwal, Preeti; Sardana, H. K.
2010-02-01
Computer-aided diagnosis (CAD) has become one of the major research subjects in medical imaging and diagnostic radiology. With CAD, radiologists use the computer output as a "second opinion" and make the final decisions. Image retrieval is a useful tool that helps radiologists check medical images and diagnoses. The impact of content-based access to medical images is frequently reported, but existing systems are designed for only a particular diagnostic context. The challenge in medical informatics is to develop tools for analyzing the content of medical images and representing them in a way that can be efficiently searched and compared by physicians. CAD is a concept established by taking into account equally the roles of physicians and computers. To build a successful computer-aided diagnostic system, all the relevant technologies, especially retrieval, need to be integrated in such a manner that effective and efficient pre-diagnosed cases with proven pathology are provided for the current case at the right time. In this paper, it is suggested that integrating content-based image retrieval (CBIR) into CAD can bring enormous benefits in medicine, especially in diagnosis. This approach is also compared with other approaches, highlighting its advantages over them.
NASA Astrophysics Data System (ADS)
Kim, Euiyoung; Cho, Maenghyo
2017-11-01
In most non-linear analyses, the construction of a system matrix uses a large amount of computation time, comparable to the computation time required by the solving process. If the process for computing non-linear internal force matrices is substituted with an effective equivalent model that enables the bypass of numerical integrations and assembly processes used in matrix construction, efficiency can be greatly enhanced. A stiffness evaluation procedure (STEP) establishes non-linear internal force models using polynomial formulations of displacements. To efficiently identify an equivalent model, the method has evolved such that it is based on a reduced-order system. The reduction process, however, makes the equivalent model difficult to parameterize, which significantly affects the efficiency of the optimization process. In this paper, therefore, a new STEP, E-STEP, is proposed. Based on the element-wise nature of the finite element model, the stiffness evaluation is carried out element-by-element in the full domain. Since the unit of computation for the stiffness evaluation is restricted by element size, and since the computation is independent, the equivalent model can be constructed efficiently in parallel, even in the full domain. Due to the element-wise nature of the construction procedure, the equivalent E-STEP model is easily characterized by design parameters. Various reduced-order modeling techniques can be applied to the equivalent system in a manner similar to how they are applied in the original system. The reduced-order model based on E-STEP is successfully demonstrated for the dynamic analyses of non-linear structural finite element systems under varying design parameters.
Automated Generation of Tabular Equations of State with Uncertainty Information
NASA Astrophysics Data System (ADS)
Carpenter, John H.; Robinson, Allen C.; Debusschere, Bert J.; Mattsson, Ann E.
2015-06-01
As computational science pushes toward higher fidelity prediction, understanding the uncertainty associated with closure models, such as the equation of state (EOS), has become a key focus. Traditional EOS development often involves a fair amount of art, where expert modelers may appear as magicians, providing what is felt to be the closest possible representation of the truth. Automation of the development process gives a means by which one may demystify the art of EOS, while simultaneously obtaining uncertainty information in a manner that is both quantifiable and reproducible. We describe our progress on the implementation of such a system to provide tabular EOS data with uncertainty information to hydrocodes. Key challenges include encoding artistic expert opinion into an algorithmic form and preserving the analytic models and uncertainty information in a manner that is both accurate and computationally efficient. Results are demonstrated on a multi-phase aluminum model. *Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Compressing random microstructures via stochastic Wang tilings.
Novák, Jan; Kučerová, Anna; Zeman, Jan
2012-10-01
This Rapid Communication presents a stochastic Wang tiling-based technique to compress or reconstruct disordered microstructures on the basis of given spatial statistics. Unlike existing approaches based on a single unit cell, it utilizes a finite set of tiles assembled by a stochastic tiling algorithm, thereby allowing long-range orientation orders to be reproduced accurately in a computationally efficient manner. Although the basic features of the method are demonstrated for a two-dimensional particulate suspension, the present framework is fully extensible to generic multidimensional media.
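A toy version of the stochastic tile-assembly step can be written in a few lines of Python: each tile carries edge codes, a tile is admissible at a cell if its west and north edges match the already-placed neighbors, and one admissible tile is chosen at random. The 16-tile set below is illustrative only and is unrelated to the microstructure-compression tile sets of the paper.

# Toy sketch of stochastic Wang-tile assembly with (N, E, S, W) edge codes.
import random

# This exhaustive two-code tile set always admits at least one tile for any
# combination of west/north constraints drawn from {0, 1}.
TILES = [(n, e, s, w) for n in (0, 1) for e in (0, 1) for s in (0, 1) for w in (0, 1)]

def assemble(rows, cols, rng):
    grid = [[None] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            west = grid[i][j - 1][1] if j > 0 else None    # east edge of west neighbor
            north = grid[i - 1][j][2] if i > 0 else None   # south edge of north neighbor
            options = [t for t in TILES
                       if (west is None or t[3] == west)
                       and (north is None or t[0] == north)]
            grid[i][j] = rng.choice(options)
    return grid

rng = random.Random(42)
tiling = assemble(4, 6, rng)
print(tiling[0])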
Man/computer communication in a space environment
NASA Technical Reports Server (NTRS)
Hodges, B. C.; Montoya, G.
1973-01-01
The present work reports on a study of the technology required to advance the state of the art in man/machine communications. The study involved the development and demonstration of both hardware and software to effectively implement man/computer interactive channels of communication. While tactile and visual man/computer communications equipment are standard methods of interaction with machines, man's speech is a natural medium for inquiry and control. As part of this study, a word recognition unit was developed capable of recognizing a minimum of one hundred different words or sentences in any one of the currently used conversational languages. The study has proven that efficiency in communication between man and computer can be achieved when the vocabulary to be used is structured in a manner compatible with the rigid communication requirements of the machine while at the same time remaining responsive to the informational needs of the man.
Accelerating Climate Simulations Through Hybrid Computing
NASA Technical Reports Server (NTRS)
Zhou, Shujia; Sinno, Scott; Cruz, Carlos; Purcell, Mark
2009-01-01
Unconventional multi-core processors (e.g., the IBM Cell B/E and NVIDIA GPUs) have emerged as accelerators in climate simulation. However, climate models typically run on parallel computers with conventional processors (e.g., Intel and AMD) using MPI. Connecting accelerators to this architecture efficiently and easily becomes a critical issue. When using MPI for the connection, we identified two challenges: (1) an identical MPI implementation is required in both systems, and (2) existing MPI code must be modified to accommodate the accelerators. In response, we have extended and deployed IBM Dynamic Application Virtualization (DAV) in a hybrid computing prototype system (one blade with two Intel quad-core processors and two IBM QS22 Cell blades, connected with InfiniBand), allowing compute-intensive functions to be seamlessly offloaded to remote, heterogeneous accelerators in a scalable, load-balanced manner. Currently, a climate solar radiation model running with multiple MPI processes has been offloaded to multiple Cell blades with approximately 10% network overhead.
GAPIT: genome association and prediction integrated tool.
Lipka, Alexander E; Tian, Feng; Wang, Qishan; Peiffer, Jason; Li, Meng; Bradbury, Peter J; Gore, Michael A; Buckler, Edward S; Zhang, Zhiwu
2012-09-15
Software programs that conduct genome-wide association studies and genomic prediction and selection need to use methodologies that maximize statistical power, provide high prediction accuracy and run in a computationally efficient manner. We developed an R package called Genome Association and Prediction Integrated Tool (GAPIT) that implements advanced statistical methods including the compressed mixed linear model (CMLM) and CMLM-based genomic prediction and selection. The GAPIT package can handle large datasets in excess of 10 000 individuals and 1 million single-nucleotide polymorphisms with minimal computational time, while providing user-friendly access and concise tables and graphs to interpret results. http://www.maizegenetics.net/GAPIT. zhiwu.zhang@cornell.edu Supplementary data are available at Bioinformatics online.
Nonvolatile Memory Materials for Neuromorphic Intelligent Machines.
Jeong, Doo Seok; Hwang, Cheol Seong
2018-04-18
Recent progress in deep learning extends the capability of artificial intelligence to various practical tasks, making the deep neural network (DNN) an extremely versatile hypothesis. While such DNNs are virtually built on contemporary data centers of the von Neumann architecture, physical (in part) DNNs of non-von Neumann architecture, also known as neuromorphic computing, can remarkably improve learning and inference efficiency. In particular, resistance-based nonvolatile random access memory (NVRAM) lends itself to a handy and efficient analog implementation of the multiply-accumulate (MAC) operation. Here, an overview is given of the available types of resistance-based NVRAMs and their technological maturity from the material and device points of view. Examples within the strategy are subsequently addressed in comparison with their benchmarks (virtual DNNs in deep learning). A spiking neural network (SNN) is another type of neural network that is more biologically plausible than the DNN. The successful incorporation of resistance-based NVRAM in SNN-based neuromorphic computing offers an efficient solution to the MAC operation and spike timing-based learning in nature. This strategy is exemplified from a material perspective. Intelligent machines are categorized according to their architecture and learning type. Also, the functionality and usefulness of NVRAM-based neuromorphic computing are addressed. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Adaptive-Grid Methods for Phase Field Models of Microstructure Development
NASA Technical Reports Server (NTRS)
Provatas, Nikolas; Goldenfeld, Nigel; Dantzig, Jonathan A.
1999-01-01
In this work the authors show how the phase field model can be solved in a computationally efficient manner that opens a new large-scale simulational window on solidification physics. Our method uses a finite element, adaptive-grid formulation, and exploits the fact that the phase and temperature fields vary significantly only near the interface. We illustrate how our method allows efficient simulation of phase-field models in very large systems, and verify the predictions of solvability theory at intermediate undercooling. We then present new results at low undercoolings that suggest that solvability theory may not give the correct tip speed in that regime. We model solidification using the phase-field model used by Karma and Rappel.
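The adaptivity criterion implied above, refine only where the phase field varies rapidly, can be sketched with a simple gradient-magnitude flag on a sample grid; a real implementation would of course act on the finite element mesh rather than a uniform array, and the threshold used here is an arbitrary illustrative value.

# Sketch of an interface-localized refinement flag: refine only where the phase
# field changes rapidly (near the solid/liquid interface), keep the rest coarse.
import numpy as np

x, y = np.meshgrid(np.linspace(-2, 2, 200), np.linspace(-2, 2, 200))
phi = np.tanh((1.0 - np.sqrt(x**2 + y**2)) / 0.1)   # a circular "solid" seed

gx, gy = np.gradient(phi)
refine = np.hypot(gx, gy) > 0.05                    # flag cells near the interface

print(refine.mean())   # small fraction of cells flagged -> most of the domain stays coarse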
Monitoring techniques and alarm procedures for CMS services and sites in WLCG
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molina-Perez, J.; Bonacorsi, D.; Gutsche, O.
2012-01-01
The CMS offline computing system is composed of roughly 80 sites (including most experienced T3s) and a number of central services to distribute, process and analyze data worldwide. A high level of stability and reliability is required from the underlying infrastructure and services, partially covered by local or automated monitoring and alarming systems such as Lemon and SLS: the former collects metrics from sensors installed on computing nodes and triggers alarms when values are out of range, while the latter measures the quality of service and warns managers when service is affected. CMS has established computing shift procedures with personnel operating worldwide from remote Computing Centers, under the supervision of the Computing Run Coordinator at CERN. This dedicated 24/7 computing shift personnel contributes to detecting and reacting in a timely manner to any unexpected error, and hence ensures that CMS workflows are carried out efficiently and in a sustained manner. Synergy among all the involved actors is exploited to ensure the 24/7 monitoring, alarming and troubleshooting of the CMS computing sites and services. We review the deployment of the monitoring and alarming procedures, and report on the experience gained throughout the first two years of LHC operation. We describe the efficiency of the communication tools employed, the coherent monitoring framework, the proactive alarming systems and the proficient troubleshooting procedures that helped the CMS Computing facilities and infrastructure to operate at high reliability levels.
Rodríguez, Alfonso; Valverde, Juan; Portilla, Jorge; Otero, Andrés; Riesgo, Teresa; de la Torre, Eduardo
2018-06-08
Cyber-Physical Systems are experiencing a paradigm shift in which processing has been relocated to the distributed sensing layer and is no longer performed in a centralized manner. This approach, usually referred to as Edge Computing, demands the use of hardware platforms that are able to manage the steadily increasing requirements in computing performance, while keeping energy efficiency and the adaptability imposed by the interaction with the physical world. In this context, SRAM-based FPGAs and their inherent run-time reconfigurability, when coupled with smart power management strategies, are a suitable solution. However, they usually fail in user accessibility and ease of development. In this paper, an integrated framework to develop FPGA-based high-performance embedded systems for Edge Computing in Cyber-Physical Systems is presented. This framework provides a hardware-based processing architecture, an automated toolchain, and a runtime to transparently generate and manage reconfigurable systems from high-level system descriptions without additional user intervention. Moreover, it provides users with support for dynamically adapting the available computing resources to switch the working point of the architecture in a solution space defined by computing performance, energy consumption and fault tolerance. Results show that it is indeed possible to explore this solution space at run time and prove that the proposed framework is a competitive alternative to software-based edge computing platforms, being able to provide not only faster solutions, but also higher energy efficiency for computing-intensive algorithms with significant levels of data-level parallelism.
Effect of virtual memory on efficient solution of two model problems
NASA Technical Reports Server (NTRS)
Lambiotte, J. J., Jr.
1977-01-01
Computers with virtual memory architecture allow programs to be written as if they were small enough to be contained in memory. Two types of problems are investigated to show that this luxury can lead to quite inefficient performance if the programmer does not interact strongly with the characteristics of the operating system when developing the program. The two problems considered are the simultaneous solution of a large linear system of equations by Gaussian elimination and a model three-dimensional finite-difference problem. Runs on the Control Data STAR-100 computer are made to demonstrate the inefficiencies of programming the problems in the manner one would naturally use if the problems were indeed small enough to be contained in memory. Program redesigns are presented which achieve large improvements in performance through changes in the computational procedure and the data base arrangement.
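The locality issue described above can be demonstrated on any modern machine with a short NumPy timing experiment: sweeping a large two-dimensional array along its storage order touches memory (and, on a paged system, virtual memory pages) sequentially, while sweeping against the storage order strides across pages and runs noticeably slower. The array size and timings are illustrative, not taken from the STAR-100 study.

# Small illustration of the data-layout point: NumPy arrays are row-major by
# default, so row-by-row traversal is contiguous and column-by-column is strided.
import time
import numpy as np

a = np.zeros((4000, 4000))

t0 = time.perf_counter()
for i in range(a.shape[0]):          # row-by-row: contiguous access
    a[i, :] += 1.0
t_rows = time.perf_counter() - t0

t0 = time.perf_counter()
for j in range(a.shape[1]):          # column-by-column: strided access
    a[:, j] += 1.0
t_cols = time.perf_counter() - t0

print(f"row-major sweep: {t_rows:.3f}s, column sweep: {t_cols:.3f}s")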
An Architecture for Cross-Cloud System Management
NASA Astrophysics Data System (ADS)
Dodda, Ravi Teja; Smith, Chris; van Moorsel, Aad
The emergence of the cloud computing paradigm promises flexibility and adaptability through on-demand provisioning of compute resources. As the utilization of cloud resources extends beyond a single provider, for business as well as technical reasons, the issue of effectively managing such resources comes to the fore. Different providers expose different interfaces to their compute resources utilizing varied architectures and implementation technologies. This heterogeneity poses a significant system management problem, and can limit the extent to which the benefits of cross-cloud resource utilization can be realized. We address this problem through the definition of an architecture to facilitate the management of compute resources from different cloud providers in a homogeneous manner. This preserves the flexibility and adaptability promised by the cloud computing paradigm, whilst enabling the benefits of cross-cloud resource utilization to be realized. The practical efficacy of the architecture is demonstrated through an implementation utilizing compute resources managed through different interfaces on the Amazon Elastic Compute Cloud (EC2) service. Additionally, we provide empirical results highlighting the performance differential of these different interfaces, and discuss the impact of this performance differential on efficiency and profitability.
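One way to picture the homogenization idea is a thin abstraction layer with provider-specific adapters, as in the hypothetical Python sketch below; the class and method names are invented for illustration and do not correspond to the paper's architecture or to any real provider SDK.

# Hypothetical sketch: a single abstract interface hides provider-specific APIs
# behind adapters, so management logic is written once against the abstraction.
from abc import ABC, abstractmethod

class ComputeProvider(ABC):
    @abstractmethod
    def start_instance(self, image_id: str) -> str: ...
    @abstractmethod
    def stop_instance(self, instance_id: str) -> None: ...

class ProviderA(ComputeProvider):
    def start_instance(self, image_id: str) -> str:
        # a real adapter would call provider A's SDK here
        return f"a-{image_id}-001"
    def stop_instance(self, instance_id: str) -> None:
        pass

class ProviderB(ComputeProvider):
    def start_instance(self, image_id: str) -> str:
        # a real adapter would call provider B's (differently shaped) API here
        return f"b/{image_id}/instance"
    def stop_instance(self, instance_id: str) -> None:
        pass

def scale_out(providers, image_id):
    """Management logic written once against the common interface."""
    return [p.start_instance(image_id) for p in providers]

print(scale_out([ProviderA(), ProviderB()], "base-image"))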
Jagiello, Karolina; Grzonkowska, Monika; Swirog, Marta; ...
2016-08-29
In this contribution, the advantages and limitations of two computational techniques that can be used for the investigation of nanoparticle activity and toxicity are briefly summarized: classic nano-QSAR (Quantitative Structure–Activity Relationships employed for nanomaterials) and 3D nano-QSAR (three-dimensional Quantitative Structure–Activity Relationships, such as Comparative Molecular Field Analysis, CoMFA/Comparative Molecular Similarity Indices Analysis, CoMSIA, employed for nanomaterials). Both approaches were compared according to selected criteria, including efficiency, type of experimental data, class of nanomaterials, time required for calculations and computational cost, and difficulties in interpretation. Taking into account the advantages and limitations of each method, we provide recommendations for nano-QSAR modellers and QSAR model users to be able to determine a proper and efficient methodology to investigate the biological activity of nanoparticles in order to describe the underlying interactions in the most reliable and useful manner.
Real-time deblurring of handshake blurred images on smartphones
NASA Astrophysics Data System (ADS)
Pourreza-Shahri, Reza; Chang, Chih-Hsiang; Kehtarnavaz, Nasser
2015-02-01
This paper discusses an Android app for removing the blur that is introduced by handshakes when taking images with a smartphone. The algorithm utilizes two images to achieve deblurring in a computationally efficient manner without suffering from the artifacts associated with deconvolution deblurring algorithms. The first image is the normal or auto-exposure image and the second image is a short-exposure image that is automatically captured immediately before or after the auto-exposure image is taken. A low-rank approximation image is obtained by applying singular value decomposition to the auto-exposure image, which may appear blurred due to handshakes. This approximation image does not suffer from blurring while incorporating the image brightness and contrast information. The eigenvalues extracted from the low-rank approximation image are then combined with those from the short-exposure image. It is shown that this deblurring app is computationally more efficient than the adaptive tonal correction algorithm which was previously developed for the same purpose.
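The low-rank step mentioned above can be reproduced in a few lines of NumPy: keep only the top-k singular values and vectors of the image matrix to obtain a smooth approximation that retains overall brightness and contrast. The synthetic image and the choice k = 10 are illustrative assumptions.

# Sketch of a rank-k SVD approximation of an image matrix.
import numpy as np

rng = np.random.default_rng(0)
img = rng.normal(loc=128.0, scale=20.0, size=(256, 256))   # stand-in for a photo

U, s, Vt = np.linalg.svd(img, full_matrices=False)
k = 10
low_rank = (U[:, :k] * s[:k]) @ Vt[:k, :]      # rank-k approximation

err = np.linalg.norm(img - low_rank) / np.linalg.norm(img)
print(low_rank.shape, round(err, 3))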
Protecting genomic data analytics in the cloud: state of the art and opportunities.
Tang, Haixu; Jiang, Xiaoqian; Wang, Xiaofeng; Wang, Shuang; Sofia, Heidi; Fox, Dov; Lauter, Kristin; Malin, Bradley; Telenti, Amalio; Xiong, Li; Ohno-Machado, Lucila
2016-10-13
The outsourcing of genomic data into public cloud computing settings raises concerns over privacy and security. Significant advancements in secure computation methods have emerged over the past several years, but such techniques need to be rigorously evaluated for their ability to support the analysis of human genomic data in an efficient and cost-effective manner. With respect to public cloud environments, there are concerns about the inadvertent exposure of human genomic data to unauthorized users. In analyses involving multiple institutions, there is additional concern about data being used beyond the agreed research scope and being processed in untrusted computational environments, which may not satisfy institutional policies. To systematically investigate these issues, the NIH-funded National Center for Biomedical Computing iDASH (integrating Data for Analysis, 'anonymization' and SHaring) hosted the second Critical Assessment of Data Privacy and Protection competition to assess the capacity of cryptographic technologies for protecting computation over human genomes in the cloud and promoting cross-institutional collaboration. Data scientists were challenged to design and engineer practical algorithms for secure outsourcing of genome computation tasks in working software, whereby analyses are performed only on encrypted data. They were also challenged to develop approaches to enable secure collaboration on data from genomic studies generated by multiple organizations (e.g., medical centers) to jointly compute aggregate statistics without sharing individual-level records. The results of the competition indicated that secure computation techniques can enable comparative analysis of human genomes, but greater efficiency (in terms of compute time and memory utilization) is needed before they are sufficiently practical for real-world environments.
Hierarchical layered and semantic-based image segmentation using ergodicity map
NASA Astrophysics Data System (ADS)
Yadegar, Jacob; Liu, Xiaoqing
2010-04-01
Image segmentation plays a foundational role in image understanding and computer vision. Although great strides have been made and progress achieved on automatic/semi-automatic image segmentation algorithms, designing a generic, robust, and efficient image segmentation algorithm is still challenging. Human vision is still far superior to computer vision, especially in interpreting semantic meanings/objects in images. We present a hierarchical/layered semantic image segmentation algorithm that can automatically and efficiently segment images into hierarchical layered/multi-scaled semantic regions/objects with contextual topological relationships. The proposed algorithm bridges the gap between high-level semantics and low-level visual features/cues (such as color, intensity, edge, etc.) through utilizing a layered/hierarchical ergodicity map, where ergodicity is computed based on a space filling fractal concept and used as a region dissimilarity measurement. The algorithm applies a highly scalable, efficient, and adaptive Peano-Cesaro triangulation/tiling technique to decompose the given image into a set of similar/homogenous regions based on low-level visual cues in a top-down manner. The layered/hierarchical ergodicity map is built through a bottom-up region dissimilarity analysis. The recursive fractal sweep associated with the Peano-Cesaro triangulation provides efficient local multi-resolution refinement to any level of detail. The generated binary decomposition tree also provides efficient neighbor retrieval mechanisms for contextual topological object/region relationship generation. Experiments have been conducted within the maritime image environment where the segmented layered semantic objects include the basic level objects (i.e. sky/land/water) and deeper level objects in the sky/land/water surfaces. Experimental results demonstrate the proposed algorithm has the capability to robustly and efficiently segment images into layered semantic objects/regions with contextual topological relationships.
Process and representation in graphical displays
NASA Technical Reports Server (NTRS)
Gillan, Douglas J.; Lewis, Robert; Rudisill, Marianne
1993-01-01
Our initial model of graphic comprehension has focused on statistical graphs. Like other models of human-computer interaction, models of graphical comprehension can be used by human-computer interface designers and developers to create interfaces that present information in an efficient and usable manner. Our investigation of graph comprehension addresses two primary questions: how do people represent the information contained in a data graph?; and how do they process information from the graph? The topics of focus for graphic representation concern the features into which people decompose a graph and the representations of the graph in memory. The issue of processing can be further analyzed as two questions: what overall processing strategies do people use?; and what are the specific processing skills required?
Strange nonchaotic attractors for computation
NASA Astrophysics Data System (ADS)
Sathish Aravindh, M.; Venkatesan, A.; Lakshmanan, M.
2018-05-01
We investigate the response of quasiperiodically driven nonlinear systems exhibiting strange nonchaotic attractors (SNAs) to deterministic input signals. We show that if one uses two square waves in an aperiodic manner as input to a quasiperiodically driven double-well Duffing oscillator system, the response of the system can produce logical output controlled by such a forcing. Changing the threshold or biasing of the system changes the output to another logic operation and memory latch. The interplay of nonlinearity and quasiperiodic forcing yields logical behavior, and the emergent outcome of such a system is a logic gate. It is further shown that the logical behaviors persist even for an experimental noise floor. Thus the SNA turns out to be an efficient tool for computation.
Numerical studies of the deposition of material released from fixed and rotary wing aircraft
NASA Technical Reports Server (NTRS)
Bilanin, A. J.; Teske, M. E.
1984-01-01
The computer code AGDISP (AGricultural DISPersal) has been developed to predict the deposition of material released from fixed and rotary wing aircraft in a single-pass, computationally efficient manner. The formulation of the code is novel in that the mean particle trajectory and the variance about the mean resulting from turbulent fluid fluctuations are simultaneously predicted. The code presently includes the capability of assessing the influence of neutral atmospheric conditions, inviscid wake vortices, particle evaporation, plant canopy and terrain on the deposition pattern. In this report, the equations governing the motion of aerially released particles are developed, including a description of the evaporation model used. A series of case studies, using AGDISP, are included.
Boverman, Gregory; Isaacson, David; Newell, Jonathan C; Saulnier, Gary J; Kao, Tzu-Jen; Amm, Bruce C; Wang, Xin; Davenport, David M; Chong, David H; Sahni, Rakesh; Ashe, Jeffrey M
2017-04-01
In electrical impedance tomography (EIT), we apply patterns of currents on a set of electrodes at the external boundary of an object, measure the resulting potentials at the electrodes, and, given the aggregate dataset, reconstruct the complex conductivity and permittivity within the object. It is possible to maximize sensitivity to internal conductivity changes by simultaneously applying currents and measuring potentials on all electrodes but this approach also maximizes sensitivity to changes in impedance at the interface. We have, therefore, developed algorithms to assess contact impedance changes at the interface as well as to efficiently and simultaneously reconstruct internal conductivity/permittivity changes within the body. We use simple linear algebraic manipulations, the generalized singular value decomposition, and a dual-mesh finite-element-based framework to reconstruct images in real time. We are also able to efficiently compute the linearized reconstruction for a wide range of regularization parameters and to compute both the generalized cross-validation parameter as well as the L-curve, objective approaches to determining the optimal regularization parameter, in a similarly efficient manner. Results are shown using data from a normal subject and from a clinical intensive care unit patient, both acquired with the GE GENESIS prototype EIT system, demonstrating significantly reduced boundary artifacts due to electrode drift and motion artifact.
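For orientation, the sketch below shows the generic pattern of sweeping a regularization parameter for a linearized reconstruction and collecting the residual and solution norms needed for an L-curve; it uses a plain Tikhonov solve on a synthetic Jacobian, whereas the efficiency claimed in the abstract comes from the generalized SVD and dual-mesh machinery, which is not reproduced here.

# Generic Tikhonov sweep over regularization parameters with L-curve quantities.
import numpy as np

rng = np.random.default_rng(3)
J = rng.normal(size=(120, 60))                 # stand-in sensitivity (Jacobian) matrix
x_true = rng.normal(size=60)
d = J @ x_true + 0.05 * rng.normal(size=120)   # noisy boundary measurements

lambdas = np.logspace(-4, 2, 25)
residual_norms, solution_norms = [], []
for lam in lambdas:
    A = J.T @ J + (lam ** 2) * np.eye(J.shape[1])
    x = np.linalg.solve(A, J.T @ d)
    residual_norms.append(np.linalg.norm(J @ x - d))
    solution_norms.append(np.linalg.norm(x))

# The "corner" of the curve (log residual vs. log solution norm) suggests a
# regularization parameter balancing data fit against solution size.
for lam, r, s in zip(lambdas[::6], residual_norms[::6], solution_norms[::6]):
    print(f"lambda={lam:.1e}  ||Jx-d||={r:.2f}  ||x||={s:.2f}")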
Exploring high dimensional free energy landscapes: Temperature accelerated sliced sampling
NASA Astrophysics Data System (ADS)
Awasthi, Shalini; Nair, Nisanth N.
2017-03-01
Biased sampling of collective variables is widely used to accelerate rare events in molecular simulations and to explore free energy surfaces. However, computational efficiency of these methods decreases with increasing number of collective variables, which severely limits the predictive power of the enhanced sampling approaches. Here we propose a method called Temperature Accelerated Sliced Sampling (TASS) that combines temperature accelerated molecular dynamics with umbrella sampling and metadynamics to sample the collective variable space in an efficient manner. The presented method can sample a large number of collective variables and is advantageous for controlled exploration of broad and unbound free energy basins. TASS is also shown to achieve quick free energy convergence and is practically usable with ab initio molecular dynamics techniques.
Simonyan, Vahan; Chumakov, Konstantin; Dingerdissen, Hayley; Faison, William; Goldweber, Scott; Golikov, Anton; Gulzar, Naila; Karagiannis, Konstantinos; Vinh Nguyen Lam, Phuc; Maudru, Thomas; Muravitskaja, Olesja; Osipova, Ekaterina; Pan, Yang; Pschenichnov, Alexey; Rostovtsev, Alexandre; Santana-Quintero, Luis; Smith, Krista; Thompson, Elaine E.; Tkachenko, Valery; Torcivia-Rodriguez, John; Wan, Quan; Wang, Jing; Wu, Tsung-Jung; Wilson, Carolyn; Mazumder, Raja
2016-01-01
The High-performance Integrated Virtual Environment (HIVE) is a distributed storage and compute environment designed primarily to handle next-generation sequencing (NGS) data. This multicomponent cloud infrastructure provides secure web access for authorized users to deposit, retrieve, annotate and compute on NGS data, and to analyse the outcomes using web interface visual environments appropriately built in collaboration with research and regulatory scientists and other end users. Unlike many massively parallel computing environments, HIVE uses a cloud control server which virtualizes services, not processes. It is both very robust and flexible due to the abstraction layer introduced between computational requests and operating system processes. The novel paradigm of moving computations to the data, instead of moving data to computational nodes, has proven to be significantly less taxing for both hardware and network infrastructure. The honeycomb data model developed for HIVE integrates metadata into an object-oriented model. Its distinction from other object-oriented databases is in the additional implementation of a unified application program interface to search, view and manipulate data of all types. This model simplifies the introduction of new data types, thereby minimizing the need for database restructuring and streamlining the development of new integrated information systems. The honeycomb model employs a highly secure hierarchical access control and permission system, allowing determination of data access privileges in a finely granular manner without flooding the security subsystem with a multiplicity of rules. HIVE infrastructure will allow engineers and scientists to perform NGS analysis in a manner that is both efficient and secure. HIVE is actively supported in public and private domains, and project collaborations are welcomed. Database URL: https://hive.biochemistry.gwu.edu PMID:26989153
Simonyan, Vahan; Chumakov, Konstantin; Dingerdissen, Hayley; Faison, William; Goldweber, Scott; Golikov, Anton; Gulzar, Naila; Karagiannis, Konstantinos; Vinh Nguyen Lam, Phuc; Maudru, Thomas; Muravitskaja, Olesja; Osipova, Ekaterina; Pan, Yang; Pschenichnov, Alexey; Rostovtsev, Alexandre; Santana-Quintero, Luis; Smith, Krista; Thompson, Elaine E; Tkachenko, Valery; Torcivia-Rodriguez, John; Voskanian, Alin; Wan, Quan; Wang, Jing; Wu, Tsung-Jung; Wilson, Carolyn; Mazumder, Raja
2016-01-01
The High-performance Integrated Virtual Environment (HIVE) is a distributed storage and compute environment designed primarily to handle next-generation sequencing (NGS) data. This multicomponent cloud infrastructure provides secure web access for authorized users to deposit, retrieve, annotate and compute on NGS data, and to analyse the outcomes using web interface visual environments appropriately built in collaboration with research and regulatory scientists and other end users. Unlike many massively parallel computing environments, HIVE uses a cloud control server which virtualizes services, not processes. It is both very robust and flexible due to the abstraction layer introduced between computational requests and operating system processes. The novel paradigm of moving computations to the data, instead of moving data to computational nodes, has proven to be significantly less taxing for both hardware and network infrastructure. The honeycomb data model developed for HIVE integrates metadata into an object-oriented model. Its distinction from other object-oriented databases is in the additional implementation of a unified application program interface to search, view and manipulate data of all types. This model simplifies the introduction of new data types, thereby minimizing the need for database restructuring and streamlining the development of new integrated information systems. The honeycomb model employs a highly secure hierarchical access control and permission system, allowing determination of data access privileges in a finely granular manner without flooding the security subsystem with a multiplicity of rules. HIVE infrastructure will allow engineers and scientists to perform NGS analysis in a manner that is both efficient and secure. HIVE is actively supported in public and private domains, and project collaborations are welcomed. Database URL: https://hive.biochemistry.gwu.edu. © The Author(s) 2016. Published by Oxford University Press.
Automated payload experiment tool feasibility study
NASA Technical Reports Server (NTRS)
Maddux, Gary A.; Clark, James; Delugach, Harry; Hammons, Charles; Logan, Julie; Provancha, Anna
1991-01-01
To achieve an environment less dependent on the flow of paper, automated techniques of data storage and retrieval must be utilized. The prototype under development seeks to demonstrate the ability of a knowledge-based, hypertext computer system. This prototype is concerned with the logical links between two primary NASA support documents, the Science Requirements Document (SRD) and the Engineering Requirements Document (ERD). Once developed, the final system should have the ability to guide a principal investigator through the documentation process in a more timely and efficient manner, while supplying more accurate information to the NASA payload developer.
NASA Technical Reports Server (NTRS)
Brown, D. C.
1971-01-01
The simultaneous adjustment of very large nets of overlapping plates covering the celestial sphere becomes computationally feasible by virtue of a twofold process that generates a system of normal equations having a bordered-banded coefficient matrix, and solves such a system in a highly efficient manner. Numerical results suggest that when a well constructed spherical net is subjected to a rigorous, simultaneous adjustment, the use of independently established control points is required neither for determinacy nor for the production of accurate results.
Efficient simulation of incompressible viscous flow over multi-element airfoils
NASA Technical Reports Server (NTRS)
Rogers, Stuart E.; Wiltberger, N. Lyn; Kwak, Dochan
1993-01-01
The incompressible, viscous, turbulent flow over single and multi-element airfoils is numerically simulated in an efficient manner by solving the incompressible Navier-Stokes equations. The solution algorithm employs the method of pseudo compressibility and utilizes an upwind differencing scheme for the convective fluxes, and an implicit line-relaxation scheme. The motivation for this work includes interest in studying high-lift take-off and landing configurations of various aircraft. In particular, accurate computation of lift and drag at various angles of attack up to stall is desired. Two different turbulence models are tested in computing the flow over an NACA 4412 airfoil; an accurate prediction of stall is obtained. The approach used for multi-element airfoils involves the use of multiple zones of structured grids fitted to each element. Two different approaches are compared; a patched system of grids, and an overlaid Chimera system of grids. Computational results are presented for two-element, three-element, and four-element airfoil configurations. Excellent agreement with experimental surface pressure coefficients is seen. The code converges in less than 200 iterations, requiring on the order of one minute of CPU time on a CRAY YMP per element in the airfoil configuration.
Fragment informatics and computational fragment-based drug design: an overview and update.
Sheng, Chunquan; Zhang, Wannian
2013-05-01
Fragment-based drug design (FBDD) is a promising approach for the discovery and optimization of lead compounds. Despite its successes, FBDD also faces some internal limitations and challenges. FBDD requires a high quality of target protein and good solubility of fragments. Biophysical techniques for fragment screening necessitate expensive detection equipment and the strategies for evolving fragment hits to leads remain to be improved. Regardless, FBDD is necessary for investigating larger chemical space and can be applied to challenging biological targets. In this scenario, cheminformatics and computational chemistry can be used as alternative approaches that can significantly improve the efficiency and success rate of lead discovery and optimization. Cheminformatics and computational tools assist FBDD in a very flexible manner. Computational FBDD can be used independently or in parallel with experimental FBDD for efficiently generating and optimizing leads. Computational FBDD can also be integrated into each step of experimental FBDD and help to play a synergistic role by maximizing its performance. This review will provide critical analysis of the complementarity between computational and experimental FBDD and highlight recent advances in new algorithms and successful examples of their applications. In particular, fragment-based cheminformatics tools, high-throughput fragment docking, and fragment-based de novo drug design will provide the focus of this review. We will also discuss the advantages and limitations of different methods and the trends in new developments that should inspire future research. © 2012 Wiley Periodicals, Inc.
Flow dynamics and energy efficiency of flow in the left ventricle during myocardial infarction.
Vasudevan, Vivek; Low, Adriel Jia Jun; Annamalai, Sarayu Parimal; Sampath, Smita; Poh, Kian Keong; Totman, Teresa; Mazlan, Muhammad; Croft, Grace; Richards, A Mark; de Kleijn, Dominique P V; Chin, Chih-Liang; Yap, Choon Hwai
2017-10-01
Cardiovascular disease is a leading cause of death worldwide, and myocardial infarction (MI) is a major category. After infarction, the heart has difficulty providing sufficient energy for circulation, and thus understanding the heart's energy efficiency is important. We induced MI in a porcine animal model via circumflex ligation and acquired multiple-slice cine magnetic resonance (MR) images in a longitudinal manner: before infarction, and 1 week (acute) and 4 weeks (chronic) after infarction. Computational fluid dynamic simulations were performed based on the MR images to obtain detailed fluid dynamics and energy dynamics of the left ventricles. Results showed that the energy efficiency of flow through the heart decreased at the acute time point. Since the heart was observed to experience changes in heart rate, stroke volume and chamber size over the two post-infarction time points, simulations were performed to test the effect of each of the three parameters. Increasing heart rate and stroke volume were found to significantly decrease flow energy efficiency, but the effect of chamber size was inconsistent. Strong complex interplay was observed between the three parameters, necessitating the use of non-dimensional parameterization to characterize flow energy efficiency. The ratio of Reynolds to Strouhal number, which is a form of the Womersley number, was found to be the most effective non-dimensional parameter to represent the energy efficiency of flow in the heart. We believe that this non-dimensional number can be computed for clinical cases via ultrasound and hypothesize that it can serve as a biomarker for clinical evaluations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sherman, W.B.
2012-04-16
Synthetic DNA nanostructures are typically held together primarily by Holliday junctions. One of the most basic types of structures possible to assemble with only DNA and Holliday junctions is the triangle. To date, however, only equilateral triangles have been assembled in this manner, primarily because it is difficult to determine which configurations of Holliday triangles have low strain. Early attempts at identifying such configurations relied upon calculations that followed the strained helical paths of DNA. Those methods, however, were computationally expensive, and failed to find many of the possible solutions. I have developed a new approach to identifying Holliday triangles that is computationally faster and finds well over 95% of the possible solutions. The new approach is based on splitting the problem into two parts. The first part involves figuring out all the different ways that three featureless rods of the appropriate length and diameter can weave over and under one another to form a triangle. The second part of the computation entails seeing whether double helical DNA backbones can fit into the shape dictated by the rods in such a manner that the strands can cross over from one domain to the other at the appropriate spots. Structures with low strain (that is, good fit between the rods and the helices) on all three edges are recorded as promising for assembly.
Kilinc, Deniz; Demir, Alper
2017-08-01
The brain is extremely energy efficient and remarkably robust in what it does despite the considerable variability and noise caused by the stochastic mechanisms in neurons and synapses. Computational modeling is a powerful tool that can help us gain insight into this important aspect of brain mechanism. A deep understanding and computational design tools can help develop robust neuromorphic electronic circuits and hybrid neuroelectronic systems. In this paper, we present a general modeling framework for biological neuronal circuits that systematically captures the nonstationary stochastic behavior of ion channels and synaptic processes. In this framework, fine-grained, discrete-state, continuous-time Markov chain models of both ion channels and synaptic processes are treated in a unified manner. Our modeling framework features a mechanism for the automatic generation of the corresponding coarse-grained, continuous-state, continuous-time stochastic differential equation models for neuronal variability and noise. Furthermore, we repurpose non-Monte Carlo noise analysis techniques, which were previously developed for analog electronic circuits, for the stochastic characterization of neuronal circuits both in time and frequency domain. We verify that the fast non-Monte Carlo analysis methods produce results with the same accuracy as computationally expensive Monte Carlo simulations. We have implemented the proposed techniques in a prototype simulator, where both biological neuronal and analog electronic circuits can be simulated together in a coupled manner.
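To make the coarse-graining step concrete, the sketch below shows the kind of continuous-state stochastic differential equation (an Euler-Maruyama integration of a diffusion approximation) that such a framework could generate for a single Hodgkin-Huxley-type potassium gating variable; the rate expressions, clamped voltage, and channel count are illustrative assumptions, not the models used in the paper.

```python
import numpy as np

def alpha_n(v):            # opening rate (1/ms), classic Hodgkin-Huxley form
    return 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))

def beta_n(v):             # closing rate (1/ms)
    return 0.125 * np.exp(-(v + 65.0) / 80.0)

def simulate_gate(v=-40.0, n_chan=1000, dt=0.01, steps=20000, seed=0):
    """Euler-Maruyama integration of the diffusion (Langevin) approximation for
    the open fraction n of a population of n_chan channels at fixed voltage v."""
    rng = np.random.default_rng(seed)
    n = alpha_n(v) / (alpha_n(v) + beta_n(v))   # start at the steady state
    trace = np.empty(steps)
    for k in range(steps):
        a, b = alpha_n(v), beta_n(v)
        drift = a * (1.0 - n) - b * n
        # Fluctuations shrink as 1/sqrt(number of channels)
        diff = np.sqrt(max(a * (1.0 - n) + b * n, 0.0) / n_chan)
        n += drift * dt + diff * np.sqrt(dt) * rng.standard_normal()
        n = min(max(n, 0.0), 1.0)               # keep the fraction in [0, 1]
        trace[k] = n
    return trace

open_fraction = simulate_gate()
print(open_fraction.mean(), open_fraction.std())
```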
AEGIS: A Lightweight Firewall for Wireless Sensor Networks
NASA Astrophysics Data System (ADS)
Hossain, Mohammad Sajjad; Raghunathan, Vijay
Firewalls are an essential component in today's networked computing systems (desktops, laptops, and servers) and provide effective protection against a variety of over-the-network security attacks. With the development of technologies such as IPv6 and 6LoWPAN that pave the way for Internet-connected embedded systems and sensor networks, these devices will soon be subject to (and need to be defended against) similar security threats. As a first step, this paper presents Aegis, a lightweight, rule-based firewall for networked embedded systems such as wireless sensor networks. Aegis is based on a semantically rich, yet simple, rule definition language. In addition, Aegis is highly efficient during operation, operates in a manner that is transparent to running applications, and is easy to maintain. Experimental results obtained using real sensor nodes and cycle-accurate simulations demonstrate that Aegis successfully performs gatekeeping of a sensor node's communication traffic in a flexible manner with minimal overheads.
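For illustration only, the sketch below shows what rule-based gatekeeping of a node's traffic can look like in principle; the rule fields, syntax, and default-deny policy are invented for this example and are not Aegis's actual rule definition language.

```python
# Hypothetical rule table: first matching rule wins, unmatched packets are dropped.
RULES = [
    {"action": "accept", "src_prefix": "fd00:", "proto": "udp", "dport": 5683},
    {"action": "drop",   "src_prefix": "",      "proto": None,  "dport": None},  # catch-all
]

def match(rule, pkt):
    """Naive prefix/field matching; None means 'any' for that field."""
    return (pkt["src"].startswith(rule["src_prefix"])
            and rule["proto"] in (None, pkt["proto"])
            and rule["dport"] in (None, pkt["dport"]))

def filter_packet(pkt):
    for rule in RULES:
        if match(rule, pkt):
            return rule["action"]
    return "drop"                      # default-deny if no rule matches

print(filter_packet({"src": "fd00::1",     "proto": "udp", "dport": 5683}))  # accept
print(filter_packet({"src": "2001:db8::9", "proto": "tcp", "dport": 80}))    # drop
```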
Neural Network Training by Integration of Adjoint Systems of Equations Forward in Time
NASA Technical Reports Server (NTRS)
Toomarian, Nikzad (Inventor); Barhen, Jacob (Inventor)
1999-01-01
A method and apparatus for supervised neural learning of time dependent trajectories exploits the concepts of adjoint operators to enable computation of the gradient of an objective functional with respect to the various parameters of the network architecture in a highly efficient manner. Specifically, it combines the advantage of dramatic reductions in computational complexity inherent in adjoint methods with the ability to solve two adjoint systems of equations together forward in time. Not only is a large amount of computation and storage saved, but the handling of real-time applications also becomes possible. The invention has been applied to two examples of representative complexity which have recently been analyzed in the open literature and demonstrated that a circular trajectory can be learned in approximately 200 iterations compared to the 12000 reported in the literature. A figure eight trajectory was achieved in under 500 iterations compared to 20000 previously required. The trajectories computed using our new method are much closer to the target trajectories than was reported in previous studies.
Neural network training by integration of adjoint systems of equations forward in time
NASA Technical Reports Server (NTRS)
Toomarian, Nikzad (Inventor); Barhen, Jacob (Inventor)
1992-01-01
A method and apparatus for supervised neural learning of time dependent trajectories exploits the concepts of adjoint operators to enable computation of the gradient of an objective functional with respect to the various parameters of the network architecture in a highly efficient manner. Specifically, it combines the advantage of dramatic reductions in computational complexity inherent in adjoint methods with the ability to solve two adjoint systems of equations together forward in time. Not only is a large amount of computation and storage saved, but the handling of real-time applications also becomes possible. The invention has been applied to two examples of representative complexity which have recently been analyzed in the open literature and demonstrated that a circular trajectory can be learned in approximately 200 iterations compared to the 12000 reported in the literature. A figure eight trajectory was achieved in under 500 iterations compared to 20000 previously required. The trajectories computed using our new method are much closer to the target trajectories than was reported in previous studies.
Banerjee, Amartya S.; Lin, Lin; Hu, Wei; ...
2016-10-21
The Discontinuous Galerkin (DG) electronic structure method employs an adaptive local basis (ALB) set to solve the Kohn-Sham equations of density functional theory in a discontinuous Galerkin framework. The adaptive local basis is generated on-the-fly to capture the local material physics and can systematically attain chemical accuracy with only a few tens of degrees of freedom per atom. A central issue for large-scale calculations, however, is the computation of the electron density (and subsequently, ground state properties) from the discretized Hamiltonian in an efficient and scalable manner. We show in this work how Chebyshev polynomial filtered subspace iteration (CheFSI) can be used to address this issue and push the envelope in large-scale materials simulations in a discontinuous Galerkin framework. We describe how the subspace filtering steps can be performed in an efficient and scalable manner using a two-dimensional parallelization scheme, thanks to the orthogonality of the DG basis set and block-sparse structure of the DG Hamiltonian matrix. The on-the-fly nature of the ALB functions requires additional care in carrying out the subspace iterations. We demonstrate the parallel scalability of the DG-CheFSI approach in calculations of large-scale two-dimensional graphene sheets and bulk three-dimensional lithium-ion electrolyte systems. In conclusion, employing 55,296 computational cores, the time per self-consistent field iteration for a sample of the bulk 3D electrolyte containing 8586 atoms is 90 s, and the time for a graphene sheet containing 11,520 atoms is 75 s.
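The core of CheFSI is straightforward to sketch in a dense, single-node setting; the toy code below filters a trial subspace with a Chebyshev polynomial of the Hamiltonian and performs a Rayleigh-Ritz step. The filter bounds are obtained here from a full diagonalization purely for brevity, and nothing below reflects the block-sparse, distributed DG implementation described in the paper.

```python
import numpy as np

def chebyshev_filter(H, Y, degree, a, b):
    """Apply a degree-m Chebyshev polynomial in H that damps the spectrum in [a, b]
    relative to eigenvalues below a (the wanted, low-lying states)."""
    e = (b - a) / 2.0
    c = (b + a) / 2.0
    Y0 = Y
    Y1 = (H @ Y - c * Y) / e
    for _ in range(2, degree + 1):       # three-term Chebyshev recurrence
        Y2 = 2.0 * (H @ Y1 - c * Y1) / e - Y0
        Y0, Y1 = Y1, Y2
    return Y1

def chefsi_step(H, Y, degree=8):
    n_states = Y.shape[1]
    # Filter bounds: in production codes these come from cheap Ritz/Lanczos estimates;
    # a dense diagonalization is used here only to keep the sketch short.
    evals = np.linalg.eigvalsh(H)
    a, b = evals[n_states], evals[-1]
    Y = chebyshev_filter(H, Y, degree, a, b)
    Y, _ = np.linalg.qr(Y)               # re-orthonormalize the filtered subspace
    Hs = Y.T @ H @ Y                     # Rayleigh-Ritz projection
    theta, V = np.linalg.eigh(Hs)
    return Y @ V, theta                  # approximate eigenpairs

rng = np.random.default_rng(1)
H = rng.standard_normal((60, 60)); H = (H + H.T) / 2.0
Y, theta = chefsi_step(H, rng.standard_normal((60, 6)))
for _ in range(5):
    Y, theta = chefsi_step(H, Y)
print(theta)   # converges toward the 6 lowest eigenvalues of H
```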
Komorkiewicz, Mateusz; Kryjak, Tomasz; Gorgon, Marek
2014-01-01
This article presents an efficient hardware implementation of the Horn-Schunck algorithm that can be used in an embedded optical flow sensor. An architecture is proposed that realises the iterative Horn-Schunck algorithm in a pipelined manner. This design achieves a data throughput of 175 MPixels/s and makes processing of a Full HD video stream (1,920 × 1,080 @ 60 fps) possible. The structure of the optical flow module, as well as the pre- and post-filtering blocks and a flow reliability computation unit, is described in detail. Three versions of the optical flow module, with different numerical precision, operating frequency and accuracy, are proposed. The errors caused by switching from floating- to fixed-point computations are also evaluated. The described architecture was tested on popular sequences from the Middlebury University optical flow dataset. It achieves state-of-the-art results among hardware implementations of single-scale methods. The designed fixed-point architecture achieves a performance of 418 GOPS with a power efficiency of 34 GOPS/W. The proposed floating-point module achieves 103 GFLOPS, with a power efficiency of 24 GFLOPS/W. Moreover, a 100 times speedup compared to a modern CPU with SIMD support is reported. A complete, working vision system realized on a Xilinx VC707 evaluation board is also presented. It is able to compute optical flow for a Full HD video stream received from an HDMI camera in real time. The obtained results prove that FPGA devices are an ideal platform for embedded vision systems. PMID:24526303
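As a software reference for the iterative scheme that the hardware pipelines, a minimal Horn-Schunck implementation is sketched below using common textbook derivative stencils and the standard update equations; the smoothness weight and iteration count are illustrative, and the fixed-point details of the FPGA design are not reproduced.

```python
import numpy as np
from scipy.ndimage import convolve

AVG_KERNEL = np.array([[1/12, 1/6, 1/12],
                       [1/6,  0.0, 1/6 ],
                       [1/12, 1/6, 1/12]])

def horn_schunck(I1, I2, alpha=1.0, iters=100):
    """Estimate dense flow (u, v) between two grayscale frames I1 -> I2."""
    I1 = I1.astype(np.float64); I2 = I2.astype(np.float64)
    # One common discretization of the spatial and temporal derivatives
    kx = np.array([[-1, 1], [-1, 1]]) * 0.25
    ky = np.array([[-1, -1], [1, 1]]) * 0.25
    Ix = convolve(I1, kx) + convolve(I2, kx)
    Iy = convolve(I1, ky) + convolve(I2, ky)
    It = convolve(I1, np.full((2, 2), -0.25)) + convolve(I2, np.full((2, 2), 0.25))
    u = np.zeros_like(I1); v = np.zeros_like(I1)
    for _ in range(iters):
        u_avg = convolve(u, AVG_KERNEL)
        v_avg = convolve(v, AVG_KERNEL)
        num = Ix * u_avg + Iy * v_avg + It
        den = alpha**2 + Ix**2 + Iy**2
        u = u_avg - Ix * num / den      # classic Horn-Schunck update
        v = v_avg - Iy * num / den
    return u, v
```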
Efficient frequent pattern mining algorithm based on node sets in cloud computing environment
NASA Astrophysics Data System (ADS)
Billa, V. N. Vinay Kumar; Lakshmanna, K.; Rajesh, K.; Reddy, M. Praveen Kumar; Nagaraja, G.; Sudheer, K.
2017-11-01
The ultimate goal of data mining is to uncover hidden information that is useful for decision making from the large databases collected by an organization, a process that involves many tasks. Mining frequent itemsets is one of the most important tasks for transactional databases. These databases are very large, and mining them consumes physical memory and time in proportion to the size of the database. A frequent pattern mining algorithm is considered efficient only if it consumes little memory and time to mine the frequent itemsets from a given large database. With these points in mind, we propose a system that mines frequent itemsets in a way that is optimized in terms of memory and time, using cloud computing to parallelize the process and to provide the application as a service. The complete framework uses FIN, a proven, efficient algorithm that works on Nodesets and a pre-order coding (POC) tree. To evaluate the performance of the system, we conduct experiments comparing the efficiency of the same algorithm applied in a standalone manner and in a cloud computing environment on a real-world traffic accidents data set. The results show that the memory consumption and execution time of the proposed system are much lower than those of the standalone system.
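For readers unfamiliar with the task, the toy sketch below shows plain support counting over a handful of transactions; it only illustrates what "frequent itemsets" means and is not the FIN/Nodeset algorithm or its POC tree.

```python
from itertools import combinations
from collections import Counter

# Four toy transactions; an itemset is "frequent" if its support >= min_support.
transactions = [{"a", "b", "c"}, {"a", "c"}, {"a", "d"}, {"b", "c", "e"}]
min_support = 2
k = 2                                   # look at pairs only

counts = Counter()
for t in transactions:
    for itemset in combinations(sorted(t), k):
        counts[itemset] += 1

frequent = {s: c for s, c in counts.items() if c >= min_support}
print(frequent)                         # {('a', 'c'): 2, ('b', 'c'): 2}
```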
NASA Technical Reports Server (NTRS)
Schunk, Richard Gregory; Chung, T. J.
2001-01-01
A parallelized version of the Flowfield Dependent Variation (FDV) Method is developed to analyze a problem of current research interest, the flowfield resulting from a triple shock/boundary layer interaction. Such flowfields are often encountered in the inlets of high speed air-breathing vehicles including the NASA Hyper-X research vehicle. In order to resolve the complex shock structure and to provide adequate resolution for boundary layer computations of the convective heat transfer from surfaces inside the inlet, models containing over 500,000 nodes are needed. Efficient parallelization of the computation is essential to achieving results in a timely manner. Results from a parallelization scheme based upon multi-threading, as implemented on multiprocessor supercomputers and workstations, are presented.
Protein Hydration Thermodynamics: The Influence of Flexibility and Salt on Hydrophobin II Hydration.
Remsing, Richard C; Xi, Erte; Patel, Amish J
2018-04-05
The solubility of proteins and other macromolecular solutes plays an important role in numerous biological, chemical, and medicinal processes. An important determinant of protein solubility is the solvation free energy of the protein, which quantifies the overall strength of the interactions between the protein and the aqueous solution that surrounds it. Here we present an all-atom explicit-solvent computational framework for the rapid estimation of protein solvation free energies. Using this framework, we estimate the hydration free energy of hydrophobin II, an amphiphilic fungal protein, in a computationally efficient manner. We further explore how the protein hydration free energy is influenced by enhancing flexibility and by the addition of sodium chloride, and find that it increases in both cases, making protein hydration less favorable.
Heliocentric interplanetary low thrust trajectory optimization program, supplement 1, part 2
NASA Technical Reports Server (NTRS)
Mann, F. I.; Horsewood, J. L.
1978-01-01
The improvements made to the HILTOP electric propulsion trajectory computer program are described. A more realistic propulsion system model was implemented in which various thrust subsystem efficiencies and specific impulse are modeled as variable functions of power available to the propulsion system. The number of operating thrusters is staged, and the beam voltage is selected from a set of five (or fewer) constant voltages, based upon the application of variational calculus. The constant beam voltages may be optimized individually or collectively. The propulsion system logic is activated by a single program input key in such a manner as to preserve the HILTOP logic. An analysis describing these features, a complete description of program input quantities, and sample cases of computer output illustrating the program capabilities are presented.
Liu, Kui; Wei, Sixiao; Chen, Zhijiang; Jia, Bin; Chen, Genshe; Ling, Haibin; Sheaff, Carolyn; Blasch, Erik
2017-01-01
This paper presents the first attempt at combining Cloud with Graphic Processing Units (GPUs) in a complementary manner within the framework of a real-time high performance computation architecture for the application of detecting and tracking multiple moving targets based on Wide Area Motion Imagery (WAMI). More specifically, the GPU and Cloud Moving Target Tracking (GC-MTT) system applied a front-end web based server to perform the interaction with Hadoop and highly parallelized computation functions based on the Compute Unified Device Architecture (CUDA©). The introduced multiple moving target detection and tracking method can be extended to other applications such as pedestrian tracking, group tracking, and Patterns of Life (PoL) analysis. The cloud and GPUs based computing provides an efficient real-time target recognition and tracking approach as compared to methods when the work flow is applied using only central processing units (CPUs). The simultaneous tracking and recognition results demonstrate that a GC-MTT based approach provides drastically improved tracking with low frame rates over realistic conditions. PMID:28208684
Indoor space 3D visual reconstruction using mobile cart with laser scanner and cameras
NASA Astrophysics Data System (ADS)
Gashongore, Prince Dukundane; Kawasue, Kikuhito; Yoshida, Kumiko; Aoki, Ryota
2017-02-01
Indoor space 3D visual reconstruction has many applications and, once done accurately, it enables people to conduct different indoor activities in an efficient manner. For example, an effective and efficient emergency rescue response can be accomplished in a fire disaster situation by using 3D visual information of a destroyed building. Therefore, an accurate Indoor Space 3D visual reconstruction system which can be operated in any given environment without GPS has been developed using a Human-Operated mobile cart equipped with a laser scanner, CCD camera, omnidirectional camera and a computer. By using the system, accurate indoor 3D Visual Data is reconstructed automatically. The obtained 3D data can be used for rescue operations, guiding blind or partially sighted persons and so forth.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ibrahim, Khaled Z.; Epifanovsky, Evgeny; Williams, Samuel
Coupled-cluster methods provide highly accurate models of molecular structure through explicit numerical calculation of tensors representing the correlation between electrons. These calculations are dominated by a sequence of tensor contractions, motivating the development of numerical libraries for such operations. While based on matrix–matrix multiplication, these libraries are specialized to exploit symmetries in the molecular structure and in electronic interactions, and thus reduce the size of the tensor representation and the complexity of contractions. The resulting algorithms are irregular and their parallelization has been previously achieved via the use of dynamic scheduling or specialized data decompositions. We introduce our efforts to extend the Libtensor framework to work in the distributed memory environment in a scalable and energy-efficient manner. We achieve up to 240× speedup compared with the optimized shared memory implementation of Libtensor. We attain scalability to hundreds of thousands of compute cores on three distributed-memory architectures (Cray XC30 and XC40, and IBM Blue Gene/Q), and on a heterogeneous GPU-CPU system (Cray XK7). As the bottlenecks shift from being compute-bound DGEMMs to communication-bound collectives as the size of the molecular system scales, we adopt two radically different parallelization approaches for handling load-imbalance, tasking and bulk synchronous models. Nevertheless, we preserve a unified interface to both programming models to maintain the productivity of computational quantum chemists.
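A single-node, toy illustration of the kind of tensor contraction these libraries accelerate is sketched below using NumPy's einsum; the index sizes are arbitrary, no symmetry is exploited, and the Libtensor API is not used.

```python
import numpy as np

# A CCD-like doubles term: contract two virtual indices (c, d) of the amplitudes
# t_{ij}^{cd} with two-electron integrals <ab||cd>. Sizes are toy values.
n_occ, n_vir = 4, 6
rng = np.random.default_rng(0)
t2 = rng.standard_normal((n_occ, n_occ, n_vir, n_vir))   # amplitudes t_{ij}^{ab}
v  = rng.standard_normal((n_vir, n_vir, n_vir, n_vir))   # integrals over virtuals

# result_{ij}^{ab} = sum_{cd} v_{abcd} * t_{ijcd}
residual = np.einsum("abcd,ijcd->ijab", v, t2)
print(residual.shape)                                    # (4, 4, 6, 6)
```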
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friese, Ryan; Khemka, Bhavesh; Maciejewski, Anthony A
Rising costs of energy consumption and an ongoing effort for increases in computing performance are leading to a significant need for energy-efficient computing. Before systems such as supercomputers, servers, and datacenters can begin operating in an energy-efficient manner, the energy consumption and performance characteristics of the system must be analyzed. In this paper, we provide an analysis framework that will allow a system administrator to investigate the trade-offs between system energy consumption and utility earned by a system (as a measure of system performance). We model these trade-offs as a bi-objective resource allocation problem. We use a popular multi-objective genetic algorithm to construct Pareto fronts to illustrate how different resource allocations can cause a system to consume significantly different amounts of energy and earn different amounts of utility. We demonstrate our analysis framework using real data collected from online benchmarks, and further provide a method to create larger data sets that exhibit similar heterogeneity characteristics to real data sets. This analysis framework can provide system administrators with insight to make intelligent scheduling decisions based on the energy and utility needs of their systems.
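The bi-objective trade-off can be made concrete with the small sketch below, which filters a handful of hypothetical resource allocations down to their Pareto-optimal set; the energy and utility numbers are made up, and the paper's genetic algorithm is replaced here by exhaustive dominance checking purely for illustration.

```python
candidates = [
    {"name": "alloc-A", "energy": 120.0, "utility": 40.0},
    {"name": "alloc-B", "energy": 150.0, "utility": 55.0},
    {"name": "alloc-C", "energy": 150.0, "utility": 30.0},   # dominated by A and B
    {"name": "alloc-D", "energy": 200.0, "utility": 70.0},
]

def dominates(p, q):
    """p dominates q if it is no worse in both objectives and strictly better in one."""
    return (p["energy"] <= q["energy"] and p["utility"] >= q["utility"]
            and (p["energy"] < q["energy"] or p["utility"] > q["utility"]))

pareto = [p for p in candidates
          if not any(dominates(q, p) for q in candidates if q is not p)]
print([p["name"] for p in pareto])   # ['alloc-A', 'alloc-B', 'alloc-D']
```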
A Novel Centrality Measure for Network-wide Cyber Vulnerability Assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sathanur, Arun V.; Haglin, David J.
In this work we propose a novel formulation that models the attack and compromise on a cyber network as a combination of two parts - direct compromise of a host and the compromise occurring through the spread of the attack on the network from a compromised host. The model parameters for the nodes are a concise representation of the host profiles that can include the risky behaviors of the associated human users, while the model parameters for the edges are based on the existence of vulnerabilities between each pair of connected hosts. The edge models relate to the summary representations of the corresponding attack-graphs. This results in a formulation based on Random Walk with Restart (RWR), and the resulting centrality metric can be solved for in an efficient manner through the use of sparse linear solvers. Thus the formulation goes beyond mere topological considerations in centrality computations by summarizing the host profiles and the attack graphs into the model parameters. The computational efficiency of the method also allows us to quantify the uncertainty in the centrality measure through Monte Carlo analysis.
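A minimal sketch of the Random Walk with Restart computation via a sparse linear solve is given below; the adjacency matrix, restart weights, and damping value are placeholders standing in for the host-profile and attack-graph-derived model parameters described above.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy 4-host network; edge weights would come from attack-graph summaries.
A = sp.csr_matrix(np.array([[0, 1, 1, 0],
                            [1, 0, 1, 0],
                            [1, 1, 0, 1],
                            [0, 0, 1, 0]], dtype=float))
deg = np.asarray(A.sum(axis=1)).ravel()
W = sp.diags(1.0 / deg) @ A                    # row-stochastic transition matrix

c = 0.85                                       # probability of following an edge
restart = np.full(A.shape[0], 1.0 / A.shape[0])  # direct-compromise weights (placeholder)

# Stationary scores satisfy r = c * W^T r + (1 - c) * restart, i.e. a sparse linear solve.
M = (sp.eye(A.shape[0]) - c * W.T).tocsc()
r = spla.spsolve(M, (1.0 - c) * restart)
print(r / r.sum())                             # normalized RWR centrality per host
```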
NASA Technical Reports Server (NTRS)
Toon, Owen B.; Mckay, C. P.; Ackerman, T. P.; Santhanam, K.
1989-01-01
The solution of the generalized two-stream approximation for radiative transfer in homogeneous multiple scattering atmospheres is extended to vertically inhomogeneous atmospheres in a manner which is numerically stable and computationally efficient. It is shown that solar energy deposition rates, photolysis rates, and infrared cooling rates all may be calculated with the simple modifications of a single algorithm. The accuracy of the algorithm is generally better than 10 percent, so that other uncertainties, such as in absorption coefficients, may often dominate the error in calculation of the quantities of interest to atmospheric studies.
Tractable Goal Selection with Oversubscribed Resources
NASA Technical Reports Server (NTRS)
Rabideau, Gregg; Chien, Steve; McLaren, David
2009-01-01
We describe an efficient, online goal selection algorithm and its use for selecting goals at runtime. Our focus is on the re-planning that must be performed in a timely manner on the embedded system where computational resources are limited. In particular, our algorithm generates near optimal solutions to problems with fully specified goal requests that oversubscribe available resources but have no temporal flexibility. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion allowing requests to be changed or added at the last minute. This enables shorter response cycles and greater autonomy for the system under control.
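To illustrate the flavor of goal selection under an oversubscribed resource budget, the sketch below runs a simple value-density greedy pass over hypothetical goal requests; the actual flight algorithm is incremental and near-optimal rather than this one-shot heuristic.

```python
# Hypothetical goal requests with a single shared resource and a fixed budget.
goals = [
    {"id": "img-crater", "value": 10.0, "resource": 4.0},
    {"id": "img-dune",   "value": 6.0,  "resource": 3.0},
    {"id": "spectra",    "value": 8.0,  "resource": 5.0},
    {"id": "drive",      "value": 3.0,  "resource": 2.0},
]

def select_goals(goals, budget):
    """Greedy selection by value density; skips goals that would exceed the budget."""
    chosen, used = [], 0.0
    for g in sorted(goals, key=lambda g: g["value"] / g["resource"], reverse=True):
        if used + g["resource"] <= budget:
            chosen.append(g["id"])
            used += g["resource"]
    return chosen, used

print(select_goals(goals, budget=9.0))   # (['img-crater', 'img-dune', 'drive'], 9.0)
```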
Report #16-P-0019, October 27, 2015. OPP’s lack of policies and procedures to manage public pesticide petitions in a transparent and efficient manner can result in unreasonable delay lawsuits costing the agency time and resources.
A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL)
NASA Technical Reports Server (NTRS)
Carroll, Chester C.; Owen, Jeffrey E.
1988-01-01
A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL) is presented which overcomes the traditional disadvantages of simulations executed on a digital computer. The incorporation of parallel processing allows the mapping of simulations into a digital computer to be done in the same inherently parallel manner as they are currently mapped onto an analog computer. The direct-execution format maximizes the efficiency of the executed code since the need for a high level language compiler is eliminated. Resolution is greatly increased over that which is available with an analog computer without the sacrifice in execution speed normally expected with digital computer simulations. Although this report covers all aspects of the new architecture, key emphasis is placed on the processing element configuration and the microprogramming of the ACSL constructs. The execution times for all ACSL constructs are computed using a model of a processing element based on the AMD 29000 CPU and the AMD 29027 FPU. The increase in execution speed provided by parallel processing is exemplified by comparing the derived execution times of two ACSL programs with the execution times for the same programs executed on a similar sequential architecture.
Accelerating Climate and Weather Simulations through Hybrid Computing
NASA Technical Reports Server (NTRS)
Zhou, Shujia; Cruz, Carlos; Duffy, Daniel; Tucker, Robert; Purcell, Mark
2011-01-01
Unconventional multi- and many-core processors (e.g. IBM (R) Cell B.E.(TM) and NVIDIA (R) GPU) have emerged as effective accelerators in trial climate and weather simulations. Yet these climate and weather models typically run on parallel computers with conventional processors (e.g. Intel, AMD, and IBM) using Message Passing Interface. To address challenges involved in efficiently and easily connecting accelerators to parallel computers, we investigated using IBM's Dynamic Application Virtualization (TM) (IBM DAV) software in a prototype hybrid computing system with representative climate and weather model components. The hybrid system comprises two Intel blades and two IBM QS22 Cell B.E. blades, connected with both InfiniBand(R) (IB) and 1-Gigabit Ethernet. The system significantly accelerates a solar radiation model component by offloading compute-intensive calculations to the Cell blades. Systematic tests show that IBM DAV can seamlessly offload compute-intensive calculations from Intel blades to Cell B.E. blades in a scalable, load-balanced manner. However, noticeable communication overhead was observed, mainly due to IP over the IB protocol. Full utilization of IB Sockets Direct Protocol and the lower latency production version of IBM DAV will reduce this overhead.
Large-scale high-throughput computer-aided discovery of advanced materials using cloud computing
NASA Astrophysics Data System (ADS)
Bazhirov, Timur; Mohammadi, Mohammad; Ding, Kevin; Barabash, Sergey
Recent advances in cloud computing have made it possible to access large-scale computational resources completely on-demand in a rapid and efficient manner. When combined with high fidelity simulations, they serve as an alternative pathway to enable computational discovery and design of new materials through large-scale high-throughput screening. Here, we present a case study for a cloud platform implemented at Exabyte Inc. We perform calculations to screen lightweight ternary alloys for thermodynamic stability. Due to the lack of experimental data for most such systems, we rely on theoretical approaches based on first-principles pseudopotential density functional theory. We calculate the formation energies for a set of ternary compounds approximated by special quasirandom structures. During an example run we were able to scale to 10,656 CPUs within 7 minutes from the start, and obtain results for 296 compounds within 38 hours. The results indicate that the ultimate formation enthalpy of ternary systems can be negative for some lightweight alloys, including Li and Mg compounds. We conclude that compared to the traditional capital-intensive approach that requires on-premises hardware resources, cloud computing is agile and cost-effective, yet scalable and delivers similar performance.
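The screening quantity referred to above is, under the usual convention, the formation energy per atom relative to the elemental reference phases; a sketch of that definition for a ternary compound A_xB_yC_z is given below, with E denoting total energies from the DFT calculations (the normalization and reference choices are assumptions here, not necessarily those of the study).

```latex
\Delta H_f = \frac{E\!\left(\mathrm{A}_x\mathrm{B}_y\mathrm{C}_z\right)
            - x\,E(\mathrm{A}) - y\,E(\mathrm{B}) - z\,E(\mathrm{C})}{x + y + z}
```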
IAServ: an intelligent home care web services platform in a cloud for aging-in-place.
Su, Chuan-Jun; Chiang, Chang-Yu
2013-11-12
As the elderly population has been rapidly expanding and the core tax-paying population has been shrinking, the need for adequate elderly health and housing services continues to grow while the resources to provide such services are becoming increasingly scarce. Thus, increasing the efficiency of the delivery of healthcare services through the use of modern technology is a pressing issue. The seamless integration of such enabling technologies as ontology, intelligent agents, web services, and cloud computing is transforming healthcare from hospital-based treatments to home-based self-care and preventive care. A ubiquitous healthcare platform based on this technological integration, which synergizes service providers with patients' needs, should be developed to provide personalized healthcare services at the right time, in the right place, and in the right manner. This paper presents the development and overall architecture of IAServ (the Intelligent Aging-in-place Home care Web Services Platform) to provide personalized healthcare service ubiquitously in a cloud computing setting to support the most desirable and cost-efficient method of care for the aged: aging in place. The IAServ is expected to offer intelligent, pervasive, accurate and contextually-aware personal care services. Architecturally, the implemented IAServ leverages web services and cloud computing to provide economic, scalable, and robust healthcare services over the Internet.
NASA Technical Reports Server (NTRS)
Garai, Anirban; Diosady, Laslo T.; Murman, Scott M.; Madavan, Nateri K.
2016-01-01
Recent progress towards developing a new computational capability for accurate and efficient high-fidelity direct numerical simulation (DNS) and large-eddy simulation (LES) of turbomachinery is described. This capability is based on an entropy-stable Discontinuous-Galerkin spectral-element approach that extends to arbitrarily high orders of spatial and temporal accuracy, and is implemented in a computationally efficient manner on a modern high performance computer architecture. An inflow turbulence generation procedure based on a linear forcing approach has been incorporated in this framework and DNS conducted to study the effect of inflow turbulence on the suction-side separation bubble in low-pressure turbine (LPT) cascades. The T106 series of airfoil cascades in both lightly loaded (T106A) and highly loaded (T106C) configurations at exit isentropic Reynolds numbers of 60,000 and 80,000, respectively, are considered. The numerical simulations are performed using 8th-order accurate spatial and 4th-order accurate temporal discretization. The changes in separation bubble topology due to elevated inflow turbulence are captured by the present method and the physical mechanisms leading to the changes are explained. The present results are in good agreement with prior numerical simulations but some expected discrepancies with the experimental data for the T106C case are noted and discussed.
A fast and automatic fusion algorithm for unregistered multi-exposure image sequence
NASA Astrophysics Data System (ADS)
Liu, Yan; Yu, Feihong
2014-09-01
The human visual system (HVS) can visualize all the brightness levels of a scene through visual adaptation. However, the dynamic range of most commercial digital cameras and display devices is smaller than that of the human eye. This implies that low dynamic range (LDR) images captured by a normal digital camera may lose image details. We propose an efficient approach to high dynamic range (HDR) image fusion that copes with image displacement and image blur degradation in a computationally efficient manner, which is suitable for implementation on mobile devices. The various image registration algorithms proposed in the previous literature are unable to meet the efficiency and performance requirements of mobile devices. In this paper, we select the ORB (Oriented FAST and Rotated BRIEF) detector to extract local image structures. The descriptor selected in a multi-exposure image fusion algorithm has to be fast and robust to illumination variations and geometric deformations, and the ORB descriptor is the best candidate for our algorithm. Further, we apply an improved RANdom SAmple Consensus (RANSAC) algorithm to reject incorrect matches. For the fusion of images, a new approach based on the Stationary Wavelet Transform (SWT) is used. The experimental results demonstrate that the proposed algorithm generates high-quality images at low computational cost. Comparisons with a number of other feature matching methods show that our method achieves better performance.
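A minimal OpenCV sketch of the registration stage described above is given below: ORB keypoint detection, Hamming-distance matching, and a RANSAC-estimated homography. The paper's improved RANSAC refinement and the SWT-based fusion stage are not reproduced here.

```python
import cv2
import numpy as np

def register(ref_bgr, mov_bgr):
    """Warp the moving exposure into the reference frame using ORB + RANSAC."""
    orb = cv2.ORB_create(nfeatures=2000)
    g1 = cv2.cvtColor(ref_bgr, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(mov_bgr, cv2.COLOR_BGR2GRAY)
    k1, d1 = orb.detectAndCompute(g1, None)
    k2, d2 = orb.detectAndCompute(g2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:500]
    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)   # RANSAC rejects outliers
    return cv2.warpPerspective(mov_bgr, H, (ref_bgr.shape[1], ref_bgr.shape[0]))
```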
G2LC: Resources Autoscaling for Real Time Bioinformatics Applications in IaaS.
Hu, Rongdong; Liu, Guangming; Jiang, Jingfei; Wang, Lixin
2015-01-01
Cloud computing has started to change the way bioinformatics research is carried out. Researchers who have taken advantage of this technology can process larger amounts of data and speed up scientific discovery. The variability in data volume results in variable computing requirements. Therefore, bioinformatics researchers are pursuing more reliable and efficient methods for conducting sequencing analyses. This paper proposes an automated resource provisioning method, G2LC, for bioinformatics applications in IaaS. It enables applications to output results in real time. Its main purpose is to guarantee application performance while improving resource utilization. Real BLAST sequence searching data are used to evaluate the effectiveness of G2LC. Experimental results show that G2LC guarantees application performance while saving up to 20.14% of resources.
Process Simulation of Gas Metal Arc Welding Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murray, Paul E.
2005-09-06
ARCWELDER is a Windows-based application that simulates gas metal arc welding (GMAW) of steel and aluminum. The software simulates the welding process in an accurate and efficient manner, provides menu items for process parameter selection, and includes a graphical user interface with the option to animate the process. The user enters the base and electrode material, open circuit voltage, wire diameter, wire feed speed, welding speed, and standoff distance. The program computes the size and shape of a square-groove or V-groove weld in the flat position. The program also computes the current, arc voltage, arc length, electrode extension, transfer of droplets, heat input, filler metal deposition, base metal dilution, and centerline cooling rate, in English or SI units. The simulation may be used to select welding parameters that lead to desired operation conditions.
Motion Planning and Synthesis of Human-Like Characters in Constrained Environments
NASA Astrophysics Data System (ADS)
Zhang, Liangjun; Pan, Jia; Manocha, Dinesh
We give an overview of our recent work on generating natural-looking human motion in constrained environments with multiple obstacles. This includes a whole-body motion planning algorithm for high DOF human-like characters. The planning problem is decomposed into a sequence of low-dimensional sub-problems. We use a constrained coordination scheme to solve the sub-problems in an incremental manner and a local path refinement algorithm to compute collision-free paths in tight spaces and satisfy the static stability constraint on the CoM. We also present a hybrid algorithm to generate plausible motion by combining the motion computed by our planner with mocap data. We demonstrate the performance of our algorithm on a 40 DOF human-like character and generate efficient motion strategies for object placement, bending, walking, and lifting in complex environments.
Applications of intelligent computer-aided training
NASA Technical Reports Server (NTRS)
Loftin, R. B.; Savely, Robert T.
1991-01-01
Intelligent computer-aided training (ICAT) systems simulate the behavior of an experienced instructor observing a trainee, responding to help requests, diagnosing and remedying trainee errors, and proposing challenging new training scenarios. This paper presents a generic ICAT architecture that supports the efficient development of ICAT systems for varied tasks. In addition, details of ICAT projects, built with this architecture, that deliver specific training for Space Shuttle crew members, ground support personnel, and flight controllers are presented. Concurrently with the creation of specific ICAT applications, a general-purpose software development environment for ICAT systems is being built. The widespread use of such systems for both ground-based and on-orbit training will serve to preserve task and training expertise, support the training of large numbers of personnel in a distributed manner, and ensure the uniformity and verifiability of training experiences.
NASA Astrophysics Data System (ADS)
Sreekanth, J.; Moore, Catherine
2018-04-01
The application of global sensitivity and uncertainty analysis techniques to groundwater models of deep sedimentary basins is typically challenged by large computational burdens combined with associated numerical stability issues. The highly parameterized approaches required for exploring the predictive uncertainty associated with the heterogeneous hydraulic characteristics of multiple aquifers and aquitards in these sedimentary basins exacerbate these issues. A novel Patch Modelling Methodology is proposed for improving the computational feasibility of stochastic modelling analysis of large-scale and complex groundwater models. The method incorporates a nested groundwater modelling framework that enables efficient simulation of groundwater flow and transport across multiple spatial and temporal scales. The method also allows different processes to be simulated within different model scales. Existing nested model methodologies are extended by employing 'joining predictions' for extrapolating prediction-salient information from one model scale to the next. This establishes a feedback mechanism supporting the transfer of information from child models to parent models as well as parent models to child models in a computationally efficient manner. This feedback mechanism is simple and flexible and ensures that while the salient small scale features influencing larger scale prediction are transferred back to the larger scale, this does not require the live coupling of models. This method allows the modelling of multiple groundwater flow and transport processes using separate groundwater models that are built for the appropriate spatial and temporal scales, within a stochastic framework, while also removing the computational burden associated with live model coupling. The utility of the method is demonstrated by application to an actual large scale aquifer injection scheme in Australia.
Mahmood, Zahid; Ning, Huansheng; Ghafoor, AtaUllah
2017-03-24
Wireless Sensor Networks (WSNs) consist of lightweight devices to measure sensitive data that are highly vulnerable to security attacks due to their constrained resources. In a similar manner, the internet-based lightweight devices used in the Internet of Things (IoT) are facing severe security and privacy issues because of the direct accessibility of devices due to their connection to the internet. Complex and resource-intensive security schemes are infeasible and reduce the network lifetime. In this regard, we have explored the polynomial distribution-based key establishment schemes and identified an issue that the resultant polynomial value is either storage intensive or infeasible when large values are multiplied. It becomes more costly when these polynomials are regenerated dynamically after each node join or leave operation and whenever the key is refreshed. To reduce the computation, we have proposed an Efficient Key Management (EKM) scheme for multiparty communication-based scenarios. The proposed session key management protocol is established by applying a symmetric polynomial for group members, and the group head acts as a responsible node. The polynomial generation method uses security credentials and a secure hash function. Symmetric cryptographic parameters are efficient in computation, communication, and storage. The security justification of the proposed scheme has been completed by using Rubin logic, which guarantees that the protocol attains mutual validation and session key agreement property strongly among the participating entities. Simulation scenarios are performed using NS 2.35 to validate the results for storage, communication, latency, energy, and polynomial calculation costs during authentication, session key generation, node migration, secure joining, and leaving phases. EKM is efficient regarding storage, computation, and communication overhead and can protect WSN-based IoT infrastructure.
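The symmetric-polynomial idea underlying such schemes can be sketched as follows: because f(x, y) = f(y, x), two nodes holding the univariate shares f(i, ·) and f(j, ·) can each evaluate the same pairwise value and hash it into a key. The modulus, degree, and hashing below are toy assumptions for illustration and do not reflect EKM's actual credential-based polynomial generation.

```python
import random
import hashlib

P = 2_147_483_647          # prime modulus (toy size, not cryptographically meaningful)
T = 3                      # polynomial degree (collusion threshold)
random.seed(42)

# Symmetric coefficient matrix: coeff[i][j] == coeff[j][i] so f(x, y) = f(y, x)
coeff = [[0] * (T + 1) for _ in range(T + 1)]
for i in range(T + 1):
    for j in range(i, T + 1):
        coeff[i][j] = coeff[j][i] = random.randrange(P)

def share(node_id):
    """Univariate share g(y) = f(node_id, y), stored by the node as T+1 coefficients."""
    return [sum(coeff[i][j] * pow(node_id, i, P) for i in range(T + 1)) % P
            for j in range(T + 1)]

def pairwise_key(my_share, peer_id):
    """Evaluate the stored share at the peer's id and hash the result into a key."""
    value = sum(c * pow(peer_id, j, P) for j, c in enumerate(my_share)) % P
    return hashlib.sha256(str(value).encode()).hexdigest()[:16]

alice, bob = 17, 42
assert pairwise_key(share(alice), bob) == pairwise_key(share(bob), alice)
print(pairwise_key(share(alice), bob))
```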
Mathematical and computational model for the analysis of micro hybrid rocket motor
NASA Astrophysics Data System (ADS)
Stoia-Djeska, Marius; Mingireanu, Florin
2012-11-01
Hybrid rockets use a two-phase propellant system. In the present work we first develop a simplified model of the coupling of the hybrid combustion process with the complete unsteady flow, starting from the combustion port and ending with the nozzle. The physical and mathematical models are adapted to the simulation of micro hybrid rocket motors. The flow model is based on the one-dimensional Euler equations with source terms. The flow equations and the fuel regression rate law are solved in a coupled manner. The numerical simulations use an implicit fourth-order Runge-Kutta time integration with a second-order cell-centred finite volume method. The numerical results obtained with this model show good agreement with published experimental and numerical results. The computational model developed in this work is simple and computationally efficient, and offers the advantage of taking into account a large number of functional and constructive parameters used by engineers.
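For orientation, commonly used forms of the governing relations mentioned above are sketched below: quasi-one-dimensional Euler equations with a source vector S collecting mass, momentum, and energy addition from the regressing fuel, coupled to an empirical regression-rate law driven by the oxidizer mass flux. The specific source terms and the constants a and n are assumptions here, not the paper's calibrated values.

```latex
\frac{\partial U}{\partial t} + \frac{\partial F(U)}{\partial x} = S, \qquad
U = \begin{pmatrix} \rho \\ \rho u \\ \rho E \end{pmatrix}, \qquad
\dot{r} = a\, G_{\mathrm{ox}}^{\,n}
```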
The molecular basis of memory. Part 2: chemistry of the tripartite mechanism.
Marx, Gerard; Gilon, Chaim
2013-06-19
We propose a tripartite mechanism to describe the processing of cognitive information (cog-info), comprising the (1) neuron, (2) surrounding neural extracellular matrix (nECM), and (3) numerous "trace" metals distributed therein. The neuron is encased in a polyanionic nECM lattice doped with metals (>10), wherein it processes (computes) and stores cog-info. Each [nECM:metal] complex is the molecular correlate of a cognitive unit of information (cuinfo), similar to a computer "bit". These are induced/sensed by the neuron via surface iontophoretic and electroelastic (piezoelectric) sensors. The generic cuinfo are used by neurons to biochemically encode and store cog-info in a rapid, energy efficient, but computationally expansive manner. Here, we describe chemical reactions involved in various processes that underlie the tripartite mechanism. In addition, we present novel iconographic representations of various types of cuinfo resulting from "tagging" and cross-linking reactions, which are essential for indexing cuinfo for organized retrieval and storage of memory.
On configurational forces for gradient-enhanced inelasticity
NASA Astrophysics Data System (ADS)
Floros, Dimosthenis; Larsson, Fredrik; Runesson, Kenneth
2018-04-01
In this paper we discuss how configurational forces can be computed in an efficient and robust manner when a constitutive continuum model of gradient-enhanced viscoplasticity is adopted, whereby a suitably tailored mixed variational formulation in terms of displacements and micro-stresses is used. It is demonstrated that such a formulation produces sufficient regularity to overcome numerical difficulties that are notorious for a local constitutive model. In particular, no nodal smoothing of the internal variable fields is required. Moreover, the pathological mesh sensitivity that has been reported in the literature for a standard local model is no longer present. Numerical results in terms of configurational forces are shown for (1) a smooth interface and (2) a discrete edge crack. The corresponding configurational forces are computed for different values of the intrinsic length parameter. It is concluded that the convergence of the computed configurational forces with mesh refinement depends strongly on this parameter value. Moreover, the convergence behavior for the limit situation of rate-independent plasticity is unaffected by the relaxation time parameter.
Acoustic technology for high-performance disruption and extraction of plant proteins.
Toorchi, Mahmoud; Nouri, Mohammad-Zaman; Tsumura, Makoto; Komatsu, Setsuko
2008-07-01
Acoustic technology shows the capability of protein pellet homogenization from different tissue samples of soybean and rice in a manner comparable to the ordinary mortar/pestle method and far better than the vortex/ultrasonic method with respect to the resolution of the protein pattern through two-dimensional polyacrylamide gel electrophoresis (2D-PAGE). With acoustic technology, noncontact tissue disruption and protein pellet homogenization can be carried out in a computer-controlled manner, which ultimately increases the efficiency of the process for a large number of samples. A lysis buffer termed the T-buffer containing TBP, thiourea, and CHAPS yields an excellent result for the 2D-PAGE separation of soybean plasma membrane proteins followed by the 2D-PAGE separation of crude protein of soybean and rice tissues. For this technology, the T-buffer is preferred because protein quantification is possible by eliminating the interfering compound 2-mercaptoethanol and because of the high reproducibility of 2D-PAGE separation.
Numerical Simulation of 3-D Supersonic Viscous Flow in an Experimental MHD Channel
NASA Technical Reports Server (NTRS)
Kato, Hiromasa; Tannehill, John C.; Gupta, Sumeet; Mehta, Unmeel B.
2004-01-01
The 3-D supersonic viscous flow in an experimental MHD channel has been numerically simulated. The experimental MHD channel is currently in operation at NASA Ames Research Center. The channel contains a nozzle section, a center section, and an accelerator section where magnetic and electric fields can be imposed on the flow. In recent tests, velocity increases of up to 40% have been achieved in the accelerator section. The flow in the channel is numerically computed using a new 3-D parabolized Navier-Stokes (PNS) algorithm that has been developed to efficiently compute MHD flows in the low magnetic Reynolds number regime. The MHD effects are modeled by introducing source terms into the PNS equations which can then be solved in a very efficient manner. To account for upstream (elliptic) effects, the flowfield can be computed using multiple streamwise sweeps with an iterated PNS algorithm. The new algorithm has been used to compute two test cases that match the experimental conditions. In both cases, magnetic and electric fields are applied to the flow. The computed results are in good agreement with the available experimental data.
Enabling BOINC in infrastructure as a service cloud system
NASA Astrophysics Data System (ADS)
Montes, Diego; Añel, Juan A.; Pena, Tomás F.; Uhe, Peter; Wallom, David C. H.
2017-02-01
Volunteer or crowd computing is becoming increasingly popular for solving complex research problems from an increasingly diverse range of areas. The majority of these have been built using the Berkeley Open Infrastructure for Network Computing (BOINC) platform, which provides a range of different services to manage all computation aspects of a project. The BOINC system is ideal in those cases where not only does the research community involved need low-cost access to massive computing resources but also where there is a significant public interest in the research being done. We discuss the way in which cloud services can help BOINC-based projects to deliver results in a fast, on-demand manner. This is difficult to achieve using volunteers, and at the same time, using scalable cloud resources for short, on-demand projects can optimize the use of the available resources. We show how this design can be used as an efficient distributed computing platform within the cloud, and outline new approaches that could open up new possibilities in this field, using Climateprediction.net (http://www.climateprediction.net/) as a case study.
Goal Selection for Embedded Systems with Oversubscribed Resources
NASA Technical Reports Server (NTRS)
Rabideau, Gregg; Chien, Steve; McLaren, David
2010-01-01
We describe an efficient, online goal selection algorithm and its use for selecting goals at runtime. Our focus is on the re-planning that must be performed in a timely manner on the embedded system where computational resources are limited. In particular, our algorithm generates near optimal solutions to problems with fully specified goal requests that oversubscribe available resources but have no temporal flexibility. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion allowing requests to be changed or added at the last minute. This enables shorter response cycles and greater autonomy for the system under control.
SAR matrices: automated extraction of information-rich SAR tables from large compound data sets.
Wassermann, Anne Mai; Haebel, Peter; Weskamp, Nils; Bajorath, Jürgen
2012-07-23
We introduce the SAR matrix data structure that is designed to elucidate SAR patterns produced by groups of structurally related active compounds, which are extracted from large data sets. SAR matrices are systematically generated and sorted on the basis of SAR information content. Matrix generation is computationally efficient and enables processing of large compound sets. The matrix format is reminiscent of SAR tables, and SAR patterns revealed by different categories of matrices are easily interpretable. The structural organization underlying matrix formation is more flexible than standard R-group decomposition schemes. Hence, the resulting matrices capture SAR information in a comprehensive manner.
Advanced laser modeling with BLAZE multiphysics
NASA Astrophysics Data System (ADS)
Palla, Andrew D.; Carroll, David L.; Gray, Michael I.; Suzuki, Lui
2017-01-01
The BLAZE Multiphysics™ software simulation suite was specifically developed to model highly complex multiphysical systems in a computationally efficient and highly scalable manner. These capabilities are of particular use when applied to the complexities associated with high energy laser systems that combine subsonic/transonic/supersonic fluid dynamics, chemically reacting flows, laser electronics, heat transfer, optical physics, and in some cases plasma discharges. In this paper we present detailed cw and pulsed gas laser calculations using the BLAZE model with comparisons to data. Simulations of DPAL, XPAL, ElectricOIL (EOIL), and the optically pumped rare gas laser were found to be in good agreement with experimental data.
Onboard Run-Time Goal Selection for Autonomous Operations
NASA Technical Reports Server (NTRS)
Rabideau, Gregg; Chien, Steve; McLaren, David
2010-01-01
We describe an efficient, online goal selection algorithm for use onboard spacecraft and its use for selecting goals at runtime. Our focus is on the re-planning that must be performed in a timely manner on the embedded system where computational resources are limited. In particular, our algorithm generates near optimal solutions to problems with fully specified goal requests that oversubscribe available resources but have no temporal flexibility. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion allowing requests to be changed or added at the last minute. This enables shorter response cycles and greater autonomy for the system under control.
Adaptive Load-Balancing Algorithms using Symmetric Broadcast Networks
NASA Technical Reports Server (NTRS)
Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak; Biegel, Bryan A. (Technical Monitor)
2002-01-01
In a distributed computing environment, it is important to ensure that the processor workloads are adequately balanced. Among numerous load-balancing algorithms, a unique approach due to Das and Prasad defines a symmetric broadcast network (SBN) that provides a robust communication pattern among the processors in a topology-independent manner. In this paper, we propose and analyze three efficient SBN-based dynamic load-balancing algorithms, and implement them on an SGI Origin2000. A thorough experimental study with Poisson distributed synthetic loads demonstrates that our algorithms are effective in balancing system load. By optimizing completion time and idle time, the proposed algorithms are shown to compare favorably with several existing approaches.
NASA Astrophysics Data System (ADS)
Sanders, Sören; Holthaus, Martin
2017-10-01
We study the connection between the exponent of the order parameter of the Mott insulator-to-superfluid transition occurring in the two-dimensional Bose-Hubbard model, and the divergence exponents of its one- and two-particle correlation functions. We find that at the multicritical points all divergence exponents are related to each other, allowing us to express the critical exponent in terms of one single divergence exponent. This approach correctly reproduces the critical exponent of the three-dimensional XY universality class. Because divergence exponents can be computed in an efficient manner by hypergeometric analytic continuation, our strategy is applicable to a wide class of systems.
Cloud-Based Applications for Organizing and Reviewing Plastic Surgery Content
Luan, Anna; Momeni, Arash; Lee, Gordon K.
2015-01-01
Cloud-based applications including Box, Dropbox, Google Drive, Evernote, Notability, and Zotero are available for smartphones, tablets, and laptops and have revolutionized the manner in which medical students and surgeons read and utilize plastic surgery literature. Here we provide an overview of the use of Cloud computing in practice and propose an algorithm for organizing the vast amount of plastic surgery literature. Given the incredible amount of data being produced in plastic surgery and other surgical subspecialties, it is prudent for plastic surgeons to lead the process of providing solutions for the efficient organization and effective integration of the ever-increasing data into clinical practice. PMID:26576208
Well-balanced Schemes for Gravitationally Stratified Media
NASA Astrophysics Data System (ADS)
Käppeli, R.; Mishra, S.
2015-10-01
We present a well-balanced scheme for the Euler equations with gravitation. The scheme is capable of maintaining exactly (up to machine precision) a discrete hydrostatic equilibrium without any assumption on a thermodynamic variable such as specific entropy or temperature. The well-balanced scheme is based on a local hydrostatic pressure reconstruction. Moreover, it is computationally efficient and can be incorporated into any existing algorithm in a straightforward manner. The presented scheme improves over standard ones especially when flows close to a hydrostatic equilibrium have to be simulated. The performance of the well-balanced scheme is demonstrated on an astrophysically relevant application: a toy model for core-collapse supernovae.
PLATSIM: A Simulation and Analysis Package for Large-Order Flexible Systems. Version 2.0
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Kenny, Sean P.; Giesy, Daniel P.
1997-01-01
The software package PLATSIM provides efficient time and frequency domain analysis of large-order generic space platforms. PLATSIM can perform open-loop analysis or closed-loop analysis with linear or nonlinear control system models. PLATSIM exploits the particular form of sparsity of the plant matrices for very efficient linear and nonlinear time domain analysis, as well as frequency domain analysis. A new, original algorithm for the efficient computation of open-loop and closed-loop frequency response functions for large-order systems has been developed and is implemented within the package. Furthermore, a novel and efficient jitter analysis routine which determines jitter and stability values from time simulations in a very efficient manner has been developed and is incorporated in the PLATSIM package. In the time domain analysis, PLATSIM simulates the response of the space platform to disturbances and calculates the jitter and stability values from the response time histories. In the frequency domain analysis, PLATSIM calculates frequency response function matrices and provides the corresponding Bode plots. The PLATSIM software package is written in MATLAB script language. A graphical user interface is developed in the package to provide convenient access to its various features.
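As a point of reference for why frequency response computations can be made cheap for large-order flexible systems, the sketch below evaluates the textbook modal-superposition frequency response function mode by mode; this is a generic illustration under assumed modal data, not PLATSIM's own algorithm.

```python
import numpy as np

def modal_frf(omega, omega_n, zeta, phi_in, phi_out):
    """Frequency response of a flexible structure in modal form, summed mode
    by mode: H(w) = sum_k phi_out[:,k] phi_in[:,k]^T /
    (omega_n_k^2 - w^2 + 2 j zeta_k omega_n_k w). Textbook modal
    superposition, shown only to illustrate how sparsity in modal coordinates
    keeps large-order FRF evaluation cheap."""
    H = np.zeros((len(omega), phi_out.shape[0], phi_in.shape[0]), dtype=complex)
    for k in range(len(omega_n)):
        denom = omega_n[k] ** 2 - omega ** 2 + 2j * zeta[k] * omega_n[k] * omega
        H += np.einsum("i,j,w->wij", phi_out[:, k], phi_in[:, k], 1.0 / denom)
    return H

n_modes, n_in, n_out = 200, 2, 3
rng = np.random.default_rng(6)
omega_n = np.sort(rng.uniform(0.5, 50.0, n_modes))   # assumed modal frequencies (rad/s)
zeta = np.full(n_modes, 0.005)                       # light structural damping
phi_in = rng.normal(size=(n_in, n_modes))            # input mode shapes (placeholder)
phi_out = rng.normal(size=(n_out, n_modes))          # output mode shapes (placeholder)
H = modal_frf(np.linspace(0.1, 60.0, 500), omega_n, zeta, phi_in, phi_out)
print(H.shape, float(np.abs(H).max()))
```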
A search asymmetry reversed by figure-ground assignment.
Humphreys, G W; Müller, H
2000-05-01
We report evidence demonstrating that a search asymmetry favoring concave over convex targets can be reversed by altering the figure-ground assignment of edges in shapes. Visual search for a concave target among convex distractors is faster than search for a convex target among concave distractors (a search asymmetry). By using shapes with ambiguous local figure-ground relations, we demonstrated that search can be efficient (with search slopes around 10 ms/item) or inefficient (with search slopes around 30-40 ms/item) with the same stimuli, depending on whether edges are assigned to concave or convex "figures." This assignment process can operate in a top-down manner, according to the task set. The results suggest that attention is allocated to spatial regions following the computation of figure-ground relations in parallel across the elements present. This computation can also be modulated by top-down processes.
Computation of losses in a scramjet combustor
NASA Technical Reports Server (NTRS)
Kamath, Pradeep S.; Mcclinton, Charles R.
1992-01-01
The losses in a conceptual scramjet combustor at flight Mach numbers of 8, 10, 12, 16 and 20 are computed. These losses are extracted from three-dimensional parabolized Navier-Stokes solutions of the turbulent, reacting combustor flow field. A combustor performance index was defined based on the rationale that an efficient scramjet combustor should add heat to the fluid in such a manner as to maximize the stream thrust at the combustor exit while minimizing the losses. This index showed a decrease of more than 40 percent as the flight Mach number increased from 8 to 20, indicative of a drop in the thrust-producing potential of the scramjet at the upper end of the speed regime studied. A breakdown of the losses showed that dissipation, nonequilibrium chemistry and heat diffusion contributed roughly 15 percent, 35 percent, and 50 percent to the irreversible increase in entropy at Mach 8, and 22 percent, 13 percent, and 65 percent at Mach 20.
The Goddard Profiling Algorithm (GPROF): Description and Current Applications
NASA Technical Reports Server (NTRS)
Olson, William S.; Yang, Song; Stout, John E.; Grecu, Mircea
2004-01-01
Atmospheric scientists use different methods for interpreting satellite data. In the early days of satellite meteorology, the analysis of cloud pictures from satellites was primarily subjective. As computer technology improved, satellite pictures could be processed digitally, and mathematical algorithms were developed and applied to the digital images in different wavelength bands to extract information about the atmosphere in an objective way. The kind of mathematical algorithm one applies to satellite data may depend on the complexity of the physical processes that lead to the observed image, and how much information is contained in the satellite images both spatially and at different wavelengths. Imagery from satellite-borne passive microwave radiometers has limited horizontal resolution, and the observed microwave radiances are the result of complex physical processes that are not easily modeled. For this reason, a type of algorithm called a Bayesian estimation method is utilized to interpret passive microwave imagery in an objective, yet computationally efficient manner.
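To make the "Bayesian estimation method" concrete, the sketch below shows a common database-weighted retrieval form: the estimate is a weighted mean over candidate profiles, with weights given by how closely each profile's simulated brightness temperatures match the observation. The Gaussian weighting, the variable names, and the toy database are illustrative assumptions, not the GPROF implementation.

```python
import numpy as np

def bayesian_retrieval(tb_obs, tb_db, x_db, cov):
    """Estimate a geophysical parameter as a weighted mean over a database of
    candidate profiles (a common Bayesian retrieval form).

    tb_obs : (nchan,) observed brightness temperatures
    tb_db  : (nprof, nchan) simulated brightness temperatures per profile
    x_db   : (nprof,) parameter value (e.g., rain rate) per profile
    cov    : (nchan, nchan) combined observation/model error covariance
    """
    cov_inv = np.linalg.inv(cov)
    diff = tb_db - tb_obs                                # (nprof, nchan)
    m2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)   # squared Mahalanobis distance
    w = np.exp(-0.5 * m2)
    return np.sum(w * x_db) / np.sum(w)

rng = np.random.default_rng(0)
tb_db = rng.normal(250.0, 10.0, size=(500, 5))           # toy database
x_db = rng.gamma(2.0, 2.0, size=500)
print(bayesian_retrieval(tb_db[0], tb_db, x_db, np.eye(5) * 4.0))
```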
A k-Vector Approach to Sampling, Interpolation, and Approximation
NASA Astrophysics Data System (ADS)
Mortari, Daniele; Rogers, Jonathan
2013-12-01
The k-vector search technique is a method designed to perform extremely fast range searching of large databases at computational cost independent of the size of the database. k-vector search algorithms have historically found application in satellite star-tracker navigation systems which index very large star catalogues repeatedly in the process of attitude estimation. Recently, the k-vector search algorithm has been applied to numerous other problem areas including non-uniform random variate sampling, interpolation of 1-D or 2-D tables, nonlinear function inversion, and solution of systems of nonlinear equations. This paper presents algorithms in which the k-vector search technique is used to solve each of these problems in a computationally-efficient manner. In instances where these tasks must be performed repeatedly on a static (or nearly-static) data set, the proposed k-vector-based algorithms offer an extremely fast solution technique that outperforms standard methods.
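A hedged sketch of the core k-vector idea as it is usually described: a sorted copy of the data, a "steering" line, and a vector of cumulative counts. The query bounds are mapped by the line directly to index bounds, so the search cost is essentially independent of the database size. The epsilon padding and the conservative one-bin widening of the candidate window are implementation details assumed for this sketch only.

```python
import numpy as np

def build_kvector(values):
    """Build a simplified k-vector index: sorted data s, a steering line
    z(i) = m*i + q, and counts k with k[i-1] = #elements <= z(i).
    Assumes len(values) >= 2."""
    s = np.sort(np.asarray(values, dtype=float))
    n = len(s)
    eps = 1e-9 * max(1.0, s[-1] - s[0])
    m = (s[-1] - s[0] + 2 * eps) / (n - 1)        # line slope (> 0)
    q = s[0] - eps - m                            # intercept: z(1) just below the minimum
    k = np.searchsorted(s, m * np.arange(1, n + 1) + q, side="right")
    return s, k, m, q

def range_search(index, lo, hi):
    """Return all stored values in [lo, hi]. The line converts the query
    bounds to index bounds, so only a small candidate window at the interval
    edges has to be filtered exactly."""
    s, k, m, q = index
    n = len(s)
    jb = int(np.clip(np.floor((lo - q) / m) - 1, 1, n))   # conservative lower bin
    jt = int(np.clip(np.ceil((hi - q) / m) + 1, 1, n))    # conservative upper bin
    cand = s[k[jb - 1]:k[jt - 1]]
    return cand[(cand >= lo) & (cand <= hi)]

idx = build_kvector(np.random.default_rng(1).uniform(0.0, 100.0, 10000))
print(range_search(idx, 10.0, 10.5))
```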
Cloud GIS Based Watershed Management
NASA Astrophysics Data System (ADS)
Bediroğlu, G.; Colak, H. E.
2017-11-01
In this study, we developed a Cloud GIS based watershed management system using a cloud computing architecture. Cloud GIS is used both as SaaS (Software as a Service) and as DaaS (Data as a Service): GIS analyses were run on the cloud to test the SaaS side, and GIS datasets were deployed on the cloud to test the DaaS side. We adopted a hybrid cloud computing model, making use of ready web-based mapping services hosted on the cloud (World Topology, Satellite Imageries). After creating geodatabases covering hydrology (rivers, lakes), soil maps, climate maps, rain maps, geology, and land use, we uploaded them to the system. The watershed of the study area was then delineated on the cloud using the ready-hosted topology maps. After uploading all the datasets to the system, we applied various GIS analyses and queries. The results show that Cloud GIS technology brings speed and efficiency to watershed management studies. Moreover, the system can easily be adapted for similar land analysis and management studies.
A new procedure for dynamic adaption of three-dimensional unstructured grids
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Strawn, Roger
1993-01-01
A new procedure is presented for the simultaneous coarsening and refinement of three-dimensional unstructured tetrahedral meshes. This algorithm allows for localized grid adaption that is used to capture aerodynamic flow features such as vortices and shock waves in helicopter flowfield simulations. The mesh-adaption algorithm is implemented in the C programming language and uses a data structure consisting of a series of dynamically-allocated linked lists. These lists allow the mesh connectivity to be rapidly reconstructed when individual mesh points are added and/or deleted. The algorithm allows the mesh to change in an anisotropic manner in order to efficiently resolve directional flow features. The procedure has been successfully implemented on a single processor of a Cray Y-MP computer. Two sample cases are presented involving three-dimensional transonic flow. Computed results show good agreement with conventional structured-grid solutions for the Euler equations.
NASA Astrophysics Data System (ADS)
Raymond, Neil; Iouchtchenko, Dmitri; Roy, Pierre-Nicholas; Nooijen, Marcel
2018-05-01
We introduce a new path integral Monte Carlo method for investigating nonadiabatic systems in thermal equilibrium and demonstrate an approach to reducing stochastic error. We derive a general path integral expression for the partition function in a product basis of continuous nuclear and discrete electronic degrees of freedom without the use of any mapping schemes. We separate our Hamiltonian into a harmonic portion and a coupling portion; the partition function can then be calculated as the product of a Monte Carlo estimator (of the coupling contribution to the partition function) and a normalization factor (that is evaluated analytically). A Gaussian mixture model is used to evaluate the Monte Carlo estimator in a computationally efficient manner. Using two model systems, we demonstrate our approach to reduce the stochastic error associated with the Monte Carlo estimator. We show that the selection of the harmonic oscillators comprising the sampling distribution directly affects the efficiency of the method. Our results demonstrate that our path integral Monte Carlo method's deviation from exact Trotter calculations is dominated by the choice of the sampling distribution. By improving the sampling distribution, we can drastically reduce the stochastic error leading to lower computational cost.
NASA Astrophysics Data System (ADS)
Sanyal, Tanmoy; Shell, M. Scott
2016-07-01
Bottom-up multiscale techniques are frequently used to develop coarse-grained (CG) models for simulations at extended length and time scales but are often limited by a compromise between computational efficiency and accuracy. The conventional approach to CG nonbonded interactions uses pair potentials which, while computationally efficient, can neglect the inherently multibody contributions of the local environment of a site to its energy, due to degrees of freedom that were coarse-grained out. This effect often causes the CG potential to depend strongly on the overall system density, composition, or other properties, which limits its transferability to states other than the one at which it was parameterized. Here, we propose to incorporate multibody effects into CG potentials through additional nonbonded terms, beyond pair interactions, that depend in a mean-field manner on local densities of different atomic species. This approach is analogous to embedded atom and bond-order models that seek to capture multibody electronic effects in metallic systems. We show that the relative entropy coarse-graining framework offers a systematic route to parameterizing such local density potentials. We then characterize this approach in the development of implicit solvation strategies for interactions between model hydrophobes in an aqueous environment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frank, R.N.
1990-02-28
The Inspection Shop at Lawrence Livermore Lab recently purchased a Sheffield Apollo RS50 Direct Computer Control Coordinate Measuring Machine. The performance of the machine was specified to conform to B89 standard which relies heavily upon using the measuring machine in its intended manner to verify its accuracy (rather than parametric tests). Although it would be possible to use the interactive measurement system to perform these tasks, a more thorough and efficient job can be done by creating Function Library programs for certain tasks which integrate Hewlett-Packard Basic 5.0 language and calls to proprietary analysis and machine control routines. This combination provides efficient use of the measuring machine with a minimum of keyboard input plus an analysis of the data with respect to the B89 Standard rather than a CMM analysis which would require subsequent interpretation. This paper discusses some characteristics of the Sheffield machine control and analysis software and my use of H-P Basic language to create automated measurement programs to support the B89 performance evaluation of the CMM. 1 ref.
Cellular automata-based modelling and simulation of biofilm structure on multi-core computers.
Skoneczny, Szymon
2015-01-01
The article presents a mathematical model of biofilm growth for aerobic biodegradation of a toxic carbonaceous substrate. Modelling of biofilm growth has fundamental significance in numerous processes of biotechnology and mathematical modelling of bioreactors. The process following double-substrate kinetics with substrate inhibition proceeding in a biofilm has not been modelled so far by means of cellular automata. Each process in the model proposed, i.e. diffusion of substrates, uptake of substrates, growth and decay of microorganisms and biofilm detachment, is simulated in a discrete manner. It was shown that for flat biofilm of constant thickness, the results of the presented model agree with those of a continuous model. The primary outcome of the study was to propose a mathematical model of biofilm growth; however a considerable amount of focus was also placed on the development of efficient algorithms for its solution. Two parallel algorithms were created, differing in the way computations are distributed. Computer programs were created using OpenMP Application Programming Interface for C++ programming language. Simulations of biofilm growth were performed on three high-performance computers. Speed-up coefficients of computer programs were compared. Both algorithms enabled a significant reduction of computation time. It is important, inter alia, in modelling and simulation of bioreactor dynamics.
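The following toy update illustrates the kind of discrete per-cell step the abstract lists (substrate diffusion, uptake, growth and decay), using Haldane-type substrate-inhibited kinetics for the carbon source and Monod kinetics for oxygen. All parameter values and the periodic boundaries are placeholders, and the OpenMP parallelization discussed in the paper is not shown.

```python
import numpy as np

def ca_step(S, O, X, dt=0.1, D=1.0, mu_max=0.5,
            Ks=0.2, Ki=5.0, Ko=0.05, Ys=0.5, Yo=0.3, kd=0.01):
    """One explicit diffusion-uptake-growth update on a 2D grid (toy sketch).
    S, O: substrate and oxygen concentration fields; X: biomass field."""
    lap = lambda C: (np.roll(C, 1, 0) + np.roll(C, -1, 0) +
                     np.roll(C, 1, 1) + np.roll(C, -1, 1) - 4.0 * C)
    mu = mu_max * S / (Ks + S + S**2 / Ki) * O / (Ko + O)   # substrate-inhibited kinetics
    growth = mu * X
    S = np.clip(S + dt * (D * lap(S) - growth / Ys), 0.0, None)
    O = np.clip(O + dt * (D * lap(O) - growth / Yo), 0.0, None)
    X = np.clip(X + dt * (growth - kd * X), 0.0, None)
    return S, O, X

S = np.full((64, 64), 2.0)
O = np.full((64, 64), 0.5)
X = np.zeros((64, 64)); X[60:, :] = 1.0                     # biofilm seeded at the bottom
for _ in range(100):
    S, O, X = ca_step(S, O, X)
print(float(X.sum()))
```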
Chi, Chia-Fen; Tseng, Li-Kai; Jang, Yuh
2012-07-01
Many disabled individuals lack extensive knowledge about assistive technology, which could help them use computers. In 1997, Denis Anson developed a decision tree of 49 evaluative questions designed to evaluate the functional capabilities of the disabled user and choose an appropriate combination of assistive devices, from a selection of 26, that enable the individual to use a computer. In general, occupational therapists guide the disabled users through this process. They often have to go over repetitive questions in order to find an appropriate device. A disabled user may require an alphanumeric entry device, a pointing device, an output device, a performance enhancement device, or some combination of these. Therefore, the current research eliminates redundant questions and divides Anson's decision tree into multiple independent subtrees to meet the actual demand of computer users with disabilities. The modified decision tree was tested by six disabled users to prove it can determine a complete set of assistive devices with a smaller number of evaluative questions. The means to insert new categories of computer-related assistive devices was included to ensure the decision tree can be expanded and updated. The current decision tree can help the disabled users and assistive technology practitioners to find appropriate computer-related assistive devices that meet with clients' individual needs in an efficient manner.
From Physics Model to Results: An Optimizing Framework for Cross-Architecture Code Generation
Blazewicz, Marek; Hinder, Ian; Koppelman, David M.; ...
2013-01-01
Starting from a high-level problem description in terms of partial differential equations using abstract tensor notation, the Chemora framework discretizes, optimizes, and generates complete high performance codes for a wide range of compute architectures. Chemora extends the capabilities of Cactus, facilitating the usage of large-scale CPU/GPU systems in an efficient manner for complex applications, without low-level code tuning. Chemora achieves parallelism through MPI and multi-threading, combining OpenMP and CUDA. Optimizations include high-level code transformations, efficient loop traversal strategies, dynamically selected data and instruction cache usage strategies, and JIT compilation of GPU code tailored to the problem characteristics. The discretization is based on higher-order finite differences on multi-block domains. Chemora's capabilities are demonstrated by simulations of black hole collisions. This problem provides an acid test of the framework, as the Einstein equations contain hundreds of variables and thousands of terms.
Management of data from clinical trials using the ArchiMed system.
Duftschmid, Georg; Gall, Walter; Eigenbauer, Ernst; Dorda, Wolfgang
2002-06-01
Clinical trials constitute a key source of medical research and are therefore conducted on a regular basis at university hospitals. The professional execution of trials requires, among other things, a repertoire of tools that support efficient data management. Tasks that are essential for efficient data management in clinical trials include the following: the design of the trial database, the design of electronic case report forms, recruiting patients, collection of data, and statistical analysis. The present article reports the manner in which these tasks are supported by the ArchiMed system at the University of Vienna and Graz Medical Schools. ArchiMed is customized for clinical end users, allowing them to autonomously manage their clinical trials without having to consult computer experts. An evaluation of the ArchiMed system in 12 trials recently conducted at the University of Vienna Medical School shows that the individual system functions can be usefully applied for data management in clinical trials.
paraGSEA: a scalable approach for large-scale gene expression profiling
Peng, Shaoliang; Yang, Shunyun
2017-01-01
More studies have been conducted using gene expression similarity to identify functional connections among genes, diseases and drugs. Gene Set Enrichment Analysis (GSEA) is a powerful analytical method for interpreting gene expression data. However, due to the enormous computational overhead of the significance-level estimation and multiple-hypothesis-testing steps, its scalability and efficiency are poor on large-scale datasets. We proposed paraGSEA for efficient large-scale transcriptome data analysis. By optimization, the overall time complexity of paraGSEA is reduced from O(mn) to O(m+n), where m is the length of the gene sets and n is the length of the gene expression profiles, which contributes a more than 100-fold increase in performance compared with other popular GSEA implementations such as GSEA-P, SAM-GS and GSEA2. By further parallelization, a near-linear speed-up is gained on both workstations and clusters in an efficient manner with high scalability and performance on large-scale datasets. The analysis time of the whole LINCS phase I dataset (GSE92742) was reduced to nearly half an hour on a 1000-node cluster of Tianhe-2, or to within 120 hours on a 96-core workstation. The source code of paraGSEA is licensed under the GPLv3 and available at http://github.com/ysycloud/paraGSEA. PMID:28973463
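For readers unfamiliar with the per-profile cost that paraGSEA optimizes, the sketch below computes the standard weighted Kolmogorov-Smirnov enrichment score for one gene set against one ranked profile; this is the generic GSEA statistic, not paraGSEA's optimized or parallel implementation.

```python
import numpy as np

def enrichment_score(ranked_genes, correlations, gene_set, p=1.0):
    """Standard (weighted Kolmogorov-Smirnov) GSEA enrichment score for one
    ranked expression profile: a running sum that steps up at gene-set hits
    (weighted by |correlation|^p) and down at misses."""
    ranked_genes = np.asarray(ranked_genes)
    r = np.abs(np.asarray(correlations, dtype=float)) ** p
    in_set = np.isin(ranked_genes, list(gene_set))
    n, n_hit = len(ranked_genes), int(in_set.sum())
    p_hit = np.cumsum(np.where(in_set, r, 0.0)) / r[in_set].sum()
    p_miss = np.cumsum(~in_set) / (n - n_hit)
    running = p_hit - p_miss
    return running[np.argmax(np.abs(running))]   # signed maximum deviation

genes = np.array([f"g{i}" for i in range(1000)])
corr = np.linspace(2.0, -2.0, 1000)              # genes assumed already ranked
print(enrichment_score(genes, corr, {"g1", "g2", "g3", "g10", "g500"}))
```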
‘Computational toxicology’ is a broad term that encompasses all manner of computer-facilitated informatics, data-mining, and modeling endeavors in relation to toxicology, including exposure modeling, physiologically based pharmacokinetic (PBPK) modeling, dose-response modeling, ...
Computational Protein Engineering: Bridging the Gap between Rational Design and Laboratory Evolution
Barrozo, Alexandre; Borstnar, Rok; Marloie, Gaël; Kamerlin, Shina Caroline Lynn
2012-01-01
Enzymes are tremendously proficient catalysts, which can be used as extracellular catalysts for a whole host of processes, from chemical synthesis to the generation of novel biofuels. For them to be more amenable to the needs of biotechnology, however, it is often necessary to be able to manipulate their physico-chemical properties in an efficient and streamlined manner, and, ideally, to be able to train them to catalyze completely new reactions. Recent years have seen an explosion of interest in different approaches to achieve this, both in the laboratory, and in silico. There remains, however, a gap between current approaches to computational enzyme design, which have primarily focused on the early stages of the design process, and laboratory evolution, which is an extremely powerful tool for enzyme redesign, but will always be limited by the vastness of sequence space combined with the low frequency for desirable mutations. This review discusses different approaches towards computational enzyme design and demonstrates how combining newly developed screening approaches that can rapidly predict potential mutation “hotspots” with approaches that can quantitatively and reliably dissect the catalytic step can bridge the gap that currently exists between computational enzyme design and laboratory evolution studies. PMID:23202907
Usability of a patient education and motivation tool using heuristic evaluation.
Joshi, Ashish; Arora, Mohit; Dai, Liwei; Price, Kathleen; Vizer, Lisa; Sears, Andrew
2009-11-06
Computer-mediated educational applications can provide a self-paced, interactive environment to deliver educational content to individuals about their health condition. These programs have been used to deliver health-related information about a variety of topics, including breast cancer screening, asthma management, and injury prevention. We have designed the Patient Education and Motivation Tool (PEMT), an interactive computer-based educational program based on behavioral, cognitive, and humanistic learning theories. The tool is designed to educate users and has three key components: screening, learning, and evaluation. The objective of this tutorial is to illustrate a heuristic evaluation using a computer-based patient education program (PEMT) as a case study. The aims were to improve the usability of PEMT through heuristic evaluation of the interface; to report the results of these usability evaluations; to make changes based on the findings of the usability experts; and to describe the benefits and limitations of applying usability evaluations to PEMT. PEMT was evaluated by three usability experts using Nielsen's usability heuristics while reviewing the interface to produce a list of heuristic violations with severity ratings. The violations were sorted by heuristic and ordered from most to least severe within each heuristic. A total of 127 violations were identified with a median severity of 3 (range 0 to 4 with 0 = no problem to 4 = catastrophic problem). Results showed 13 violations for visibility (median severity = 2), 38 violations for match between system and real world (median severity = 2), 6 violations for user control and freedom (median severity = 3), 34 violations for consistency and standards (median severity = 2), 11 violations for error severity (median severity = 3), 1 violation for recognition and control (median severity = 3), 7 violations for flexibility and efficiency (median severity = 2), 9 violations for aesthetic and minimalist design (median severity = 2), 4 violations for help users recognize, diagnose, and recover from errors (median severity = 3), and 4 violations for help and documentation (median severity = 4). We describe the heuristic evaluation method employed to assess the usability of PEMT, a method which uncovers heuristic violations in the interface design in a quick and efficient manner. Bringing together usability experts and health professionals to evaluate a computer-mediated patient education program can help to identify problems in a timely manner. This makes this method particularly well suited to the iterative design process when developing other computer-mediated health education programs. Heuristic evaluations provided a means to assess the user interface of PEMT.
Usability of a Patient Education and Motivation Tool Using Heuristic Evaluation
Arora, Mohit; Dai, Liwei; Price, Kathleen; Vizer, Lisa; Sears, Andrew
2009-01-01
Background Computer-mediated educational applications can provide a self-paced, interactive environment to deliver educational content to individuals about their health condition. These programs have been used to deliver health-related information about a variety of topics, including breast cancer screening, asthma management, and injury prevention. We have designed the Patient Education and Motivation Tool (PEMT), an interactive computer-based educational program based on behavioral, cognitive, and humanistic learning theories. The tool is designed to educate users and has three key components: screening, learning, and evaluation. Objective The objective of this tutorial is to illustrate a heuristic evaluation using a computer-based patient education program (PEMT) as a case study. The aims were to improve the usability of PEMT through heuristic evaluation of the interface; to report the results of these usability evaluations; to make changes based on the findings of the usability experts; and to describe the benefits and limitations of applying usability evaluations to PEMT. Methods PEMT was evaluated by three usability experts using Nielsen’s usability heuristics while reviewing the interface to produce a list of heuristic violations with severity ratings. The violations were sorted by heuristic and ordered from most to least severe within each heuristic. Results A total of 127 violations were identified with a median severity of 3 (range 0 to 4 with 0 = no problem to 4 = catastrophic problem). Results showed 13 violations for visibility (median severity = 2), 38 violations for match between system and real world (median severity = 2), 6 violations for user control and freedom (median severity = 3), 34 violations for consistency and standards (median severity = 2), 11 violations for error severity (median severity = 3), 1 violation for recognition and control (median severity = 3), 7 violations for flexibility and efficiency (median severity = 2), 9 violations for aesthetic and minimalist design (median severity = 2), 4 violations for help users recognize, diagnose, and recover from errors (median severity = 3), and 4 violations for help and documentation (median severity = 4). Conclusion We describe the heuristic evaluation method employed to assess the usability of PEMT, a method which uncovers heuristic violations in the interface design in a quick and efficient manner. Bringing together usability experts and health professionals to evaluate a computer-mediated patient education program can help to identify problems in a timely manner. This makes this method particularly well suited to the iterative design process when developing other computer-mediated health education programs. Heuristic evaluations provided a means to assess the user interface of PEMT. PMID:19897458
PLATSIM: An efficient linear simulation and analysis package for large-order flexible systems
NASA Technical Reports Server (NTRS)
Maghami, Peiman; Kenny, Sean P.; Giesy, Daniel P.
1995-01-01
PLATSIM is a software package designed to provide efficient time and frequency domain analysis of large-order generic space platforms implemented with any linear time-invariant control system. Time domain analysis provides simulations of the overall spacecraft response levels due to either onboard or external disturbances. The time domain results can then be processed by the jitter analysis module to assess the spacecraft's pointing performance in a computationally efficient manner. The resulting jitter analysis algorithms have produced an increase in speed of several orders of magnitude over the brute force approach of sweeping minima and maxima. Frequency domain analysis produces frequency response functions for uncontrolled and controlled platform configurations. The latter represents an enabling technology for large-order flexible systems. PLATSIM uses a sparse matrix formulation for the spacecraft dynamics model which makes both the time and frequency domain operations quite efficient, particularly when a large number of modes are required to capture the true dynamics of the spacecraft. The package is written in MATLAB script language. A graphical user interface (GUI) is included in the PLATSIM software package. This GUI uses MATLAB's Handle graphics to provide a convenient way for setting simulation and analysis parameters.
Collaborative Indoor Access Point Localization Using Autonomous Mobile Robot Swarm.
Awad, Fahed; Naserllah, Muhammad; Omar, Ammar; Abu-Hantash, Alaa; Al-Taj, Abrar
2018-01-31
Localization of access points has become an important research problem due to the wide range of applications it addresses such as dismantling critical security threats caused by rogue access points or optimizing wireless coverage of access points within a service area. Existing proposed solutions have mostly relied on theoretical hypotheses or computer simulation to demonstrate the efficiency of their methods. The techniques that rely on estimating the distance using samples of the received signal strength usually assume prior knowledge of the signal propagation characteristics of the indoor environment in hand and tend to take a relatively large number of uniformly distributed random samples. This paper presents an efficient and practical collaborative approach to detect the location of an access point in an indoor environment without any prior knowledge of the environment. The proposed approach comprises a swarm of wirelessly connected mobile robots that collaboratively and autonomously collect a relatively small number of non-uniformly distributed random samples of the access point's received signal strength. These samples are used to efficiently and accurately estimate the location of the access point. The experimental testing verified that the proposed approach can identify the location of the access point in an accurate and efficient manner.
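The abstract leaves the fitting step implicit; the sketch below shows one plausible way the collected samples could be turned into a location estimate, by fitting a log-distance path-loss model with a brute-force grid search. The path-loss parameters, the grid, and the noise model are assumptions; the paper's contribution is the collaborative, non-uniform sampling strategy, which is not reproduced here.

```python
import numpy as np

def estimate_ap_location(sample_xy, rss, p0=-40.0, path_loss_exp=3.0,
                         area=(0.0, 50.0), step=0.25):
    """Fit RSS(d) = p0 - 10*n*log10(d) by least squares over a grid of
    candidate access-point positions (hedged illustrative fitting step)."""
    xs = np.arange(area[0], area[1] + step, step)
    gx, gy = np.meshgrid(xs, xs, indexing="ij")
    best, best_err = None, np.inf
    for sx, sy in zip(gx.ravel(), gy.ravel()):
        d = np.hypot(sample_xy[:, 0] - sx, sample_xy[:, 1] - sy)
        pred = p0 - 10.0 * path_loss_exp * np.log10(np.maximum(d, 0.1))
        err = np.mean((rss - pred) ** 2)
        if err < best_err:
            best, best_err = (sx, sy), err
    return best

rng = np.random.default_rng(2)
ap = np.array([31.0, 18.0])                                  # true AP position (toy)
pts = rng.uniform(0, 50, size=(25, 2))                       # robot sample positions
d = np.hypot(*(pts - ap).T)
rss = -40.0 - 30.0 * np.log10(d) + rng.normal(0, 1.0, 25)    # noisy measurements
print(estimate_ap_location(pts, rss))
```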
Collaborative Indoor Access Point Localization Using Autonomous Mobile Robot Swarm
Awad, Fahed; Naserllah, Muhammad; Omar, Ammar; Abu-Hantash, Alaa; Al-Taj, Abrar
2018-01-01
Localization of access points has become an important research problem due to the wide range of applications it addresses such as dismantling critical security threats caused by rogue access points or optimizing wireless coverage of access points within a service area. Existing proposed solutions have mostly relied on theoretical hypotheses or computer simulation to demonstrate the efficiency of their methods. The techniques that rely on estimating the distance using samples of the received signal strength usually assume prior knowledge of the signal propagation characteristics of the indoor environment in hand and tend to take a relatively large number of uniformly distributed random samples. This paper presents an efficient and practical collaborative approach to detect the location of an access point in an indoor environment without any prior knowledge of the environment. The proposed approach comprises a swarm of wirelessly connected mobile robots that collaboratively and autonomously collect a relatively small number of non-uniformly distributed random samples of the access point’s received signal strength. These samples are used to efficiently and accurately estimate the location of the access point. The experimental testing verified that the proposed approach can identify the location of the access point in an accurate and efficient manner. PMID:29385042
GBOOST: a GPU-based tool for detecting gene-gene interactions in genome-wide case control studies.
Yung, Ling Sing; Yang, Can; Wan, Xiang; Yu, Weichuan
2011-05-01
Collecting millions of genetic variations is feasible with the advanced genotyping technology. With a huge amount of genetic variations data in hand, developing efficient algorithms to carry out the gene-gene interaction analysis in a timely manner has become one of the key problems in genome-wide association studies (GWAS). Boolean operation-based screening and testing (BOOST), a recent work in GWAS, completes gene-gene interaction analysis in 2.5 days on a desktop computer. Compared with central processing units (CPUs), graphic processing units (GPUs) are highly parallel hardware and provide massive computing resources. We are, therefore, motivated to use GPUs to further speed up the analysis of gene-gene interactions. We implement the BOOST method based on a GPU framework and name it GBOOST. GBOOST achieves a 40-fold speedup compared with BOOST. It completes the analysis of Wellcome Trust Case Control Consortium Type 2 Diabetes (WTCCC T2D) genome data within 1.34 h on a desktop computer equipped with Nvidia GeForce GTX 285 display card. GBOOST code is available at http://bioinformatics.ust.hk/BOOST.html#GBOOST.
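The Boolean encoding that BOOST and GBOOST rely on can be illustrated compactly: each SNP's genotypes become three Boolean masks, and every cell of the pairwise contingency table is a Boolean AND followed by a bit count. The sketch below shows only this CPU-side idea; the GPU kernels and the BOOST test statistic itself are not reproduced.

```python
import numpy as np

def genotype_masks(g):
    """Encode one SNP's genotypes (0/1/2 per sample) as three Boolean masks,
    the representation that lets contingency tables be built with fast
    Boolean operations."""
    return [g == k for k in (0, 1, 2)]

def interaction_table(g1, g2, is_case):
    """3 x 3 x 2 contingency table for a SNP pair, split by control/case."""
    m1, m2 = genotype_masks(g1), genotype_masks(g2)
    table = np.zeros((3, 3, 2), dtype=np.int64)
    for a in range(3):
        for b in range(3):
            joint = m1[a] & m2[b]
            table[a, b, 1] = np.count_nonzero(joint & is_case)
            table[a, b, 0] = np.count_nonzero(joint & ~is_case)
    return table

rng = np.random.default_rng(5)
n = 5000
g1, g2 = rng.integers(0, 3, n), rng.integers(0, 3, n)
is_case = rng.random(n) < 0.5
t = interaction_table(g1, g2, is_case)
print(t.sum(), t[:, :, 1].sum())        # all samples accounted for; number of cases
```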
Dynamic resource allocation scheme for distributed heterogeneous computer systems
NASA Technical Reports Server (NTRS)
Liu, Howard T. (Inventor); Silvester, John A. (Inventor)
1991-01-01
This invention relates to resource allocation in computer systems, and more particularly, to a method and associated apparatus for shortening response time and improving the efficiency of a heterogeneous distributed networked computer system by reallocating jobs queued up for busy nodes to idle, or less busy, nodes. In accordance with the algorithm (SIDA for short), load sharing is initiated by the server device in a manner such that extra overhead is not imposed on the system during heavily loaded conditions. The algorithm employed in the present invention uses a dual-mode, server-initiated approach. Jobs are transferred from heavily burdened nodes (i.e., over a high threshold limit) to lightly burdened nodes at the initiation of the receiving node when: (1) a job finishes at a node which is burdened below a pre-established threshold level, or (2) a node is idle for a period of time as established by a wakeup timer at the node. The invention uses a combination of the local queue length and the local service rate ratio at each node as the workload indicator.
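A minimal sketch of the dual-mode, receiver-initiated transfer policy described above; the thresholds, the wakeup period, and the single-queue workload indicator are placeholders rather than the patented algorithm's actual parameters.

```python
HIGH, LOW, WAKEUP = 8, 2, 5.0   # illustrative thresholds and idle-timer period

class Node:
    def __init__(self, name):
        self.name, self.queue, self.idle_since = name, [], 0.0

def try_pull(receiver, nodes, now):
    """Receiver-initiated transfer: a lightly loaded node pulls work from the
    most heavily loaded node above the HIGH threshold."""
    donors = [n for n in nodes if n is not receiver and len(n.queue) > HIGH]
    if not donors:
        return
    donor = max(donors, key=lambda n: len(n.queue))
    receiver.queue.append(donor.queue.pop())          # migrate one queued job
    receiver.idle_since = now

def on_job_finished(node, nodes, now):
    if len(node.queue) < LOW:                         # trigger 1: load below threshold
        try_pull(node, nodes, now)

def on_timer(node, nodes, now):
    if not node.queue and now - node.idle_since >= WAKEUP:   # trigger 2: idle too long
        try_pull(node, nodes, now)

nodes = [Node(f"n{i}") for i in range(4)]
nodes[0].queue = [f"job{i}" for i in range(12)]       # one overloaded node
for t in range(1, 10):
    for n in nodes[1:]:
        on_timer(n, nodes, float(t))
print([len(n.queue) for n in nodes])                  # work spreads to idle nodes
```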
NASA Astrophysics Data System (ADS)
Pan, M.-Ch.; Chu, W.-Ch.; Le, Duc-Do
2016-12-01
The paper presents an alternative Vold-Kalman filter order tracking (VKF_OT) method, i.e., an adaptive angular-velocity VKF_OT technique, to extract and characterize order components in an adaptive manner for the condition monitoring and fault diagnosis of rotary machinery. The order/spectral waveforms to be tracked are solved recursively by a Kalman filter based on one-step state prediction. The paper comprises the theoretical derivation of the computation scheme, its numerical implementation, and a parameter investigation. Comparisons of the adaptive VKF_OT scheme with two other schemes are performed by processing synthetic signals of designated order components. Processing parameters such as the weighting factor and the correlation matrix of the process noise, and data conditions such as the sampling frequency, which influence tracking behavior, are explored. The merits of the proposed scheme, such as its adaptive processing nature and computational efficiency, are addressed, although the computation was performed under off-line conditions. The proposed scheme can simultaneously extract multiple spectral components and effectively decouple close and crossing orders associated with multi-axial reference rotating speeds.
Bio-inspired approach to multistage image processing
NASA Astrophysics Data System (ADS)
Timchenko, Leonid I.; Pavlov, Sergii V.; Kokryatskaya, Natalia I.; Poplavska, Anna A.; Kobylyanska, Iryna M.; Burdenyuk, Iryna I.; Wójcik, Waldemar; Uvaysova, Svetlana; Orazbekov, Zhassulan; Kashaganova, Gulzhan
2017-08-01
Multistage integration of visual information in the brain allows people to respond quickly to most significant stimuli while preserving the ability to recognize small details in the image. Implementation of this principle in technical systems can lead to more efficient processing procedures. The multistage approach to image processing, described in this paper, comprises main types of cortical multistage convergence. One of these types occurs within each visual pathway and the other between the pathways. This approach maps input images into a flexible hierarchy which reflects the complexity of the image data. The procedures of temporal image decomposition and hierarchy formation are described in mathematical terms. The multistage system highlights spatial regularities, which are passed through a number of transformational levels to generate a coded representation of the image which encapsulates, in a computer manner, structure on different hierarchical levels in the image. At each processing stage a single output result is computed to allow a very quick response from the system. The result is represented as an activity pattern, which can be compared with previously computed patterns on the basis of the closest match.
Classification with spatio-temporal interpixel class dependency contexts
NASA Technical Reports Server (NTRS)
Jeon, Byeungwoo; Landgrebe, David A.
1992-01-01
A contextual classifier which can utilize both spatial and temporal interpixel dependency contexts is investigated. After spatial and temporal neighbors are defined, a general form of the maximum a posteriori spatiotemporal contextual classifier is derived. This contextual classifier is then simplified under several assumptions. Joint prior probabilities of the classes of each pixel and its spatial neighbors are modeled by a Gibbs random field. The classification is performed in a recursive manner to allow a computationally efficient contextual classification. Experimental results with bitemporal TM data show significant improvement in classification accuracy over noncontextual pixelwise classifiers. This spatiotemporal contextual classifier should find use in many applications of remote sensing, especially when classification accuracy is important.
A FFT-based formulation for discrete dislocation dynamics in heterogeneous media
NASA Astrophysics Data System (ADS)
Bertin, N.; Capolungo, L.
2018-02-01
In this paper, an extension of the DDD-FFT approach presented in [1] is developed for heterogeneous elasticity. For such a purpose, an iterative spectral formulation in which convolutions are calculated in the Fourier space is developed to solve for the mechanical state associated with the discrete eigenstrain-based microstructural representation. With this, the heterogeneous DDD-FFT approach is capable of treating anisotropic and heterogeneous elasticity in a computationally efficient manner. In addition, a GPU implementation is presented to allow for further acceleration. As a first example, the approach is used to investigate the interaction between dislocations and second-phase particles, thereby demonstrating its ability to inherently incorporate image forces arising from elastic inhomogeneities.
NASA Astrophysics Data System (ADS)
Zhang, Shunli; Zhang, Dinghua; Gong, Hao; Ghasemalizadeh, Omid; Wang, Ge; Cao, Guohua
2014-11-01
Iterative algorithms, such as the algebraic reconstruction technique (ART), are popular for image reconstruction. For iterative reconstruction, the area integral model (AIM) is more accurate for better reconstruction quality than the line integral model (LIM). However, the computation of the system matrix for AIM is more complex and time-consuming than that for LIM. Here, we propose a fast and accurate method to compute the system matrix for AIM. First, we calculate the intersection of each boundary line of a narrow fan-beam with pixels in a recursive and efficient manner. Then, by grouping the beam-pixel intersection area into six types according to the slopes of the two boundary lines, we analytically compute the intersection area of the narrow fan-beam with the pixels in a simple algebraic fashion. Overall, experimental results show that our method is about three times faster than the Siddon algorithm and about two times faster than the distance-driven model (DDM) in computation of the system matrix. The reconstruction speed of our AIM-based ART is also faster than the LIM-based ART that uses the Siddon algorithm and DDM-based ART, for one iteration. The fast reconstruction speed of our method was accomplished without compromising the image quality.
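Once the system matrix has been built (whether by a line-, distance-, or area-based model), the ART iteration itself is simple; the sketch below shows the standard row-action update on a random toy system, assuming the area-integral matrix from the abstract has already been computed.

```python
import numpy as np

def art_reconstruct(A, b, n_iter=20, relax=0.5):
    """Standard ART (Kaczmarz-type) row-action updates:
    x <- x + relax * a_i * (b_i - a_i . x) / ||a_i||^2, sweeping over rows.
    A: system matrix (rows = beams, columns = pixels); b: measurements."""
    x = np.zeros(A.shape[1])
    row_norm2 = np.einsum("ij,ij->i", A, A)
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            if row_norm2[i] > 0:
                x += relax * (b[i] - A[i] @ x) / row_norm2[i] * A[i]
    return x

# Tiny illustration with a random "system matrix" and a known image.
rng = np.random.default_rng(3)
A = rng.uniform(0, 1, size=(200, 64))       # e.g., intersection areas per beam/pixel
x_true = rng.uniform(0, 1, 64)
print(np.linalg.norm(art_reconstruct(A, A @ x_true) - x_true))
```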
He, Chenlong; Feng, Zuren; Ren, Zhigang
2018-02-03
For Wireless Sensor Networks (WSNs), the Voronoi partition of a region is a challenging problem owing to the limited sensing ability of each sensor and the distributed organization of the network. In this paper, an algorithm is proposed for each sensor having a limited sensing range to compute its limited Voronoi cell autonomously, so that the limited Voronoi partition of the entire WSN is generated in a distributed manner. Inspired by Graham's Scan (GS) algorithm used to compute the convex hull of a point set, the limited Voronoi cell of each sensor is obtained by sequentially scanning two consecutive bisectors between the sensor and its neighbors. The proposed algorithm called the Boundary Scan (BS) algorithm has a lower computational complexity than the existing Range-Constrained Voronoi Cell (RCVC) algorithm and reaches the lower bound of the computational complexity of the algorithms used to solve the problem of this kind. Moreover, it also improves the time efficiency of a key step in the Adjust-Sensing-Radius (ASR) algorithm used to compute the exact Voronoi cell. Extensive numerical simulations are performed to demonstrate the correctness and effectiveness of the BS algorithm. The distributed realization of the BS combined with a localization algorithm in WSNs is used to justify the WSN nature of the proposed algorithm.
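For orientation, the sketch below computes a limited Voronoi cell the straightforward way: start from a polygonal approximation of the sensing disk and clip it against the perpendicular bisector of the sensor and each in-range neighbor. This is a simple baseline under assumed geometry, not the Boundary Scan algorithm itself, which achieves a lower computational complexity.

```python
import numpy as np

def clip_halfplane(poly, a, c):
    """Clip a convex polygon (list of 2-D vertices, CCW) to the half-plane
    a.x <= c, using the Sutherland-Hodgman rule for one clipping line."""
    out = []
    for i in range(len(poly)):
        p, q = poly[i], poly[(i + 1) % len(poly)]
        pin, qin = np.dot(a, p) <= c, np.dot(a, q) <= c
        if pin:
            out.append(p)
        if pin != qin:                                  # edge crosses the boundary
            t = (c - np.dot(a, p)) / np.dot(a, q - p)
            out.append(p + t * (q - p))
    return out

def limited_voronoi_cell(sensor, neighbors, r_sense, n_arc=64):
    """Baseline limited Voronoi cell: polygonal sensing disk clipped by the
    bisector half-plane of the sensor and each neighbor."""
    ang = np.linspace(0.0, 2.0 * np.pi, n_arc, endpoint=False)
    cell = [sensor + r_sense * np.array([np.cos(t), np.sin(t)]) for t in ang]
    for nb in neighbors:
        a = nb - sensor                                 # bisector normal
        c = np.dot(a, (sensor + nb) / 2.0)              # keep the sensor's side
        cell = clip_halfplane(cell, a, c)
        if not cell:
            break
    return np.array(cell)

sensor = np.array([0.0, 0.0])
neighbors = [np.array([1.5, 0.0]), np.array([0.0, -1.2]), np.array([-2.5, 2.5])]
cell = limited_voronoi_cell(sensor, neighbors, r_sense=2.0)
print(len(cell), cell.mean(axis=0))
```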
Distributed Algorithm for Voronoi Partition of Wireless Sensor Networks with a Limited Sensing Range
Feng, Zuren; Ren, Zhigang
2018-01-01
For Wireless Sensor Networks (WSNs), the Voronoi partition of a region is a challenging problem owing to the limited sensing ability of each sensor and the distributed organization of the network. In this paper, an algorithm is proposed for each sensor having a limited sensing range to compute its limited Voronoi cell autonomously, so that the limited Voronoi partition of the entire WSN is generated in a distributed manner. Inspired by Graham’s Scan (GS) algorithm used to compute the convex hull of a point set, the limited Voronoi cell of each sensor is obtained by sequentially scanning two consecutive bisectors between the sensor and its neighbors. The proposed algorithm called the Boundary Scan (BS) algorithm has a lower computational complexity than the existing Range-Constrained Voronoi Cell (RCVC) algorithm and reaches the lower bound of the computational complexity of the algorithms used to solve the problem of this kind. Moreover, it also improves the time efficiency of a key step in the Adjust-Sensing-Radius (ASR) algorithm used to compute the exact Voronoi cell. Extensive numerical simulations are performed to demonstrate the correctness and effectiveness of the BS algorithm. The distributed realization of the BS combined with a localization algorithm in WSNs is used to justify the WSN nature of the proposed algorithm. PMID:29401649
NASA Astrophysics Data System (ADS)
Lari, Z.; El-Sheimy, N.
2017-09-01
In recent years, the increasing incidence of climate-related disasters has tremendously affected our environment. In order to effectively manage and reduce the dramatic impacts of such events, the development of timely disaster management plans is essential. Since these disasters are spatial phenomena, timely provision of geospatial information is crucial for the effective development of response and management plans. Due to the inaccessibility of the affected areas and the limited budget of first responders, timely acquisition of the required geospatial data for these applications is usually possible only using low-cost imaging and georeferencing sensors mounted on unmanned platforms. Despite rapid collection of the required data using these systems, available processing techniques are not yet capable of delivering geospatial information to responders and decision makers in a timely manner. To address this issue, this paper introduces a new technique for dense 3D reconstruction of the affected scenes which can deliver and improve the needed geospatial information incrementally. This approach is implemented based on prior 3D knowledge of the scene and employs computationally efficient 2D triangulation, feature descriptor, feature matching, and point verification techniques to optimize and speed up the 3D dense scene reconstruction procedure. To verify the feasibility and computational efficiency of the proposed approach, an experiment using a set of consecutive images collected onboard a UAV platform and a prior low-density airborne laser scan over the same area is conducted, and step-by-step results are provided. A comparative analysis of the proposed approach and an available image-based dense reconstruction technique is also conducted to prove the computational efficiency and competency of this technique for delivering geospatial information with pre-specified accuracy.
A cellular automata based FPGA realization of a new metaheuristic bat-inspired algorithm
NASA Astrophysics Data System (ADS)
Progias, Pavlos; Amanatiadis, Angelos A.; Spataro, William; Trunfio, Giuseppe A.; Sirakoulis, Georgios Ch.
2016-10-01
Optimization algorithms are often inspired by processes occurring in nature, such as animal behavioral patterns. The main concern with implementing such algorithms in software is the large amount of processing power they require. In contrast to software code, which can only perform calculations in a serial manner, an implementation in hardware, exploiting the inherent parallelism of single-purpose processors, can prove to be much more efficient in both speed and energy consumption. Furthermore, the use of Cellular Automata (CA) in such an implementation would be efficient both as a model for natural processes and as a computational paradigm that maps well onto hardware. In this paper, we propose a VHDL implementation of a metaheuristic algorithm inspired by the echolocation behavior of bats. More specifically, the CA model is inspired by the metaheuristic algorithm proposed earlier in the literature, which can be considered at least as efficient as other existing optimization algorithms. The function of the FPGA implementation of our algorithm is explained in full detail, and results of our simulations are also demonstrated.
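For context, the sketch below is a plain software version of the echolocation-inspired metaheuristic the paper maps onto cellular automata and FPGA hardware: frequency-tuned velocity updates plus loudness and pulse-rate schedules. The parameter values are conventional defaults, and nothing of the VHDL/CA implementation is represented here.

```python
import numpy as np

def bat_algorithm(obj, dim=2, n_bats=20, n_iter=200, fmin=0.0, fmax=2.0,
                  alpha=0.9, gamma=0.9, lb=-5.0, ub=5.0, seed=4):
    """Minimal bat-inspired metaheuristic: each bat tunes a frequency, updates
    its velocity toward the current best, occasionally performs a local walk
    around the best, and adapts its loudness A and pulse rate r over time."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (n_bats, dim))
    v = np.zeros((n_bats, dim))
    fit = np.array([obj(xi) for xi in x])
    loud = np.full(n_bats, 1.0)
    pulse = np.full(n_bats, 0.5)
    best = x[fit.argmin()].copy()
    for t in range(1, n_iter + 1):
        for i in range(n_bats):
            f = fmin + (fmax - fmin) * rng.random()
            v[i] += (x[i] - best) * f
            cand = np.clip(x[i] + v[i], lb, ub)
            if rng.random() > pulse[i]:                       # local walk around best
                cand = np.clip(best + 0.01 * loud.mean() * rng.normal(size=dim), lb, ub)
            fc = obj(cand)
            if fc <= fit[i] and rng.random() < loud[i]:       # accept the new solution
                x[i], fit[i] = cand, fc
                loud[i] *= alpha
                pulse[i] = 0.5 * (1.0 - np.exp(-gamma * t))
            if fc <= fit.min():                               # track the global best
                best = cand.copy()
    return best, obj(best)

sphere = lambda z: float(np.sum(z ** 2))
print(bat_algorithm(sphere))
```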
Progressive Damage and Failure Analysis of Composite Laminates
NASA Astrophysics Data System (ADS)
Joseph, Ashith P. K.
Composite materials are widely used in various industries for making structural parts due to their higher strength-to-weight ratio, better fatigue life, corrosion resistance, and material property tailorability. To fully exploit the capability of composites, it is necessary to know the load-carrying capacity of the parts made from them. Unlike metals, composites are orthotropic in nature and fail in a complex manner under various loading conditions, which makes them a hard problem to analyze. The lack of reliable and efficient failure analysis tools for composites has led industries to rely more on coupon- and component-level testing to estimate the design space. Due to the complex failure mechanisms, composite materials require a very large number of coupon-level tests to fully characterize their behavior. This makes the entire testing process very time-consuming and costly. The alternative is to use virtual testing tools which can predict the complex failure mechanisms accurately. This reduces the cost to only the associated computational expenses, yielding significant savings. Some of the most desired features in a virtual testing tool are: (1) Accurate representation of failure mechanisms: the failure progression predicted by the virtual tool must be the same as that observed in experiments, and a tool has to be assessed based on the mechanisms it can capture. (2) Computational efficiency: the greatest advantages of virtual tools are the savings in time and money, and hence computational efficiency is one of the most needed features. (3) Applicability to a wide range of problems: structural parts are subjected to a variety of loading conditions, including static, dynamic, and fatigue conditions, and a good virtual testing tool should be able to make good predictions for all of them. The aim of this PhD thesis is to develop a computational tool which can model the progressive failure of composite laminates under different quasi-static loading conditions. The analysis tool is validated by comparing the simulations against experiments for a selected number of quasi-static loading cases.
Individual muscle control using an exoskeleton robot for muscle function testing.
Ueda, Jun; Ming, Ding; Krishnamoorthy, Vijaya; Shinohara, Minoru; Ogasawara, Tsukasa
2010-08-01
Healthy individuals modulate muscle activation patterns according to their intended movement and external environment. Persons with neurological disorders (e.g., stroke and spinal cord injury), however, have problems in movement control due primarily to their inability to modulate their muscle activation pattern in an appropriate manner. A functionality test at the level of individual muscles that investigates the activity of a muscle of interest on various motor tasks may enable muscle-level force grading. To date there is no extant work that focuses on the application of exoskeleton robots to induce specific muscle activation in a systematic manner. This paper proposes a new method, named "individual muscle-force control" using a wearable robot (an exoskeleton robot, or a power-assisting device) to obtain a wider variety of muscle activity data than standard motor tasks, e.g., pushing a handle by hand. A computational algorithm systematically computes control commands to a wearable robot so that a desired muscle activation pattern for target muscle forces is induced. It also computes an adequate amount and direction of a force that a subject needs to exert against a handle by his/her hand. This individual muscle control method enables users (e.g., therapists) to efficiently conduct neuromuscular function tests on target muscles by arbitrarily inducing muscle activation patterns. This paper presents a basic concept, mathematical formulation, and solution of the individual muscle-force control and its implementation to a muscle control system with an exoskeleton-type robot for upper extremity. Simulation and experimental results in healthy individuals justify the use of an exoskeleton robot for future muscle function testing in terms of the variety of muscle activity data.
Model of Mixing Layer With Multicomponent Evaporating Drops
NASA Technical Reports Server (NTRS)
Bellan, Josette; Le Clercq, Patrick
2004-01-01
A mathematical model of a three-dimensional mixing layer laden with evaporating fuel drops composed of many chemical species has been derived. The study is motivated by the fact that typical real petroleum fuels contain hundreds of chemical species. Previously, for the sake of computational efficiency, spray studies were performed using either models based on a single representative species or models based on surrogate fuels of at most 15 species. The present multicomponent model makes it possible to perform more realistic simulations by accounting for hundreds of chemical species in a computationally efficient manner. The model is used to perform Direct Numerical Simulations in continuing studies directed toward understanding the behavior of liquid petroleum fuel sprays. The model includes governing equations formulated in an Eulerian and a Lagrangian reference frame for the gas and the drops, respectively. This representation is consistent with the expected volumetrically small loading of the drops in gas (of the order of 10^-3), although the mass loading can be substantial because of the high ratio (of the order of 10^3) between the densities of liquid and gas. The drops are treated as point sources of mass, momentum, and energy; this representation is consistent with the drop size being smaller than the Kolmogorov scale. Unsteady drag, added-mass effects, Basset history forces, and collisions between the drops are neglected, and the gas is assumed calorically perfect. The model incorporates the concept of continuous thermodynamics, according to which the chemical composition of a fuel is described probabilistically, by use of a distribution function. Distribution functions generally depend on many parameters. However, for mixtures of homologous species, the distribution can be approximated with acceptable accuracy as a sole function of the molecular weight. The mixing layer is initially laden with drops in its lower stream, and the drops are colder than the gas. Drop evaporation leads to a change in the gas-phase composition, which, like the composition of the drops, is described in a probabilistic manner.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pal, Ranjan; Chelmis, Charalampos; Aman, Saima
The advent of smart meters and advanced communication infrastructures catalyzes numerous smart grid applications such as dynamic demand response, and paves the way to solve challenging research problems in sustainable energy consumption. The space of solution possibilities is restricted primarily by the huge amount of generated data, which requires considerable computational resources and efficient algorithms. To overcome this Big Data challenge, data clustering techniques have been proposed. Current approaches, however, do not scale in the face of the “increasing dimensionality” problem, where a cluster point is represented by the entire customer consumption time series. To overcome this, we first rethink the way cluster points are created and designed, and then design an efficient online clustering technique for demand response (DR) in order to analyze high volume, high dimensional energy consumption time series data at scale, and on the fly. Our online algorithm is randomized in nature, and provides optimal performance guarantees in a computationally efficient manner. Unlike prior work, we (i) study the consumption properties of the whole population simultaneously rather than developing individual models for each customer separately, claiming it to be a ‘killer’ approach that breaks the “curse of dimensionality” in online time series clustering, and (ii) provide tight performance guarantees in theory to validate our approach. Our insights are driven by the field of sociology, where collective behavior often emerges as the result of individual patterns and lifestyles.
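The abstract above describes, but does not spell out, an online clustering scheme over reduced consumption features. The sketch below is not the authors' randomized algorithm; it is only a minimal illustration, under assumed inputs, of streaming (online) k-means-style clustering of low-dimensional feature vectors derived from daily load profiles, which conveys the general idea of clustering on the fly rather than over full time series. The class name OnlineKMeans and the chosen features are hypothetical.

```python
# Minimal sketch of online (streaming) clustering of consumption features.
# NOT the authors' randomized algorithm; it only illustrates clustering reduced
# feature vectors (daily mean/peak/variance) on the fly instead of whole series.
import numpy as np

class OnlineKMeans:
    def __init__(self, k, dim, seed=0):
        self.rng = np.random.default_rng(seed)
        self.centers = self.rng.normal(size=(k, dim))  # hypothetical initialization
        self.counts = np.zeros(k)

    def update(self, x):
        """Assign one feature vector to its nearest center and nudge that center."""
        j = np.argmin(np.linalg.norm(self.centers - x, axis=1))
        self.counts[j] += 1
        eta = 1.0 / self.counts[j]               # decaying learning rate
        self.centers[j] += eta * (x - self.centers[j])
        return j

def features(day_series):
    """Collapse a raw daily load profile into a small feature vector."""
    return np.array([day_series.mean(), day_series.max(), day_series.std()])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    model = OnlineKMeans(k=3, dim=3)
    for _ in range(1000):                        # stream of daily profiles
        profile = rng.gamma(shape=2.0, scale=1.0, size=96)  # 15-minute readings
        model.update(features(profile))
    print(model.centers)
```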
GROMACS 4.5: a high-throughput and highly parallel open source molecular simulation toolkit
Pronk, Sander; Páll, Szilárd; Schulz, Roland; Larsson, Per; Bjelkmar, Pär; Apostolov, Rossen; Shirts, Michael R.; Smith, Jeremy C.; Kasson, Peter M.; van der Spoel, David; Hess, Berk; Lindahl, Erik
2013-01-01
Motivation: Molecular simulation has historically been a low-throughput technique, but faster computers and increasing amounts of genomic and structural data are changing this by enabling large-scale automated simulation of, for instance, many conformers or mutants of biomolecules with or without a range of ligands. At the same time, advances in performance and scaling now make it possible to model complex biomolecular interaction and function in a manner directly testable by experiment. These applications share a need for fast and efficient software that can be deployed on massive scale in clusters, web servers, distributed computing or cloud resources. Results: Here, we present a range of new simulation algorithms and features developed during the past 4 years, leading up to the GROMACS 4.5 software package. The software now automatically handles wide classes of biomolecules, such as proteins, nucleic acids and lipids, and comes with all commonly used force fields for these molecules built-in. GROMACS supports several implicit solvent models, as well as new free-energy algorithms, and the software now uses multithreading for efficient parallelization even on low-end systems, including windows-based workstations. Together with hand-tuned assembly kernels and state-of-the-art parallelization, this provides extremely high performance and cost efficiency for high-throughput as well as massively parallel simulations. Availability: GROMACS is an open source and free software available from http://www.gromacs.org. Contact: erik.lindahl@scilifelab.se Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23407358
NASA Technical Reports Server (NTRS)
DeChant, Lawrence Justin
1998-01-01
In spite of rapid advances in both scalar and parallel computational tools, the large number of variables involved in both design and inverse problems makes the use of sophisticated fluid flow models impractical. With this restriction, it is concluded that an important family of methods for mathematical/computational development are reduced or approximate fluid flow models. In this study a combined perturbation/numerical modeling methodology is developed which provides a rigorously derived family of solutions. The mathematical model is computationally more efficient than classical boundary-layer methods but provides important two-dimensional information not available using quasi-1-D approaches. An additional strength of the current methodology is its ability to locally predict static pressure fields in a manner analogous to more sophisticated parabolized Navier-Stokes (PNS) formulations. To resolve singular behavior, the model utilizes classical analytical solution techniques. Hence, analytical methods have been combined with efficient numerical methods to yield an efficient hybrid fluid flow model. In particular, the main objective of this research has been to develop a system of analytical and numerical ejector/mixer nozzle models which require minimal empirical input. A computer code, DREA (Differential Reduced Ejector/mixer Analysis), has been developed with the ability to run sufficiently fast that it may be used either as a subroutine or called by a design optimization routine. The models are of direct use to the High Speed Civil Transport Program (a joint government/industry project seeking to develop an economically viable U.S. commercial supersonic transport vehicle) and are currently being adopted by both NASA and industry. Experimental validation of these models is provided by comparison to results obtained from the open literature and Limited Exclusive Right Distribution (LERD) sources, as well as dedicated experiments performed at Texas A&M. These experiments have been performed using a hydraulic/gas flow analog. Results of comparisons of DREA computations with experimental data, which include entrainment, thrust, and local profile information, are overall good. Computational time studies indicate that DREA provides considerably more information at a lower computational cost than contemporary ejector nozzle design models. Finally, physical limitations of the method, deviations from experimental data, potential improvements and alternative formulations are described. This report represents closure to the NASA Graduate Researchers Program. Versions of the DREA code and a user's guide may be obtained from the NASA Lewis Research Center.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanyal, Tanmoy; Shell, M. Scott, E-mail: shell@engineering.ucsb.edu
Bottom-up multiscale techniques are frequently used to develop coarse-grained (CG) models for simulations at extended length and time scales but are often limited by a compromise between computational efficiency and accuracy. The conventional approach to CG nonbonded interactions uses pair potentials which, while computationally efficient, can neglect the inherently multibody contributions of the local environment of a site to its energy, due to degrees of freedom that were coarse-grained out. This effect often causes the CG potential to depend strongly on the overall system density, composition, or other properties, which limits its transferability to states other than the one at which it was parameterized. Here, we propose to incorporate multibody effects into CG potentials through additional nonbonded terms, beyond pair interactions, that depend in a mean-field manner on local densities of different atomic species. This approach is analogous to embedded atom and bond-order models that seek to capture multibody electronic effects in metallic systems. We show that the relative entropy coarse-graining framework offers a systematic route to parameterizing such local density potentials. We then characterize this approach in the development of implicit solvation strategies for interactions between model hydrophobes in an aqueous environment.
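As a rough illustration of the local-density idea described above (not the authors' relative-entropy parameterization), the sketch below computes a smoothly counted local density around each CG site and evaluates a hypothetical polynomial embedding energy of that density; the indicator function, cutoff and coefficients are all assumptions.

```python
# Illustrative sketch: a mean-field local-density energy term for CG sites,
# evaluated on top of whatever pair potential one already uses. The smooth
# indicator and polynomial embedding function are hypothetical choices.
import numpy as np

def local_densities(pos, cutoff):
    """Smoothly counted number of neighbors within `cutoff` of each site."""
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    # smooth indicator: 1 well inside the cutoff, 0 at and beyond it
    w = 0.5 * (1.0 + np.cos(np.pi * np.clip(d / cutoff, 0.0, 1.0)))
    w[d >= cutoff] = 0.0
    return w.sum(axis=1)

def local_density_energy(rho, coeffs):
    """Polynomial embedding U_LD(rho) = sum_k c_k * rho**k (assumed form)."""
    return sum(c * rho**k for k, c in enumerate(coeffs)).sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    positions = rng.uniform(0.0, 5.0, size=(64, 3))   # toy configuration, no PBC
    rho = local_densities(positions, cutoff=1.2)
    print(local_density_energy(rho, coeffs=[0.0, -0.5, 0.05]))
```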
Model reduction method using variable-separation for stochastic saddle point problems
NASA Astrophysics Data System (ADS)
Jiang, Lijian; Li, Qiuqi
2018-02-01
In this paper, we consider a variable-separation (VS) method to solve stochastic saddle point (SSP) problems. The VS method is applied to obtain the solution in tensor product structure for stochastic partial differential equations (SPDEs) in a mixed formulation. The aim of such a technique is to construct a reduced basis approximation of the solution of the SSP problems. The VS method attempts to obtain a low-rank separated representation of the solution for SSP problems in a systematic enrichment manner. No iteration is performed at each enrichment step. In order to satisfy the inf-sup condition in the mixed formulation, we enrich the separated terms for the primal system variable at each enrichment step. For SSP problems treated by regularization or penalty, we propose a more efficient variable-separation method, i.e., the variable-separation by penalty method. This avoids further enrichment of the separated terms in the original mixed formulation. The computation of the variable-separation method decomposes into an offline phase and an online phase. A sparse low-rank tensor approximation method is used to significantly improve the online computational efficiency when the number of separated terms is large. As applications of SSP problems, we present three numerical examples to illustrate the performance of the proposed methods.
An Investigation of Primary School Science Teachers' Use of Computer Applications
ERIC Educational Resources Information Center
Ocak, Mehmet Akif; Akdemir, Omur
2008-01-01
This study investigated the level and frequency of science teachers' use of computer applications as an instructional tool in the classroom. The manner and frequency of science teachers' use of computer, their perceptions about integration of computer applications, and other factors contributed to changes in their computer literacy are…
NASA Astrophysics Data System (ADS)
Shukla, Chitra; Thapliyal, Kishore; Pathak, Anirban
2017-12-01
Semi-quantum protocols that allow some of the users to remain classical are proposed for a large class of problems associated with secure communication and secure multiparty computation. Specifically, semi-quantum protocols are proposed for the first time for key agreement, controlled deterministic secure communication and dialogue, and it is shown that the semi-quantum protocols for controlled deterministic secure communication and dialogue can be reduced to semi-quantum protocols for e-commerce and private comparison (socialist millionaire problem), respectively. Complementing the earlier proposed semi-quantum schemes for key distribution, secret sharing and deterministic secure communication, the set of schemes proposed here and the subsequent discussions establish that almost every secure communication and computation task that can be performed using fully quantum protocols can also be performed in a semi-quantum manner. Some of the proposed schemes are completely orthogonal-state-based and thus fundamentally different from the existing semi-quantum schemes, which are conjugate-coding-based. Security, efficiency and applicability of the proposed schemes are discussed with appropriate importance.
NASA Technical Reports Server (NTRS)
Schlagheck, R. A.
1977-01-01
New planning techniques and supporting computer tools are needed for the optimization of resources and costs for space transportation and payload systems. Heavy emphasis on cost effective utilization of resources has caused NASA program planners to look at the impact of various independent variables that affect procurement buying. A description is presented of a category of resource planning which deals with Spacelab inventory procurement analysis. Spacelab is a joint payload project between NASA and the European Space Agency and will be flown aboard the Space Shuttle starting in 1980. In order to respond rapidly to the various procurement planning exercises, a system was built that could perform resource analysis in a quick and efficient manner. This system is known as the Interactive Resource Utilization Program (IRUP). Attention is given to aspects of problem definition, an IRUP system description, questions of data base entry, the approach used for project scheduling, and problems of resource allocation.
Muon Trigger for Mobile Phones
NASA Astrophysics Data System (ADS)
Borisyak, M.; Usvyatsov, M.; Mulhearn, M.; Shimmin, C.; Ustyuzhanin, A.
2017-10-01
The CRAYFIS experiment proposes to use privately owned mobile phones as a ground detector array for Ultra High Energy Cosmic Rays. Upon interacting with Earth’s atmosphere, these events produce extensive particle showers which can be detected by cameras on mobile phones. A typical shower contains minimally-ionizing particles such as muons. As these particles interact with CMOS image sensors, they may leave tracks of faintly-activated pixels that are sometimes hard to distinguish from random detector noise. Triggers that rely on the presence of very bright pixels within an image frame are not efficient in this case. We present a trigger algorithm based on Convolutional Neural Networks which selects images containing such tracks and is evaluated in a lazy manner: the response of each successive layer is computed only if activation of the current layer satisfies a continuation criterion. The use of neural networks increases the sensitivity considerably compared with image thresholding, while the lazy evaluation allows for execution of the trigger under the limited computational power of mobile phones.
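The lazy-evaluation idea lends itself to a simple sketch: evaluate each successive stage of a layered classifier only if the previous stage's activation passes a continuation threshold, so most frames are rejected cheaply. The toy dense layers and thresholds below are stand-ins, not the CRAYFIS network or its trained weights.

```python
# Sketch of "lazy" evaluation of a layered classifier: deeper stages run only
# if the current stage's activation satisfies a continuation criterion.
import numpy as np

def lazy_trigger(image, layers, thresholds):
    """Return True if every stage fires; stop early (cheaply) otherwise."""
    x = image.ravel()
    for (W, b), tau in zip(layers, thresholds):
        x = np.maximum(W @ x + b, 0.0)           # ReLU stage
        if x.max() < tau:                         # continuation criterion fails
            return False                          # reject frame without deeper layers
    return True

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sizes = [64, 16, 4]                           # input dim, then two stage widths
    layers = [(rng.normal(scale=0.2, size=(sizes[i + 1], sizes[i])), np.zeros(sizes[i + 1]))
              for i in range(len(sizes) - 1)]
    frame = rng.normal(size=(8, 8))               # toy 8x8 "image", 64 pixels
    print(lazy_trigger(frame, layers, thresholds=[0.4, 0.8]))
```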
Machine learning strategy for accelerated design of polymer dielectrics
Mannodi-Kanakkithodi, Arun; Pilania, Ghanshyam; Huan, Tran Doan; ...
2016-02-15
The ability to efficiently design new and advanced dielectric polymers is hampered by the lack of sufficient, reliable data on wide polymer chemical spaces, and the difficulty of generating such data given time and computational/experimental constraints. Here, we address the issue of accelerating polymer dielectrics design by extracting learning models from data generated by accurate state-of-the-art first principles computations for polymers occupying an important part of the chemical subspace. The polymers are ‘fingerprinted’ as simple, easily attainable numerical representations, which are mapped to the properties of interest using a machine learning algorithm to develop an on-demand property prediction model. Further, a genetic algorithm is utilised to optimise polymer constituent blocks in an evolutionary manner, thus directly leading to the design of polymers with given target properties. Furthermore, while this philosophy of learning to make instant predictions and design is demonstrated here for the example of polymer dielectrics, it is equally applicable to other classes of materials as well.
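A minimal sketch of the overall loop described above, with invented stand-ins for the real ingredients: toy binary "fingerprints", a synthetic ground-truth property in place of first-principles data, a ridge-regression surrogate in place of the paper's learning model, and a bare-bones genetic algorithm that evolves fingerprints toward a high predicted property.

```python
# Toy fingerprint -> learned surrogate -> genetic-search loop (illustrative only;
# the actual features, learning algorithm and objectives in the paper differ).
import numpy as np

rng = np.random.default_rng(0)
D = 16                                        # length of a binary block fingerprint

def true_property(fp):                        # hidden ground truth (stand-in for DFT data)
    w = np.linspace(-1.0, 1.0, D)
    return fp @ w + 0.1 * np.sin(fp.sum())

# 1) generate "training data" and fit a ridge-regression surrogate
X = rng.integers(0, 2, size=(200, D)).astype(float)
y = np.array([true_property(x) for x in X])
lam = 1e-2
w_hat = np.linalg.solve(X.T @ X + lam * np.eye(D), X.T @ y)
predict = lambda fp: fp @ w_hat

# 2) genetic algorithm over fingerprints, maximizing the predicted property
pop = rng.integers(0, 2, size=(40, D)).astype(float)
for _ in range(50):
    fitness = np.array([predict(p) for p in pop])
    parents = pop[np.argsort(fitness)[-20:]]               # keep the best half
    children = parents[rng.integers(0, 20, size=20)].copy()
    flip = rng.random(children.shape) < 0.05               # mutation
    children[flip] = 1.0 - children[flip]
    pop = np.vstack([parents, children])

best = pop[np.argmax([predict(p) for p in pop])]
print(best, predict(best))
```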
NASA Technical Reports Server (NTRS)
Desmarais, R. N.
1982-01-01
This paper describes an accurate, economical method for generating approximations to the kernel of the integral equation relating unsteady pressure to normalwash in nonplanar flow. The method is capable of generating approximations of arbitrary accuracy. It is based on approximating the algebraic part of the nonelementary integrals in the kernel by exponential approximations and then integrating termwise. The exponent spacing in the approximation is a geometric sequence. The coefficients and exponent multiplier of the exponential approximation are computed by least squares, so the method is completely automated. Exponential approximations generated in this manner are two orders of magnitude more accurate than the exponential approximation that is currently most often used for this purpose. Coefficients for 8, 12, 24, and 72 term approximations are tabulated in the report. Also, since the method is automated, it can be used to generate approximations to attain any desired trade-off between accuracy and computing cost.
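Once the exponents are fixed on a geometric grid, the fitting step described above reduces to a linear least-squares problem for the coefficients. The sketch below shows that structure for a placeholder target function; the report additionally fits the exponent multiplier by least squares, and the actual kernel function, base and ratio are not reproduced here.

```python
# Sketch: fit f(x) ~ sum_k a_k * exp(-b_k * x) with geometrically spaced
# exponents b_k = b0 * ratio**k, solving linearly for the coefficients a_k.
import numpy as np

def fit_exponential_sum(f, x, n_terms, b0=0.1, ratio=1.5):
    b = b0 * ratio ** np.arange(n_terms)          # geometric exponent spacing
    A = np.exp(-np.outer(x, b))                   # design matrix, (len(x), n_terms)
    coeffs, *_ = np.linalg.lstsq(A, f(x), rcond=None)
    return coeffs, b

if __name__ == "__main__":
    f = lambda x: 1.0 / np.sqrt(1.0 + x**2)       # placeholder algebraic kernel part
    x = np.linspace(0.0, 20.0, 400)
    a, b = fit_exponential_sum(f, x, n_terms=12)
    approx = np.exp(-np.outer(x, b)) @ a
    print(np.max(np.abs(approx - f(x))))          # maximum absolute fit error
```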
Boundary-Layer Receptivity and Integrated Transition Prediction
NASA Technical Reports Server (NTRS)
Chang, Chau-Lyan; Choudhari, Meelan
2005-01-01
The adjoint parabolized stability equations (PSE) formulation is used to calculate the boundary layer receptivity to localized surface roughness and suction for compressible boundary layers. Receptivity efficiency functions predicted by the adjoint PSE approach agree well with results based on other nonparallel methods, including linearized Navier-Stokes equations, for both Tollmien-Schlichting waves and crossflow instability in swept wing boundary layers. The receptivity efficiency function can be regarded as the Green's function for the disturbance amplitude evolution in a nonparallel (growing) boundary layer. Given the Fourier-transformed geometry factor distribution along the chordwise direction, the linear disturbance amplitude evolution for a finite size, distributed nonuniformity can be computed by evaluating the integral effects of both disturbance generation and linear amplification. The synergistic approach via the linear adjoint PSE for receptivity and nonlinear PSE for disturbance evolution downstream of the leading edge forms the basis for an integrated transition prediction tool. Eventually, such physics-based, high fidelity prediction methods could simulate the transition process from the disturbance generation through the nonlinear breakdown in a holistic manner.
Flexible language constructs for large parallel programs
NASA Technical Reports Server (NTRS)
Rosing, Matthew; Schnabel, Robert
1993-01-01
The goal of the research described is to develop flexible language constructs for writing large data parallel numerical programs for distributed memory (MIMD) multiprocessors. Previously, several models have been developed to support synchronization and communication. Models for global synchronization include SIMD (Single Instruction Multiple Data), SPMD (Single Program Multiple Data), and sequential programs annotated with data distribution statements. The two primary models for communication include implicit communication based on shared memory and explicit communication based on messages. None of these models by themselves seem sufficient to permit the natural and efficient expression of the variety of algorithms that occur in large scientific computations. An overview of a new language that combines many of these programming models in a clean manner is given. This is done in a modular fashion such that different models can be combined to support large programs. Within a module, the selection of a model depends on the algorithm and its efficiency requirements. An overview of the language and discussion of some of the critical implementation details is given.
NASA Astrophysics Data System (ADS)
Tian, Xin; Negenborn, Rudy R.; van Overloop, Peter-Jules; María Maestre, José; Sadowska, Anna; van de Giesen, Nick
2017-11-01
Model Predictive Control (MPC) is one of the most advanced real-time control techniques and has been widely applied to Water Resources Management (WRM). MPC can manage a water system in a holistic manner and has a flexible structure to incorporate specific elements, such as setpoints and constraints. Therefore, MPC has shown versatile performance in many branches of WRM. Nonetheless, with the in-depth understanding of stochastic hydrology gained in recent studies, MPC also faces the challenge of how to cope with hydrological uncertainty in its decision-making process. A possible way to embed the uncertainty is to generate an Ensemble Forecast (EF) of hydrological variables, rather than a deterministic one. The combination of MPC and EF results in a more comprehensive approach: Multi-scenario MPC (MS-MPC). In this study, we first assess the model performance of MS-MPC, considering an ensemble streamflow forecast. Notably, computational inefficiency may be a critical obstacle that hinders the applicability of MS-MPC. In fact, with more scenarios taken into account, the computational burden of solving an optimization problem in MS-MPC increases accordingly. To deal with this challenge, we propose the Adaptive Control Resolution (ACR) approach as a computationally efficient scheme that practically reduces the number of control variables in MS-MPC. In brief, the ACR approach uses a mixed-resolution control time step from the near future to the distant future. The ACR-MPC approach is tested on a real-world case study: an integrated flood control and navigation problem in the North Sea Canal of the Netherlands. Such an approach reduces the computation time by 18% or more in our case study. At the same time, the model performance of ACR-MPC remains close to that of conventional MPC.
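The Adaptive Control Resolution idea can be illustrated with a small helper that builds a mixed-resolution control horizon: fine steps in the near future, coarser steps farther out, so far fewer decision variables enter the optimization. The block sizes below are illustrative, not the settings used for the North Sea Canal study.

```python
# Sketch of a mixed-resolution control horizon for MPC: fine near-term steps,
# progressively coarser far-term steps (block sizes are assumptions).
import numpy as np

def acr_steps(horizon_hours, blocks=((6, 1.0), (12, 2.0), (24, 6.0))):
    """Return control-step durations (hours) covering `horizon_hours`.

    `blocks` is a sequence of (number_of_steps, step_length) pairs, fine to coarse.
    """
    steps, remaining = [], horizon_hours
    for n, dt in blocks:
        for _ in range(n):
            if remaining <= 0:
                return steps
            steps.append(min(dt, remaining))
            remaining -= steps[-1]
    return steps

if __name__ == "__main__":
    steps = acr_steps(96.0)                       # 4-day horizon
    uniform = int(96.0 / 1.0)                     # uniform 1-hour resolution
    print(len(steps), "control variables instead of", uniform)
    print(np.cumsum(steps))                       # decision times along the horizon
```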
Structure and Thermodynamics of Polyolefin Melts
NASA Astrophysics Data System (ADS)
Weinhold, J. D.; Curro, J. G.; Habenschuss, A.; Londono, J. D.
1997-03-01
Subtle differences in the intermolecular packing of various polyolefins can create dissimilar permeability and mixing behavior. We have used a combination of the Polymer Reference Interaction Site Model (PRISM) and Monte Carlo simulation to study the structural and thermodynamic properties of realistic models for polyolefins. Results for polyisobutylene and syndiotactic polypropylene will be presented along with comparisons to wide-angle x-ray scattering experiments and properties determined from previous studies of polyethylene and isotactic polypropylene. Our technique uses a Monte Carlo simulation on an isolated molecule to determine the polymer's intramolecular structure. With this information, PRISM theory can predict the intermolecular packing for any liquid density and/or mixture composition in a computationally efficient manner. This approach will then be used to explore the mixing behavior of these polyolefins.
NASA Technical Reports Server (NTRS)
1987-01-01
The United States and other countries face the problem of waste disposal in an economical, environmentally safe manner. A widely applied solution adopted by Americans is "waste to energy": incinerating the refuse and using the steam produced by trash burning to drive an electricity-producing generator. NASA's computer program PRESTO II (Performance of Regenerative Superheated Steam Turbine Cycles) provides power engineering companies, including Blount Energy Resources Corporation of Alabama, with the ability to model such features as process steam extraction, induction and feedwater heating by external sources, peaking and high back pressure. Expansion line efficiency, exhaust loss, leakage, mechanical losses and generator losses are used to calculate the cycle heat rate. The generator output program is sufficiently precise that it can be used to verify performance quoted in turbine generator suppliers' proposals.
Recent developments in the metal-catalyzed reactions of metallocarbenoids from propargylic esters.
Marco-Contelles, José; Soriano, Elena
2007-01-01
The transition-metal-catalyzed intramolecular cycloisomerization of propargylic carboxylates provides functionalized bicyclo[n.1.0]enol esters in a very diastereoselective manner and, depending on the structure, with partial or complete transfer of chirality from enantiomerically pure precursors. The subsequent methanolysis gives bicyclo[n.1.0] ketones, hence resulting in a very efficient two-step protocol for the syntheses of alpha,beta-unsaturated cyclopropyl ketones, key intermediates for the preparation of natural products. The results from mechanistic computational studies suggest that they probably proceed through cyclopropyl metallocarbenoids, formed by endo-cyclopropanation, that undergo a 1,2-acyl migration. Finally, the potential of the intermolecular reaction and the related pentannulation of propargylic esters bearing pendant aromatic rings are also discussed.
MOSES: A Matlab-based open-source stochastic epidemic simulator.
Varol, Huseyin Atakan
2016-08-01
This paper presents an open-source stochastic epidemic simulator. The Discrete Time Markov Chain based simulator is implemented in Matlab. The simulator, capable of simulating the SEQIJR (susceptible, exposed, quarantined, infected, isolated and recovered) model, can be reduced to simpler models by setting some of the parameters (transition probabilities) to zero. Similarly, it can be extended to more complicated models by editing the source code. It is designed to be used for testing different control algorithms to contain epidemics. The simulator is also designed to be compatible with a network-based epidemic simulator and can be used in the network-based scheme for the simulation of a node. Simulations show the capability of reproducing different epidemic model behaviors successfully in a computationally efficient manner.
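As a hedged illustration of the simulator's discrete-time Markov chain structure (a reduced SEIR chain rather than the full Matlab SEQIJR model), the sketch below samples compartment transitions binomially at each step; setting a transition probability to zero removes the corresponding pathway, mirroring the reduction to simpler models mentioned above.

```python
# Minimal discrete-time Markov chain epidemic step (reduced SEIR, not MOSES).
import numpy as np

def step(state, p_inf, p_incub, p_recover, rng):
    """One DTMC step with binomially sampled transitions between compartments."""
    S, E, I, R = state
    N = S + E + I + R
    new_E = rng.binomial(S, 1.0 - (1.0 - p_inf) ** I) if N > 0 else 0  # S -> E
    new_I = rng.binomial(E, p_incub)                                   # E -> I
    new_R = rng.binomial(I, p_recover)                                 # I -> R
    return (S - new_E, E + new_E - new_I, I + new_I - new_R, R + new_R)

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    state = (990, 0, 10, 0)
    for _ in range(120):
        state = step(state, p_inf=0.0005, p_incub=0.2, p_recover=0.1, rng=rng)
    print("final S, E, I, R:", state)
```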
Synthetic biology as it relates to CAM photosynthesis: challenges and opportunities.
DePaoli, Henrique C; Borland, Anne M; Tuskan, Gerald A; Cushman, John C; Yang, Xiaohan
2014-07-01
To meet future food and energy security needs, which are amplified by increasing population growth and reduced natural resource availability, metabolic engineering efforts have moved from manipulating single genes/proteins to introducing multiple genes and novel pathways to improve photosynthetic efficiency in a more comprehensive manner. Biochemical carbon-concentrating mechanisms such as crassulacean acid metabolism (CAM), which improves photosynthetic, water-use, and possibly nutrient-use efficiency, represent a strategic target for synthetic biology to engineer more productive C3 crops for a warmer and drier world. One key challenge for introducing multigene traits like CAM onto a background of C3 photosynthesis is to gain a better understanding of the dynamic spatial and temporal regulatory events that underpin photosynthetic metabolism. With the aid of systems and computational biology, vast amounts of experimental data encompassing transcriptomics, proteomics, and metabolomics can be related in a network to create dynamic models. Such models can undergo simulations to discover key regulatory elements in metabolism and suggest strategic substitution or augmentation by synthetic components to improve photosynthetic performance and water-use efficiency in C3 crops. Another key challenge in the application of synthetic biology to photosynthesis research is to develop efficient systems for multigene assembly and stacking. Here, we review recent progress in computational modelling as applied to plant photosynthesis, with attention to the requirements for CAM, and recent advances in synthetic biology tool development. Lastly, we discuss possible options for multigene pathway construction in plants with an emphasis on CAM-into-C3 engineering. © The Author 2014. Published by Oxford University Press on behalf of the Society for Experimental Biology. All rights reserved. For permissions, please email: journals.permissions@oup.com.
76 FR 59803 - Children's Online Privacy Protection Rule
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-27
...,'' covering the ``myriad of computer and telecommunications facilities, including equipment and operating..., Dir. and Professor of Computer Sci. and Pub. Affairs, Princeton Univ. (currently Chief Technologist at... data in the manner of a personal computer. See Electronic Privacy Information Center (``EPIC...
The ACI-REF Program: Empowering Prospective Computational Researchers
NASA Astrophysics Data System (ADS)
Cuma, M.; Cardoen, W.; Collier, G.; Freeman, R. M., Jr.; Kitzmiller, A.; Michael, L.; Nomura, K. I.; Orendt, A.; Tanner, L.
2014-12-01
The ACI-REF program, Advanced Cyberinfrastructure - Research and Education Facilitation, represents a consortium of academic institutions seeking to further advance the capabilities of their respective campus research communities through an extension of the personal connections and educational activities that underlie the unique and often specialized cyberinfrastructure at each institution. This consortium currently includes Clemson University, Harvard University, University of Hawai'i, University of Southern California, University of Utah, and University of Wisconsin. Working together in a coordinated effort, the consortium is dedicated to the adoption of models and strategies which leverage the expertise and experience of its members with a goal of maximizing the impact of each institution's investment in research computing. The ACI-REFs (facilitators) are tasked with making connections and building bridges between the local campus researchers and the many different providers of campus, commercial, and national computing resources. Through these bridges, ACI-REFs assist researchers from all disciplines in understanding their computing and data needs and in mapping these needs to existing capabilities or providing assistance with development of these capabilities. From the Earth sciences perspective, we will give examples of how this assistance improved methods and workflows in geophysics, geography and atmospheric sciences. We anticipate that this effort will expand the number of researchers who become self-sufficient users of advanced computing resources, allowing them to focus on making research discoveries in a more timely and efficient manner.
CRISPR-FOCUS: A web server for designing focused CRISPR screening experiments.
Cao, Qingyi; Ma, Jian; Chen, Chen-Hao; Xu, Han; Chen, Zhi; Li, Wei; Liu, X Shirley
2017-01-01
The recently developed CRISPR screen technology, based on the CRISPR/Cas9 genome editing system, enables genome-wide interrogation of gene functions in an efficient and cost-effective manner. Although many computational algorithms and web servers have been developed to design single-guide RNAs (sgRNAs) with high specificity and efficiency, algorithms specifically designed for conducting CRISPR screens are still lacking. Here we present CRISPR-FOCUS, a web-based platform to search and prioritize sgRNAs for CRISPR screen experiments. With official gene symbols or RefSeq IDs as the only mandatory input, CRISPR-FOCUS filters and prioritizes sgRNAs based on multiple criteria, including efficiency, specificity, sequence conservation, isoform structure, as well as genomic variations including Single Nucleotide Polymorphisms and cancer somatic mutations. CRISPR-FOCUS also provides pre-defined positive and negative control sgRNAs, as well as other necessary sequences in the construct (e.g., U6 promoters to drive sgRNA transcription and RNA scaffolds of the CRISPR/Cas9). These features allow users to synthesize oligonucleotides directly based on the output of CRISPR-FOCUS. Overall, CRISPR-FOCUS provides a rational and high-throughput approach for sgRNA library design that enables users to efficiently conduct a focused screen experiment targeting up to thousands of genes. (CRISPR-FOCUS is freely available at http://cistrome.org/crispr-focus/).
NASA Astrophysics Data System (ADS)
Di Dato, Mariaines; de Barros, Felipe P. J.; Fiori, Aldo; Bellin, Alberto
2018-03-01
Natural attenuation and in situ oxidation are commonly considered as low-cost alternatives to ex situ remediation. The efficiency of such remediation techniques is hindered by difficulties in obtaining good dilution and mixing of the contaminant, in particular if the plume deformation is physically constrained by an array of wells, which serves as a containment system. In that case, dilution may be enhanced by inducing an engineered sequence of injections and extractions from such pumping system, which also works as a hydraulic barrier. This way, the aquifer acts as a natural mixer, in a manner similar to the industrialized engineered mixers. Improving the efficiency of hydrogeological mixers is a challenging task, owing to the need to use a 3-D setup while relieving the computational burden. Analytical solutions, though approximated, are a suitable and efficient tool to seek the optimum solution among all possible flow configurations. Here we develop a novel physically based model to demonstrate how the combined spatiotemporal fluctuations of the water fluxes control solute trajectories and residence time distributions and therefore, the effectiveness of contaminant plume dilution and mixing. Our results show how external forcing configurations are capable of inducing distinct time-varying groundwater flow patterns which will yield different solute dilution rates.
Sparse Contextual Activation for Efficient Visual Re-Ranking.
Bai, Song; Bai, Xiang
2016-03-01
In this paper, we propose an extremely efficient algorithm for visual re-ranking. By considering the original pairwise distance in the contextual space, we develop a feature vector called sparse contextual activation (SCA) that encodes the local distribution of an image. Hence, the re-ranking task can be simply accomplished by vector comparison under the generalized Jaccard metric, which has its theoretical meaning in fuzzy set theory. In order to improve the time efficiency of the re-ranking procedure, an inverted index is introduced to speed up the computation of the generalized Jaccard metric. As a result, the average time cost of re-ranking for a certain query can be controlled within 1 ms. Furthermore, inspired by query expansion, we also develop an additional method called local consistency enhancement on the proposed SCA to improve the retrieval performance in an unsupervised manner. On the other hand, the retrieval performance using a single feature may not be satisfactory enough, which inspires us to fuse multiple complementary features for accurate retrieval. Based on SCA, a robust feature fusion algorithm is exploited that also preserves the characteristic of high time efficiency. We assess our proposed method in various visual re-ranking tasks. Experimental results on the Princeton shape benchmark (3D object), WM-SRHEC07 (3D competition), YAEL data set B (face), MPEG-7 data set (shape), and Ukbench data set (image) manifest the effectiveness and efficiency of SCA.
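The core comparison is the generalized Jaccard similarity between two non-negative activation vectors: the sum of element-wise minima divided by the sum of element-wise maxima. The sketch below applies it to synthetic sparse vectors for re-ranking; the actual SCA construction and the inverted-index speedup are not reproduced.

```python
# Re-ranking with the generalized Jaccard similarity on non-negative vectors.
import numpy as np

def generalized_jaccard(a, b):
    denom = np.maximum(a, b).sum()
    return np.minimum(a, b).sum() / denom if denom > 0 else 0.0

def rerank(query_vec, gallery_vecs):
    """Return gallery indices sorted by decreasing similarity to the query."""
    sims = np.array([generalized_jaccard(query_vec, g) for g in gallery_vecs])
    return np.argsort(-sims), sims

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gallery = rng.random((100, 50)) * (rng.random((100, 50)) < 0.1)  # sparse, non-negative
    query = gallery[7] + 0.01 * rng.random(50)                        # near-duplicate of item 7
    order, sims = rerank(query, gallery)
    print(order[:5], sims[order[:5]])
```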
NASA Astrophysics Data System (ADS)
Rizvi, Syed S.; Shah, Dipali; Riasat, Aasia
The Time Warp algorithm [3] offers a run time recovery mechanism that deals with causality errors. These run time recovery mechanisms consist of rollback, anti-message, and Global Virtual Time (GVT) techniques. For rollback, there is a need to compute GVT, which is used in discrete-event simulation to reclaim memory, commit output, detect termination, and handle errors. However, the computation of GVT requires dealing with the transient message problem and the simultaneous reporting problem. These problems can be dealt with in an efficient manner by Samadi's algorithm [8], which works fine in the presence of causality errors. However, the performance of both the Time Warp and Samadi's algorithms depends on the latency involved in GVT computation. Both algorithms give poor latency for large simulation systems, especially in the presence of causality errors. To improve the latency and reduce processor idle time, we implement tree and butterfly barriers with the optimistic algorithm. Our analysis shows that the use of synchronous barriers such as tree and butterfly with the optimistic algorithm not only minimizes the GVT latency but also minimizes the processor idle time.
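As context for the latency discussion, the quantity being computed can be sketched very simply: the GVT is a lower bound over every logical process's local virtual time and the timestamps of messages still in transit, so no state earlier than the GVT can be rolled back and its memory may be reclaimed. The data layout below is invented; a real Time Warp kernel (and Samadi's algorithm) must also handle the transient-message and simultaneous-reporting problems.

```python
# Conceptual GVT computation: minimum over local clocks and in-transit messages.
def compute_gvt(local_virtual_times, in_transit_timestamps):
    """GVT = min(local clocks, timestamps of unacknowledged messages)."""
    candidates = list(local_virtual_times) + list(in_transit_timestamps)
    return min(candidates) if candidates else float("inf")

if __name__ == "__main__":
    lvts = [105.0, 98.5, 132.0, 101.2]     # one entry per logical process
    in_transit = [97.0, 110.3]             # messages sent but not yet received
    gvt = compute_gvt(lvts, in_transit)
    print("GVT =", gvt)                    # state before 97.0 can be reclaimed
```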
Unsteady 3D flow simulations in cranial arterial tree
NASA Astrophysics Data System (ADS)
Grinberg, Leopold; Anor, Tomer; Madsen, Joseph; Karniadakis, George
2008-11-01
High resolution unsteady 3D flow simulations in major cranial arteries have been performed. Two cases were considered: 1) a healthy volunteer with a complete Circle of Willis (CoW); and 2) a patient with hydrocephalus and an incomplete CoW. Computation was performed on 3344 processors of the new half petaflop supercomputer in TACC. Two new numerical approaches were developed and implemented: 1) a new two-level domain decomposition method, which couples continuous and discontinuous Galerkin discretization of the computational domain; and 2) a new type of outflow boundary conditions, which imposes, in an accurate and computationally efficient manner, clinically measured flow rates. In the first simulation, a geometric model of 65 cranial arteries was reconstructed. Our simulation reveals a high degree of asymmetry in the flow at the left and right parts of the CoW and the presence of swirling flow in most of the CoW arteries. In the second simulation, one of the main findings was a high pressure drop at the right anterior communicating artery (PCA). Due to the incompleteness of the CoW and the pressure drop at the PCA, the right internal carotid artery supplies blood to most regions of the brain.
Topics in Computational Learning Theory and Graph Algorithms.
ERIC Educational Resources Information Center
Board, Raymond Acton
This thesis addresses problems from two areas of theoretical computer science. The first area is that of computational learning theory, which is the study of the phenomenon of concept learning using formal mathematical models. The goal of computational learning theory is to investigate learning in a rigorous manner through the use of techniques…
5 CFR 531.245 - Computing locality rates and special rates for GM employees.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Computing locality rates and special... Gm Employees § 531.245 Computing locality rates and special rates for GM employees. Locality rates and special rates are computed for GM employees in the same manner as locality rates and special rates...
ChemEngine: harvesting 3D chemical structures of supplementary data from PDF files.
Karthikeyan, Muthukumarasamy; Vyas, Renu
2016-01-01
Digital access to chemical journals has resulted in a vast array of molecular information that is now available in supplementary material files in PDF format. However, extracting this molecular information, generally from a PDF document format, is a daunting task. Here we present an approach to harvest 3D molecular data from the supporting information of scientific research articles that are normally available from publishers' resources. In order to demonstrate the feasibility of extracting truly computable molecules from PDF file formats in a fast and efficient manner, we have developed a Java based application, namely ChemEngine. This program recognizes textual patterns from the supplementary data and generates standard molecular structure data (bond matrix, atomic coordinates) that can be subjected to a multitude of computational processes automatically. The methodology has been demonstrated via several case studies on different formats of coordinate data stored in supplementary information files, wherein ChemEngine selectively harvested the atomic coordinates and interpreted them as molecules with high accuracy. The reusability of the extracted molecular coordinate data was demonstrated by computing Single Point Energies that were in close agreement with the original computed data provided with the articles. It is envisaged that the methodology will enable large-scale conversion of molecular information from supplementary files available in PDF format into a collection of ready-to-compute molecular data, creating an automated workflow for advanced computational processes. Software along with source code and instructions is available at https://sourceforge.net/projects/chemengine/files/?source=navbar.
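ChemEngine itself is a Java application; as a loose illustration of the kind of text mining it performs, the sketch below pulls "element x y z" lines out of free text (such as text extracted from a supporting-information PDF) with a regular expression and emits a minimal XYZ block. The regex and sample text are assumptions, not the program's actual pattern set.

```python
# Rough sketch of harvesting Cartesian coordinates from free text.
import re

COORD = re.compile(r"^\s*([A-Z][a-z]?)\s+(-?\d+\.\d+)\s+(-?\d+\.\d+)\s+(-?\d+\.\d+)\s*$")

def extract_xyz(text):
    atoms = [(m.group(1), float(m.group(2)), float(m.group(3)), float(m.group(4)))
             for m in (COORD.match(line) for line in text.splitlines()) if m]
    lines = [str(len(atoms)), "extracted geometry"]
    lines += [f"{el} {x:.6f} {y:.6f} {z:.6f}" for el, x, y, z in atoms]
    return "\n".join(lines)

if __name__ == "__main__":
    sample = """Optimized geometry (Angstrom):
    O   0.000000   0.000000   0.117300
    H   0.000000   0.757200  -0.469200
    H   0.000000  -0.757200  -0.469200
    SCF energy: -76.026760 hartree"""
    print(extract_xyz(sample))
```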
Divilov, Konstantin; Wiesner-Hanks, Tyr; Barba, Paola; Cadle-Davidson, Lance; Reisch, Bruce I
2017-12-01
Quantitative phenotyping of downy mildew sporulation is frequently used in plant breeding and genetic studies, as well as in studies focused on pathogen biology such as chemical efficacy trials. In these scenarios, phenotyping a large number of genotypes or treatments can be advantageous but is often limited by time and cost. We present a novel computational pipeline dedicated to estimating the percent area of downy mildew sporulation from images of inoculated grapevine leaf discs in a manner that is time and cost efficient. The pipeline was tested on images from leaf disc assay experiments involving two F1 grapevine families, one that had glabrous leaves (Vitis rupestris B38 × 'Horizon' [RH]) and another that had leaf trichomes ('Horizon' × V. cinerea B9 [HC]). Correlations between computer vision and manual visual ratings reached 0.89 in the RH family and 0.43 in the HC family. Additionally, we were able to use the computer vision system prior to sporulation to measure the percent leaf trichome area. We estimate that an experienced rater scoring sporulation would spend at least 90% less time using the computer vision system compared with the manual visual method. This will allow more treatments to be phenotyped in order to better understand the genetic architecture of downy mildew resistance and of leaf trichome density. We anticipate that this computer vision system will find applications in other pathosystems or traits where responses can be imaged with sufficient contrast from the background.
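The central quantity, percent sporulation area, can be illustrated with a toy thresholding step on a synthetic leaf-disc image; the published pipeline's actual segmentation and trichome handling are more involved than this.

```python
# Toy illustration: percent of disc area brighter than a threshold.
import numpy as np

def percent_area(gray_disc, disc_mask, threshold):
    """Fraction (in %) of pixels inside the disc brighter than `threshold`."""
    sporulating = (gray_disc > threshold) & disc_mask
    return 100.0 * sporulating.sum() / disc_mask.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    img = rng.normal(0.3, 0.05, size=(200, 200))            # dark leaf background
    img[60:90, 60:120] += 0.4                                # bright sporulation patch
    yy, xx = np.mgrid[:200, :200]
    mask = (yy - 100) ** 2 + (xx - 100) ** 2 < 95 ** 2       # circular leaf disc
    print(round(percent_area(img, mask, threshold=0.5), 1), "% sporulation")
```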
Inference of reaction rate parameters based on summary statistics from experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khalil, Mohammad; Chowdhary, Kamaljit Singh; Safta, Cosmin
Here, we present the results of an application of Bayesian inference and maximum entropy methods for the estimation of the joint probability density for the Arrhenius rate parameters of the rate coefficient of the H2/O2-mechanism chain branching reaction H + O2 → OH + O. Available published data is in the form of summary statistics in terms of nominal values and error bars of the rate coefficient of this reaction at a number of temperature values obtained from shock-tube experiments. Our approach relies on generating data, in this case OH concentration profiles, consistent with the given summary statistics, using Approximate Bayesian Computation methods and a Markov Chain Monte Carlo procedure. The approach permits the forward propagation of parametric uncertainty through the computational model in a manner that is consistent with the published statistics. A consensus joint posterior on the parameters is obtained by pooling the posterior parameter densities given each consistent data set. To expedite this process, we construct efficient surrogates for the OH concentration using a combination of Padé and polynomial approximants. These surrogate models adequately represent forward model observables and their dependence on input parameters and are computationally efficient to allow their use in the Bayesian inference procedure. We also utilize Gauss-Hermite quadrature with Gaussian proposal probability density functions for moment computation resulting in orders of magnitude speedup in data likelihood evaluation. Despite the strong non-linearity in the model, the consistent data sets all result in nearly Gaussian conditional parameter probability density functions. The technique also accounts for nuisance parameters in the form of Arrhenius parameters of other rate coefficients with prescribed uncertainty. The resulting pooled parameter probability density function is propagated through stoichiometric hydrogen-air auto-ignition computations to illustrate the need to account for correlation among the Arrhenius rate parameters of one reaction and across rate parameters of different reactions.
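Two ingredients mentioned above are easy to sketch in isolation: the (modified) Arrhenius rate expression and an Approximate Bayesian Computation style accept/reject test against published summary statistics (nominal values plus error bars) at a few temperatures. All numbers below are placeholders rather than the H + O2 → OH + O data, and the study itself works with OH concentration profiles, surrogates and MCMC rather than this simple rejection loop.

```python
# Hedged sketch: Arrhenius rate evaluation plus an ABC-style rejection test.
import numpy as np

R = 8.314  # J / (mol K)

def arrhenius(T, A, n, Ea):
    """k(T) = A * T**n * exp(-Ea / (R T))."""
    return A * T**n * np.exp(-Ea / (R * T))

def abc_accept(params, T_data, k_nominal, k_sigma, tol=2.0):
    """Accept a parameter draw if predicted rates fall within `tol` error bars."""
    k_pred = arrhenius(T_data, *params)
    z = np.abs(np.log(k_pred) - np.log(k_nominal)) / k_sigma
    return np.all(z < tol)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T = np.array([1000.0, 1500.0, 2000.0])
    truth = (1.0e8, 0.0, 6.0e4)                    # hypothetical A, n, Ea
    k_nom = arrhenius(T, *truth)
    sigma = np.full_like(T, 0.2)                   # ~20% log-scale error bars
    accepted = [draw for _ in range(20000)
                if abc_accept(draw := (10 ** rng.uniform(7, 9), 0.0,
                                       rng.uniform(4e4, 8e4)), T, k_nom, sigma)]
    print(len(accepted), "accepted draws")
```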
Computer Use by School Teachers in Teaching-Learning Process
ERIC Educational Resources Information Center
Bhalla, Jyoti
2013-01-01
Developing countries have a responsibility not merely to provide computers for schools, but also to foster a habit of infusing a variety of ways in which computers can be integrated in teaching-learning amongst the end users of these tools. Earlier researches lacked a systematic study of the manner and the extent of computer-use by teachers. The…
48 CFR 1816.405-275 - Award fee evaluation scoring.
Code of Federal Regulations, 2010 CFR
2010-10-01
...; exemplary performance in a timely, efficient, and economical manner; very minor (if any) deficiencies with no adverse effect on overall performance. (2) Very good (90-81): Very effective performance, fully... economical manner for the most part; only minor deficiencies. (3) Good (80-71): Effective performance; fully...
Digital Michigan splint - from intraoral scanning to plasterless manufacturing.
Dedem, Philipp; Türp, Jens C
2016-01-01
To investigate whether the fully digital, plasterless fabrication of clinically usable Michigan splints can be accomplished in a time- and cost-efficient manner. Digital scans of the maxillary and mandibular arches of 10 subjects were acquired with an intraoral scanner (3Shape, Copenhagen) and used to generate virtual models of the dental arches. Jaw relation records were made using jigs placed on the subjects' anterior teeth, and silicone registration material was referenced to the jaw models. The data sets were then sent via the company's online portal to the dental laboratory, where computer-aided design (CAD) of the Michigan-type maxillary splints was performed. After receiving the designs, the splints were milled in-office using computer-aided manufacturing (CAM) software and finished manually. During try-in, the splints were checked for fit, retention quality, and occlusal contacts of the mandibular teeth on the splint surfaces in static and dynamic occlusion. Fit and retention were clinically acceptable in 10 splints and 9 splints, respectively. The number of initial occlusal contacts on the splint surfaces ranged from 4 to 16. The question addressed in this study can be answered in the affirmative. Some of the main advantages of digital manufacturing of Michigan splints over traditional, impression-based manufacturing are the time-efficient manufacturing process, the high material quality, and the possibility of manufacturing duplicate splints.
NASA Astrophysics Data System (ADS)
Moreno, H. A.; Ogden, F. L.; Steinke, R. C.; Alvarez, L. V.
2015-12-01
Triangulated Irregular Networks (TINs) are increasingly popular for terrain representation in high performance surface and hydrologic modeling owing to their ability to capture significant changes in surface form such as topographic summits, slope breaks, ridges, valley floors, pits and cols. This work presents a methodology for estimating slope, aspect and the components of the incoming solar radiation using a vectorial approach within a topocentric coordinate system, by establishing geometric relations between groups of TIN elements and the sun position. A normal vector to the surface of each TIN element describes slope and aspect, while spherical trigonometry allows computing a unit vector defining the position of the sun at each hour and day of year (DOY). Thus, a dot product determines the radiation flux at each TIN element. Remote shading is computed by scanning the projection of groups of TIN elements in the direction of the closest perpendicular plane to the sun vector. Sky view fractions are computed by a simplified scanning algorithm in prescribed directions and are useful for determining diffuse radiation. Finally, remote radiation scattering is computed from the sky view factor complementary functions for prescribed albedo values of the surrounding terrain, only for significant angles above the horizon. This methodology represents an improvement on current algorithms for computing terrain and radiation parameters on TINs in an efficient manner. All terrain features (e.g. slope, aspect, sky view factors and remote sheltering) can be pre-computed and stored for easy access in a subsequent ground surface or hydrologic simulation.
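The vector-based geometry described above reduces to a few lines per triangle: a unit normal from the vertices gives slope and aspect, a unit sun vector from solar azimuth and elevation gives the incidence geometry, and their dot product scales the direct-beam flux. The sketch below shows exactly that, with shading, sky-view and scattering terms omitted; the coordinate conventions and the flux value are assumptions.

```python
# Per-triangle slope, aspect and direct-beam flux on a TIN facet (ENU axes:
# x = east, y = north, z = up). Shading and diffuse terms are omitted.
import numpy as np

def triangle_normal(v0, v1, v2):
    n = np.cross(v1 - v0, v2 - v0)
    n = n / np.linalg.norm(n)
    return n if n[2] >= 0 else -n                 # keep the upward-facing normal

def slope_aspect(n):
    slope = np.degrees(np.arccos(n[2]))                        # 0 deg = flat
    aspect = np.degrees(np.arctan2(n[0], n[1])) % 360.0        # 0 = north, clockwise
    return slope, aspect

def sun_vector(azimuth_deg, elevation_deg):
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    return np.array([np.sin(az) * np.cos(el), np.cos(az) * np.cos(el), np.sin(el)])

def direct_flux(n, s, S0=1000.0):
    """W/m^2 on the tilted facet; zero if the sun is behind the facet."""
    return S0 * max(0.0, float(n @ s))

if __name__ == "__main__":
    v = [np.array([0.0, 0.0, 100.0]), np.array([30.0, 0.0, 110.0]), np.array([0.0, 30.0, 105.0])]
    n = triangle_normal(*v)
    print(slope_aspect(n), direct_flux(n, sun_vector(azimuth_deg=180.0, elevation_deg=45.0)))
```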
Student Teachers' Computer Use during Practicum.
ERIC Educational Resources Information Center
Wang, Yu-mei; Holthaus, Patricia
This study was designed to investigate the use of computers by student teachers in their practicums. Student teachers (n=120) in two public universities in the United States answered a questionnaire that covered: the manner and frequency of computer use, student teachers' perception of their training, their attitudes toward the role of the…
2017-04-01
The reporting of research in a manner that allows reproduction in subsequent investigations is important for scientific progress. Several details of the recent study by Patrizi et al., 'Comparison between low-cost marker-less and high-end marker-based motion capture systems for the computer-aided assessment of working ergonomics', are absent from the published manuscript and make reproduction of findings impossible. As new and complex technologies with great promise for ergonomics develop, new but surmountable challenges for reporting investigations using these technologies in a reproducible manner arise. Practitioner Summary: As with traditional methods, scientific reporting of new and complex ergonomics technologies should be performed in a manner that allows reproduction in subsequent investigations and supports scientific advancement.
NASA Technical Reports Server (NTRS)
Bertelrud, Arild; Johnson, Sherylene; Anders, J. B. (Technical Monitor)
2002-01-01
A 2-D (two dimensional) high-lift system experiment was conducted in August of 1996 in the Low Turbulence Pressure Tunnel at NASA Langley Research Center, Hampton, VA. The purpose of the experiment was to obtain transition measurements on a three element high-lift system for CFD (computational fluid dynamics) code validation studies. A transition database has been created using the data from this experiment. The present report details how the hot-film data and the related pressure data are organized in the database. Data processing codes to access the data in an efficient and reliable manner are described and limited examples are given on how to access the database and store acquired information.
Combustion Model of Supersonic Rocket Exhausts in an Entrained Flow Enclosure
NASA Technical Reports Server (NTRS)
Vu, Bruce T.; Oliveira, Justin
2011-01-01
This paper describes the Computational Fluid Dynamics (CFD) model developed to simulate the supersonic rocket exhaust in an entrained flow cylinder. The model can be used to study the plume-induced environment due to static firing tests of the Taurus-II launch vehicle. The finite-rate chemistry is used to model the combustion process involving rocket propellant (RP-1) and liquid oxidizer (LOX). A similar chemical reacting model is also used to simulate the mixing of rocket plume and ambient air. The model provides detailed information on the gas concentration and other flow parameters within the enclosed region, thus allowing different operating scenarios to be examined in an efficient manner. It is shown that the real gas influence is significant and yields better agreement with the theory.
Shape-driven 3D segmentation using spherical wavelets.
Nain, Delphine; Haker, Steven; Bobick, Aaron; Tannenbaum, Allen
2006-01-01
This paper presents a novel active surface segmentation algorithm using a multiscale shape representation and prior. We define a parametric model of a surface using spherical wavelet functions and learn a prior probability distribution over the wavelet coefficients to model shape variations at different scales and spatial locations in a training set. Based on this representation, we derive a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure to naturally include the prior in the segmentation framework. Additionally, the optimization method can be applied in a coarse-to-fine manner. We apply our algorithm to the segmentation of brain caudate nucleus, of interest in the study of schizophrenia. Our validation shows our algorithm is computationally efficient and outperforms the Active Shape Model algorithm by capturing finer shape details.
Computers in medicine. Virtual rehabilitation: dream or reality?
Berra, Kathy
2006-08-01
Coronary heart disease is the number one cause of death for men and women in the United States and internationally. Identification of persons at risk for cardiovascular disease and reduction of cardiovascular risk factors are key factors in managing this tremendous societal burden. The Internet holds great promise in helping to identify and manage persons at high risk of a cardiac or vascular event. It has the capability to assess risk and provide support for cardiovascular risk reduction for large numbers of persons in a cost-effective and time-efficient manner. The purpose of this report is to describe important advances in the use of the Internet in identifying persons at risk for a cardiovascular event and in the Internet's ability to provide interventions designed to reduce this risk.
NASA Astrophysics Data System (ADS)
Thakur, Jay Krishna; Singh, Sudhir Kumar; Ekanthalu, Vicky Shettigondahalli
2017-07-01
Integration of remote sensing (RS), geographic information systems (GIS) and the global positioning system (GPS) is an emerging research area in groundwater hydrology, resource management, environmental monitoring and emergency response. Recent advancements in RS, GIS, GPS and high-level computation help in providing and handling a range of data simultaneously in a time- and cost-efficient manner. This review paper deals with hydrological modeling, the uses of remote sensing and GIS in hydrological modeling, models of integration and the need for them, and finally the conclusions. After dealing with these issues conceptually and technically, we can develop better methods and novel approaches to handle large data sets and to better communicate information about rapidly decreasing societal resources, i.e. groundwater.
Online graphic symbol recognition using neural network and ARG matching
NASA Astrophysics Data System (ADS)
Yang, Bing; Li, Changhua; Xie, Weixing
2001-09-01
This paper proposes a novel method for online recognition of line-based graphic symbols. The input strokes are usually warped into a cursive form because of varied drawing styles, which makes them difficult to classify. To deal with this, an ART-2 neural network is used to classify the input strokes. It offers a high recognition rate, short recognition time and the ability to form classes in a self-organized manner. The symbol recognition is achieved by an Attribute Relational Graph (ARG) matching algorithm. The ARG is very efficient for representing complex objects, but its computation cost is very high. To overcome this, we suggest a fast graph matching algorithm using symbol structure information. The experimental results show that the proposed method is effective for recognition of symbols with a hierarchical structure.
Harris, Courtenay; Straker, Leon; Pollock, Clare; Smith, Anne
2015-01-01
Children's computer use is rapidly growing, together with reports of related musculoskeletal outcomes. Models and theories of adult-related risk factors demonstrate multivariate risk factors associated with computer use. Children's use of computers is different from adults' computer use at work. This study developed and tested a child-specific model demonstrating multivariate relationships between musculoskeletal outcomes, computer exposure and child factors. Using pathway modelling, factors such as gender, age, television exposure, computer anxiety, sustained attention (flow), socio-economic status and somatic complaints (headache and stomach pain) were found to have effects on children's reports of musculoskeletal symptoms. The potential for children's computer exposure to follow a dose-response relationship was also evident. Developing a child-related model can assist in understanding risk factors for children's computer use and support the development of recommendations to encourage children to use this valuable resource in educational, recreational and communication environments in a safe and productive manner. Computer use is an important part of children's school and home life. Application of this developed model, which encapsulates related risk factors, enables practitioners, researchers, teachers and parents to develop strategies that assist young people to use information technology for school, home and leisure in a safe and productive manner.
Energy-efficient neural information processing in individual neurons and neuronal networks.
Yu, Lianchun; Yu, Yuguo
2017-11-01
Brains are composed of networks of an enormous number of neurons interconnected with synapses. Neural information is carried by the electrical signals within neurons and the chemical signals among neurons. Generating these electrical and chemical signals is metabolically expensive. The fundamental issue raised here is whether brains have evolved efficient ways of developing an energy-efficient neural code from the molecular level to the circuit level. Here, we summarize the factors and biophysical mechanisms that could contribute to an energy-efficient neural code for processing input signals. These factors include ion channel kinetics, body temperature, axonal propagation of action potentials, low-probability release of synaptic neurotransmitters, optimal input and noise, the size of neurons and neuronal clusters, excitation/inhibition balance, coding strategy, cortical wiring, and the organization of functional connectivity. Both experimental and computational evidence suggests that neural systems may use these factors to maximize the efficiency of energy consumption in processing neural signals. Studies indicate that efficient energy utilization may be universal in neuronal systems as an evolutionary consequence of the pressure of limited energy. As a result, neuronal connections may be wired in a highly economical manner to lower energy costs and save space. Individual neurons within a network may encode independent stimulus components to allow a minimal number of neurons to represent whole stimulus characteristics efficiently. This basic principle may fundamentally change our view of how billions of neurons organize themselves into complex circuits to operate and generate the most powerful intelligent cognition in nature. © 2017 Wiley Periodicals, Inc.
ERIC Educational Resources Information Center
Weller, Herman G.; Hartson, H. Rex
1992-01-01
Describes human-computer interface needs for empowering environments in computer usage in which the machine handles the routine mechanics of problem solving while the user concentrates on its higher order meanings. A closed-loop model of interaction is described, interface as illusion is discussed, and metaphors for human-computer interaction are…
NASA Astrophysics Data System (ADS)
Gizon, Laurent; Barucq, Hélène; Duruflé, Marc; Hanson, Chris S.; Leguèbe, Michael; Birch, Aaron C.; Chabassier, Juliette; Fournier, Damien; Hohage, Thorsten; Papini, Emanuele
2017-04-01
Context. Local helioseismology has so far relied on semi-analytical methods to compute the spatial sensitivity of wave travel times to perturbations in the solar interior. These methods are cumbersome and lack flexibility. Aims: Here we propose a convenient framework for numerically solving the forward problem of time-distance helioseismology in the frequency domain. The fundamental quantity to be computed is the cross-covariance of the seismic wavefield. Methods: We choose sources of wave excitation that enable us to relate the cross-covariance of the oscillations to the Green's function in a straightforward manner. We illustrate the method by considering the 3D acoustic wave equation in an axisymmetric reference solar model, ignoring the effects of gravity on the waves. The symmetry of the background model around the rotation axis implies that the Green's function can be written as a sum of longitudinal Fourier modes, leading to a set of independent 2D problems. We use a high-order finite-element method to solve the 2D wave equation in frequency space. The computation is embarrassingly parallel, with each frequency and each azimuthal order solved independently on a computer cluster. Results: We compute travel-time sensitivity kernels in spherical geometry for flows, sound speed, and density perturbations under the first Born approximation. Convergence tests show that travel times can be computed with a numerical precision better than one millisecond, as required by the most precise travel-time measurements. Conclusions: The method presented here is computationally efficient and will be used to interpret travel-time measurements in order to infer, e.g., the large-scale meridional flow in the solar convection zone. It allows the implementation of (full-waveform) iterative inversions, whereby the axisymmetric background model is updated at each iteration.
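The embarrassingly parallel structure described above, with each frequency and azimuthal order solved independently, can be illustrated with a minimal Python sketch. The toy tridiagonal 1D Helmholtz solve below merely stands in for the 2D finite-element problem; all names, grid sizes and coefficients are illustrative assumptions, not the authors' code:

# Minimal sketch: independent solves per (frequency, azimuthal order) pair,
# distributed over worker processes. The tridiagonal 1D Helmholtz solve is a
# stand-in for the 2D finite-element problem described in the abstract.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

N = 200          # number of radial grid points (illustrative)
dr = 1.0 / N     # grid spacing on a unit interval

def solve_one(task):
    """Solve a toy Helmholtz problem (d2/dr2 + k^2) G = source for one (freq, m)."""
    freq, m = task
    k = 2.0 * np.pi * freq
    main = -2.0 / dr**2 + k**2 - m * (m + 1)    # crude centrifugal-like term
    A = (np.diag(np.full(N, main)) +
         np.diag(np.full(N - 1, 1.0 / dr**2), 1) +
         np.diag(np.full(N - 1, 1.0 / dr**2), -1))
    rhs = np.zeros(N)
    rhs[N // 2] = 1.0 / dr                      # point source
    return (freq, m), np.linalg.solve(A, rhs)

if __name__ == "__main__":
    tasks = [(f, m) for f in np.linspace(2.0, 5.0, 16) for m in range(8)]
    with ProcessPoolExecutor() as pool:
        results = dict(pool.map(solve_one, tasks))
    print(len(results), "independent (frequency, azimuthal order) solves done")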
Computers and Play in Early Childhood: Affordances and Limitations
ERIC Educational Resources Information Center
Verenikina, Irina; Herrington, Jan; Peterson, Rob; Mantei, Jessica
2010-01-01
The widespread proliferation of computer games for children as young as six months of age, merits a reexamination of their manner of use and a review of their facility to provide opportunities for developmental play. This article describes a research study conducted to explore the use of computer games by young children, specifically to…
An Empirical Measure of Computer Security Strength for Vulnerability Remediation
ERIC Educational Resources Information Center
Villegas, Rafael
2010-01-01
Remediating all vulnerabilities on computer systems in a timely and cost effective manner is difficult given that the window of time between the announcement of a new vulnerability and an automated attack has decreased. Hence, organizations need to prioritize the vulnerability remediation process on their computer systems. The goal of this…
Numerical study of shock-wave/boundary layer interactions in premixed hydrogen-air hypersonic flows
NASA Technical Reports Server (NTRS)
Yungster, Shaye
1991-01-01
A computational study of shock-wave/boundary-layer interactions involving premixed combustible gases and the resulting combustion processes is presented. The analysis is carried out using a new fully implicit, total variation diminishing (TVD) code developed for solving the fully coupled Reynolds-averaged Navier-Stokes equations and species continuity equations in an efficient manner. To accelerate the convergence of the basic iterative procedure, this code is combined with vector extrapolation methods. The chemical nonequilibrium processes are simulated by means of a finite-rate chemistry model for hydrogen-air combustion. Several validation test cases are presented and the results compared with experimental data or with other computational results. The code is then applied to study shock-wave/boundary-layer interactions in a ram accelerator configuration. Results indicate a new combustion mechanism in which a shock wave induces combustion in the boundary layer, which then propagates outwards and downstream. At higher Mach numbers, spontaneous ignition in part of the boundary layer is observed, which eventually extends along the entire boundary layer at still higher values of the Mach number.
Discussion summary: Fictitious domain methods
NASA Technical Reports Server (NTRS)
Glowinski, Rowland; Rodrigue, Garry
1991-01-01
Fictitious Domain methods are constructed in the following manner: Suppose a partial differential equation is to be solved on an open bounded set, Omega, in 2-D or 3-D. Let R be a rectangular domain containing the closure of Omega. The partial differential equation is first solved on R. Using the solution on R, the solution of the equation on Omega is then recovered by some procedure. The advantage of the fictitious domain method is that in many cases the solution of a partial differential equation on a rectangular region is easier to compute than on a nonrectangular region. Fictitious domain methods for solving elliptic PDEs on general regions are also very efficient when used on a parallel computer. The reason is that one can use the many domain decomposition methods that are available for solving the PDE on the fictitious rectangular region. The discussion on fictitious domain methods began with a talk by R. Glowinski in which he gave some examples of a variational approach to fictitious domain methods for solving the Helmholtz and Navier-Stokes equations.
Reconstruction of Tissue-Specific Metabolic Networks Using CORDA
Schultz, André; Qutub, Amina A.
2016-01-01
Human metabolism involves thousands of reactions and metabolites. To interpret this complexity, computational modeling becomes an essential experimental tool. One of the most popular techniques to study human metabolism as a whole is genome scale modeling. A key challenge to applying genome scale modeling is identifying critical metabolic reactions across diverse human tissues. Here we introduce a novel algorithm called Cost Optimization Reaction Dependency Assessment (CORDA) to build genome scale models in a tissue-specific manner. CORDA performs more efficiently computationally, shows better agreement with experimental data, and displays better model functionality and capacity when compared to previous algorithms. CORDA also returns reaction associations that can greatly assist in any manual curation to be performed following the automated reconstruction process. Using CORDA, we developed a library of 76 healthy and 20 cancer tissue-specific reconstructions. These reconstructions identified which metabolic pathways are shared across diverse human tissues. Moreover, we identified changes in reactions and pathways that are differentially included and present different capacity profiles in cancer compared to healthy tissues, including the up-regulation of folate metabolism, the down-regulation of thiamine metabolism, and the tight regulation of oxidative phosphorylation. PMID:26942765
Crystallization mosaic effect generation by superpixels
NASA Astrophysics Data System (ADS)
Xie, Yuqi; Bo, Pengbo; Yuan, Ye; Wang, Kuanquan
2015-03-01
Art effect generation from digital images using computational tools has been a hot research topic in recent years. We propose a new method for generating crystallization mosaic effects from color images. Two key problems in generating a pleasant mosaic effect are studied: grouping pixels into mosaic tiles and arranging mosaic tiles to adapt to image features. To give a visually pleasant mosaic effect, we propose to create mosaic tiles by clustering pixels in a feature space of color information, taking the compactness of tiles into consideration as well. Moreover, we propose a method for processing feature boundaries in images, which gives guidance for arranging mosaic tiles near image features. This method gives nearly uniformly shaped mosaic tiles that adapt to feature lines in an aesthetic way. The new approach considers both the color distance and the Euclidean distance of pixels, and thus is capable of producing mosaic tiles in a more pleasing manner. Some experiments are included to demonstrate the computational efficiency of the present method and its capability of generating visually pleasant mosaic tiles. Comparisons with existing approaches are also included to show the superiority of the new method.
Closed-loop dialog model of face-to-face communication with a photo-real virtual human
NASA Astrophysics Data System (ADS)
Kiss, Bernadette; Benedek, Balázs; Szijárto, Gábor; Takács, Barnabás
2004-01-01
We describe an advanced Human Computer Interaction (HCI) model that employs photo-realistic virtual humans to provide digital media users with information, learning services and entertainment in a highly personalized and adaptive manner. The system can be used as a computer interface or as a tool to deliver content to end-users. We model the interaction process between the user and the system as part of a closed-loop dialog taking place between the participants. This dialog exploits the most important characteristics of a face-to-face communication process, including the use of non-verbal gestures and meta-communication signals to control the flow of information. Our solution is based on a Virtual Human Interface (VHI) technology that was specifically designed to create emotional engagement between the virtual agent and the user, thus increasing the efficiency of learning and/or absorbing any information broadcast through this device. The paper reviews the basic building blocks and technologies needed to create such a system and discusses its advantages over other existing methods.
Low-discrepancy sampling of parametric surface using adaptive space-filling curves (SFC)
NASA Astrophysics Data System (ADS)
Hsu, Charles; Szu, Harold
2014-05-01
Space-Filling Curves (SFCs) are encountered in different fields of engineering and computer science, especially where it is important to linearize multidimensional data for effective and robust interpretation of the information. Examples of multidimensional data are matrices, images, tables, computational grids, and Electroencephalography (EEG) sensor data resulting from the discretization of partial differential equations (PDEs). Data operations like matrix multiplications, load/store operations and the updating and partitioning of data sets can be simplified when we choose an efficient way of traversing the data. In many applications SFCs provide just this optimal manner of mapping multidimensional data onto a one-dimensional sequence. In this report, we begin with an example of a space-filling curve and demonstrate how it can be used, together with the Fast Fourier Transform (FFT), to find the greatest similarity through a set of points. Next we give a general introduction to space-filling curves and discuss their properties. Finally, we consider a discrete version of space-filling curves and present experimental results on discrete space-filling curves optimized for special tasks.
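As a concrete illustration of linearizing multidimensional data with a space-filling ordering, the short sketch below interleaves the bits of 2D indices into a Morton (Z-order) code; this is one simple SFC-style ordering and not necessarily the adaptive curves studied in the paper:

# Minimal sketch of linearizing 2D indices with a Morton (Z-order) curve, one
# simple space-filling ordering; the paper's adaptive SFCs are more elaborate.
def morton_encode(x, y, bits=16):
    """Interleave the bits of x and y into a single Z-order index."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)        # even bit positions from x
        code |= ((y >> i) & 1) << (2 * i + 1)    # odd bit positions from y
    return code

# Order the cells of an 8x8 grid along the Z-order curve.
cells = [(x, y) for x in range(8) for y in range(8)]
z_ordered = sorted(cells, key=lambda c: morton_encode(*c))
print(z_ordered[:8])   # neighbouring cells tend to stay close in the 1D order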
Global magnetohydrodynamic simulations on multiple GPUs
NASA Astrophysics Data System (ADS)
Wong, Un-Hong; Wong, Hon-Cheng; Ma, Yonghui
2014-01-01
Global magnetohydrodynamic (MHD) models play the major role in investigating the solar wind-magnetosphere interaction. However, the huge computation requirement in global MHD simulations is also the main problem that needs to be solved. With the recent development of modern graphics processing units (GPUs) and the Compute Unified Device Architecture (CUDA), it is possible to perform global MHD simulations in a more efficient manner. In this paper, we present a global magnetohydrodynamic (MHD) simulator on multiple GPUs using CUDA 4.0 with GPUDirect 2.0. Our implementation is based on the modified leapfrog scheme, which is a combination of the leapfrog scheme and the two-step Lax-Wendroff scheme. GPUDirect 2.0 is used in our implementation to drive multiple GPUs. All data transferring and kernel processing are managed with CUDA 4.0 API instead of using MPI or OpenMP. Performance measurements are made on a multi-GPU system with eight NVIDIA Tesla M2050 (Fermi architecture) graphics cards. These measurements show that our multi-GPU implementation achieves a peak performance of 97.36 GFLOPS in double precision.
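The modified leapfrog scheme mentioned above combines leapfrog with the two-step Lax-Wendroff scheme; the latter ingredient is sketched below for 1D linear advection as a plain CPU/NumPy illustration, which is an assumption-laden stand-in and not the authors' multi-GPU MHD code:

# Illustrative two-step Lax-Wendroff update for 1D linear advection, one
# ingredient of the modified leapfrog scheme named in the abstract.
import numpy as np

nx, c, dx = 400, 1.0, 1.0 / 400
dt = 0.5 * dx / c                               # CFL number 0.5
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.exp(-200.0 * (x - 0.3) ** 2)             # initial Gaussian pulse

for _ in range(200):
    up = np.roll(u, -1)                          # u[i+1] with periodic wrap
    um = np.roll(u, 1)                           # u[i-1]
    # Step 1: half-step values at the cell faces i+1/2 and i-1/2.
    u_half_p = 0.5 * (up + u) - 0.5 * c * dt / dx * (up - u)
    u_half_m = 0.5 * (u + um) - 0.5 * c * dt / dx * (u - um)
    # Step 2: full-step update from the face values.
    u = u - c * dt / dx * (u_half_p - u_half_m)

print("pulse peak after advection:", float(u.max()))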
NASA Technical Reports Server (NTRS)
Arnold, Steven M.; Gendy, Atef; Saleeb, Atef F.; Mark, John; Wilt, Thomas E.
2007-01-01
Two reports discuss, respectively, (1) the generalized viscoplasticity with potential structure (GVIPS) class of mathematical models and (2) the Constitutive Material Parameter Estimator (COMPARE) computer program. GVIPS models are constructed within a thermodynamics- and potential-based theoretical framework, wherein one uses internal state variables and derives constitutive equations for both the reversible (elastic) and the irreversible (viscoplastic) behaviors of materials. Because of the underlying potential structure, GVIPS models not only capture a variety of material behaviors but also are very computationally efficient. COMPARE comprises (1) an analysis core and (2) a C++-language subprogram that implements a Windows-based graphical user interface (GUI) for controlling the core. The GUI relieves the user of the sometimes tedious task of preparing data for the analysis core, freeing the user to concentrate on the task of fitting experimental data and ultimately obtaining a set of material parameters. The analysis core consists of three modules: one for GVIPS material models, an analysis module containing a specialized finite-element solution algorithm, and an optimization module. COMPARE solves the problem of finding GVIPS material parameters in the manner of a design-optimization problem in which the parameters are the design variables.
Optimization and Control of Cyber-Physical Vehicle Systems
Bradley, Justin M.; Atkins, Ella M.
2015-01-01
A cyber-physical system (CPS) is composed of tightly-integrated computation, communication and physical elements. Medical devices, buildings, mobile devices, robots, transportation and energy systems can benefit from CPS co-design and optimization techniques. Cyber-physical vehicle systems (CPVSs) are rapidly advancing due to progress in real-time computing, control and artificial intelligence. Multidisciplinary or multi-objective design optimization maximizes CPS efficiency, capability and safety, while online regulation enables the vehicle to be responsive to disturbances, modeling errors and uncertainties. CPVS optimization occurs at design-time and at run-time. This paper surveys the run-time cooperative optimization or co-optimization of cyber and physical systems, which have historically been considered separately. A run-time CPVS is also cooperatively regulated or co-regulated when cyber and physical resources are utilized in a manner that is responsive to both cyber and physical system requirements. This paper surveys research that considers both cyber and physical resources in co-optimization and co-regulation schemes with applications to mobile robotic and vehicle systems. Time-varying sampling patterns, sensor scheduling, anytime control, feedback scheduling, task and motion planning and resource sharing are examined. PMID:26378541
Subspace Clustering via Learning an Adaptive Low-Rank Graph.
Yin, Ming; Xie, Shengli; Wu, Zongze; Zhang, Yun; Gao, Junbin
2018-08-01
By using a sparse representation or low-rank representation of data, graph-based subspace clustering has recently attracted considerable attention in computer vision, given its capability and efficiency in clustering data. However, the graph weights built from the representation coefficients are not exact in the sense of the traditional, deterministic definition. The two steps of representation and clustering are conducted in an independent manner, so an overall optimal result cannot be guaranteed. Furthermore, it is unclear how the clustering performance will be affected by using this graph. For example, the graph parameters, i.e., the weights on edges, have to be artificially pre-specified, and it is very difficult to choose the optimum. To this end, in this paper, a novel subspace clustering method that learns an adaptive low-rank graph affinity matrix is proposed, where the affinity matrix and the representation coefficients are learned in a unified framework. As such, the pre-computed graph regularizer is effectively obviated and better performance can be achieved. Experimental results on several well-known databases demonstrate that the proposed method outperforms state-of-the-art approaches in clustering.
FIT: statistical modeling tool for transcriptome dynamics under fluctuating field conditions
Iwayama, Koji; Aisaka, Yuri; Kutsuna, Natsumaro
2017-01-01
Motivation: Considerable attention has been given to the quantification of environmental effects on organisms. In natural conditions, environmental factors are continuously changing in a complex manner. To reveal the effects of such environmental variations on organisms, transcriptome data in field environments have been collected and analyzed. Nagano et al. proposed a model that describes the relationship between transcriptomic variation and environmental conditions and demonstrated the capability to predict transcriptome variation in rice plants. However, the computational cost of parameter optimization has prevented its wide application. Results: We propose a new statistical model and efficient parameter optimization based on the previous study. We developed and released FIT, an R package that offers functions for parameter optimization and transcriptome prediction. The proposed method achieves comparable or better prediction performance within a shorter computational time than the previous method. The package will facilitate the study of the environmental effects on transcriptomic variation in field conditions. Availability and Implementation: Freely available from CRAN (https://cran.r-project.org/web/packages/FIT/). Contact: anagano@agr.ryukoku.ac.jp Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28158396
Numerical study of shock-wave/boundary layer interactions in premixed hydrogen-air hypersonic flows
NASA Technical Reports Server (NTRS)
Yungster, Shaye
1990-01-01
A computational study of shock-wave/boundary-layer interactions involving premixed combustible gases and the resulting combustion processes is presented. The analysis is carried out using a new fully implicit, total variation diminishing (TVD) code developed for solving the fully coupled Reynolds-averaged Navier-Stokes equations and species continuity equations in an efficient manner. To accelerate the convergence of the basic iterative procedure, this code is combined with vector extrapolation methods. The chemical nonequilibrium processes are simulated by means of a finite-rate chemistry model for hydrogen-air combustion. Several validation test cases are presented and the results compared with experimental data or with other computational results. The code is then applied to study shock-wave/boundary-layer interactions in a ram accelerator configuration. Results indicate a new combustion mechanism in which a shock wave induces combustion in the boundary layer, which then propagates outwards and downstream. At higher Mach numbers, spontaneous ignition in part of the boundary layer is observed, which eventually extends along the entire boundary layer at still higher values of the Mach number.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Kyri; Toomey, Bridget
Evolving power systems with increasing levels of stochasticity call for a need to solve optimal power flow problems with large quantities of random variables. Weather forecasts, electricity prices, and shifting load patterns introduce higher levels of uncertainty and can yield optimization problems that are difficult to solve in an efficient manner. Solution methods for single chance constraints in optimal power flow problems have been considered in the literature, ensuring single constraints are satisfied with a prescribed probability; however, joint chance constraints, ensuring multiple constraints are simultaneously satisfied, have predominantly been solved via scenario-based approaches or by utilizing Boole's inequality as an upper bound. In this paper, joint chance constraints are used to solve an AC optimal power flow problem while preventing overvoltages in distribution grids under high penetrations of photovoltaic systems. A tighter version of Boole's inequality is derived and used to provide a new upper bound on the joint chance constraint, and simulation results are shown demonstrating the benefit of the proposed upper bound. The new framework allows for a less conservative and more computationally efficient solution to considering joint chance constraints, specifically regarding preventing overvoltages.
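For reference, the Boole's-inequality bound mentioned above can be written out in its standard textbook form (the paper derives a tighter variant, which is not reproduced here):

\Pr\Big(\bigcap_{i=1}^{m}\{g_i(x,\xi)\le 0\}\Big) \;\ge\; 1-\sum_{i=1}^{m}\Pr\big(g_i(x,\xi)>0\big),

so the joint chance constraint \Pr\big(\bigcap_i\{g_i\le 0\}\big)\ge 1-\varepsilon is guaranteed whenever each single chance constraint is enforced as \Pr(g_i(x,\xi)>0)\le\varepsilon_i with \sum_i\varepsilon_i\le\varepsilon.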
Generalized Centroid Estimators in Bioinformatics
Hamada, Michiaki; Kiryu, Hisanori; Iwasaki, Wataru; Asai, Kiyoshi
2011-01-01
In a number of estimation problems in bioinformatics, accuracy measures of the target problem are usually given, and it is important to design estimators that are suitable for those accuracy measures. However, there is often a discrepancy between an employed estimator and a given accuracy measure of the problem. In this study, we introduce a general class of efficient estimators for estimation problems on high-dimensional binary spaces, which represent many fundamental problems in bioinformatics. Theoretical analysis reveals that the proposed estimators generally fit with commonly-used accuracy measures (e.g. sensitivity, PPV, MCC and F-score) and can be computed efficiently in many cases, and they cover a wide range of problems in bioinformatics from the viewpoint of the principle of maximum expected accuracy (MEA). It is also shown that some important algorithms in bioinformatics can be interpreted in a unified manner. Not only does the concept presented in this paper give a useful framework for designing MEA-based estimators, but it is also highly extendable and sheds new light on many problems in bioinformatics. PMID:21365017
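As a generic illustration of the maximum expected accuracy principle referred to above (a textbook-style statement rather than the paper's specific estimator class): given a posterior p(\theta \mid D) over a high-dimensional binary space and a gain function G chosen to match the accuracy measure of interest, an MEA estimator selects

\hat{y} \;=\; \operatorname*{arg\,max}_{y\in\{0,1\}^n} \mathbb{E}_{p(\theta\mid D)}\big[\,G(\theta,y)\,\big] \;=\; \operatorname*{arg\,max}_{y\in\{0,1\}^n} \sum_{\theta} G(\theta,y)\,p(\theta\mid D).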
Ikebe, Jinzen; Umezawa, Koji; Higo, Junichi
2016-03-01
Molecular dynamics (MD) simulations using all-atom and explicit solvent models provide valuable information on the detailed behavior of protein-partner substrate binding at the atomic level. As the power of computational resources increase, MD simulations are being used more widely and easily. However, it is still difficult to investigate the thermodynamic properties of protein-partner substrate binding and protein folding with conventional MD simulations. Enhanced sampling methods have been developed to sample conformations that reflect equilibrium conditions in a more efficient manner than conventional MD simulations, thereby allowing the construction of accurate free-energy landscapes. In this review, we discuss these enhanced sampling methods using a series of case-by-case examples. In particular, we review enhanced sampling methods conforming to trivial trajectory parallelization, virtual-system coupled multicanonical MD, and adaptive lambda square dynamics. These methods have been recently developed based on the existing method of multicanonical MD simulation. Their applications are reviewed with an emphasis on describing their practical implementation. In our concluding remarks we explore extensions of the enhanced sampling methods that may allow for even more efficient sampling.
New schemes for internally contracted multi-reference configuration interaction
NASA Astrophysics Data System (ADS)
Wang, Yubin; Han, Huixian; Lei, Yibo; Suo, Bingbing; Zhu, Haiyan; Song, Qi; Wen, Zhenyi
2014-10-01
In this work we present a new internally contracted multi-reference configuration interaction (MRCI) scheme by applying the graphical unitary group approach and the hole-particle symmetry. The latter allows a Distinct Row Table (DRT) to split into a number of sub-DRTs in the active space. In the new scheme a contraction is defined as a linear combination of arcs within a sub-DRT, and connected to the head and tail of the DRT through up-steps and down-steps to generate internally contracted configuration functions. The new scheme deals with the closed-shell (hole) orbitals and external orbitals in the same manner and thus greatly simplifies calculations of coupling coefficients and CI matrix elements. As a result, the number of internal orbitals is no longer a bottleneck of MRCI calculations. The validity and efficiency of the new ic-MRCI code are tested by comparing with the corresponding WK code of the MOLPRO package. The energies obtained from the two codes are essentially identical, and the computational efficiencies of the two codes have their own advantages.
Conditions Database for the Belle II Experiment
NASA Astrophysics Data System (ADS)
Wood, L.; Elsethagen, T.; Schram, M.; Stephan, E.
2017-10-01
The Belle II experiment at KEK is preparing for first collisions in 2017. Processing the large amounts of data that will be produced will require conditions data to be readily available to systems worldwide in a fast and efficient manner that is straightforward for both the user and maintainer. The Belle II conditions database was designed with a straightforward goal: make it as easily maintainable as possible. To this end, HEP-specific software tools were avoided as much as possible and industry standard tools used instead. HTTP REST services were selected as the application interface, which provide a high-level interface to users through the use of standard libraries such as curl. The application interface itself is written in Java and runs in an embedded Payara-Micro Java EE application server. Scalability at the application interface is provided by use of Hazelcast, an open source In-Memory Data Grid (IMDG) providing distributed in-memory computing and supporting the creation and clustering of new application interface instances as demand increases. The IMDG provides fast and efficient access to conditions data via in-memory caching.
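Because the service exposes plain HTTP REST endpoints, conditions data can in principle be fetched with nothing more than a standard HTTP library; the sketch below is hypothetical, with the URL, path and JSON field names invented for illustration rather than taken from the actual Belle II interface:

# Hypothetical sketch of reading a conditions payload over a plain HTTP REST
# interface; the URL, path and JSON fields are illustrative assumptions, not
# the actual Belle II service layout.
import json
import urllib.request

BASE = "https://conditions.example.org/rest"         # hypothetical endpoint
url = f"{BASE}/globalTags/main/payloads?experiment=7&run=123"

with urllib.request.urlopen(url, timeout=10) as resp:
    payloads = json.load(resp)

for p in payloads:
    print(p["name"], p["revision"], p["checksum"])    # assumed field names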
Efficient Posterior Probability Mapping Using Savage-Dickey Ratios
Penny, William D.; Ridgway, Gerard R.
2013-01-01
Statistical Parametric Mapping (SPM) is the dominant paradigm for mass-univariate analysis of neuroimaging data. More recently, a Bayesian approach termed Posterior Probability Mapping (PPM) has been proposed as an alternative. PPM offers two advantages: (i) inferences can be made about effect size thus lending a precise physiological meaning to activated regions, (ii) regions can be declared inactive. This latter facility is most parsimoniously provided by PPMs based on Bayesian model comparisons. To date these comparisons have been implemented by an Independent Model Optimization (IMO) procedure which separately fits null and alternative models. This paper proposes a more computationally efficient procedure based on Savage-Dickey approximations to the Bayes factor, and Taylor-series approximations to the voxel-wise posterior covariance matrices. Simulations show the accuracy of this Savage-Dickey-Taylor (SDT) method to be comparable to that of IMO. Results on fMRI data show excellent agreement between SDT and IMO for second-level models, and reasonable agreement for first-level models. This Savage-Dickey test is a Bayesian analogue of the classical SPM-F and allows users to implement model comparison in a truly interactive manner. PMID:23533640
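The Savage-Dickey ratio underlying the method gives the Bayes factor in favour of a point null theta = 0 as the posterior density at zero divided by the prior density at zero. A minimal numerical sketch with Gaussian prior and posterior follows; the numbers are illustrative and this is not the SPM/PPM implementation:

# Savage-Dickey density ratio for a point null theta = 0 with Gaussian
# prior and (approximate) Gaussian posterior; numbers are illustrative.
from scipy.stats import norm

prior_mean, prior_sd = 0.0, 1.0          # p(theta)        under the full model
post_mean,  post_sd  = 0.8, 0.3          # p(theta | data), e.g. from a Laplace fit

bf_null_vs_alt = norm.pdf(0.0, post_mean, post_sd) / norm.pdf(0.0, prior_mean, prior_sd)
print("BF(null/alternative) =", bf_null_vs_alt)
print("BF(alternative/null) =", 1.0 / bf_null_vs_alt)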
Experience with a Genetic Algorithm Implemented on a Multiprocessor Computer
NASA Technical Reports Server (NTRS)
Plassman, Gerald E.; Sobieszczanski-Sobieski, Jaroslaw
2000-01-01
Numerical experiments were conducted to find out the extent to which a Genetic Algorithm (GA) may benefit from a multiprocessor implementation, considering, on one hand, that analyses of individual designs in a population are independent of each other so that they may be executed concurrently on separate processors, and, on the other hand, that there are some operations in a GA that cannot be so distributed. The algorithm experimented with was based on a Gaussian distribution rather than bit exchange in the GA reproductive mechanism, and the test case was a hub frame structure of up to 1080 design variables. The experimentation, engaging up to 128 processors, confirmed expectations of radical elapsed-time reductions compared to a conventional single-processor implementation. It also demonstrated that the time spent in the non-distributable parts of the algorithm and the attendant cross-processor communication may have a very detrimental effect on the efficient utilization of the multiprocessor machine and on the number of processors that can be used effectively in a concurrent manner. Three techniques were devised and tested to mitigate that effect, resulting in efficiency increasing to exceed 99 percent.
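A minimal sketch of the overall pattern, concurrent evaluation of independent design analyses combined with a non-distributable selection and Gaussian-perturbation reproduction step, is given below; the quadratic objective and all parameters are illustrative assumptions, not the hub-frame structural analysis:

# Toy GA with Gaussian-perturbation reproduction and concurrent fitness
# evaluation across processes; the quadratic "analysis" stands in for the
# expensive structural analysis of each design.
import numpy as np
from multiprocessing import Pool

RNG = np.random.default_rng(0)

def analyze(design):
    """Expensive per-design analysis (here just a sphere function)."""
    return float(np.sum(design ** 2))

def evolve(pop_size=64, n_vars=30, sigma=0.1, generations=50):
    pop = RNG.normal(0.0, 1.0, size=(pop_size, n_vars))
    with Pool() as pool:
        for _ in range(generations):
            fitness = np.array(pool.map(analyze, list(pop)))       # concurrent analyses
            parents = pop[np.argsort(fitness)[: pop_size // 2]]    # non-distributable part
            children = parents + RNG.normal(0.0, sigma, size=parents.shape)
            pop = np.vstack([parents, children])
    return pop[np.argmin([analyze(p) for p in pop])]

if __name__ == "__main__":
    best = evolve()
    print("best objective:", analyze(best))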
Flexible Language Constructs for Large Parallel Programs
Rosing, Matt; Schnabel, Robert
1994-01-01
The goal of the research described in this article is to develop flexible language constructs for writing large data parallel numerical programs for distributed memory (multiple instruction multiple data [MIMD]) multiprocessors. Previously, several models have been developed to support synchronization and communication. Models for global synchronization include single instruction multiple data (SIMD), single program multiple data (SPMD), and sequential programs annotated with data distribution statements. The two primary models for communication include implicit communication based on shared memory and explicit communication based on messages. None of these models by themselves seem sufficient to permit the natural and efficient expression of the variety of algorithms that occur in large scientific computations. In this article, we give an overview of a new language that combines many of these programming models in a clean manner. This is done in a modular fashion such that different models can be combined to support large programs. Within a module, the selection of a model depends on the algorithm and its efficiency requirements. In this article, we give an overview of the language and discuss some of the critical implementation details.
Efficient decision-making by volume-conserving physical object
NASA Astrophysics Data System (ADS)
Kim, Song-Ju; Aono, Masashi; Nameda, Etsushi
2015-08-01
Decision-making is one of the most important intellectual abilities of not only humans but also other biological organisms, helping their survival. This ability, however, may not be limited to biological systems and may be exhibited by physical systems. Here we demonstrate that any physical object, as long as its volume is conserved when coupled with suitable operations, provides a sophisticated decision-making capability. We consider the multi-armed bandit problem (MBP), the problem of finding, as accurately and quickly as possible, the most profitable option from a set of options that gives stochastic rewards. Efficient MBP solvers are useful for many practical applications, because the MBP abstracts a variety of decision-making problems in real-world situations in which efficient trial and error is required. These decisions are made as dictated by a physical object, which is moved in a manner similar to the fluctuations of a rigid body in a tug-of-war (TOW) game. This method, called 'TOW dynamics', exhibits higher efficiency than conventional reinforcement learning algorithms. We present analytical calculations that explain the statistical reasons why TOW dynamics produces such high performance despite its simplicity. These results imply that various physical systems in which some conservation law holds can be used to implement an efficient 'decision-making object'. The proposed scheme will provide a new perspective to open up a physics-based analog computing paradigm and to understand the biological information-processing principles that exploit their underlying physics.
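For context, the sketch below implements an epsilon-greedy solver for a Bernoulli multi-armed bandit, i.e. the kind of conventional reinforcement-learning baseline that TOW dynamics is compared against; it is not the TOW method itself, and the reward probabilities are illustrative:

# Epsilon-greedy baseline for a Bernoulli multi-armed bandit: a conventional
# reinforcement-learning solver of the sort the abstract compares against
# (not the TOW dynamics method itself).
import random

def epsilon_greedy(reward_probs, steps=10000, eps=0.1, seed=1):
    rng = random.Random(seed)
    n_arms = len(reward_probs)
    counts = [0] * n_arms
    values = [0.0] * n_arms                      # running mean reward per arm
    total = 0.0
    for _ in range(steps):
        if rng.random() < eps:
            arm = rng.randrange(n_arms)          # explore
        else:
            arm = max(range(n_arms), key=lambda a: values[a])   # exploit
        reward = 1.0 if rng.random() < reward_probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
        total += reward
    return total / steps

print("mean reward:", epsilon_greedy([0.2, 0.5, 0.8]))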
NASA Astrophysics Data System (ADS)
Liou, Jyun-you; Smith, Elliot H.; Bateman, Lisa M.; McKhann, Guy M., II; Goodman, Robert R.; Greger, Bradley; Davis, Tyler S.; Kellis, Spencer S.; House, Paul A.; Schevon, Catherine A.
2017-08-01
Objective. Epileptiform discharges, an electrophysiological hallmark of seizures, can propagate across cortical tissue in a manner similar to traveling waves. Recent work has focused attention on the origination and propagation patterns of these discharges, yielding important clues to their source location and mechanism of travel. However, systematic studies of methods for measuring propagation are lacking. Approach. We analyzed epileptiform discharges in microelectrode array recordings of human seizures. The array records multiunit activity and local field potentials at 400 micron spatial resolution, from a small cortical site free of obstructions. We evaluated several computationally efficient statistical methods for calculating traveling wave velocity, benchmarking them to analyses of associated neuronal burst firing. Main results. Over 90% of discharges met statistical criteria for propagation across the sampled cortical territory. Detection rate, direction and speed estimates derived from a multiunit estimator were compared to four field potential-based estimators: negative peak, maximum descent, high gamma power, and cross-correlation. Interestingly, the methods that were computationally simplest and most efficient (negative peak and maximal descent) offer non-inferior results in predicting neuronal traveling wave velocities compared to the other two, more complex methods. Moreover, the negative peak and maximal descent methods proved to be more robust against reduced spatial sampling challenges. Using least absolute deviation in place of least squares error minimized the impact of outliers, and reduced the discrepancies between local field potential-based and multiunit estimators. Significance. Our findings suggest that ictal epileptiform discharges typically take the form of exceptionally strong, rapidly traveling waves, with propagation detectable across millimeter distances. The sequential activation of neurons in space can be inferred from clinically-observable EEG data, with a variety of straightforward computation methods available. This opens possibilities for systematic assessments of ictal discharge propagation in clinical and research settings.
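A minimal sketch of the plane-wave velocity estimation step is shown below: per-electrode discharge times (for example negative-peak times) are regressed on electrode coordinates using least absolute deviations, and the fitted slowness vector yields speed and direction. The data are synthetic and the details are illustrative assumptions, not the clinical analysis pipeline:

# Sketch: fit discharge times t ~ px*x + py*y + t0 across an electrode array;
# the slowness vector (px, py) gives speed = 1/|p| and direction = p/|p|.
# Least absolute deviations (L1) is used, as discussed in the abstract.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
xy = np.array([(i, j) for i in range(10) for j in range(10)], float) * 0.4   # mm
true_slowness = np.array([2.0, 1.0])             # ms per mm
t_obs = xy @ true_slowness + 5.0 + rng.laplace(0.0, 0.5, len(xy))            # peak times (ms)

def l1_loss(params):
    px, py, t0 = params
    return np.abs(xy @ np.array([px, py]) + t0 - t_obs).sum()

fit = minimize(l1_loss, x0=np.zeros(3), method="Nelder-Mead")
p = fit.x[:2]
speed = 1.0 / np.linalg.norm(p)                  # mm per ms
direction = p / np.linalg.norm(p)
print(f"speed ~ {speed:.2f} mm/ms, direction {direction}")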
An algorithm for fast elastic wave simulation using a vectorized finite difference operator
NASA Astrophysics Data System (ADS)
Malkoti, Ajay; Vedanti, Nimisha; Tiwari, Ram Krishna
2018-07-01
Modern geophysical imaging techniques exploit the full wavefield information, which can be simulated numerically. These numerical simulations are computationally expensive due to several factors, such as a large number of time steps and nodes, a large derivative stencil and a huge model size. Besides these constraints, it is also important to reformulate the numerical derivative operator for improved efficiency. In this paper, we have introduced a vectorized derivative operator over the staggered grid with shifted coordinate systems. The operator increases the efficiency of simulation by exploiting the fact that each variable can be represented in the form of a matrix. This operator allows updating all nodes of a variable defined on the staggered grid, in a manner similar to the collocated grid scheme, thereby reducing the computational run-time considerably. Here we demonstrate an application of this operator to simulate seismic wave propagation in elastic media (Marmousi model) by discretizing the equations on a staggered grid. We have compared the performance of this operator in three programming languages, which reveals that it can increase the execution speed by a factor of at least 2-3 times for FORTRAN and MATLAB, and nearly 100 times for Python. We have further carried out various tests in MATLAB to analyze the effect of model size and the number of time steps on total simulation run-time. We find that there is an additional, though small, computational overhead for each step, and it depends on the total number of time steps used in the simulation. A MATLAB code package, 'FDwave', for the proposed simulation scheme is available upon request.
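The idea of updating all nodes at once through array operations rather than explicit loops can be illustrated with a minimal NumPy sketch of a staggered-grid derivative; this is an assumption-laden illustration, not the FDwave package:

# Vectorized staggered-grid derivative: the x-derivative of a field defined at
# cell centres is returned at the faces between them, for all nodes at once
# via array slicing (no explicit loops).
import numpy as np

def dx_staggered(f, dx):
    """Second-order centred difference onto the staggered (face) grid."""
    return (f[1:, :] - f[:-1, :]) / dx

nx, nz, dx = 300, 200, 5.0
vx = np.random.default_rng(0).standard_normal((nx, nz))    # e.g. particle velocity
dvx_dx = dx_staggered(vx, dx)                               # shape (nx-1, nz)
print(dvx_dx.shape)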
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ibrahim, Khaled Z.; Epifanovsky, Evgeny; Williams, Samuel W.
Coupled-cluster methods provide highly accurate models of molecular structure by explicit numerical calculation of tensors representing the correlation between electrons. These calculations are dominated by a sequence of tensor contractions, motivating the development of numerical libraries for such operations. While based on matrix-matrix multiplication, these libraries are specialized to exploit symmetries in the molecular structure and in electronic interactions, and thus reduce the size of the tensor representation and the complexity of contractions. The resulting algorithms are irregular and their parallelization has been previously achieved via the use of dynamic scheduling or specialized data decompositions. We introduce our efforts to extend the Libtensor framework to work in the distributed memory environment in a scalable and energy efficient manner. We achieve up to a 240× speedup compared with the best optimized shared memory implementation. We attain scalability to hundreds of thousands of compute cores on three distributed-memory architectures (Cray XC30&XC40, BlueGene/Q), and on a heterogeneous GPU-CPU system (Cray XK7). As the bottlenecks shift from compute-bound DGEMMs to communication-bound collectives as the size of the molecular system scales, we adopt two radically different parallelization approaches for handling load imbalance. Nevertheless, we preserve a unified interface to both programming models to maintain the productivity of computational quantum chemists.
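The kind of contraction that dominates such codes can be illustrated generically with dense tensors; the sketch below contracts a toy two-electron integral tensor with doubles amplitudes and shows how the operation reduces to a matrix-matrix multiply, without the symmetry blocking or distribution that Libtensor provides:

# A generic dense tensor contraction of the kind that dominates coupled-cluster
# codes, R[a,b,i,j] = sum_{c,d} V[a,b,c,d] * T[c,d,i,j]; it reduces to a single
# matrix-matrix multiply. Libtensor additionally exploits symmetry blocking.
import numpy as np

nv, no = 20, 8                                    # virtual / occupied dimensions (toy sizes)
rng = np.random.default_rng(0)
V = rng.standard_normal((nv, nv, nv, nv))         # dense toy "two-electron integrals"
T = rng.standard_normal((nv, nv, no, no))         # dense toy "doubles amplitudes"

R = np.einsum("abcd,cdij->abij", V, T, optimize=True)

# Equivalent reshaped GEMM, showing why these contractions are DGEMM-bound:
R2 = (V.reshape(nv * nv, nv * nv) @ T.reshape(nv * nv, no * no)).reshape(nv, nv, no, no)
print(np.allclose(R, R2))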
A case study of tuning MapReduce for efficient Bioinformatics in the cloud
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, Lizhen; Wang, Zhong; Yu, Weikuan
The combination of the Hadoop MapReduce programming model and cloud computing allows biological scientists to analyze next-generation sequencing (NGS) data in a timely and cost-effective manner. Cloud computing platforms remove the burden of IT facility procurement and management from end users and provide ease of access to Hadoop clusters. However, biological scientists are still expected to choose appropriate Hadoop parameters for running their jobs. More importantly, the available Hadoop tuning guidelines are either obsolete or too general to capture the particular characteristics of bioinformatics applications. In this paper, we aim to minimize the cloud computing cost spent on bioinformatics data analysis by optimizing the extracted significant Hadoop parameters. When using MapReduce-based bioinformatics tools in the cloud, the default settings often lead to resource underutilization and wasteful expenses. We choose k-mer counting, a representative application used in a large number of NGS data analysis tools, as our study case. Experimental results show that, with the fine-tuned parameters, we achieve a total of 4× speedup compared with the original performance (using the default settings). Finally, this paper presents an exemplary case for tuning MapReduce-based bioinformatics applications in the cloud, and documents the key parameters that could lead to significant performance benefits.
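The shape of the k-mer counting computation that Hadoop distributes can be sketched in a few lines of single-process Python, with a map step emitting (k-mer, 1) pairs and a reduce step summing them; cluster configuration and parameter tuning, the subject of the paper, are outside this toy example:

# Single-process map/reduce-style k-mer counting: the map step emits (k-mer, 1)
# pairs per read and the reduce step sums them. This mirrors the structure of
# the computation that Hadoop distributes across a cluster.
from collections import Counter

def map_kmers(read, k=21):
    for i in range(len(read) - k + 1):
        yield read[i:i + k], 1

def reduce_counts(pairs):
    counts = Counter()
    for kmer, n in pairs:
        counts[kmer] += n
    return counts

reads = ["ACGTACGTACGTACGTACGTACGTA", "CGTACGTACGTACGTACGTACGTAC"]
pairs = (pair for read in reads for pair in map_kmers(read, k=21))
print(reduce_counts(pairs).most_common(3))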
Visualization of Morse connection graphs for topologically rich 2D vector fields.
Szymczak, Andrzej; Sipeki, Levente
2013-12-01
Recent advances in vector field topology make it possible to compute its multi-scale graph representations for autonomous 2D vector fields in a robust and efficient manner. One of these representations is a Morse Connection Graph (MCG), a directed graph whose nodes correspond to Morse sets, generalizing stationary points and periodic trajectories, and whose arcs correspond to trajectories connecting them. While being useful for simple vector fields, the MCG can be hard to comprehend for topologically rich vector fields containing a large number of features. This paper describes a visual representation of the MCG, inspired by previous work on graph visualization. Our approach aims to preserve the spatial relationships between the MCG arcs and nodes and highlight the coherent behavior of connecting trajectories. Using simulations of ocean flow, we show that it can provide useful information on the flow structure. This paper focuses specifically on MCGs computed for piecewise constant (PC) vector fields. In particular, we describe extensions of the PC framework that make it more flexible and better suited for analysis of data on complex shaped domains with a boundary. We also describe a topology simplification scheme that makes our MCG visualizations less ambiguous. Despite the focus on the PC framework, our approach could also be applied to graph representations or topological skeletons computed using different methods.
Learning with Interactive Computer Graphics in the Undergraduate Neuroscience Classroom
Pani, John R.; Chariker, Julia H.; Naaz, Farah; Mattingly, William; Roberts, Joshua; Sephton, Sandra E.
2014-01-01
Instruction of neuroanatomy depends on graphical representation and extended self-study. As a consequence, computer-based learning environments that incorporate interactive graphics should facilitate instruction in this area. The present study evaluated such a system in the undergraduate neuroscience classroom. The system used the method of adaptive exploration, in which exploration in a high fidelity graphical environment is integrated with immediate testing and feedback in repeated cycles of learning. The results of this study were that students considered the graphical learning environment to be superior to typical classroom materials used for learning neuroanatomy. Students managed the frequency and duration of study, test, and feedback in an efficient and adaptive manner. For example, the number of tests taken before reaching a minimum test performance of 90% correct closely approximated the values seen in more regimented experimental studies. There was a wide range of student opinion regarding the choice between a simpler and a more graphically compelling program for learning sectional anatomy. Course outcomes were predicted by individual differences in the use of the software that reflected general work habits of the students, such as the amount of time committed to testing. The results of this introduction into the classroom are highly encouraging for development of computer-based instruction in biomedical disciplines. PMID:24449123
Validation Results for LEWICE 3.0
NASA Technical Reports Server (NTRS)
Wright, William B.
2005-01-01
A research project is underway at NASA Glenn to produce computer software that can accurately predict ice growth under any meteorological conditions for any aircraft surface. This report will present results from version 3.0 of this software, which is called LEWICE. This version differs from previous releases in that it incorporates additional thermal analysis capabilities, a pneumatic boot model, interfaces to computational fluid dynamics (CFD) flow solvers and has an empirical model for the supercooled large droplet (SLD) regime. An extensive comparison of the results in a quantifiable manner against the database of ice shapes and collection efficiency that have been generated in the NASA Glenn Icing Research Tunnel (IRT) has also been performed. The complete set of data used for this comparison will eventually be available in a contractor report. This paper will show the differences in collection efficiency between LEWICE 3.0 and experimental data. Due to the large amount of validation data available, a separate report is planned for ice shape comparison. This report will first describe the LEWICE 3.0 model for water collection. A semi-empirical approach was used to incorporate first order physical effects of large droplet phenomena into icing software. Comparisons are then made to every single element two-dimensional case in the water collection database. Each condition was run using the following five assumptions: 1) potential flow, no splashing; 2) potential flow, no splashing with 21 bin drop size distributions and a lift correction (angle of attack adjustment); 3) potential flow, with splashing; 4) Navier-Stokes, no splashing; and 5) Navier-Stokes, with splashing. Quantitative comparisons are shown for impingement limit, maximum water catch, and total collection efficiency. The results show that the predicted results are within the accuracy limits of the experimental data for the majority of cases.
The Social Organization of the Computer Underground
1989-08-01
… colleagues, with only small groups approaching peer relationships. Subject terms: HACK, Computer Underground. … between individuals involved in a common activity (pp. 13-14). Assessing the degree and manner in which the … criminalized over the past several years. Hackers, and the "danger" that they present in our computer-dependent …
NASA Astrophysics Data System (ADS)
Memon, Shahbaz; Vallot, Dorothée; Zwinger, Thomas; Neukirchen, Helmut
2017-04-01
Scientific communities generate complex simulations through the orchestration of semi-structured analysis pipelines, which involves the execution of large workflows on multiple, distributed and heterogeneous computing and data resources. Modeling the ice dynamics of glaciers requires workflows consisting of many non-trivial, computationally expensive processing tasks which are coupled to each other. From this domain, we present an e-Science use case, a workflow, which requires the execution of a continuum ice flow model and a discrete element based calving model in an iterative manner. Apart from the execution, this workflow also contains data format conversion tasks that support the execution of ice flow and calving by means of transition through sequential, nested and iterative steps. Thus, the management and monitoring of all the processing tasks, including data management and transfer, of the workflow model becomes more complex. From the implementation perspective, this workflow model was initially developed as a set of scripts using static data input and output references. In the course of application usage, as more scripts or modifications were introduced per user requirements, the debugging and validation of results became more cumbersome to achieve. To address these problems, we identified the need for a high-level scientific workflow tool through which all the above-mentioned processes can be achieved in an efficient and usable manner. We decided to make use of the e-Science middleware UNICORE (Uniform Interface to Computing Resources), which allows seamless and automated access to different heterogeneous and distributed resources and is supported by a scientific workflow engine. Based on this, we developed a high-level scientific workflow model for the coupling of massively parallel High-Performance Computing (HPC) jobs: a continuum ice sheet model (Elmer/Ice) and a discrete element calving and crevassing model (HiDEM). In our talk we present how the use of a high-level scientific workflow middleware makes reproducibility of results more convenient and also provides a reusable and portable workflow template that can be deployed across different computing infrastructures. Acknowledgements This work was kindly supported by NordForsk as part of the Nordic Center of Excellence (NCoE) eSTICC (eScience Tools for Investigating Climate Change at High Northern Latitudes) and the Top-level Research Initiative NCoE SVALI (Stability and Variation of Arctic Land Ice).
Riaz, Faisal; Niazi, Muaz A
2017-01-01
This paper presents the concept of a social autonomous agent to conceptualize Autonomous Vehicles (AVs) which interact with other AVs using social manners similar to human behavior. The presented AVs also have the capability of predicting intentions, i.e. mentalizing, and copying the actions of each other, i.e. mirroring. The Exploratory Agent Based Modeling (EABM) level of the Cognitive Agent Based Computing (CABC) framework has been utilized to design the proposed social agent. Furthermore, to emulate the functionality of the mentalizing and mirroring modules of the proposed social agent, a tailored mathematical model of Richardson's arms race model has also been presented. The performance of the proposed social agent has been validated at two levels: first, it has been simulated using NetLogo, a standard agent-based modeling tool, and second, at a practical level using a prototype AV. The simulation results have confirmed that the proposed social agent-based collision avoidance strategy is 78.52% more efficient than a random walk based collision avoidance strategy in congested flock-like topologies, whereas the practical results have confirmed that the proposed scheme can avoid rear-end and lateral collisions with an efficiency of 99.876% as compared with the IEEE 802.11n-based existing state-of-the-art mirroring neuron-based collision avoidance scheme.
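For background, the classical Richardson arms race model that the paper tailors can be written as a pair of coupled linear ODEs and integrated numerically; the coefficients below are illustrative, and the paper's tailored variant differs:

# The classical Richardson arms-race model in its textbook linear form,
# dx/dt = a*y - m*x + g,  dy/dt = b*x - n*y + h, integrated numerically;
# coefficients are illustrative, and the paper uses a tailored variant.
import numpy as np
from scipy.integrate import solve_ivp

a, b = 0.9, 0.8        # reaction to the other side's level
m, n = 0.5, 0.4        # self-restraint (fatigue) terms
g, h = 0.1, 0.2        # underlying grievance terms

def richardson(t, state):
    x, y = state
    return [a * y - m * x + g, b * x - n * y + h]

sol = solve_ivp(richardson, (0.0, 30.0), [0.0, 0.0], t_eval=np.linspace(0.0, 30.0, 301))
print("final levels:", sol.y[:, -1])   # stable (converging) only when m*n > a*b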
Web-PE: Internet-Delivered Prolonged Exposure Therapy for PTSD
2015-10-01
In order to meet the growing demand for effective and efficient treatment for posttraumatic stress disorder (PTSD) in a timely manner, web-based treatments are being investigated. Effective evidence-based treatments (EBTs) for PTSD are available, but barriers to accessing care can deter military personnel from receiving them. Keywords: exposure, combat, psychological treatment, military, psychotherapy, trauma, posttraumatic stress, posttraumatic stress disorder.
CAZyme discovery and design for sweet dreams.
André, Isabelle; Potocki-Véronèse, Gabrielle; Barbe, Sophie; Moulis, Claire; Remaud-Siméon, Magali
2014-04-01
Development of synthetic routes to complex carbohydrates and glyco-conjugates is often hampered by the lack of enzymes with requisite properties or specificities. Indeed, assembly or degradation of carbohydrates requires carbohydrate-active enzymes (CAZymes) able to act on a vast range of glycosidic monomers, oligomers or polymers in a regio-specific or stereo-specific manner in order to produce the desired structure. Sequence-based analyses allow finding the most original enzymes. Novel screening methods have emerged that enable a more efficient exploitation of the CAZyme diversity found in the microbial world or generated by protein engineering. Computational biology methods also play a prominent role in the success of CAZyme design. Such progress allows circumventing current limitations of carbohydrate synthesis and opens new opportunities related to the synthetic biology field. Copyright © 2014 Elsevier Ltd. All rights reserved.
Development of a Common User Interface for the Launch Decision Support System
NASA Technical Reports Server (NTRS)
Scholtz, Jean C.
1991-01-01
The Launch Decision Support System (LDSS) is software to be used by the NASA Test Director (NTD) in the firing room during countdown. This software is designed to assist the NTD with time management, that is, when to resume from a hold condition. This software will assist the NTD in making and evaluating alternate plans and will keep him advised of the existing situation. As such, the interface to this software must be designed to provide the maximum amount of information in the clearest fashion and in a timely manner. This research involves applying user interface guidelines to a mature prototype of LDSS and developing displays that will enable the users to easily and efficiently obtain information from the LDSS displays. This research also extends previous work on organizing and prioritizing human-computer interaction knowledge.
Shape-Driven 3D Segmentation Using Spherical Wavelets
Nain, Delphine; Haker, Steven; Bobick, Aaron; Tannenbaum, Allen
2013-01-01
This paper presents a novel active surface segmentation algorithm using a multiscale shape representation and prior. We define a parametric model of a surface using spherical wavelet functions and learn a prior probability distribution over the wavelet coefficients to model shape variations at different scales and spatial locations in a training set. Based on this representation, we derive a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure to naturally include the prior in the segmentation framework. Additionally, the optimization method can be applied in a coarse-to-fine manner. We apply our algorithm to the segmentation of brain caudate nucleus, of interest in the study of schizophrenia. Our validation shows our algorithm is computationally efficient and outperforms the Active Shape Model algorithm by capturing finer shape details. PMID:17354875
NASA Astrophysics Data System (ADS)
Jude Hemanth, Duraisamy; Umamaheswari, Subramaniyan; Popescu, Daniela Elena; Naaji, Antoanela
2016-01-01
Image steganography is an ever-growing computational approach that has found application in many fields. Frequency domain techniques are highly preferred for image steganography applications. However, there are significant drawbacks associated with these techniques. In transform based approaches, the secret data is embedded in a random manner in the transform coefficients of the cover image. These transform coefficients may not be optimal in terms of the stego image quality and embedding capacity. In this work, the application of Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) has been explored in the context of determining the optimal coefficients in these transforms. Frequency domain transforms such as the Bandelet Transform (BT) and the Finite Ridgelet Transform (FRIT) are used in combination with GA and PSO to improve the efficiency of the image steganography system.
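A generic particle swarm optimization sketch of the kind that could drive such a coefficient search is shown below. The fitness function is a placeholder standing in for an actual stego-image quality/capacity metric, and all parameter values are illustrative assumptions rather than the paper's settings.

```python
import random

def pso(fitness, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Generic particle swarm optimization (minimization).
    fitness: objective function; dim: number of decision variables."""
    pos = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [fitness(p) for p in pos]
    gbest_val = min(pbest_val)
    gbest = pbest[pbest_val.index(gbest_val)][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = fitness(pos[i])
            if val < pbest_val[i]:
                pbest_val[i], pbest[i] = val, pos[i][:]
                if val < gbest_val:
                    gbest_val, gbest = val, pos[i][:]
    return gbest

# Placeholder fitness: distance from an arbitrary target coefficient vector,
# standing in for a real distortion/capacity measure of the stego image.
target = [0.3, -0.7, 0.5]
best = pso(lambda c: sum((ci - ti) ** 2 for ci, ti in zip(c, target)), dim=3)
print(best)
```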
ANSYS UIDL-Based CAE Development of Axial Support System for Optical Mirror
NASA Astrophysics Data System (ADS)
Yang, De-Hua; Shao, Liang
2008-09-01
The Whiffle-tree type axial support mechanism is widely adopted for most relatively large optical mirrors. Based on the secondary development tools offered by the commonly used Finite Element Analysis (FEA) software ANSYS, the ANSYS Parametric Design Language (APDL) is used to create a parameter-driven FEA model of the mirror, and the ANSYS User Interface Design Language (UIDL) is used to generate custom interactive menus; thereby, a relatively independent, dedicated Computer Aided Engineering (CAE) module is embedded in ANSYS for the calculation and optimization of the axial Whiffle-tree support of optical mirrors. An example is also described to illustrate the intuitive and effective use of the dedicated module, which boosts work efficiency and relieves the user of much of the related engineering knowledge. The philosophy of building secondary-developed special modules on commonly used software also suggests itself for product development in other industries.
Paquet, Victor; Joseph, Caroline; D'Souza, Clive
2012-01-01
Anthropometric studies typically require a large number of individuals that are selected in a manner so that demographic characteristics that impact body size and function are proportionally representative of a user population. This sampling approach does not allow for an efficient characterization of the distribution of body sizes and functions of sub-groups within a population and the demographic characteristics of user populations can often change with time, limiting the application of the anthropometric data in design. The objective of this study is to demonstrate how demographically representative user populations can be developed from samples that are not proportionally representative in order to improve the application of anthropometric data in design. An engineering anthropometry problem of door width and clear floor space width is used to illustrate the value of the approach.
Modeling Imperfect Generator Behavior in Power System Operation Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krad, Ibrahim
A key component of power system operations is the use of computer models to study and analyze different operating conditions and futures in a quick and efficient manner. The outputs of these models are sensitive to the data used in them as well as to the assumptions made during their execution. One typical assumption is that generators and load assets perfectly follow operator control signals. While this is a valid simulation assumption, generators may not always accurately follow control signals. This imperfect response of generators could impact cost and reliability metrics. This paper proposes a generator model that captures this imperfect behavior and examines its impact on production costs and reliability metrics using a steady-state power system operations model. Preliminary analysis shows that while costs remain relatively unchanged, there could be significant impacts on reliability metrics.
Route planning in a four-dimensional environment
NASA Technical Reports Server (NTRS)
Slack, M. G.; Miller, D. P.
1987-01-01
Robots must be able to function in the real world. The real world involves processes and agents that move independently of the actions of the robot, sometimes in an unpredictable manner. A real-time integrated route planning and spatial representation system for planning routes through dynamic domains is presented. The system will find the safest, most efficient route through space-time as described by a set of user-defined evaluation functions. Because the route-planning algorithm is highly parallel and can run on an SIMD machine in O(p) time (p is the length of a path), the system will find real-time paths through unpredictable domains when used in an incremental mode. Spatial representation, an SIMD algorithm for route planning in a dynamic domain, and results from an implementation on a traditional computer architecture are discussed.
Scalable Conjunction Processing using Spatiotemporally Indexed Ephemeris Data
NASA Astrophysics Data System (ADS)
Budianto-Ho, I.; Johnson, S.; Sivilli, R.; Alberty, C.; Scarberry, R.
2014-09-01
The collision warnings produced by the Joint Space Operations Center (JSpOC) are of critical importance in protecting U.S. and allied spacecraft against destructive collisions and protecting the lives of astronauts during space flight. As the Space Surveillance Network (SSN) improves its sensor capabilities for tracking small and dim space objects, the number of tracked objects increases from thousands to hundreds of thousands of objects, while the number of potential conjunctions increases with the square of the number of tracked objects. Classical filtering techniques such as apogee and perigee filters have proven insufficient. Novel and orders of magnitude faster conjunction analysis algorithms are required to find conjunctions in a timely manner. Stellar Science has developed innovative filtering techniques for satellite conjunction processing using spatiotemporally indexed ephemeris data that efficiently and accurately reduces the number of objects requiring high-fidelity and computationally-intensive conjunction analysis. Two such algorithms, one based on the k-d Tree pioneered in robotics applications and the other based on Spatial Hash Tables used in computer gaming and animation, use, at worst, an initial O(N log N) preprocessing pass (where N is the number of tracked objects) to build large O(N) spatial data structures that substantially reduce the required number of O(N^2) computations, substituting linear memory usage for quadratic processing time. The filters have been implemented as Open Services Gateway initiative (OSGi) plug-ins for the Continuous Anomalous Orbital Situation Discriminator (CAOS-D) conjunction analysis architecture. We have demonstrated the effectiveness, efficiency, and scalability of the techniques using a catalog of 100,000 objects, an analysis window of one day, on a 64-core computer with 1TB shared memory. Each algorithm can process the full catalog in 6 minutes or less, almost a twenty-fold performance improvement over the baseline implementation running on the same machine. We will present an overview of the algorithms and results that demonstrate the scalability of our concepts.
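A toy illustration of the spatial-hashing filter idea (not the CAOS-D plug-in itself, and with made-up object positions): positions are hashed into coarse grid cells, and only pairs that share a cell or touch an adjacent cell are passed on for high-fidelity conjunction analysis.

```python
from collections import defaultdict
from itertools import product

def coarse_conjunction_pairs(positions, cell_size):
    """positions: {object_id: (x, y, z)} snapshot at one epoch.
    Returns candidate pairs that share a grid cell or an adjacent one."""
    grid = defaultdict(list)
    for oid, (x, y, z) in positions.items():
        grid[(int(x // cell_size), int(y // cell_size), int(z // cell_size))].append(oid)

    candidates = set()
    for (cx, cy, cz), members in grid.items():
        # gather objects in this cell and its 26 neighbouring cells
        nearby = []
        for dx, dy, dz in product((-1, 0, 1), repeat=3):
            nearby.extend(grid.get((cx + dx, cy + dy, cz + dz), []))
        for a in members:
            for b in nearby:
                if a < b:
                    candidates.add((a, b))
    return candidates

# Example: three objects, two of them close enough to warrant fine analysis
print(coarse_conjunction_pairs({1: (0, 0, 0), 2: (5, 0, 0), 3: (900, 0, 0)}, cell_size=10.0))
```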
Road weather forecast quality analysis
DOT National Transportation Integrated Search
2006-03-01
It is just as important to keep the highways functioning in a safe and efficient manner as it is to construct them in the first place. Our economy is built around an efficient transportation system. Winter weather plays an important role in highw...
Adaptive voting computer system
NASA Technical Reports Server (NTRS)
Koczela, L. J.; Wilgus, D. S. (Inventor)
1974-01-01
A computer system is reported that uses adaptive voting to tolerate failures and operates in a fail-operational, fail-safe manner. Each of four computers is individually connected to one of four external input/output (I/O) busses which interface with external subsystems. Each computer is connected to receive input data and commands from the other three computers and to furnish output data commands to the other three computers. An adaptive control apparatus including a voter-comparator-switch (VCS) is provided for each computer to receive signals from each of the computers and permits adaptive voting among the computers to permit the fail-operational, fail-safe operation.
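A simplified software sketch of the voter-comparator-switch idea follows; the patented system implements this in hardware, so everything below is an illustrative assumption. The outputs of the redundant computers are compared, the majority value is selected, and computers that disagree are adaptively excluded from future votes.

```python
from collections import Counter

def adaptive_vote(outputs, active):
    """outputs: {computer_id: value} for this cycle; active: set of trusted computers.
    Returns (voted_value, updated_active_set). Simplified illustration only."""
    votes = {cid: outputs[cid] for cid in active if cid in outputs}
    if not votes:
        raise RuntimeError("no active computers left -> fail-safe shutdown")
    voted_value, _ = Counter(votes.values()).most_common(1)[0]
    # adaptively drop computers that disagreed with the majority
    updated = {cid for cid, v in votes.items() if v == voted_value}
    return voted_value, updated

active = {1, 2, 3, 4}
value, active = adaptive_vote({1: 42, 2: 42, 3: 42, 4: 99}, active)  # computer 4 disagrees
print(value, active)   # 42 {1, 2, 3}
```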
An intelligent multi-media human-computer dialogue system
NASA Technical Reports Server (NTRS)
Neal, J. G.; Bettinger, K. E.; Byoun, J. S.; Dobes, Z.; Thielman, C. Y.
1988-01-01
Sophisticated computer systems are being developed to assist in the human decision-making process for very complex tasks performed under stressful conditions. The human-computer interface is a critical factor in these systems. The human-computer interface should be simple and natural to use, require a minimal learning period, assist the user in accomplishing his task(s) with a minimum of distraction, present output in a form that best conveys information to the user, and reduce cognitive load for the user. In pursuit of this ideal, the Intelligent Multi-Media Interfaces project is devoted to the development of interface technology that integrates speech, natural language text, graphics, and pointing gestures for human-computer dialogues. The objective of the project is to develop interface technology that uses the media/modalities intelligently in a flexible, context-sensitive, and highly integrated manner modelled after the manner in which humans converse in simultaneous coordinated multiple modalities. As part of the project, a knowledge-based interface system, called CUBRICON (CUBRC Intelligent CONversationalist) is being developed as a research prototype. The application domain being used to drive the research is that of military tactical air control.
A high performance hierarchical storage management system for the Canadian tier-1 centre at TRIUMF
NASA Astrophysics Data System (ADS)
Deatrich, D. C.; Liu, S. X.; Tafirout, R.
2010-04-01
We describe in this paper the design and implementation of Tapeguy, a high performance non-proprietary Hierarchical Storage Management (HSM) system which is interfaced to dCache for efficient tertiary storage operations. The system has been successfully implemented at the Canadian Tier-1 Centre at TRIUMF. The ATLAS experiment will collect a large amount of data (approximately 3.5 Petabytes each year). An efficient HSM system will play a crucial role in the success of the ATLAS Computing Model, which is driven by intensive large-scale data analysis activities that will be performed on the Worldwide LHC Computing Grid infrastructure continuously. Tapeguy is Perl-based. It controls and manages data and tape libraries. Its architecture is scalable and includes Dataset Writing control, a Read-back Queuing mechanism and I/O tape drive load balancing, as well as on-demand allocation of resources. A central MySQL database records metadata information for every file and transaction (for audit and performance evaluation), as well as an inventory of library elements. Tapeguy Dataset Writing was implemented to group files which are close in time and of similar type. Optional dataset path control dynamically allocates tape families and assigns tapes to them. Tape flushing is based on various strategies: time, threshold or external callback mechanisms. Tapeguy Read-back Queuing reorders all read requests using an elevator algorithm, avoiding unnecessary tape loading and unloading. The implementation of priorities will guarantee file delivery to all clients in a timely manner.
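A minimal sketch of the read-back queuing idea, with a hypothetical request schema rather than Tapeguy's actual Perl implementation: pending read requests are grouped by tape and sorted by position, so each mounted tape is swept once and unnecessary loading and unloading are avoided.

```python
from itertools import groupby

def elevator_order(requests):
    """requests: list of dicts with 'tape' and 'position' keys (hypothetical schema).
    Returns the requests reordered so each tape is mounted once and read in one sweep."""
    ordered = sorted(requests, key=lambda r: (r["tape"], r["position"]))
    plan = []
    for tape, group in groupby(ordered, key=lambda r: r["tape"]):
        plan.append((tape, list(group)))   # one mount per tape, positions ascending
    return plan

reads = [{"tape": "T02", "position": 700}, {"tape": "T01", "position": 120},
         {"tape": "T02", "position": 50},  {"tape": "T01", "position": 980}]
for tape, batch in elevator_order(reads):
    print(tape, [r["position"] for r in batch])
```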
Robichaud, Guillaume; Dixon, R. Brent; Potturi, Amarnatha S.; Cassidy, Dan; Edwards, Jack R.; Sohn, Alex; Dow, Thomas A.; Muddiman, David C.
2010-01-01
Through a multi-disciplinary approach, the air amplifier is being evolved as a highly engineered device to improve detection limits of biomolecules when using electrospray ionization. Several key aspects have driven the modifications to the device through experimentation and simulations. We have developed a computer simulation that accurately portrays actual conditions, and the results from these simulations are corroborated by the experimental data. These computer simulations can be used to predict outcomes of future designs, resulting in a design process that is efficient in terms of financial cost and time. We have fabricated a new device with annular gap control over a range of 50 to 70 μm using piezoelectric actuators. This has enabled us to obtain better aerodynamic performance when compared to the previous design (2× more vacuum) and also more reproducible results. This allows us to study a broader experimental space than the previous design, which is critical in guiding future directions. This work also presents and explains the principles behind a fractional factorial design of experiments methodology for testing a large number of experimental parameters in an orderly and efficient manner, to understand and optimize the critical parameters that lead to improved detection limits while minimizing the number of experiments performed. Preliminary results showed that several-fold improvements could be obtained under certain operating conditions (up to 34-fold). PMID:21499524
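To make the design-of-experiments idea concrete, the sketch below generates a textbook half-fraction (2^(4-1)) factorial design using the defining relation D = ABC. This is a generic construction, not necessarily the specific design used in the air-amplifier study.

```python
from itertools import product

def half_fraction_2_4():
    """Generate a 2^(4-1) fractional factorial design with defining relation I = ABCD,
    i.e. factor D is set to the product A*B*C of the coded (+1/-1) levels."""
    runs = []
    for a, b, c in product((-1, 1), repeat=3):
        runs.append((a, b, c, a * b * c))
    return runs

for run in half_fraction_2_4():
    print(run)   # 8 runs instead of the 16 required by the full 2^4 design
```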
NASA Astrophysics Data System (ADS)
Yang, Cheng; Fang, Yi; Zhao, Chao; Zhang, Xin
2018-06-01
A duct acoustics model is an essential component of an impedance eduction technique and its computational cost determines the impedance measurement efficiency. In this paper, a model is developed for the sound propagation through a lined duct carrying a uniform mean flow. In contrast to many existing models, the interface between the liner and the duct field is defined with a modified Ingard-Myers boundary condition that takes account of the effect of the boundary layer above the liner. A mode-matching method is used to couple the unlined and lined duct segments for the model development. For the lined duct segment, the eigenvalue problem resulting from the modified boundary condition is solved by an integration scheme which, on the one hand, allows the lined duct modes to be computed in an efficient manner, and on the other hand, orders the modes automatically. The duct acoustics model developed from the solved lined duct modes is shown to converge more rapidly than the one developed from the rigid-walled duct modes. Validation against the experimental data in the literature shows that the proposed model is able to predict more accurately the liner performance measured by the two-source method. This, however, cannot be achieved by a duct acoustics model associated with the conventional Ingard-Myers boundary condition. The proposed model has the potential to be integrated into an impedance eduction technique for more reliable liner measurement.
NASA Technical Reports Server (NTRS)
Gedney, Stephen D.; Lansing, Faiza
1993-01-01
The generalized Yee-algorithm is presented for the temporal full-wave analysis of planar microstrip devices. This algorithm has the significant advantage over the traditional Yee-algorithm in that it is based on unstructured and irregular grids. A strength of the generalized Yee-algorithm is that structures that contain curved conductors or complex three-dimensional geometries can be modeled more accurately, and much more conveniently, using standard automatic grid generation techniques. This generalized Yee-algorithm is based on the time-marching solution of the discrete form of Maxwell's equations in their integral form. To this end, the electric and magnetic fields are discretized over a dual, irregular, and unstructured grid. The primary grid is assumed to be composed of general fitted polyhedra distributed throughout the volume. The secondary grid (or dual grid) is built up of the closed polyhedra whose edges connect the centroids of adjacent primary cells, penetrating shared faces. Faraday's law and Ampere's law are used to update the fields normal to the primary and secondary grid faces, respectively. Subsequently, a correction scheme is introduced to project the normal fields onto the grid edges. It is shown that this scheme is stable, maintains second-order accuracy, and preserves the divergenceless nature of the flux densities. Finally, for computational efficiency the algorithm is structured as a series of sparse matrix-vector multiplications. Based on this scheme, the generalized Yee-algorithm has been implemented on vector and parallel high performance computers in a highly efficient manner.
Dynamically protected cat-qubits: a new paradigm for universal quantum computation
NASA Astrophysics Data System (ADS)
Mirrahimi, Mazyar; Leghtas, Zaki; Albert, Victor V.; Touzard, Steven; Schoelkopf, Robert J.; Jiang, Liang; Devoret, Michel H.
2014-04-01
We present a new hardware-efficient paradigm for universal quantum computation which is based on encoding, protecting and manipulating quantum information in a quantum harmonic oscillator. This proposal exploits multi-photon driven dissipative processes to encode quantum information in logical bases composed of Schrödinger cat states. More precisely, we consider two schemes. In a first scheme, a two-photon driven dissipative process is used to stabilize a logical qubit basis of two-component Schrödinger cat states. While such a scheme ensures a protection of the logical qubit against the photon dephasing errors, the prominent error channel of single-photon loss induces bit-flip type errors that cannot be corrected. Therefore, we consider a second scheme based on a four-photon driven dissipative process which leads to the choice of four-component Schrödinger cat states as the logical qubit. Such a logical qubit can be protected against single-photon loss by continuous photon number parity measurements. Next, applying some specific Hamiltonians, we provide a set of universal quantum gates on the encoded qubits of each of the two schemes. In particular, we illustrate how these operations can be rendered fault-tolerant with respect to various decoherence channels of participating quantum systems. Finally, we also propose experimental schemes based on quantum superconducting circuits and inspired by methods used in Josephson parametric amplification, which should allow one to achieve these driven dissipative processes along with the Hamiltonians ensuring the universal operations in an efficient manner.
ERIC Educational Resources Information Center
Ellingsen, Ryleigh; Clinton, Elias
2017-01-01
This manuscript reviews the empirical literature of the TouchMath© instructional program. The TouchMath© program is a commercial mathematics series that uses a dot notation system to provide multisensory instruction of computation skills. Using the program, students are taught to solve computational tasks in a multisensory manner that does not…
What Does CALL Have to Offer Computer Science and What Does Computer Science Have to Offer CALL?
ERIC Educational Resources Information Center
Cushion, Steve
2006-01-01
We will argue that CALL can usefully be viewed as a subset of computer software engineering and can profit from adopting some of the recent progress in software development theory. The unified modelling language has become the industry standard modelling technique and the accompanying unified process is rapidly gaining acceptance. The manner in…
NASA Astrophysics Data System (ADS)
Pathak, Rohit; Joshi, Satyadhar
Within a span of over a decade, India has become one of the most favored destinations across the world for Business Process Outsourcing (BPO) operations. India has rapidly achieved the status of being the most preferred destination for BPO for companies located in the US and Europe. Security and privacy are the two major issues that need to be addressed by the Indian software industry to secure increased and long-term outsourcing contracts from the US. Another important issue is the sharing of employee information to ensure that the data and vital information of an outsourcing company are secured and protected. To ensure that the confidentiality of a client's information is maintained, BPOs need to implement some data security measures. In this paper, we propose a new protocol specifically for BPO Secure Multi-Party Computation (SMC). Many computations and surveys involve confidential data from many parties or organizations, and since this data is the property of the organization, its preservation and security are of prime importance for such computations. Although the computation requires data from all the parties, none of the associated parties would want to reveal their data to the other parties. We have proposed a new efficient and scalable protocol to perform computation on encrypted information. The information is encrypted in a manner that does not affect the result of the computation. It uses modifier tokens, which are distributed among virtual parties and finally used in the computation. The computation function uses the acquired data and modifier tokens to compute the right result from the encrypted data. Thus, without revealing the data, the right result can be computed and the privacy of the parties is maintained. We have given a probabilistic security analysis of hacking the protocol and shown how zero hacking security can be achieved. We have also analyzed the specific case of Indian BPO.
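The paper's exact protocol is not reproduced here; the sketch below only illustrates the general additive-masking idea that modifier tokens build on: each party splits its confidential value into an encoded share plus random modifier tokens distributed among virtual parties, so an aggregate result can be computed without revealing any individual value. All names and parameters are illustrative assumptions.

```python
import random

def split_with_modifier_tokens(value, n_tokens=3, spread=1000.0):
    """Mask a confidential value as one encoded share plus random modifier tokens.
    The encoded share alone reveals nothing useful; share + tokens sum to the value."""
    tokens = [random.uniform(-spread, spread) for _ in range(n_tokens)]
    encoded = value - sum(tokens)
    return encoded, tokens

def secure_sum(confidential_values):
    """Compute the sum of all parties' values from encoded shares and tokens only."""
    encoded_shares, all_tokens = [], []
    for v in confidential_values:
        enc, toks = split_with_modifier_tokens(v)
        encoded_shares.append(enc)        # sent to the computing party
        all_tokens.extend(toks)           # distributed among virtual parties
    return sum(encoded_shares) + sum(all_tokens)

salaries = [52000.0, 61000.0, 47000.0]    # confidential inputs of three parties
print(round(secure_sum(salaries), 6), sum(salaries))
```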
Computationally efficient multibody simulations
NASA Technical Reports Server (NTRS)
Ramakrishnan, Jayant; Kumar, Manoj
1994-01-01
Computationally efficient approaches to the solution of the dynamics of multibody systems are presented in this work. The computational efficiency is derived from both the algorithmic and implementational standpoint. Order(n) approaches provide a new formulation of the equations of motion eliminating the assembly and numerical inversion of a system mass matrix as required by conventional algorithms. Computational efficiency is also gained in the implementation phase by the symbolic processing and parallel implementation of these equations. Comparison of this algorithm with existing multibody simulation programs illustrates the increased computational efficiency.
32 CFR 701.53 - FOIA fee schedule.
Code of Federal Regulations, 2010 CFR
2010-07-01
... above 45.00 (e) Audiovisual documentary materials. Search costs are computed as for any other record... the work. Audiovisual materials provided to a requester need not be in reproducible format or quality... shall be computed in the manner described for audiovisual documentary material. (g) Costs for special...
Qubit absorption refrigerator at strong coupling
NASA Astrophysics Data System (ADS)
Mu, Anqi; Agarwalla, Bijay Kumar; Schaller, Gernot; Segal, Dvira
2017-12-01
We demonstrate that a quantum absorption refrigerator (QAR) can be realized from the smallest quantum system, a qubit, by coupling it in a non-additive (strong) manner to three heat baths. This function is un-attainable for the qubit model under the weak system-bath coupling limit, when the dissipation is additive. In an optimal design, the reservoirs are engineered and characterized by a single frequency component. We then obtain closed expressions for the cooling window and refrigeration efficiency, as well as bounds for the maximal cooling efficiency and the efficiency at maximal power. Our results agree with macroscopic designs and with three-level models for QARs, which are based on the weak system-bath coupling assumption. Beyond the optimal limit, we show with analytical calculations and numerical simulations that the cooling efficiency varies in a non-universal manner with model parameters. Our work demonstrates that strongly-coupled quantum machines can exhibit function that is un-attainable under the weak system-bath coupling assumption.
Conference summary: computers in respiratory care.
Nelson, Steven B
2004-05-01
Computers and data management in respiratory care reflect the larger practices of hospital information systems: the diversity of conference topics provides evidence. Respiratory care computing has shown a steady, slow progression from writing programs that calculate shunt equations to departmental management systems. Wider acceptance and utilization have been stifled by costs, both initial and on-going. Several authors pointed out the savings that were realized from information systems exceeded the costs of implementation and maintenance. The most significant finding from one of the presentations was that no other structure or skilled personnel could provide respiratory care more efficiently or cost-effectively than respiratory therapists. Online information resources have increased, in forms ranging from peer-reviewed journals to corporate-sponsored advertising posing as authoritative treatment regimens. Practitioners and patients need to know how to use these resources as well as how to judge the value of information they present. Departments are using computers for training on a schedule that is more convenient for the staff, providing information in a timely manner and potentially in more useful formats. Portable devices, such as personal digital assistants (PDAs) have improved the ability not only to share data to dispersed locations, but also to collect data at the point of care, thus greatly improving data capture. Ventilators are changing from simple automated bellows to complex systems collecting numerous respiratory parameters and offering feedback to improve ventilation. Clinical databases routinely collect information from a wide variety of resources and can be used for analysis to improve patient outcomes. What could possibly go wrong?
EON: a component-based approach to automation of protocol-directed therapy.
Musen, M A; Tu, S W; Das, A K; Shahar, Y
1996-01-01
Provision of automated support for planning protocol-directed therapy requires a computer program to take as input clinical data stored in an electronic patient-record system and to generate as output recommendations for therapeutic interventions and laboratory testing that are defined by applicable protocols. This paper presents a synthesis of research carried out at Stanford University to model the therapy-planning task and to demonstrate a component-based architecture for building protocol-based decision-support systems. We have constructed general-purpose software components that (1) interpret abstract protocol specifications to construct appropriate patient-specific treatment plans; (2) infer from time-stamped patient data higher-level, interval-based, abstract concepts; (3) perform time-oriented queries on a time-oriented patient database; and (4) allow acquisition and maintenance of protocol knowledge in a manner that facilitates efficient processing both by humans and by computers. We have implemented these components in a computer system known as EON. Each of the components has been developed, evaluated, and reported independently. We have evaluated the integration of the components as a composite architecture by implementing T-HELPER, a computer-based patient-record system that uses EON to offer advice regarding the management of patients who are following clinical trial protocols for AIDS or HIV infection. A test of the reuse of the software components in a different clinical domain demonstrated rapid development of a prototype application to support protocol-based care of patients who have breast cancer. PMID:8930854
Crime or War: Cyberspace Law and Its Implications for Intelligence
2011-02-11
...protected computer or gaining and using information in a manner exceeding authorized access. Robert Morris, a Cornell University computer science...hacker and not the Iranian government. There are hundreds of hackers conducting computer intrusions each day. The previously cited example of Robert...
Franckenberg, Sabine; Binder, Thomas; Bolliger, Stephan; Thali, Michael J; Ross, Steffen G
2016-09-01
Cross-sectional imaging, such as computed tomography, has been increasingly implemented in both historic and recent postmortem forensic investigations. It aids in determining cause and manner of death as well as in correlating injuries to possible weapons. This study illuminates the feasibility of reconstructing guns in computed tomography and gives a distinct overview of historic and recent Swiss Army guns.
The Influence of Large-Scale Computing on Aircraft Structural Design.
1986-04-01
the customer in the most cost-effective manner. Computer facility organizations became computer resource power brokers. A good data processing...capabilities generated on other processors can be easily used. This approach is easily implementable and provides a good strategy for using existing...assistance to member nations for the purpose of increasing their scientific and technical potential; - Recommending effective ways for the member nations to
Field-Programmable Gate Array Computer in Structural Analysis: An Initial Exploration
NASA Technical Reports Server (NTRS)
Singleterry, Robert C., Jr.; Sobieszczanski-Sobieski, Jaroslaw; Brown, Samuel
2002-01-01
This paper reports on an initial assessment of using a Field-Programmable Gate Array (FPGA) computational device as a new tool for solving structural mechanics problems. A FPGA is an assemblage of binary gates arranged in logical blocks that are interconnected via software in a manner dependent on the algorithm being implemented and can be reprogrammed thousands of times per second. In effect, this creates a computer specialized for the problem that automatically exploits all the potential for parallel computing intrinsic in an algorithm. This inherent parallelism is the most important feature of the FPGA computational environment. It is therefore important that if a problem offers a choice of different solution algorithms, an algorithm of a higher degree of inherent parallelism should be selected. It is found that in structural analysis, an 'analog computer' style of programming, which solves problems by direct simulation of the terms in the governing differential equations, yields a more favorable solution algorithm than current solution methods. This style of programming is facilitated by a 'drag-and-drop' graphic programming language that is supplied with the particular type of FPGA computer reported in this paper. Simple examples in structural dynamics and statics illustrate the solution approach used. The FPGA system also allows linear scalability in computing capability. As the problem grows, the number of FPGA chips can be increased with no loss of computing efficiency due to data flow or algorithmic latency that occurs when a single problem is distributed among many conventional processors that operate in parallel. This initial assessment finds the FPGA hardware and software to be in their infancy in regard to the user conveniences; however, they have enormous potential for shrinking the elapsed time of structural analysis solutions if programmed with algorithms that exhibit inherent parallelism and linear scalability. This potential warrants further development of FPGA-tailored algorithms for structural analysis.
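To make the 'analog computer' programming style concrete, the sketch below directly simulates the terms of a one-degree-of-freedom structural equation m x'' + c x' + k x = f(t) with an explicit update; each step uses only local terms of the governing equation, the kind of elementwise, inherently parallel computation the paper argues maps well onto FPGA logic blocks. This is a generic illustration, not the paper's FPGA program, and the parameter values are assumed.

```python
import math

def simulate_sdof(m=1.0, c=0.1, k=10.0, dt=1e-3, steps=20000,
                  force=lambda t: math.sin(2.0 * t)):
    """Direct simulation of m*x'' + c*x' + k*x = f(t) with semi-implicit Euler.
    Each update uses only local terms of the governing equation, mirroring the
    'analog computer' programming style described above."""
    x, v = 0.0, 0.0
    history = []
    for i in range(steps):
        t = i * dt
        a = (force(t) - c * v - k * x) / m   # acceleration from the equation's terms
        v += a * dt                          # integrate velocity
        x += v * dt                          # integrate displacement
        history.append(x)
    return history

print(max(simulate_sdof()))   # peak displacement of the forced oscillator
```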
Mesh and Time-Step Independent Computational Fluid Dynamics (CFD) Solutions
ERIC Educational Resources Information Center
Nijdam, Justin J.
2013-01-01
A homework assignment is outlined in which students learn Computational Fluid Dynamics (CFD) concepts of discretization, numerical stability and accuracy, and verification in a hands-on manner by solving physically realistic problems of practical interest to engineers. The students solve a transient-diffusion problem numerically using the common…
32 CFR 518.20 - Collection of fees and fee rates.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) Audiovisual documentary materials. Search costs are computed as for any other record. Duplication cost is the.... Audiovisual materials provided to a requester need not be in reproducible format or quality. Army audiovisual... any record not described above shall be computed in the manner described for audiovisual documentary...
Managing the Risks Associated with End-User Computing.
ERIC Educational Resources Information Center
Alavi, Maryam; Weiss, Ira R.
1986-01-01
Identifies organizational risks of end-user computing (EUC) associated with different stages of the end-user applications life cycle (analysis, design, implementation). Generic controls are identified that address each of the risks enumerated in a manner that allows EUC management to select those most appropriate to their EUC environment. (5…
Teaching Recognition of Normal and Abnormal Heart Sounds Using Computer-Assisted Instruction
ERIC Educational Resources Information Center
Musselman, Eugene E.; Grimes, George M.
1976-01-01
The computer is being used in an innovative manner to teach the recognition of normal and abnormal canine heart sounds at the University of Chicago. Experience thus far indicates that the PLATO program resources allow the maximum development of the student's proficiency in auscultation. (Editor/LBH)
Vilas, Carlos; Balsa-Canto, Eva; García, Maria-Sonia G; Banga, Julio R; Alonso, Antonio A
2012-07-02
Systems biology allows the analysis of biological systems behavior under different conditions through in silico experimentation. The possibility of perturbing biological systems in different manners calls for the design of perturbations to achieve particular goals. Examples would include the design of a chemical stimulation to maximize the amplitude of a given cellular signal or to achieve a desired pattern in pattern formation systems, etc. Such design problems can be mathematically formulated as dynamic optimization problems, which are particularly challenging when the system is described by partial differential equations. This work addresses the numerical solution of such dynamic optimization problems for spatially distributed biological systems. The usually nonlinear and large-scale nature of the mathematical models related to this class of systems and the presence of constraints on the optimization problems impose a number of difficulties, such as the presence of suboptimal solutions, which call for robust and efficient numerical techniques. Here, the use of a control vector parameterization approach combined with efficient and robust hybrid global optimization methods and a reduced order model methodology is proposed. The capabilities of this strategy are illustrated by considering the solution of two challenging problems: bacterial chemotaxis and the FitzHugh-Nagumo model. In the chemotaxis problem the objective was to efficiently compute the time-varying optimal concentration of chemoattractant at one of the spatial boundaries in order to achieve predefined cell distribution profiles. Results are in agreement with those previously published in the literature. The FitzHugh-Nagumo problem is also efficiently solved and it illustrates very well how dynamic optimization may be used to force a system to evolve from an undesired to a desired pattern with a reduced number of actuators. The presented methodology can be used for the efficient dynamic optimization of generic distributed biological systems.
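A small sketch of the control vector parameterization idea on a toy scalar system (not the chemotaxis or FitzHugh-Nagumo problems from the paper): the time-varying control is approximated by piecewise-constant values, the dynamics are integrated for a given parameter vector, and a stochastic global optimizer searches over those parameters. The dynamics, objective, and bounds below are all assumed for illustration.

```python
import numpy as np
from scipy.optimize import differential_evolution

T, n_intervals, dt = 2.0, 8, 0.01
times = np.arange(0.0, T, dt)

def simulate(u_params):
    """Integrate dx/dt = -x + u(t) with u(t) piecewise constant on n_intervals segments."""
    x, traj = 0.0, []
    for t in times:
        u = u_params[min(int(t / (T / n_intervals)), n_intervals - 1)]
        x += dt * (-x + u)
        traj.append(x)
    return np.array(traj)

def objective(u_params):
    """Track x(t) = 1 over the horizon, with a small control-effort penalty."""
    traj = simulate(u_params)
    return np.mean((traj - 1.0) ** 2) + 1e-3 * np.mean(np.square(u_params))

result = differential_evolution(objective, bounds=[(0.0, 3.0)] * n_intervals,
                                maxiter=60, seed=0)   # global/stochastic search over the control parameters
print(result.x)   # optimized piecewise-constant control profile
```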
Method and apparatus using an active ionic liquid for algae biofuel harvest and extraction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salvo, Roberto Di; Reich, Alton; Dykes, Jr., H. Waite H.
The invention relates to the use of an active ionic liquid to dissolve algae cell walls. The ionic liquid is used to dissolve and/or lyse algae cell walls in an energy-efficient manner, which releases algae constituents used in the creation of energy, fuel, and/or cosmetic components. The ionic liquids include ionic salts having multiple charge centers; low, very low, and ultra-low melting point ionic liquids; and combinations of ionic liquids. An algae treatment system is described, which processes wet algae in a lysing reactor, separates out algae constituent products, and optionally recovers the ionic liquid in an energy-efficient manner.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choudhary, V.R.; Mulla, S.A.R.; Rajput, A.M.
1997-06-01
Noncatalytic oxypyrolysis of C{sub 2+}-hydrocarbons from natural gas at 700--850 C in the presence of steam and limited oxygen yields ethylene and propylene with appreciable conversion and high selectivity but with almost no coke or tarlike product formation. In this process, the exothermic oxidative hydrocarbon conversion reactions are coupled directly with the endothermic cracking of C{sub 2+}-hydrocarbons by their simultaneous occurrence. Hence, the process operates in a most energy-efficient and safe (or nonhazardous) manner and also can be made almost thermoneutral or mildly endothermic/exothermic, thus requiring little or no external energy for the hydrocarbon conversion reactions.
Benchmark Lisp And Ada Programs
NASA Technical Reports Server (NTRS)
Davis, Gloria; Galant, David; Lim, Raymond; Stutz, John; Gibson, J.; Raghavan, B.; Cheesema, P.; Taylor, W.
1992-01-01
Suite of nonparallel benchmark programs, ELAPSE, designed for three tests: comparing efficiency of computer processing via Lisp vs. Ada; comparing efficiencies of several computers processing via Lisp; or comparing several computers processing via Ada. Tests efficiency with which computer executes routines in each language. Available for computer equipped with validated Ada compiler and/or Common Lisp system.
Optimized coordination of brakes and active steering for a 4WS passenger car.
Tavasoli, Ali; Naraghi, Mahyar; Shakeri, Heman
2012-09-01
Optimum coordination of individual brakes and front/rear steering subsystems is presented. The integrated control strategy consists of three modules. A coordinated high-level control determines the body forces/moment required to achieve vehicle motion objectives. The body forces/moment are allocated to braking and steering subsystems through an intermediate unit, which integrates available subsystems based on phase plane notion in an optimal manner. To this end, an optimization problem including several equality and inequality constraints is defined and solved analytically, such that a real-time implementation can be realized without the use of numeric optimization software. A low-level slip-ratio controller works to generate the desired longitudinal forces at small longitudinal slip-ratios, while averting wheel locking at large slip-ratios. The efficiency of the suggested approach is demonstrated through computer simulations. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
Madduri, Ravi K.; Sulakhe, Dinanath; Lacinski, Lukasz; Liu, Bo; Rodriguez, Alex; Chard, Kyle; Dave, Utpal J.; Foster, Ian T.
2014-01-01
We describe Globus Genomics, a system that we have developed for rapid analysis of large quantities of next-generation sequencing (NGS) genomic data. This system achieves a high degree of end-to-end automation that encompasses every stage of data analysis including initial data retrieval from remote sequencing centers or storage (via the Globus file transfer system); specification, configuration, and reuse of multi-step processing pipelines (via the Galaxy workflow system); creation of custom Amazon Machine Images and on-demand resource acquisition via a specialized elastic provisioner (on Amazon EC2); and efficient scheduling of these pipelines over many processors (via the HTCondor scheduler). The system allows biomedical researchers to perform rapid analysis of large NGS datasets in a fully automated manner, without software installation or a need for any local computing infrastructure. We report performance and cost results for some representative workloads. PMID:25342933
An Automated Design Framework for Multicellular Recombinase Logic.
Guiziou, Sarah; Ulliana, Federico; Moreau, Violaine; Leclere, Michel; Bonnet, Jerome
2018-05-18
Tools to systematically reprogram cellular behavior are crucial to address pressing challenges in manufacturing, environment, or healthcare. Recombinases can very efficiently encode Boolean and history-dependent logic in many species, yet current designs are performed on a case-by-case basis, limiting their scalability and requiring time-consuming optimization. Here we present an automated workflow for designing recombinase logic devices executing Boolean functions. Our theoretical framework uses a reduced library of computational devices distributed into different cellular subpopulations, which are then composed in various manners to implement all desired logic functions at the multicellular level. Our design platform called CALIN (Composable Asynchronous Logic using Integrase Networks) is broadly accessible via a web server, taking truth tables as inputs and providing corresponding DNA designs and sequences as outputs (available at http://synbio.cbs.cnrs.fr/calin ). We anticipate that this automated design workflow will streamline the implementation of Boolean functions in many organisms and for various applications.
Yunusova, Anastasia M.; Fishman, Veniamin S.; Vasiliev, Gennady V.
2017-01-01
Factor-mediated reprogramming of somatic cells towards pluripotency is a low-efficiency process during which only small subsets of cells are successfully reprogrammed. Previous analyses of the determinants of the reprogramming potential are based on average measurements across a large population of cells or on monitoring a relatively small number of single cells with live imaging. Here, we applied lentiviral genetic barcoding, a powerful tool enabling the identification of familiar relationships in thousands of cells. High-throughput sequencing of barcodes from successfully reprogrammed cells revealed a significant number of barcodes from related cells. We developed a computer model, according to which a probability of synchronous reprogramming of sister cells equals 10–30%. We conclude that the reprogramming success is pre-established in some particular cells and, being a heritable trait, can be maintained through cell division. Thus, reprogramming progresses in a deterministic manner, at least at the level of cell lineages. PMID:28446707
NASA Technical Reports Server (NTRS)
Hidalgo, Homero, Jr.
2000-01-01
An innovative methodology for determining structural target mode selection and mode selection based on a specific criterion is presented. An effective approach to single out modes which interact with specific locations on a structure has been developed for the X-33 Launch Vehicle Finite Element Model (FEM). The presented Root-Sum-Square (RSS) displacement method computes the resultant modal displacement for each mode at selected degrees of freedom (DOF) and sorts them to locate the modes with the highest values. This method was used to determine the modes which most influenced specific locations/points on the X-33 flight vehicle, such as avionics control components, aero-surface control actuators, propellant valve and engine points, for use in flight control stability analysis and in flight POGO stability analysis. Additionally, the modal RSS method allows primary or global target vehicle modes to be identified in an accurate and efficient manner.
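A compact numpy sketch of the RSS displacement screening described above, with illustrative array shapes: for each mode, the resultant displacement over the selected DOFs is the root-sum-square of the corresponding mode-shape entries, and modes are ranked by that value.

```python
import numpy as np

def rank_target_modes(phi, selected_dofs):
    """phi: mode-shape matrix of shape (n_dof, n_modes); selected_dofs: row indices
    of the DOFs of interest (e.g. actuator or avionics attachment points).
    Returns mode indices sorted by descending root-sum-square displacement."""
    rss = np.sqrt(np.sum(phi[selected_dofs, :] ** 2, axis=0))   # one value per mode
    order = np.argsort(rss)[::-1]
    return order, rss[order]

rng = np.random.default_rng(0)
phi = rng.normal(size=(100, 12))             # placeholder FEM mode shapes
modes, scores = rank_target_modes(phi, selected_dofs=[3, 17, 42])
print(modes[:5], scores[:5])                 # the five most influential modes
```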
Predictive simulation of bidirectional Glenn shunt using a hybrid blood vessel model.
Li, Hao; Leow, Wee Kheng; Chiu, Ing-Sh
2009-01-01
This paper proposes a method for performing predictive simulation of cardiac surgery. It applies a hybrid approach to model the deformation of blood vessels. The hybrid blood vessel model consists of a reference Cosserat rod and a surface mesh. The reference Cosserat rod models the blood vessel's global bending, stretching, twisting and shearing in a physically correct manner, and the surface mesh models the surface details of the blood vessel. In this way, the deformation of blood vessels can be computed efficiently and accurately. Our predictive simulation system can produce complex surgical results given a small amount of user input. It allows the surgeon to easily explore various surgical options and evaluate them. Tests of the system using the bidirectional Glenn shunt (BDG) as an application example show that the results produced by the system are similar to real surgical results.
Implementation and application of a gradient enhanced crystal plasticity model
NASA Astrophysics Data System (ADS)
Soyarslan, C.; Perdahcıoǧlu, E. S.; Aşık, E. E.; van den Boogaard, A. H.; Bargmann, S.
2017-10-01
A rate-independent crystal plasticity model is implemented in which description of the hardening of the material is given as a function of the total dislocation density. The evolution of statistically stored dislocations (SSDs) is described using a saturating type evolution law. The evolution of geometrically necessary dislocations (GNDs) on the other hand is described using the gradient of the plastic strain tensor in a non-local manner. The gradient of the incremental plastic strain tensor is computed explicitly during an implicit FE simulation after each converged step. Using the plastic strain tensor stored as state variables at each integration point and an efficient numerical algorithm to find the gradients, the GND density is obtained. This results in a weak coupling of the equilibrium solution and the gradient enhancement. The algorithm is applied to an academic test problem which considers growth of a cylindrical void in a single crystal matrix.
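As a rough numerical illustration of the non-local ingredient described above (not the paper's exact GND measure, and with an assumed Burgers vector and a placeholder strain field), the sketch below evaluates spatial gradients of one stored plastic-strain component on a regular grid with central differences and uses their norm, scaled by the Burgers vector, as a scalar GND-density indicator.

```python
import numpy as np

def gnd_indicator(plastic_strain, dx=1.0, burgers=2.5e-4):
    """plastic_strain: 2D array of one plastic-strain component on a regular grid.
    Returns a scalar field ~ |grad(eps_p)| / b used here as a simple GND-density proxy."""
    d_dy, d_dx = np.gradient(plastic_strain, dx)      # central differences in the interior
    return np.sqrt(d_dx ** 2 + d_dy ** 2) / burgers

x = np.linspace(0.0, 1.0, 50)
eps_p = np.outer(np.ones_like(x), 0.02 * x ** 2)      # placeholder strain field, varies in x
rho_gnd = gnd_indicator(eps_p, dx=x[1] - x[0])
print(rho_gnd.mean(), rho_gnd.max())
```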
Darré, Leonardo; Machado, Matías Rodrigo; Brandner, Astrid Febe; González, Humberto Carlos; Ferreira, Sebastián; Pantano, Sergio
2015-02-10
Modeling of macromolecular structures and interactions represents an important challenge for computational biology, involving different time and length scales. However, this task can be facilitated through the use of coarse-grained (CG) models, which reduce the number of degrees of freedom and allow efficient exploration of complex conformational spaces. This article presents a new CG protein model named SIRAH, developed to work with explicit solvent and to capture sequence, temperature, and ionic strength effects in a topologically unbiased manner. SIRAH is implemented in GROMACS, and interactions are calculated using a standard pairwise Hamiltonian for classical molecular dynamics simulations. We present a set of simulations that test the capability of SIRAH to produce a qualitatively correct solvation on different amino acids, hydrophilic/hydrophobic interactions, and long-range electrostatic recognition leading to spontaneous association of unstructured peptides and stable structures of single polypeptides and protein-protein complexes.
A partially reflecting random walk on spheres algorithm for electrical impedance tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maire, Sylvain, E-mail: maire@univ-tln.fr; Simon, Martin, E-mail: simon@math.uni-mainz.de
2015-12-15
In this work, we develop a probabilistic estimator for the voltage-to-current map arising in electrical impedance tomography. This novel so-called partially reflecting random walk on spheres estimator enables Monte Carlo methods to compute the voltage-to-current map in an embarrassingly parallel manner, which is an important issue with regard to the corresponding inverse problem. Our method uses the well-known random walk on spheres algorithm inside subdomains where the diffusion coefficient is constant and employs replacement techniques motivated by finite difference discretization to deal with both mixed boundary conditions and interface transmission conditions. We analyze the global bias and the variance of the new estimator both theoretically and experimentally. Subsequently, the variance of the new estimator is considerably reduced via a novel control variate conditional sampling technique which yields a highly efficient hybrid forward solver coupling probabilistic and deterministic algorithms.
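For context, a plain walk-on-spheres estimator for the Laplace equation on a disk with Dirichlet data is sketched below; the partially reflecting variant for mixed boundary and interface transmission conditions developed in the paper is substantially more involved.

```python
import math, random

def walk_on_spheres(x, y, boundary_value, radius=1.0, eps=1e-3, n_walks=2000):
    """Estimate u(x, y) for Laplace's equation on a disk of given radius with
    Dirichlet data boundary_value(theta), using the plain walk-on-spheres method."""
    total = 0.0
    for _ in range(n_walks):
        px, py = x, y
        while True:
            d = radius - math.hypot(px, py)      # distance to the circular boundary
            if d < eps:
                total += boundary_value(math.atan2(py, px))
                break
            theta = random.uniform(0.0, 2.0 * math.pi)
            px += d * math.cos(theta)            # jump to a uniform point on the
            py += d * math.sin(theta)            # largest sphere inside the domain
    return total / n_walks

# Harmonic check: u(x, y) = x solves Laplace's equation with boundary data cos(theta)
print(walk_on_spheres(0.3, 0.2, boundary_value=math.cos))   # should be close to 0.3
```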
Foo Kune, Denis [Saint Paul, MN; Mahadevan, Karthikeyan [Mountain View, CA
2011-01-25
A recursive verification protocol to reduce the time variance due to delays in the network by putting the subject node at most one hop from the verifier node provides for an efficient manner to test wireless sensor nodes. Since the software signatures are time based, recursive testing will give a much cleaner signal for positive verification of the software running on any one node in the sensor network. In this protocol, the main verifier checks its neighbor, who in turn checks its neighbor, and continuing this process until all nodes have been verified. This ensures minimum time delays for the software verification. Should a node fail the test, the software verification downstream is halted until an alternative path (one not including the failed node) is found. Utilizing techniques well known in the art, having a node tested twice, or not at all, can be avoided.
Patil, M P; Sonolikar, R L
2008-10-01
This paper presents a detailed computational fluid dynamics (CFD) based approach for modeling thermal destruction of hazardous wastes in a circulating fluidized bed (CFB) incinerator. The model is based on an Euler-Lagrangian approach in which the gas phase (continuous phase) is treated in an Eulerian reference frame, whereas the waste particulate (dispersed phase) is treated in a Lagrangian reference frame. The reaction chemistry has been modeled through a mixture fraction/PDF approach. The conservation equations for mass, momentum, energy, mixture fraction and other closure equations have been solved using the general purpose CFD code FLUENT 4.5. A finite volume method on a structured grid has been used for the solution of the governing equations. The model provides detailed information on the hydrodynamics (gas velocity, particulate trajectories), gas composition (CO, CO2, O2) and temperature inside the riser. The model also allows different operating scenarios to be examined in an efficient manner.
Derikx, Joep P M; Erdkamp, Frans L G; Hoofwijk, A G M
2013-01-01
An electronic health record (EHR) should provide 4 key functionalities: (a) documenting patient data; (b) facilitating computerised provider order entry; (c) displaying the results of diagnostic research; and (d) providing support for healthcare providers in the clinical decision-making process. Computerised provider order entry into the EHR enables the electronic receipt and transfer of orders to ancillary departments, which can take the place of handwritten orders. By classifying the computerised provider order entries according to disorders, digital care pathways can be created. Such care pathways could result in faster and improved diagnostics. Communicating by means of an electronic instruction document that is linked to a computerised provider order entry facilitates the provision of healthcare in a safer, more efficient and auditable manner. The implementation of a full-scale EHR has been delayed as a result of economic, technical and legal barriers, as well as some resistance by physicians.
Lillo, Victor J; Mansilla, Javier; Saá, José M
2018-06-06
Proton transfer is central to the understanding of chemical processes, more so in addition reactions of the type NuH + E → Nu-EH taking place under solvent-free and catalyst-free conditions. Herein we show that the addition of alcohols or amines (the NuH component) to imine derivatives (the E component), in a 1:1 ratio, under solvent-free and catalyst-free conditions, is an efficient method to access N,O- and N,N-acetal derivatives. In addition, computational studies reveal that these are catalyzed reactions involving two or even three NuH molecules operating in a cooperative manner as H-bonded NuH(NuH)nNuH associates (many-body effects) in the transition state, through a concerted proton shuttling mechanism (addition of alcohols) or a stepwise proton shuttling mechanism (addition of amines), thereby facilitating the key proton transfer step.
NASA Astrophysics Data System (ADS)
Izquierdo, Joaquín; Montalvo, Idel; Campbell, Enrique; Pérez-García, Rafael
2016-08-01
Selecting the most appropriate heuristic for solving a specific problem is not easy, for many reasons. This article focuses on one of these reasons: traditionally, the solution search process has operated in a given manner regardless of the specific problem being solved, and the process has been the same regardless of the size, complexity and domain of the problem. To cope with this situation, search processes should mould the search into areas of the search space that are meaningful for the problem. This article builds on previous work in the development of a multi-agent paradigm using techniques derived from knowledge discovery (data-mining techniques) on databases of so-far visited solutions. The aim is to improve the search mechanisms, increase computational efficiency and use rules to enrich the formulation of optimization problems, while reducing the search space and catering to realistic problems.
Navier-Stokes Simulation of UH-60A Rotor/Wake Interaction Using Adaptive Mesh Refinement
NASA Technical Reports Server (NTRS)
Chaderjian, Neal M.
2017-01-01
Time-dependent Navier-Stokes simulations have been carried out for a flexible UH-60A rotor in forward flight, where the rotor wake interacts with the rotor blades. These flow conditions involved blade vortex interaction and dynamic stall, two common conditions that occur as modern helicopter designs strive to achieve greater flight speeds and payload capacity. These numerical simulations utilized high-order spatial accuracy and delayed detached eddy simulation. Emphasis was placed on understanding how improved rotor wake resolution affects the prediction of the normal force, pitching moment, and chord force of the rotor. Adaptive mesh refinement was used to highly resolve the turbulent rotor wake in a computationally efficient manner. Moreover, blade vortex interaction was found to trigger dynamic stall. Time-dependent flow visualization was utilized to provide an improved understanding of the numerical and physical mechanisms involved with three-dimensional dynamic stall.
Direct measurement of nonlocal entanglement of two-qubit spin quantum states.
Cheng, Liu-Yong; Yang, Guo-Hui; Guo, Qi; Wang, Hong-Fu; Zhang, Shou
2016-01-18
We propose efficient schemes of direct concurrence measurement for two-qubit spin and photon-polarization entangled states via the interaction between single-photon pulses and nitrogen-vacancy (NV) centers in diamond embedded in optical microcavities. For different entangled-state types, diversified quantum devices and operations are designed accordingly. The initial unknown entangled states are possessed by two spatially separated participants, and nonlocal spin (polarization) entanglement can be measured with the aid of detection probabilities of photon (NV center) states. This non-demolition entanglement measurement prevents the initial entangled particle pairs from being completely annihilated; instead, they evolve into the corresponding maximally entangled states. Moreover, joint inter-qubit operation or global qubit readout is not required for the presented schemes, and the final analyses indicate favorable performance under currently achievable laboratory parameter conditions. The unique advantages of spin qubits give our schemes wide potential applications in spin-based solid-state quantum information and computation.
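For reference, the quantity these schemes measure, the concurrence of a two-qubit state, can be computed directly from a density matrix with Wootters' formula. The short Python sketch below does that numerically; it is a check on the target quantity, not a model of the NV-center measurement procedure described in the abstract.

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix rho (4x4)."""
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    rho_tilde = yy @ rho.conj() @ yy                      # spin-flipped state
    lam = np.sqrt(np.abs(np.sort(np.linalg.eigvals(rho @ rho_tilde).real)[::-1]))
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# maximally entangled Bell state |Phi+> has concurrence 1
phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)
print(concurrence(np.outer(phi_plus, phi_plus.conj())))   # ~1.0
```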
Quhe, Ruge; Nava, Marco; Tiwary, Pratyush; Parrinello, Michele
2015-04-14
We develop a new efficient approach for the simulation of static properties of quantum systems using path integral molecular dynamics in combination with metadynamics. We use the isomorphism between a quantum system and a classical one in which a quantum particle is mapped into a ring polymer. A history dependent biasing potential is built as a function of the elastic energy of the isomorphic polymer. This enhances fluctuations in the shape and size of the necklace in a controllable manner and allows escaping deep energy minima in a limited computer time. In this way, we are able to sample high free energy regions and cross barriers, which would otherwise be insurmountable with unbiased methods. This substantially improves the ability of finding the global free energy minimum as well as exploring other metastable states. The performance of the new technique is demonstrated by illustrative applications on model potentials of varying complexity.
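A rough illustration of the biasing idea (history-dependent Gaussians deposited along the ring polymer's elastic energy) is given below in Python; the harmonic-spring collective variable and the Gaussian hill parameters are illustrative assumptions, not the paper's production setup.

```python
import numpy as np

def ring_elastic_energy(beads, k_spring):
    """Harmonic spring energy of a closed ring polymer (the collective variable)."""
    diffs = beads - np.roll(beads, -1, axis=0)
    return 0.5 * k_spring * np.sum(diffs ** 2)

class MetadynamicsBias:
    """History-dependent bias V(s) = sum of Gaussians deposited at visited s values."""
    def __init__(self, height=0.1, width=0.05, stride=100):
        self.height, self.width, self.stride = height, width, stride
        self.centers = []

    def deposit(self, step, s):
        if step % self.stride == 0:          # add a hill every `stride` MD steps
            self.centers.append(s)

    def potential(self, s):
        if not self.centers:
            return 0.0
        c = np.asarray(self.centers)
        return float(np.sum(self.height * np.exp(-((s - c) ** 2) / (2 * self.width ** 2))))
```

In a real path-integral run, the force from this bias would be added to the physical forces on the beads, progressively filling visited minima and enhancing fluctuations in the necklace's shape and size.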
Efficient Numeric and Geometric Computations using Heterogeneous Shared Memory Architectures
2017-10-04
Report: Efficient Numeric and Geometric Computations using Heterogeneous Shared Memory Architectures (Chapel Hill). The report describes algorithms for scientific and geometric computing that exploit the power and performance efficiency of heterogeneous shared memory architectures.
New Gravity Wave Treatments for GISS Climate Models
NASA Technical Reports Server (NTRS)
Geller, Marvin A.; Zhou, Tiehan; Ruedy, Reto; Aleinov, Igor; Nazarenko, Larissa; Tausnev, Nikolai L.; Sun, Shan; Kelley, Maxwell; Cheng, Ye
2011-01-01
Previous versions of GISS climate models have either used formulations of Rayleigh drag to represent unresolved gravity wave interactions with the model-resolved flow or have included a rather complicated treatment of unresolved gravity waves that, while being climate interactive, involved the specification of a relatively large number of parameters that were not well constrained by observations and also was computationally very expensive. Here, the authors introduce a relatively simple and computationally efficient specification of unresolved orographic and nonorographic gravity waves and their interaction with the resolved flow. Comparisons of the GISS model winds and temperatures with no gravity wave parameterization; with only orographic gravity wave parameterization; and with both orographic and nonorographic gravity wave parameterizations are shown to illustrate how the zonal mean winds and temperatures converge toward observations. The authors also show that the specifications of orographic and nonorographic gravity waves must be different in the Northern and Southern Hemispheres. Then results are presented where the nonorographic gravity wave sources are specified to represent sources from convection in the intertropical convergence zone and spontaneous emission from jet imbalances. Finally, a strategy to include these effects in a climate-dependent manner is suggested.
Schmitz, Ulf; Lai, Xin; Winter, Felix; Wolkenhauer, Olaf; Vera, Julio; Gupta, Shailendra K.
2014-01-01
MicroRNAs (miRNAs) are an integral part of gene regulation at the post-transcriptional level. Recently, it has been shown that pairs of miRNAs can repress the translation of a target mRNA in a cooperative manner, which leads to an enhanced effectiveness and specificity in target repression. However, it remains unclear which miRNA pairs can synergize and which genes are target of cooperative miRNA regulation. In this paper, we present a computational workflow for the prediction and analysis of cooperating miRNAs and their mutual target genes, which we refer to as RNA triplexes. The workflow integrates methods of miRNA target prediction, triplex structure analysis, molecular dynamics simulations and mathematical modeling for a reliable prediction of functional RNA triplexes and target repression efficiency. In a case study we analyzed the human genome and identified several thousand targets of cooperative gene regulation. Our results suggest that miRNA cooperativity is a frequent mechanism for an enhanced target repression by pairs of miRNAs facilitating distinctive and fine-tuned target gene expression patterns. Human RNA triplexes predicted and characterized in this study are organized in a web resource at www.sbi.uni-rostock.de/triplexrna/. PMID:24875477
Sadeghi, Zahra; Testolin, Alberto
2017-08-01
In humans, efficient recognition of written symbols is thought to rely on a hierarchical processing system, where simple features are progressively combined into more abstract, high-level representations. Here, we present a computational model of Persian character recognition based on deep belief networks, where increasingly more complex visual features emerge in a completely unsupervised manner by fitting a hierarchical generative model to the sensory data. Crucially, high-level internal representations emerging from unsupervised deep learning can be easily read out by a linear classifier, achieving state-of-the-art recognition accuracy. Furthermore, we tested the hypothesis that handwritten digits and letters share many common visual features: A generative model that captures the statistical structure of the letters distribution should therefore also support the recognition of written digits. To this aim, deep networks trained on Persian letters were used to build high-level representations of Persian digits, which were indeed read out with high accuracy. Our simulations show that complex visual features, such as those mediating the identification of Persian symbols, can emerge from unsupervised learning in multilayered neural networks and can support knowledge transfer across related domains.
NASA Astrophysics Data System (ADS)
Saha, Abhijit; Vivas, A. Katherina
2017-12-01
Ongoing and future surveys with repeat imaging in multiple bands are producing (or will produce) time-spaced measurements of brightness, resulting in the identification of large numbers of variable sources in the sky. A large fraction of these are periodic variables: compilations of these are of scientific interest for a variety of purposes. Unavoidably, the data sets from many such surveys not only have sparse sampling, but also have embedded frequencies in the observing cadence that beat against the natural periodicities of any object under investigation. Such limitations can make period determination ambiguous and uncertain. For multiband data sets with asynchronous measurements in multiple passbands, we wish to maximally use the information on periodicity in a manner that is agnostic of differences in the light-curve shapes across the different channels. Given large volumes of data, computational efficiency is also at a premium. This paper develops and presents a computationally economic method for determining periodicity that combines the results from two different classes of period-determination algorithms. The underlying principles are illustrated through examples. The effectiveness of this approach for combining asynchronously sampled measurements in multiple observables that share an underlying fundamental frequency is also demonstrated.
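A minimal sketch of the band-combination idea follows, assuming a plain Lomb-Scargle periodogram per band (scipy's implementation) whose normalized powers are simply summed, so that asynchronously sampled bands vote on a common frequency without assuming shared light-curve shapes. The two-band data are synthetic, and this is not the paper's specific two-algorithm combination.

```python
import numpy as np
from scipy.signal import lombscargle

def combined_periodogram(band_data, freqs):
    """Sum per-band Lomb-Scargle power over a shared frequency grid.
    band_data: dict band -> (times, mags); freqs: trial frequencies (cycles/day)."""
    total = np.zeros_like(freqs)
    for t, y in band_data.values():
        y = y - y.mean()
        total += lombscargle(t, y, 2 * np.pi * freqs, normalize=True)
    return total

# hypothetical two-band data with a shared 0.6-day period
rng = np.random.default_rng(1)
freqs = np.linspace(0.5, 5.0, 5000)
t1, t2 = np.sort(rng.uniform(0, 60, 80)), np.sort(rng.uniform(0, 60, 60))
make = lambda t, a: a * np.sin(2 * np.pi * t / 0.6) + 0.05 * rng.normal(size=t.size)
best = freqs[np.argmax(combined_periodogram({"g": (t1, make(t1, 1.0)),
                                             "r": (t2, make(t2, 0.7))}, freqs))]
print(1.0 / best)  # recovered period, close to 0.6 d
```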
Automated Computerized Analysis of Speech in Psychiatric Disorders
Cohen, Alex S.; Elvevåg, Brita
2014-01-01
Purpose of Review: Disturbances in communication are a hallmark of severe mental illnesses. Recent technological advances have paved the way for objectifying communication using automated computerized linguistic and acoustic analysis. We review recent studies applying various computer-based assessments to the natural language produced by adult patients with severe mental illness (SMI). Recent Findings: Automated computerized methods afford tools with which it is possible to objectively evaluate patients in a reliable, valid and efficient manner that complements human ratings. Crucially, these measures correlate with important clinical measures. The clinical relevance of these novel metrics has been demonstrated by showing their relationship to functional outcome measures, their in vivo link to classic ‘language’ regions in the brain, and, in the case of linguistic analysis, their relationship to candidate genes for severe mental illness. Summary: Computer-based assessments of natural language afford a framework with which to measure communication disturbances in adults with SMI. Emerging evidence suggests that they can be reliable and valid, and overcome many practical limitations of more traditional assessment methods. The advancement of these technologies offers unprecedented potential for measuring and understanding some of the most crippling symptoms of some of the most debilitating illnesses known to humankind. PMID:24613984
He, W; Zhao, S; Liu, X; Dong, S; Lv, J; Liu, D; Wang, J; Meng, Z
2013-12-04
Large-scale next-generation sequencing (NGS)-based resequencing detects sequence variations, constructs evolutionary histories, and identifies phenotype-related genotypes. However, NGS-based resequencing studies generate extraordinarily large amounts of data, making computations difficult. Effective use and analysis of these data for NGS-based resequencing studies remains a difficult task for individual researchers. Here, we introduce ReSeqTools, a full-featured toolkit for NGS (Illumina sequencing)-based resequencing analysis, which processes raw data, interprets mapping results, and identifies and annotates sequence variations. ReSeqTools provides abundant scalable functions for routine resequencing analysis in different modules to facilitate customization of the analysis pipeline. ReSeqTools is designed to use compressed data files as input or output to save storage space and facilitates faster and more computationally efficient large-scale resequencing studies in a user-friendly manner. It offers abundant practical functions and generates useful statistics during the analysis pipeline, which significantly simplifies resequencing analysis. Its integrated algorithms and abundant sub-functions provide a solid foundation for special demands in resequencing projects. Users can combine these functions to construct their own pipelines for other purposes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chacón, L., E-mail: chacon@lanl.gov; Chen, G.; Knoll, D.A.
We review the state of the art in the formulation, implementation, and performance of so-called high-order/low-order (HOLO) algorithms for challenging multiscale problems. HOLO algorithms attempt to couple one or several high-complexity physical models (the high-order model, HO) with low-complexity ones (the low-order model, LO). The primary goal of HOLO algorithms is to achieve nonlinear convergence between HO and LO components while minimizing memory footprint and managing the computational complexity in a practical manner. Key to the HOLO approach is the use of the LO representations to address temporal stiffness, effectively accelerating the convergence of the HO/LO coupled system. The HOLO approach is broadly underpinned by the concept of nonlinear elimination, which enables segregation of the HO and LO components in ways that can effectively use heterogeneous architectures. The accuracy and efficiency benefits of HOLO algorithms are demonstrated with specific applications to radiation transport, gas dynamics, plasmas (both Eulerian and Lagrangian formulations), and ocean modeling. Across this broad application spectrum, HOLO algorithms achieve significant accuracy improvements at a fraction of the cost compared to conventional approaches. It follows that HOLO algorithms hold significant potential for high-fidelity system scale multiscale simulations leveraging exascale computing.
Unsteady adjoint for large eddy simulation of a coupled turbine stator-rotor system
NASA Astrophysics Data System (ADS)
Talnikar, Chaitanya; Wang, Qiqi; Laskowski, Gregory
2016-11-01
Unsteady fluid flow simulations like large eddy simulation are crucial in capturing key physics in turbomachinery applications like separation and wake formation in flow over a turbine vane with a downstream blade. To determine how sensitive the design objectives of the coupled system are to control parameters, an unsteady adjoint is needed. It enables the computation of the gradient of an objective with respect to a large number of inputs in a computationally efficient manner. In this paper we present unsteady adjoint solutions for a coupled turbine stator-rotor system. As the transonic fluid flows over the stator vane, the boundary layer transitions to turbulence. The turbulent wake then impinges on the rotor blades, causing early separation. This coupled system exhibits chaotic dynamics which causes conventional adjoint solutions to diverge exponentially, resulting in the corruption of the sensitivities obtained from the adjoint solutions for long-time simulations. In this presentation, adjoint solutions for aerothermal objectives are obtained through a localized adjoint viscosity injection method which aims to stabilize the adjoint solution and maintain accurate sensitivities. Preliminary results obtained from the supercomputer Mira will be shown in the presentation.
Multiscale high-order/low-order (HOLO) algorithms and applications
NASA Astrophysics Data System (ADS)
Chacón, L.; Chen, G.; Knoll, D. A.; Newman, C.; Park, H.; Taitano, W.; Willert, J. A.; Womeldorff, G.
2017-02-01
We review the state of the art in the formulation, implementation, and performance of so-called high-order/low-order (HOLO) algorithms for challenging multiscale problems. HOLO algorithms attempt to couple one or several high-complexity physical models (the high-order model, HO) with low-complexity ones (the low-order model, LO). The primary goal of HOLO algorithms is to achieve nonlinear convergence between HO and LO components while minimizing memory footprint and managing the computational complexity in a practical manner. Key to the HOLO approach is the use of the LO representations to address temporal stiffness, effectively accelerating the convergence of the HO/LO coupled system. The HOLO approach is broadly underpinned by the concept of nonlinear elimination, which enables segregation of the HO and LO components in ways that can effectively use heterogeneous architectures. The accuracy and efficiency benefits of HOLO algorithms are demonstrated with specific applications to radiation transport, gas dynamics, plasmas (both Eulerian and Lagrangian formulations), and ocean modeling. Across this broad application spectrum, HOLO algorithms achieve significant accuracy improvements at a fraction of the cost compared to conventional approaches. It follows that HOLO algorithms hold significant potential for high-fidelity system scale multiscale simulations leveraging exascale computing.
Comparison of Two Multidisciplinary Optimization Strategies for Launch-Vehicle Design
NASA Technical Reports Server (NTRS)
Braun, R. D.; Powell, R. W.; Lepsch, R. A.; Stanley, D. O.; Kroo, I. M.
1995-01-01
The investigation focuses on development of a rapid multidisciplinary analysis and optimization capability for launch-vehicle design. Two multidisciplinary optimization strategies in which the analyses are integrated in different manners are implemented and evaluated for solution of a single-stage-to-orbit launch-vehicle design problem. Weights and sizing, propulsion, and trajectory issues are directly addressed in each optimization process. Additionally, the need to maintain a consistent vehicle model across the disciplines is discussed. Both solution strategies were shown to obtain similar solutions from two different starting points. These solutions suggest that a dual-fuel, single-stage-to-orbit vehicle with a dry weight of approximately 1.927 x 10^5 lb, gross liftoff weight of 2.165 x 10^6 lb, and length of 181 ft is attainable. A comparison of the two approaches demonstrates that the treatment of disciplinary coupling has a direct effect on optimization convergence and the required computational effort. In comparison with the first solution strategy, which is of the general form typically used within the launch vehicle design community at present, the second optimization approach is shown to be 3-4 times more computationally efficient.
Geoscience after IT: Part G. Familiarization with spatial analysis
NASA Astrophysics Data System (ADS)
Pundt, Hardy; Brinkkötter-Runde, Klaus
2000-04-01
Field-based and GPS-supported GIS are increasingly applied in various spatial disciplines. Such systems represent more sophisticated, time- and cost-effective tools than traditional field forms for data acquisition. Meanwhile, various systems are on the market. These mostly enable the user to define geo-objects by means of GPS information, supported by functionalities to collect and analyze geometric information. The digital acquisition of application-specific attributes is often underrepresented within such systems. This is surprising because pen-computer-based GIS can be used to collect attributes in a profitable manner, thus adequately supporting the requirements of the user. Visualization and graphic displays of spatial data are helpful means to improve such a data collection process. In sections one and two, basic aspects of visualization and current uses of visualization techniques for field-based GIS are described. Section three mentions new developments within the framework of wearable computing and augmented reality. Section four describes current activities aimed at the realization of real-time online field-based GIS. This includes efforts to realize an online GIS data link to improve the efficiency and the quality of fieldwork. A brief discussion in section five leads to conclusions and some key issues for future research.
New Gravity Wave Treatments for GISS Climate Models
NASA Technical Reports Server (NTRS)
Geller, Marvin A.; Zhou, Tiehan; Ruedy, Reto; Aleinov, Igor; Nazarenko, Larissa; Tausnev, Nikolai L.; Sun, Shan; Kelley, Maxwell; Cheng, Ye
2010-01-01
Previous versions of GISS climate models have either used formulations of Rayleigh drag to represent unresolved gravity wave interactions with the model resolved flow or have included a rather complicated treatment of unresolved gravity waves that, while being climate interactive, involved the specification of a relatively large number of parameters that were not well constrained by observations and also was computationally very expensive. Here, we introduce a relatively simple and computationally efficient specification of unresolved orographic and non-orographic gravity waves and their interaction with the resolved flow. We show comparisons of the GISS model winds and temperatures with no gravity wave parametrization; with only orographic gravity wave parameterization; and with both orographic and non-orographic gravity wave parameterizations to illustrate how the zonal mean winds and temperatures converge toward observations. We also show that the specifications of orographic and nonorographic gravity waves must be different in the Northern and Southern Hemispheres. We then show results where the non-orographic gravity wave sources are specified to represent sources from convection in the Intertropical Convergence Zone and spontaneous emission from jet imbalances. Finally, we suggest a strategy to include these effects in a climate dependent manner.
Mumtaz, Sidra; Khan, Laiq; Ahmed, Saghir; Bader, Rabiah
2017-01-01
This paper focuses on the indirect adaptive tracking control of renewable energy sources in a grid-connected hybrid power system. Renewable energy systems have low efficiency and an intermittent nature due to unpredictable meteorological conditions. The domestic load and the conventional charging stations behave in an uncertain manner. To operate the renewable energy sources efficiently for harvesting maximum power, instantaneous nonlinear dynamics should be captured online. A Chebyshev-wavelet embedded NeuroFuzzy indirect adaptive MPPT (maximum power point tracking) control paradigm is proposed for the variable speed wind turbine-permanent magnet synchronous generator (VSWT-PMSG). A Hermite-wavelet incorporated NeuroFuzzy indirect adaptive MPPT control strategy for the photovoltaic (PV) system to extract maximum power, and an indirect adaptive tracking control scheme for the Solid Oxide Fuel Cell (SOFC), are also developed. A comprehensive simulation test-bed for a grid-connected hybrid power system is developed in Matlab/Simulink. The robustness of the suggested indirect adaptive control paradigms is evaluated through simulation results in a grid-connected hybrid power system test-bed by comparison with conventional and intelligent control techniques. The simulation results validate the effectiveness of the proposed control paradigms.
Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses
Liu, Bo; Madduri, Ravi K; Sotomayor, Borja; Chard, Kyle; Lacinski, Lukasz; Dave, Utpal J; Li, Jianqiang; Liu, Chunchen; Foster, Ian T
2014-01-01
Due to the upcoming data deluge of genome data, the need for storing and processing large-scale genome data, easy access to biomedical analyses tools, efficient data sharing and retrieval has presented significant challenges. The variability in data volume results in variable computing and storage requirements, therefore biomedical researchers are pursuing more reliable, dynamic and convenient methods for conducting sequencing analyses. This paper proposes a Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses, which enables reliable and highly scalable execution of sequencing analyses workflows in a fully automated manner. Our platform extends the existing Galaxy workflow system by adding data management capabilities for transferring large quantities of data efficiently and reliably (via Globus Transfer), domain-specific analyses tools preconfigured for immediate use by researchers (via user-specific tools integration), automatic deployment on Cloud for on-demand resource allocation and pay-as-you-go pricing (via Globus Provision), a Cloud provisioning tool for auto-scaling (via HTCondor scheduler), and the support for validating the correctness of workflows (via semantic verification tools). Two bioinformatics workflow use cases as well as performance evaluation are presented to validate the feasibility of the proposed approach. PMID:24462600
Khan, Laiq; Ahmed, Saghir; Bader, Rabiah
2017-01-01
This paper focuses on the indirect adaptive tracking control of renewable energy sources in a grid-connected hybrid power system. Renewable energy systems have low efficiency and an intermittent nature due to unpredictable meteorological conditions. The domestic load and the conventional charging stations behave in an uncertain manner. To operate the renewable energy sources efficiently for harvesting maximum power, instantaneous nonlinear dynamics should be captured online. A Chebyshev-wavelet embedded NeuroFuzzy indirect adaptive MPPT (maximum power point tracking) control paradigm is proposed for the variable speed wind turbine-permanent magnet synchronous generator (VSWT-PMSG). A Hermite-wavelet incorporated NeuroFuzzy indirect adaptive MPPT control strategy for the photovoltaic (PV) system to extract maximum power, and an indirect adaptive tracking control scheme for the Solid Oxide Fuel Cell (SOFC), are also developed. A comprehensive simulation test-bed for a grid-connected hybrid power system is developed in Matlab/Simulink. The robustness of the suggested indirect adaptive control paradigms is evaluated through simulation results in a grid-connected hybrid power system test-bed by comparison with conventional and intelligent control techniques. The simulation results validate the effectiveness of the proposed control paradigms. PMID:28877191
NASA Astrophysics Data System (ADS)
Gupta, Shubhank; Panda, Aditi; Naskar, Ruchira; Mishra, Dinesh Kumar; Pal, Snehanshu
2017-11-01
Steels are alloys of iron and carbon, widely used in construction and other applications. The evolution of steel microstructure through various heat treatment processes is an important factor in controlling the properties and performance of steel. Extensive experimentation has been performed to enhance the properties of steel by customizing heat treatment processes. However, experimental analyses are always associated with high resource requirements in terms of cost and time. As an alternative solution, we propose an image-processing-based technique for refining raw plain carbon steel microstructure images into a digital form usable in experiments related to heat treatment processes of steel in diverse applications. The proposed work follows the conventional steps practiced by materials engineers in manual refinement of steel images, and it appropriately utilizes basic image processing techniques (including filtering, segmentation, opening, and clustering) to automate the whole process. The proposed refinement of steel microstructure images aims to enable computer-aided simulations of heat treatment of plain carbon steel in a timely and cost-efficient manner; hence it is beneficial for the materials and metallurgy industry. Our experimental results prove the efficiency and effectiveness of the proposed technique.
An exact solution for orbit view-periods from a station on a tri-axial ellipsoidal planet
NASA Technical Reports Server (NTRS)
Tang, C. C. H.
1986-01-01
This paper presents the concise exact solution for predicting view-periods to be observed from a masked or unmasked tracking station on a tri-axial ellipsoidal surface. The new exact approach expresses the azimuth and elevation angles of a spacecraft in terms of the station-centered geodetic topocentric coordinates in an elegantly concise manner. A simple and efficient algorithm is developed to avoid costly repetitive computations in searching for neighborhoods near the rise and set times of each satellite orbit for each station. Only one search for each orbit is necessary for each station. Sample results indicate that the use of an assumed spherical earth instead of an 'actual' tri-axial ellipsoidal earth could introduce an error up to a few minutes in a view-period prediction for circular orbits of low or medium altitude. For an elliptical orbit of high eccentricity and long period, the maximum error could be even larger. The analytic treatment and the efficient algorithm are designed for geocentric orbits, but they should be applicable to interplanetary trajectories by an appropriate coordinates transformation at each view-period calculation. This analysis can be accomplished only by not using the classical orbital elements.
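The station-centered topocentric construction can be sketched in a few lines: rotate the station-to-spacecraft vector from Earth-fixed coordinates into the local east-north-up frame and read off azimuth and elevation. The Python sketch below assumes geodetic latitude and longitude are already available; the tri-axial refinements, masking, and rise/set search treated in the paper are not reproduced.

```python
import numpy as np

def az_el(station_ecef, sat_ecef, lat, lon):
    """Azimuth/elevation (degrees) of a spacecraft seen from a station, via the
    standard ECEF -> East-North-Up rotation (lat, lon in radians)."""
    sl, cl = np.sin(lat), np.cos(lat)
    so, co = np.sin(lon), np.cos(lon)
    rho = np.asarray(sat_ecef, float) - np.asarray(station_ecef, float)
    enu = np.array([[-so,       co,      0.0],   # east
                    [-sl * co, -sl * so,  cl],   # north
                    [ cl * co,  cl * so,  sl]]) @ rho   # up
    e, n, u = enu
    az = np.degrees(np.arctan2(e, n) % (2 * np.pi))   # clockwise from north
    el = np.degrees(np.arcsin(u / np.linalg.norm(enu)))
    return az, el

# a point 1000 km straight above a station at lat = lon = 0 sits at 90 deg elevation
print(az_el([6371e3, 0, 0], [7371e3, 0, 0], lat=0.0, lon=0.0))
```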
Kanematsu, Nobuyuki
2011-04-01
This work addresses computing techniques for dose calculations in treatment planning with proton and ion beams, based on an efficient kernel-convolution method referred to as grid-dose spreading (GDS) and an accurate heterogeneity-correction method referred to as Gaussian beam splitting. The original GDS algorithm suffered from distortion of the dose distribution for beams tilted with respect to the dose-grid axes. Use of intermediate grids normal to the beam field has solved the beam-tilting distortion. The interplay between the arrangement of beams and grids was found to be another intrinsic source of artifact. Inclusion of rectangular-kernel convolution in beam transport, to share the beam contribution among the nearest grids in a regulatory manner, has solved the interplay problem. This algorithmic framework was applied to a tilted proton pencil beam and a broad carbon-ion beam. In these cases, while the elementary pencil beams individually split into several tens, the calculation time increased only by several times with the GDS algorithm. The GDS and beam-splitting methods will complementarily enable accurate and efficient dose calculations for radiotherapy with protons and ions. Copyright © 2010 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
An exact solution for orbit view-periods from a station on a tri-axial ellipsoidal planet
NASA Astrophysics Data System (ADS)
Tang, C. C. H.
1986-08-01
This paper presents the concise exact solution for predicting view-periods to be observed from a masked or unmasked tracking station on a tri-axial ellipsoidal surface. The new exact approach expresses the azimuth and elevation angles of a spacecraft in terms of the station-centered geodetic topocentric coordinates in an elegantly concise manner. A simple and efficient algorithm is developed to avoid costly repetitive computations in searching for neighborhoods near the rise and set times of each satellite orbit for each station. Only one search for each orbit is necessary for each station. Sample results indicate that the use of an assumed spherical earth instead of an 'actual' tri-axial ellipsoidal earth could introduce an error up to a few minutes in a view-period prediction for circular orbits of low or medium altitude. For an elliptical orbit of high eccentricity and long period, the maximum error could be even larger. The analytic treatment and the efficient algorithm are designed for geocentric orbits, but they should be applicable to interplanetary trajectories by an appropriate coordinates transformation at each view-period calculation. This analysis can be accomplished only by not using the classical orbital elements.
Knowledge Based Cloud FE Simulation of Sheet Metal Forming Processes.
Zhou, Du; Yuan, Xi; Gao, Haoxiang; Wang, Ailing; Liu, Jun; El Fakir, Omer; Politis, Denis J; Wang, Liliang; Lin, Jianguo
2016-12-13
The use of Finite Element (FE) simulation software to adequately predict the outcome of sheet metal forming processes is crucial to enhancing the efficiency and lowering the development time of such processes, whilst reducing costs involved in trial-and-error prototyping. Recent focus on the substitution of steel components with aluminum alloy alternatives in the automotive and aerospace sectors has increased the need to simulate the forming behavior of such alloys for ever more complex component geometries. However, these alloys, and in particular their high-strength variants, exhibit limited formability at room temperature, and high-temperature manufacturing technologies have been developed to form them. Consequently, advanced constitutive models are required to reflect the associated temperature and strain rate effects. Simulating such behavior is computationally very expensive using conventional FE simulation techniques. This paper presents a novel Knowledge Based Cloud FE (KBC-FE) simulation technique that combines advanced material and friction models with conventional FE simulations in an efficient manner, thus enhancing the capability of commercial simulation software packages. The application of these methods is demonstrated through two example case studies, namely: the prediction of a material's forming limit under hot stamping conditions, and the tool life prediction under multi-cycle loading conditions.
Computing Generalized Matrix Inverse on Spiking Neural Substrate.
Shukla, Rohit; Khoram, Soroosh; Jorgensen, Erik; Li, Jing; Lipasti, Mikko; Wright, Stephen
2018-01-01
Emerging neural hardware substrates, such as IBM's TrueNorth Neurosynaptic System, can provide an appealing platform for deploying numerical algorithms. For example, a recurrent Hopfield neural network can be used to find the Moore-Penrose generalized inverse of a matrix, thus enabling a broad class of linear optimizations to be solved efficiently, at low energy cost. However, deploying numerical algorithms on hardware platforms that severely limit the range and precision of representation for numeric quantities can be quite challenging. This paper discusses these challenges and proposes a rigorous mathematical framework for reasoning about range and precision on such substrates. The paper derives techniques for normalizing inputs and properly quantizing synaptic weights originating from arbitrary systems of linear equations, so that solvers for those systems can be implemented in a provably correct manner on hardware-constrained neural substrates. The analytical model is empirically validated on the IBM TrueNorth platform, and results show that the guarantees provided by the framework for range and precision hold under experimental conditions. Experiments with optical flow demonstrate the energy benefits of deploying a reduced-precision and energy-efficient generalized matrix inverse engine on the IBM TrueNorth platform, reflecting 10× to 100× improvement over FPGA and ARM core baselines.
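As a plain numerical reference for the quantity the neural substrate computes, the Moore-Penrose inverse can be obtained with the classical Ben-Israel/Newton-Schulz fixed-point iteration, a scheme close in spirit to what a recurrent network can emulate. The Python sketch below is illustrative only and is not the TrueNorth/Hopfield implementation described in the paper.

```python
import numpy as np

def newton_schulz_pinv(A, iters=50):
    """Iterative Moore-Penrose pseudoinverse: X <- X (2I - A X),
    started from a safely scaled A^T so the iteration converges."""
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(A.shape[0])
    for _ in range(iters):
        X = X @ (2 * I - A @ X)
    return X

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
print(np.allclose(newton_schulz_pinv(A), np.linalg.pinv(A), atol=1e-6))  # True
```

Only matrix-vector style updates appear in the loop, which is why fixed-point schemes of this kind map naturally onto recurrent, low-precision hardware, modulo the normalization and quantization issues the paper analyzes.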
Knowledge Based Cloud FE Simulation of Sheet Metal Forming Processes
Zhou, Du; Yuan, Xi; Gao, Haoxiang; Wang, Ailing; Liu, Jun; El Fakir, Omer; Politis, Denis J.; Wang, Liliang; Lin, Jianguo
2016-01-01
The use of Finite Element (FE) simulation software to adequately predict the outcome of sheet metal forming processes is crucial to enhancing the efficiency and lowering the development time of such processes, whilst reducing costs involved in trial-and-error prototyping. Recent focus on the substitution of steel components with aluminum alloy alternatives in the automotive and aerospace sectors has increased the need to simulate the forming behavior of such alloys for ever more complex component geometries. However, these alloys, and in particular their high-strength variants, exhibit limited formability at room temperature, and high-temperature manufacturing technologies have been developed to form them. Consequently, advanced constitutive models are required to reflect the associated temperature and strain rate effects. Simulating such behavior is computationally very expensive using conventional FE simulation techniques. This paper presents a novel Knowledge Based Cloud FE (KBC-FE) simulation technique that combines advanced material and friction models with conventional FE simulations in an efficient manner, thus enhancing the capability of commercial simulation software packages. The application of these methods is demonstrated through two example case studies, namely: the prediction of a material's forming limit under hot stamping conditions, and the tool life prediction under multi-cycle loading conditions. PMID:28060298
Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses.
Liu, Bo; Madduri, Ravi K; Sotomayor, Borja; Chard, Kyle; Lacinski, Lukasz; Dave, Utpal J; Li, Jianqiang; Liu, Chunchen; Foster, Ian T
2014-06-01
Due to the upcoming data deluge of genome data, the need for storing and processing large-scale genome data, easy access to biomedical analyses tools, efficient data sharing and retrieval has presented significant challenges. The variability in data volume results in variable computing and storage requirements, therefore biomedical researchers are pursuing more reliable, dynamic and convenient methods for conducting sequencing analyses. This paper proposes a Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses, which enables reliable and highly scalable execution of sequencing analyses workflows in a fully automated manner. Our platform extends the existing Galaxy workflow system by adding data management capabilities for transferring large quantities of data efficiently and reliably (via Globus Transfer), domain-specific analyses tools preconfigured for immediate use by researchers (via user-specific tools integration), automatic deployment on Cloud for on-demand resource allocation and pay-as-you-go pricing (via Globus Provision), a Cloud provisioning tool for auto-scaling (via HTCondor scheduler), and the support for validating the correctness of workflows (via semantic verification tools). Two bioinformatics workflow use cases as well as performance evaluation are presented to validate the feasibility of the proposed approach. Copyright © 2014 Elsevier Inc. All rights reserved.
77 FR 31616 - Agency Information Collection Activities: Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-29
...: Generic Clearance for the Collection Customer Satisfaction Surveys; Use: This collection of information is necessary to enable the Agency to garner customer and stakeholder feedback in an efficient, timely manner... customers and stakeholders will help ensure that users have an effective, efficient, and satisfying...
USDA-ARS?s Scientific Manuscript database
Potato virus Y (PVY) strains are transmitted by different aphid species in a non-persistent, non-circulative manner. Green peach aphid (GPA, Myzus persicae Sulzer; Aphididae, Macrosiphini) is the most efficient vector in laboratory studies, but potato aphid (PA, Macrosiphum euphorbiae Thomas; Aphidi...
ERIC Educational Resources Information Center
Kai, Jiang
2009-01-01
Accountability, which is closely related to evaluation of efficiency, effectiveness, and performance, requires proving that higher education has achieved planned results and performance in an effective manner. Highlighting efficiency and effectiveness and emphasizing results and outcomes are the basic characteristics of accountability in higher…
Efficient/reliable dc-to-dc inverter circuit
NASA Technical Reports Server (NTRS)
Pasciutti, E. R.
1970-01-01
Feedback loop, which contains an inductor in series with a saturable reactor, is added to a standard inverter circuit to permit the inverter power transistors to be switched in a controlled and efficient manner. This inverter is applicable where the power source has either high or low impedance properties.
Turgeon, Maxime; Oualkacha, Karim; Ciampi, Antonio; Miftah, Hanane; Dehghan, Golsa; Zanke, Brent W; Benedet, Andréa L; Rosa-Neto, Pedro; Greenwood, Celia Mt; Labbe, Aurélie
2018-05-01
The genomics era has led to an increase in the dimensionality of data collected in the investigation of biological questions. In this context, dimension-reduction techniques can be used to summarise high-dimensional signals into low-dimensional ones, to further test for association with one or more covariates of interest. This paper revisits one such approach, previously known as principal component of heritability and renamed here as principal component of explained variance (PCEV). As its name suggests, the PCEV seeks a linear combination of outcomes in an optimal manner, by maximising the proportion of variance explained by one or several covariates of interest. By construction, this method optimises power; however, due to its computational complexity, it has unfortunately received little attention in the past. Here, we propose a general analytical PCEV framework that builds on the assets of the original method, i.e. conceptually simple and free of tuning parameters. Moreover, our framework extends the range of applications of the original procedure by providing a computationally simple strategy for high-dimensional outcomes, along with exact and asymptotic testing procedures that drastically reduce its computational cost. We investigate the merits of the PCEV using an extensive set of simulations. Furthermore, the use of the PCEV approach is illustrated using three examples taken from the fields of epigenetics and brain imaging.
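A minimal sketch of the PCEV idea under simplifying assumptions (low-dimensional outcomes, ordinary least squares, no testing machinery): decompose the outcome variance into model and residual parts and take the top generalized eigenvector. Variable names are illustrative and the paper's high-dimensional strategy and exact/asymptotic tests are not reproduced.

```python
import numpy as np
from scipy.linalg import eigh

def pcev(Y, X):
    """Principal component of explained variance.
    Y: (n, p) outcomes; X: (n, q) covariates; requires n comfortably larger than p
    so the residual covariance is positive definite. Returns (weights, criterion)."""
    Y = Y - Y.mean(axis=0)
    X = X - X.mean(axis=0)
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)   # multivariate least squares
    fitted = X @ beta
    resid = Y - fitted
    V_model = fitted.T @ fitted     # variance explained by the covariates
    V_resid = resid.T @ resid       # residual variance
    vals, vecs = eigh(V_model, V_resid)   # generalized eigenproblem
    return vecs[:, -1], vals[-1]          # top component maximizes explained variance

# toy usage with hypothetical data
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 1))
Y = X @ rng.normal(size=(1, 5)) + rng.normal(size=(200, 5))
w, crit = pcev(Y, X)
print(w, crit)
```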
Examining ion channel properties using free-energy methods.
Domene, Carmen; Furini, Simone
2009-01-01
Recent advances in structural biology have revealed the architecture of a number of transmembrane channels, allowing for these complex biological systems to be understood in atomistic detail. Computational simulations are a powerful tool by which the dynamic and energetic properties, and thereby the function of these protein architectures, can be investigated. The experimentally observable properties of a system are often determined more by energetics than by dynamics, and therefore understanding the underlying free energy (FE) of biophysical processes is of crucial importance. Critical to the accurate evaluation of FE values are the problems of obtaining accurate sampling of complex biological energy landscapes, and of obtaining accurate representations of the potential energy of a system, this latter problem having been addressed through the development of molecular force fields. While these challenges are common to all FE methods, depending on the system under study, and the questions being asked of it, one technique for FE calculation may be preferable to another, the choice of method and simulation protocol being crucial to achieve efficiency. Applied in a correct manner, FE calculations represent a predictive and affordable computational tool with which to make relevant contact with experiments. This chapter, therefore, aims to give an overview of the most widely implemented computational methods used to calculate the FE associated with particular biochemical or biophysical events, and to highlight their recent applications to ion channels. Copyright © 2009 Elsevier Inc. All rights reserved.
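As the simplest possible illustration of turning sampling into a free energy, the Boltzmann inversion of a sampled collective variable is shown below (F(s) = -kT ln P(s)); this is a generic sketch, not one of the specific enhanced-sampling methods reviewed in the chapter.

```python
import numpy as np

def free_energy_profile(samples, kT=1.0, bins=50):
    """Boltzmann inversion: F(s) = -kT ln P(s), shifted so the minimum is zero.
    Unvisited bins give +inf, reflecting the sampling problem discussed above."""
    hist, edges = np.histogram(samples, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    with np.errstate(divide="ignore"):
        F = -kT * np.log(hist)
    F -= F[np.isfinite(F)].min()
    return centers, F

# toy usage: samples drawn from a harmonic well give a parabolic profile
s = np.random.default_rng(3).normal(loc=0.0, scale=0.5, size=100_000)
centers, F = free_energy_profile(s)
print(centers[np.argmin(F)])   # minimum near s = 0
```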
Pipelining Architecture of Indexing Using Agglomerative Clustering
NASA Astrophysics Data System (ADS)
Goyal, Deepika; Goyal, Deepti; Gupta, Parul
2010-11-01
The World Wide Web is an interlinked collection of billions of documents. Ironically, the huge size of this collection has become an obstacle to information retrieval. Search engines are used to access information on the Internet, and they retrieve pages through an indexer. This paper introduces a novel pipelining technique for structuring the core index-building system that substantially reduces index construction time, together with a clustering algorithm that partitions the set of documents into ordered clusters so that documents within the same cluster are similar and are assigned nearby document identifiers. Once documents are assigned to clusters, a hierarchy of indexes is created so that searching is efficient: clusters are successively merged into super clusters and then mega clusters. The pipelined architecture builds the index in a manner that is efficient in both space and time, and it directs each search from higher to lower levels of the index (from coarser to finer clusters) so that the user obtains matching results quickly. Because each higher-level cluster is formed from only two lower-level clusters, the search at each level is limited to two candidate clusters, and so on down the hierarchy, which keeps query processing time-efficient.
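A compact sketch of the document-identifier assignment step is given below, assuming average-link agglomerative clustering over cosine distances (scipy's hierarchy tools) and ordering documents by the resulting dendrogram so similar documents receive nearby identifiers; the pipelined index construction itself is not shown, and the document vectors are hypothetical.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, leaves_list

def assign_doc_ids(doc_vectors):
    """Order documents by an agglomerative-clustering dendrogram so that similar
    documents get close identifiers (smaller gaps in the inverted index)."""
    Z = linkage(doc_vectors, method="average", metric="cosine")
    order = leaves_list(Z)                       # leaf order of the dendrogram
    return {int(doc): new_id for new_id, doc in enumerate(order)}

docs = np.random.default_rng(0).random((8, 16))  # 8 hypothetical document vectors
print(assign_doc_ids(docs))
```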
A High-Order Finite Spectral Volume Method for Conservation Laws on Unstructured Grids
NASA Technical Reports Server (NTRS)
Wang, Z. J.; Liu, Yen; Kwak, Dochan (Technical Monitor)
2001-01-01
A time accurate, high-order, conservative, yet efficient method named Finite Spectral Volume (FSV) is developed for conservation laws on unstructured grids. The concept of a 'spectral volume' is introduced to achieve high-order accuracy in an efficient manner similar to spectral element and multi-domain spectral methods. In addition, each spectral volume is further sub-divided into control volumes (CVs), and cell-averaged data from these control volumes is used to reconstruct a high-order approximation in the spectral volume. Riemann solvers are used to compute the fluxes at spectral volume boundaries. Then cell-averaged state variables in the control volumes are updated independently. Furthermore, TVD (Total Variation Diminishing) and TVB (Total Variation Bounded) limiters are introduced in the FSV method to remove/reduce spurious oscillations near discontinuities. A very desirable feature of the FSV method is that the reconstruction is carried out only once, and analytically, and is the same for all cells of the same type, and that the reconstruction stencil is always non-singular, in contrast to the memory and CPU-intensive reconstruction in a high-order finite volume (FV) method. Discussions are made concerning why the FSV method is significantly more efficient than high-order finite volume and the Discontinuous Galerkin (DG) methods. Fundamental properties of the FSV method are studied and high-order accuracy is demonstrated for several model problems with and without discontinuities.
The Effect of Boundary Support and Reflector Dimensions on Inflatable Parabolic Antenna Performance
NASA Technical Reports Server (NTRS)
Coleman, Michael J.; Baginski, Frank; Romanofsky, Robert R.
2011-01-01
For parabolic antennas with sufficient surface accuracy, more power can be radiated with a larger aperture size. This paper explores the performance of antennas of various size and reflector depth. The particular focus is on a large inflatable elastic antenna reflector that is supported about its perimeter by a set of elastic tendons and is subjected to a constant hydrostatic pressure. The surface accuracy of the antenna is measured by an RMS calculation, while the reflector phase error component of the efficiency is determined by computing the power density at boresight. In the analysis, the calculation of antenna efficiency is not based on the Ruze Equation. Hence, no assumption regarding the distribution of the reflector surface distortions is presumed. The reflector surface is modeled as an isotropic elastic membrane using a linear stress-strain constitutive relation. Three types of antenna reflector construction are considered: one molded to an ideal parabolic form and two different flat panel design patterns. The flat panel surfaces are constructed by seaming together panels in a manner that the desired parabolic shape is approximately attained after pressurization. Numerical solutions of the model problem are calculated under a variety of conditions in order to estimate the accuracy and efficiency of these antenna systems. In the case of the flat panel constructions, several different cutting patterns are analyzed in order to determine an optimal cutting strategy.
ERIC Educational Resources Information Center
Johnson, James E.; Christie, James F.
2009-01-01
This article examines how play is affected by computers and digital toys. Research indicates that when computer software targeted at children is problem-solving oriented and open-ended, children tend to engage in creative play and interact with peers in a positive manner. On the other hand, drill-and-practice programs can be quite boring and limit…
A Framework for Collaborative and Convenient Learning on Cloud Computing Platforms
ERIC Educational Resources Information Center
Sharma, Deepika; Kumar, Vikas
2017-01-01
The depth of learning resides in collaborative work with more engagement and fun. Technology can enhance collaboration with a higher level of convenience and cloud computing can facilitate this in a cost effective and scalable manner. However, to deploy a successful online learning environment, elementary components of learning pedagogy must be…
Science-Technology Coupling: The Case of Mathematical Logic and Computer Science.
ERIC Educational Resources Information Center
Wagner-Dobler, Roland
1997-01-01
In the history of science, there have often been periods of sudden rapprochements between pure science and technology-oriented branches of science. Mathematical logic as pure science and computer science as technology-oriented science have experienced such a rapprochement, which is studied in this article in a bibliometric manner. (Author)
Securing Emergency State Data in a Tactical Computing Environment
2010-12-01
in a Controlled Manner, 19th IEEE Symposium on Computer-Based Medical Systems (CBMS), 847-854. [38] K. Kifayat, D. Llewellyn-Jones, A. Arabo, O... Drew, M. Merabti, Q. Shi, A. Waller, R. Craddock, G. Jones, State-of-the-Art in System-of-Systems Security for Crisis Management, Fourth Annual
Extinction from a Rationalist Perspective
Gallistel, C. R.
2012-01-01
The merging of the computational theory of mind and evolutionary thinking leads to a kind of rationalism, in which enduring truths about the world have become implicit in the computations that enable the brain to cope with the experienced world. The dead reckoning computation, for example, is implemented within the brains of animals as one of the mechanisms that enables them to learn where they are (Gallistel, 1990, 1995). It integrates a velocity signal with respect to a time signal. Thus, the manner in which position and velocity relate to one another in the world is reflected in the manner in which signals representing those variables are processed in the brain. I use principles of information theory and Bayesian inference to derive from other simple principles explanations for: 1) the failure of partial reinforcement to increase reinforcements to acquisition; 2) the partial reinforcement extinction effect; 3) spontaneous recovery; 4) renewal; 5) reinstatement; 6) resurgence (aka facilitated reacquisition). Like the principle underlying dead-reckoning, these principles are grounded in analytic considerations. They are the kind of enduring truths about the world that are likely to have shaped the brain's computations. PMID:22391153
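The dead-reckoning computation mentioned above is just the running integral of a velocity signal with respect to time; a brief numerical sketch (purely illustrative, not the paper's model):

```python
import numpy as np

def dead_reckon(velocities, dt, start=(0.0, 0.0)):
    """Integrate sampled (vx, vy) velocities over time to track position."""
    v = np.asarray(velocities, dtype=float)
    return np.asarray(start, dtype=float) + np.cumsum(v * dt, axis=0)

# constant velocity (1, 0.5) m/s for 10 one-second steps ends at (10, 5)
print(dead_reckon(np.tile([1.0, 0.5], (10, 1)), dt=1.0)[-1])
```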
The Effect of Computer Automation on Institutional Review Board (IRB) Office Efficiency
ERIC Educational Resources Information Center
Oder, Karl; Pittman, Stephanie
2015-01-01
Companies purchase computer systems to make their processes more efficient through automation. Some academic medical centers (AMC) have purchased computer systems for their institutional review boards (IRB) to increase efficiency and compliance with regulations. IRB computer systems are expensive to purchase, deploy, and maintain. An AMC should…
NASA Astrophysics Data System (ADS)
Valasek, Lukas; Glasa, Jan
2017-12-01
Current fire simulation systems are capable of utilizing the advantages of available high-performance computing (HPC) platforms and of modelling fires efficiently in parallel. In this paper, the efficiency of a corridor fire simulation on an HPC cluster is discussed. The parallel MPI version of the Fire Dynamics Simulator is used to test the efficiency of selected strategies for allocating the cluster's computational resources when a greater number of computational cores is used. Simulation results indicate that, if the number of cores used is not a multiple of the number of cores per cluster node, there are allocation strategies which provide more efficient calculations.
Virtually assisted optical colonoscopy
NASA Astrophysics Data System (ADS)
Marino, Joseph; Qiu, Feng; Kaufman, Arie
2008-03-01
We present a set of tools used to enhance the optical colonoscopy procedure in a novel manner with the aim of improving both the accuracy and efficiency of this procedure. In order to better present the colon information to the gastroenterologist performing a conventional (optical) colonoscopy, we correct the radial distortion of the colonoscope's fisheye view. The radial distortion is modeled with a function that converts the fisheye view to the perspective view, where the shape and size of polyps can be more readily observed. The conversion, accelerated on the graphics processing unit and running in real-time, calculates the corresponding position in the fisheye view of each pixel on the perspective image. We also merge our previous work in computer-aided polyp detection for virtual colonoscopy into the optical colonoscopy environment. The physical colonoscope path in the optical colonoscopy is approximated with the hugging corner shortest path, which is correlated with the centerline in the virtual colonoscopy. With the estimated distance that the colonoscope has been inserted, we are able to provide the gastroenterologist with visual cues along the observation path as to the location of possible polyps found by the detection process. In order to present the information to the gastroenterologist in a non-intrusive manner, we have developed a friendly user interface to enhance the optical colonoscopy without being cumbersome, distracting, or resulting in a more lackadaisical inspection by the gastroenterologist.
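A minimal sketch of the backward mapping used for such a fisheye-to-perspective conversion follows, assuming an ideal equidistant fisheye model (r = f·θ). The actual colonoscope distortion function, calibration, and GPU acceleration are not reproduced, and the focal-length parameters are illustrative.

```python
import numpy as np

def perspective_to_fisheye_map(w, h, f_persp, f_fish):
    """Backward map: for every perspective pixel, the source position in an
    equidistant fisheye image. Returns (map_x, map_y) float arrays suitable
    for a remap-style bilinear resampler."""
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    xs, ys = np.meshgrid(np.arange(w) - cx, np.arange(h) - cy)
    r_p = np.hypot(xs, ys)                     # radius in the perspective image
    theta = np.arctan2(r_p, f_persp)           # viewing angle of each perspective ray
    r_f = f_fish * theta                       # equidistant fisheye radius
    scale = np.divide(r_f, r_p, out=np.ones_like(r_p), where=r_p > 0)
    return xs * scale + cx, ys * scale + cy

map_x, map_y = perspective_to_fisheye_map(640, 480, f_persp=400.0, f_fish=250.0)
```

Because the map depends only on the image geometry, it can be precomputed once and applied per frame, which is what makes a real-time GPU implementation straightforward.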
Domínguez, Joaquín R; Muñoz-Peña, Maria J; González, Teresa; Palo, Patricia; Cuerda-Correa, Eduardo M
2016-10-01
The removal efficiency of four commonly-used parabens by electrochemical advanced oxidation with boron-doped diamond anodes in two different aqueous matrices, namely ultrapure water and surface water from the Guadiana River, has been analyzed. Response surface methodology and a factorial, composite, central, orthogonal, and rotatable (FCCOR) statistical design of experiments have been used to optimize the process. The experimental results clearly show that the initial concentration of pollutants is the factor that most markedly influences the removal efficiency in both aqueous matrices. As a rule, as the initial concentration of parabens increases, the removal efficiency decreases. The current density also affects the removal efficiency in a statistically significant manner in both aqueous matrices. In the river water aqueous matrix, a noticeable synergistic effect on the removal efficiency has been observed, probably due to the presence of chloride ions that increase the conductivity of the solution and contribute to the generation of strong secondary oxidant species such as chlorine or HClO/ClO⁻. The use of a statistical design of experiments made it possible to determine the optimal conditions necessary to achieve total removal of the four parabens in ultrapure and river water aqueous matrices.
NASA Astrophysics Data System (ADS)
Wan, Tian
This work is motivated by the lack of a fully coupled computational tool that successfully solves the turbulent, chemically reacting Navier-Stokes equations, the electron energy conservation equation, and the electric current Poisson equation. In the present work, the above equations are solved in a fully coupled manner using fully implicit parallel GMRES methods. The system of Navier-Stokes equations is solved using a GMRES method with combined Schwarz and ILU(0) preconditioners. The electron energy equation and the electric current Poisson equation are solved using a GMRES method with combined SOR and Jacobi preconditioners. The fully coupled method has also been implemented successfully in an unstructured solver, US3D, and convergence test results are presented. This new method is shown to be two to five times faster than the original DPLR method. The Poisson solver is validated with analytic test problems. Then, four problems are selected; two of them are computed to explore the possibility of onboard MHD control and power generation, and the other two are simulations of experiments. First, the possibility of onboard reentry shock control by a magnetic field is explored. As part of a previous project, MHD power generation onboard a re-entry vehicle is also simulated. Then, the MHD acceleration experiments conducted at NASA Ames Research Center are simulated. Lastly, the MHD power generation experiments known as the HVEPS project are simulated. For code validation, the scramjet experiments at the University of Queensland are simulated first; the generator section of the HVEPS test facility is then computed. The main conclusion is that the computational tool is accurate for different types of problems and flow conditions, and that its accuracy and efficiency are necessary when the flow complexity increases.
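The following is a minimal, hedged sketch of the kind of preconditioned GMRES solve described above, using SciPy on a toy sparse system. The tridiagonal matrix, the incomplete-LU settings, and the restart length are illustrative assumptions; the paper's solver combines Schwarz/ILU(0) and SOR/Jacobi preconditioners inside a fully implicit, parallel, multi-equation framework that this single-matrix example does not reproduce.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy sparse system standing in for one linearized implicit step.
n = 2000
A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete LU factorization (ILU(0)-like settings) used as a preconditioner.
ilu = spla.spilu(A, fill_factor=1.0, drop_tol=0.0)
M = spla.LinearOperator(A.shape, matvec=ilu.solve)

x, info = spla.gmres(A, b, M=M, restart=30)
print("converged" if info == 0 else f"GMRES returned info={info}",
      "residual:", np.linalg.norm(A @ x - b))
```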
NASA Astrophysics Data System (ADS)
Moreno, H. A.; Ogden, F. L.; Alvarez, L. V.
2016-12-01
This research work presents a methodology for estimating terrain slope degree, aspect (slope orientation) and total incoming solar radiation from Triangular Irregular Network (TIN) terrain models. The algorithm accounts for self shading and cast shadows, sky view fractions for diffuse radiation, remote albedo and atmospheric backscattering, by using a vectorial approach within a topocentric coordinate system and establishing geometric relations between groups of TIN elements and the sun position. A normal vector to the surface of each TIN element describes slope and aspect, while spherical trigonometry allows computing the unit vector defining the position of the sun at each hour and day of the year. Thus, a dot product determines the radiation flux at each TIN element. Cast shadows are computed by scanning the projection of groups of TIN elements in the direction of the closest perpendicular plane to the sun vector only in the visible horizon range. Sky view fractions are computed by a simplified scanning algorithm from the highest to the lowest triangles along prescribed directions and visible distances, useful to determine diffuse radiation. Finally, remote albedo is computed from the sky view fraction complementary functions for prescribed albedo values of the surrounding terrain only for significant angles above the horizon. The sensitivity of the different radiative components is tested in a mountainous watershed in Wyoming with respect to seasonal changes in weather and surrounding albedo (snow). This methodology represents an improvement on the current algorithms to compute terrain and radiation values on triangular-based models in an accurate and efficient manner. All terrain-related features (e.g. slope, aspect, sky view fraction) can be pre-computed and stored for easy access for a subsequent, progressive-in-time, numerical simulation.
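A minimal sketch of the core geometric step, the dot product between a facet normal and the sun unit vector, is given below. The function name, the east-north-up convention, and the direct-normal-irradiance value are assumptions for illustration; cast shadows, sky view fractions, remote albedo, and atmospheric backscattering from the full methodology are not modelled.

```python
import numpy as np

def triangle_radiation(v0, v1, v2, sun_azimuth_deg, sun_elev_deg, dni=1000.0):
    """Slope, aspect and direct irradiance (W/m^2) on one TIN facet, with
    vertex coordinates given in a local east-north-up frame."""
    # Facet normal (oriented upward), then slope and aspect from its components.
    n = np.cross(v1 - v0, v2 - v0)
    if n[2] < 0:
        n = -n
    n = n / np.linalg.norm(n)
    slope = np.degrees(np.arccos(n[2]))
    aspect = np.degrees(np.arctan2(n[0], n[1])) % 360.0   # clockwise from north

    # Sun unit vector from azimuth (clockwise from north) and elevation.
    az, el = np.radians(sun_azimuth_deg), np.radians(sun_elev_deg)
    s = np.array([np.cos(el) * np.sin(az), np.cos(el) * np.cos(az), np.sin(el)])

    flux = dni * max(np.dot(n, s), 0.0)   # self-shaded facets receive zero
    return slope, aspect, flux

print(triangle_radiation(np.array([0.0, 0.0, 0.0]),
                         np.array([1.0, 0.0, 0.2]),
                         np.array([0.0, 1.0, 0.1]),
                         sun_azimuth_deg=180.0, sun_elev_deg=45.0))
```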
EZLP: An Interactive Computer Program for Solving Linear Programming Problems. Final Report.
ERIC Educational Resources Information Center
Jarvis, John J.; And Others
Designed for student use in solving linear programming problems, the interactive computer program described (EZLP) permits the student to input the linear programming model in exactly the same manner in which it would be written on paper. This report includes a brief review of the development of EZLP; narrative descriptions of program features,…
Improving Teacher-Prepared Computer Software for Better Language Teaching/Learning.
ERIC Educational Resources Information Center
Rhodes, Frances Gates
A study is reported that examined the relative effectiveness of four computer-assisted-instruction (CAI) manners of presentation and response for teaching irregular verbs to English/Spanish bilingual students in South Texas. Each of 4 types of CAI presentation gave the same 46 selected irregular verbs in context to fifth-graders in 4 subject…
ERIC Educational Resources Information Center
Carrejo, David; Robertson, William H.
2011-01-01
Computer-based mathematical modeling in physics is a process of constructing models of concepts and the relationships between them that reflects the characteristics of scientific work. In this manner, computer-based modeling integrates the interactions of natural phenomena through the use of models, which provide structure for theories and a base for…
26 CFR 1.179-5 - Time and manner of making election.
Code of Federal Regulations, 2010 CFR
2010-04-01
... desktop computer costing $1,500. On Taxpayer's 2003 Federal tax return filed on April 15, 2004, Taxpayer elected to expense under section 179 the full cost of the laptop computer and the full cost of the desktop... provided by the Internal Revenue Code, the regulations under the Code, or other guidance published in the...
Climate Change Discourse in Mass Media: Application of Computer-Assisted Content Analysis
ERIC Educational Resources Information Center
Kirilenko, Andrei P.; Stepchenkova, Svetlana O.
2012-01-01
Content analysis of mass media publications has become a major scientific method used to analyze public discourse on climate change. We propose a computer-assisted content analysis method to extract prevalent themes and analyze discourse changes over an extended period in an objective and quantifiable manner. The method includes the following: (1)…
Brain Activity Associated with Emoticons: An fMRI Study
NASA Astrophysics Data System (ADS)
Yuasa, Masahide; Saito, Keiichi; Mukawa, Naoki
In this paper, we describe brain activities associated with emoticons using fMRI. In communication over a computer network, we use abstract faces such as computer graphics (CG) avatars and emoticons. These faces convey users' emotions and enrich their communications. However, the manner in which these faces influence the mental process is as yet unknown. The human brain may perceive the abstract face in an entirely different manner, depending on its level of reality. We conducted an experiment using fMRI in order to investigate the effects of emoticons. The results show that the right inferior frontal gyrus, which is associated with nonverbal communication, is activated by emoticons. Since the emoticons were created to reflect real human facial expressions as accurately as possible, we believed that they would activate the right fusiform gyrus. However, this region was not found to be activated during the experiment. This finding is useful in understanding how abstract faces affect our behaviors and decision-making in communication over a computer network.
Cellular computational platform and neurally inspired elements thereof
Okandan, Murat
2016-11-22
A cellular computational platform is disclosed that includes a multiplicity of functionally identical, repeating computational hardware units that are interconnected electrically and optically. Each computational hardware unit includes a reprogrammable local memory and has interconnections to other such units that have reconfigurable weights. Each computational hardware unit is configured to transmit signals into the network for broadcast in a protocol-less manner to other such units in the network, and to respond to protocol-less broadcast messages that it receives from the network. Each computational hardware unit is further configured to reprogram the local memory in response to incoming electrical and/or optical signals.
NASA Astrophysics Data System (ADS)
Manjanaik, N.; Parameshachari, B. D.; Hanumanthappa, S. N.; Banu, Reshma
2017-08-01
The intra-prediction process of the H.264 video coding standard is used to code the first (intra) frame of a video and achieves better coding efficiency than earlier video coding standards. Intra-frame coding reduces spatial pixel redundancy within the current frame, reduces computational complexity, and provides better rate-distortion performance. Intra frames are conventionally coded with the Rate Distortion Optimization (RDO) method, which increases computational complexity and bit rate and reduces picture quality, making it difficult to use in real-time applications; many researchers have therefore developed fast mode-decision algorithms for intra-frame coding. Previous work on intra-frame coding in H.264 using fast mode-decision intra-prediction algorithms based on different techniques resulted in increased bit rate and degraded picture quality (PSNR) for different quantization parameters. Many earlier fast mode-decision approaches achieved only a reduction in computational complexity, or saved encoding time, at the cost of increased bit rate and loss of picture quality. To avoid this increase in bit rate and loss of picture quality, a better approach was developed. This paper presents such an approach, based on a Gaussian pulse, for intra-frame coding with the diagonal-down-left intra-prediction mode, achieving higher coding efficiency in terms of PSNR and bit rate. In the proposed method, a Gaussian pulse is multiplied with each 4x4 block of frequency-domain coefficients of the 4x4 sub-macroblocks of each macroblock of the current frame before quantization. Multiplying each 4x4 block of integer-transformed coefficients by the Gaussian pulse at the macroblock level scales the coefficient information in a reversible manner. The frequency samples are modified in a known and controllable way without intermixing coefficients, which prevents the picture from degrading badly at higher quantization parameter values. The proposed work was implemented using MATLAB and the JM 18.6 reference software. It measures PSNR, bit rate, and compression of intra frames of YUV video sequences at QCIF resolution for different quantization parameter values, using the Gaussian pulse with the diagonal-down-left intra-prediction mode. The simulation results of the proposed algorithm are tabulated and compared with a previous algorithm (the Tian et al. method). The proposed algorithm reduces bit rate by 30.98% on average and maintains consistent picture quality for QCIF sequences compared to the Tian et al. method.
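A simplified sketch of the coefficient-scaling idea is shown below: each 4x4 block of transform coefficients is multiplied element-wise by a 2-D Gaussian weighting before quantization, and dividing by the same weighting undoes the scaling exactly. The Gaussian parameters and the block handling are assumptions for illustration; the actual encoder integration with the JM reference software is not reproduced.

```python
import numpy as np

def gaussian_pulse_4x4(sigma=2.0):
    """2-D Gaussian weighting over the 4x4 frequency positions (assumed form)."""
    u, v = np.meshgrid(np.arange(4), np.arange(4), indexing="ij")
    return np.exp(-(u**2 + v**2) / (2.0 * sigma**2))

def scale_blocks(coeffs, pulse):
    """Element-wise scaling of every 4x4 coefficient block of a frame.

    coeffs: (H, W) array of integer-transform coefficients, H and W multiples of 4.
    The scaling touches each coefficient independently, so it never mixes
    frequencies and is undone exactly by dividing by the same pulse."""
    h, w = coeffs.shape
    out = coeffs.astype(np.float64).copy()
    for i in range(0, h, 4):
        for j in range(0, w, 4):
            out[i:i + 4, j:j + 4] *= pulse
    return out

pulse = gaussian_pulse_4x4()
block = np.arange(16, dtype=float).reshape(4, 4)   # stand-in coefficients
scaled = scale_blocks(block, pulse)
recovered = scaled / pulse                         # reversible
print(np.allclose(recovered, block))
```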
SU-F-I-10: Spatially Local Statistics for Adaptive Image Filtering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iliopoulos, AS; Sun, X; Floros, D
Purpose: To facilitate adaptive image filtering operations, addressing spatial variations in both noise and signal. Such issues are prevalent in cone-beam projections, where physical effects such as X-ray scattering result in spatially variant noise, violating common assumptions of homogeneous noise and challenging conventional filtering approaches to signal extraction and noise suppression. Methods: We present a computational mechanism for probing into and quantifying the spatial variance of noise throughout an image. The mechanism builds a pyramid of local statistics at multiple spatial scales; local statistical information at each scale includes (weighted) mean, median, standard deviation, median absolute deviation, as well as histogram or dynamic range after local mean/median shifting. Based on inter-scale differences of local statistics, the spatial scope of distinguishable noise variation is detected in a semi- or un-supervised manner. Additionally, we propose and demonstrate the incorporation of such information in globally parametrized (i.e., non-adaptive) filters, effectively transforming the latter into spatially adaptive filters. The multi-scale mechanism is materialized by efficient algorithms and implemented in parallel CPU/GPU architectures. Results: We demonstrate the impact of local statistics for adaptive image processing and analysis using cone-beam projections of a Catphan phantom, fitted within an annulus to increase X-ray scattering. The effective spatial scope of local statistics calculations is shown to vary throughout the image domain, necessitating multi-scale noise and signal structure analysis. Filtering results with and without spatial filter adaptation are compared visually, illustrating improvements in imaging signal extraction and noise suppression, and in preserving information in low-contrast regions. Conclusion: Local image statistics can be incorporated in filtering operations to equip them with spatial adaptivity to spatial signal/noise variations. An efficient multi-scale computational mechanism is developed to curtail processing latency. Spatially adaptive filtering may impact subsequent processing tasks such as reconstruction and numerical gradient computations for deformable registration. NIH Grant No. R01-184173.
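A simplified sketch of a local-statistics pyramid is given below, computing per-pixel mean, standard deviation, and median at several window sizes with SciPy; inter-scale differences of these maps can then be inspected for spatially varying noise. The window sizes and the synthetic test image are assumptions for illustration, and the parallel CPU/GPU implementation and the remaining statistics (median absolute deviation, local histograms) are omitted.

```python
import numpy as np
from scipy import ndimage

def local_statistics_pyramid(img, scales=(3, 7, 15)):
    """Per-pixel local mean, std and median at several window sizes.
    Inter-scale differences of these maps can flag where noise statistics
    change spatially (a simplified stand-in for the paper's mechanism)."""
    img = img.astype(np.float64)
    stats = {}
    for s in scales:
        mean = ndimage.uniform_filter(img, size=s)
        mean_sq = ndimage.uniform_filter(img * img, size=s)
        std = np.sqrt(np.maximum(mean_sq - mean**2, 0.0))
        median = ndimage.median_filter(img, size=s)
        stats[s] = {"mean": mean, "std": std, "median": median}
    return stats

rng = np.random.default_rng(0)
# Synthetic projection: smooth signal plus a spatially varying noise level.
x = np.linspace(0, 1, 256)
signal = np.outer(np.sin(4 * np.pi * x), np.cos(2 * np.pi * x))
noise = rng.normal(0, 1, signal.shape) * (0.05 + 0.3 * x)   # noise grows left to right
stats = local_statistics_pyramid(signal + noise)
print({s: float(v["std"].mean()) for s, v in stats.items()})
```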
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Huaiguang; Zhang, Yingchen; Muljadi, Eduard
In this paper, a short-term load forecasting based network reconfiguration approach is proposed in a parallel manner. Specifically, a support vector regression (SVR) based short-term load forecasting approach is designed to provide an accurate load prediction and benefit the network reconfiguration. Because of the nonconvexity of the three-phase balanced optimal power flow, a second-order cone program (SOCP) based approach is used to relax the optimal power flow problem. Then, the alternating direction method of multipliers (ADMM) is used to compute the optimal power flow in a distributed manner. Considering the limited number of switches and the increasing computation capability, the proposed network reconfiguration is solved in a parallel way. The numerical results demonstrate the feasibility and effectiveness of the proposed approach.
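A minimal sketch of the SVR-based short-term load forecasting component is shown below using scikit-learn, with synthetic hourly load data and lagged-load features as placeholder assumptions; the SOCP relaxation and the ADMM-based reconfiguration steps are not reproduced.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
# Synthetic hourly load with daily periodicity plus noise (placeholder data).
t = np.arange(24 * 60)
load = 50 + 10 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1.5, t.size)

# Lag features: the previous 24 hourly loads predict the next hour.
lags = 24
X = np.stack([load[i:i + lags] for i in range(load.size - lags)])
y = load[lags:]
split = len(y) - 24     # hold out the final day

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("MAE over the held-out day:", float(np.abs(pred - y[split:]).mean()))
```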
Discrete-State Simulated Annealing For Traveling-Wave Tube Slow-Wave Circuit Optimization
NASA Technical Reports Server (NTRS)
Wilson, Jeffrey D.; Bulson, Brian A.; Kory, Carol L.; Williams, W. Dan (Technical Monitor)
2001-01-01
Algorithms based on the global optimization technique of simulated annealing (SA) have proven useful in designing traveling-wave tube (TWT) slow-wave circuits for high RF power efficiency. The characteristic of SA that enables it to determine a globally optimized solution is its ability to accept non-improving moves in a controlled manner. In the initial stages of the optimization, the algorithm moves freely through configuration space, accepting most of the proposed designs. This freedom of movement allows non-intuitive designs to be explored rather than restricting the optimization to local improvement upon the initial configuration. As the optimization proceeds, the rate of acceptance of non-improving moves is gradually reduced until the algorithm converges to the optimized solution. The rate at which the freedom of movement is decreased is known as the annealing or cooling schedule of the SA algorithm. The main disadvantage of SA is that there is not a rigorous theoretical foundation for determining the parameters of the cooling schedule. The choice of these parameters is highly problem dependent and the designer needs to experiment in order to determine values that will provide a good optimization in a reasonable amount of computational time. This experimentation can absorb a large amount of time especially when the algorithm is being applied to a new type of design. In order to eliminate this disadvantage, a variation of SA known as discrete-state simulated annealing (DSSA) was recently developed. DSSA provides the theoretical foundation for a generic cooling schedule which is problem independent. Results of similar quality to SA can be obtained, but without the extra computational time required to tune the cooling parameters. Two algorithm variations based on DSSA were developed and programmed into a Microsoft Excel spreadsheet graphical user interface (GUI) to the two-dimensional nonlinear multisignal helix traveling-wave amplifier analysis program TWA3. The algorithms were used to optimize the computed RF efficiency of a TWT by determining the phase velocity profile of the slow-wave circuit. The mathematical theory and computational details of the DSSA algorithms will be presented and results will be compared to those obtained with an SA algorithm.
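For illustration, a generic simulated annealing loop with a geometric cooling schedule is sketched below; the toy objective and neighbourhood move are assumptions standing in for the TWT efficiency figure of merit and phase-velocity perturbations, and the DSSA-specific problem-independent schedule is not reproduced.

```python
import math
import random

def simulated_annealing(energy, neighbor, x0, t0=1.0, alpha=0.95, steps=5000):
    """Generic SA loop with a geometric cooling schedule T_k = t0 * alpha**k.
    Non-improving moves are accepted with probability exp(-dE / T)."""
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    t = t0
    for _ in range(steps):
        x_new = neighbor(x)
        e_new = energy(x_new)
        if e_new < e or random.random() < math.exp(-(e_new - e) / max(t, 1e-12)):
            x, e = x_new, e_new
            if e < best_e:
                best_x, best_e = x, e
        t *= alpha
    return best_x, best_e

# Toy objective standing in for the (negated) RF-efficiency figure of merit:
# find the minimum of a bumpy 1-D function over a phase-velocity-like parameter.
f = lambda v: (v - 0.7) ** 2 + 0.1 * math.sin(40 * v)
step = lambda v: min(max(v + random.uniform(-0.05, 0.05), 0.0), 1.0)
print(simulated_annealing(f, step, x0=0.1))
```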
Leveraging the Cloud for Robust and Efficient Lunar Image Processing
NASA Technical Reports Server (NTRS)
Chang, George; Malhotra, Shan; Wolgast, Paul
2011-01-01
The Lunar Mapping and Modeling Project (LMMP) is tasked to aggregate lunar data, from the Apollo era to the latest instruments on the LRO spacecraft, into a central repository accessible by scientists and the general public. A critical function of this task is to provide users with the best solution for browsing the vast amounts of imagery available. The image files LMMP manages range from a few gigabytes to hundreds of gigabytes in size with new data arriving every day. Despite this ever-increasing amount of data, LMMP must make the data readily available in a timely manner for users to view and analyze. This is accomplished by tiling large images into smaller images using Hadoop, a distributed computing software platform implementation of the MapReduce framework, running on a small cluster of machines locally. Additionally, the software is implemented to use Amazon's Elastic Compute Cloud (EC2) facility. We also developed a hybrid solution to serve images to users by leveraging cloud storage using Amazon's Simple Storage Service (S3) for public data while keeping private information on our own data servers. By using Cloud Computing, we improve upon our local solution by reducing the need to manage our own hardware and computing infrastructure, thereby reducing costs. Further, by using a hybrid of local and cloud storage, we are able to provide data to our users more efficiently and securely. This paper examines the use of a distributed approach with Hadoop to tile images, an approach that provides significant improvements in image processing time, from hours to minutes. This paper describes the constraints imposed on the solution and the resulting techniques developed for the hybrid solution of a customized Hadoop infrastructure over local and cloud resources in managing this ever-growing data set. It examines the performance trade-offs of using the more plentiful resources of the cloud, such as those provided by S3, against the bandwidth limitations such use encounters with remote resources. As part of this discussion this paper will outline some of the technologies employed, the reasons for their selection, the resulting performance metrics and the direction the project is headed based upon the demonstrated capabilities thus far.
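The tiling step itself is simple; a minimal single-machine sketch is shown below, where each tile would correspond to a record emitted by a Hadoop map task. The tile size and the synthetic mosaic are assumptions for illustration; the distributed MapReduce execution and S3 storage are not reproduced.

```python
import numpy as np

def tile_image(img, tile_size=256):
    """Split a large image array into fixed-size tiles keyed by (row, col).
    A map step in the distributed pipeline would emit these tiles; here the
    splitting logic alone is shown on a single machine."""
    h, w = img.shape[:2]
    tiles = {}
    for r in range(0, h, tile_size):
        for c in range(0, w, tile_size):
            tiles[(r // tile_size, c // tile_size)] = img[r:r + tile_size, c:c + tile_size]
    return tiles

mosaic = np.random.randint(0, 255, (1024, 1536), dtype=np.uint8)  # stand-in imagery
tiles = tile_image(mosaic)
print(len(tiles), "tiles;", tiles[(0, 0)].shape)
```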
NASA Astrophysics Data System (ADS)
Werner, Brian Thomas
Composite structures have long been used in many industries where it is advantageous to reduce weight while maintaining high stiffness and strength. Composites can now be found in an ever broadening range of applications: sporting equipment, automobiles, marine and aerospace structures, and energy production. These structures are typically sandwich panels composed of fiber reinforced polymer composite (FRPC) facesheets which provide the stiffness and the strength and a low density polymeric foam core that adds bending rigidity with little additional weight. The expanding use of composite structures exposes them to high energy, high velocity dynamic loadings which produce multi-axial dynamic states of stress. This circumstance can present quite a challenge to designers, as composite structures are highly anisotropic and display properties that are sensitive to loading rates. Computer codes are continually in development to assist designers in the creation of safe, efficient structures. While the design of an optimal composite structure is more complex, engineers can take advantage of the effect of enhanced energy dissipation displayed by a composite when loaded at high strain rates. In order to build and verify effective computer codes, the underlying assumptions must be verified by laboratory experiments. Many of these codes look to use a micromechanical approach to determine the response of the structure. For this, the material properties of the constituent materials must be verified, three-dimensional constitutive laws must be developed, and failure of these materials must be investigated under static and dynamic loading conditions. In this study, simple models are sought not only to ease their implementation into such codes, but to allow for efficient characterization of new materials that may be developed. Characterization of composite materials and sandwich structures is a costly, time intensive process. A constituent based design approach evaluates potential combinations of materials in a much faster and more efficient manner.
Tolerance assignment in optical design
NASA Astrophysics Data System (ADS)
Youngworth, Richard Neil
2002-09-01
Tolerance assignment is necessary in any engineering endeavor because fabricated systems---due to the stochastic nature of manufacturing and assembly processes---necessarily deviate from the nominal design. This thesis addresses the problem of optical tolerancing. The work can logically be split into three different components that all play an essential role. The first part addresses the modeling of manufacturing errors in contemporary fabrication and assembly methods. The second component is derived from the design aspect---the development of a cost-based tolerancing procedure. The third part addresses the modeling of image quality in an efficient manner that is conducive to the tolerance assignment process. The purpose of the first component, modeling manufacturing errors, is twofold---to determine the most critical tolerancing parameters and to understand better the effects of fabrication errors. Specifically, mid-spatial-frequency errors, typically introduced in sub-aperture grinding and polishing fabrication processes, are modeled. The implication is that improving process control and understanding better the effects of the errors makes the task of tolerance assignment more manageable. Conventional tolerancing methods do not directly incorporate cost. Consequently, tolerancing approaches tend to focus more on image quality. The goal of the second part of the thesis is to develop cost-based tolerancing procedures that facilitate optimum system fabrication by generating the loosest acceptable tolerances. This work has the potential to impact a wide range of optical designs. The third element, efficient modeling of image quality, is directly related to the cost-based optical tolerancing method. Cost-based tolerancing requires efficient and accurate modeling of the effects of errors on the performance of optical systems. Thus it is important to be able to compute the gradient and the Hessian, with respect to the parameters that need to be toleranced, of the figure of merit that measures the image quality of a system. An algebraic method for computing the gradient and the Hessian is developed using perturbation theory.
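As a simple illustration of what such a tolerancing computation needs, the sketch below evaluates the gradient and Hessian of a scalar merit function with central finite differences; the toy merit function is an assumption, and this numerical stand-in is not the algebraic perturbation-theory method developed in the thesis.

```python
import numpy as np

def gradient_and_hessian(merit, p, h=1e-4):
    """Central-difference gradient and Hessian of a scalar merit function with
    respect to the tolerance parameters p (a simple numerical stand-in for an
    algebraic perturbation method)."""
    n = len(p)
    g = np.zeros(n)
    H = np.zeros((n, n))
    e = np.eye(n) * h
    f0 = merit(p)
    for i in range(n):
        fp, fm = merit(p + e[i]), merit(p - e[i])
        g[i] = (fp - fm) / (2 * h)
        H[i, i] = (fp - 2 * f0 + fm) / h**2
        for j in range(i + 1, n):
            fpp = merit(p + e[i] + e[j])
            fpm = merit(p + e[i] - e[j])
            fmp = merit(p - e[i] + e[j])
            fmm = merit(p - e[i] - e[j])
            H[i, j] = H[j, i] = (fpp - fpm - fmp + fmm) / (4 * h**2)
    return g, H

# Toy merit function standing in for an image-quality metric such as RMS wavefront error.
merit = lambda p: float(p[0]**2 + 0.5 * p[0] * p[1] + 2.0 * p[1]**2)
print(gradient_and_hessian(merit, np.array([0.1, -0.2])))
```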
Approximation methods for control of structural acoustics models with piezoceramic actuators
NASA Astrophysics Data System (ADS)
Banks, H. T.; Fang, W.; Silcox, R. J.; Smith, R. C.
1993-01-01
The active control of acoustic pressure in a 2-D cavity with a flexible boundary (a beam) is considered. Specifically, this control is implemented via piezoceramic patches on the beam which produce pure bending moments. The incorporation of the feedback control in this manner leads to a system with an unbounded input term. Approximation methods in the context of a linear quadratic regulator (LQR) state space control formulation are discussed and numerical results demonstrating the effectiveness of this approach in computing feedback controls for noise reduction are presented.
Experimental Realization of High-Efficiency Counterfactual Computation.
Kong, Fei; Ju, Chenyong; Huang, Pu; Wang, Pengfei; Kong, Xi; Shi, Fazhan; Jiang, Liang; Du, Jiangfeng
2015-08-21
Counterfactual computation (CFC) exemplifies the fascinating quantum process by which the result of a computation may be learned without actually running the computer. In previous experimental studies, the counterfactual efficiency is limited to below 50%. Here we report an experimental realization of the generalized CFC protocol, in which the counterfactual efficiency can break the 50% limit and even approach unity in principle. The experiment is performed with the spins of a negatively charged nitrogen-vacancy color center in diamond. Taking advantage of the quantum Zeno effect, the computer can remain in the not-running subspace due to the frequent projection by the environment, while the computation result can be revealed by final detection. The counterfactual efficiency up to 85% has been demonstrated in our experiment, which opens the possibility of many exciting applications of CFC, such as high-efficiency quantum integration and imaging.
Experimental Realization of High-Efficiency Counterfactual Computation
NASA Astrophysics Data System (ADS)
Kong, Fei; Ju, Chenyong; Huang, Pu; Wang, Pengfei; Kong, Xi; Shi, Fazhan; Jiang, Liang; Du, Jiangfeng
2015-08-01
Counterfactual computation (CFC) exemplifies the fascinating quantum process by which the result of a computation may be learned without actually running the computer. In previous experimental studies, the counterfactual efficiency is limited to below 50%. Here we report an experimental realization of the generalized CFC protocol, in which the counterfactual efficiency can break the 50% limit and even approach unity in principle. The experiment is performed with the spins of a negatively charged nitrogen-vacancy color center in diamond. Taking advantage of the quantum Zeno effect, the computer can remain in the not-running subspace due to the frequent projection by the environment, while the computation result can be revealed by final detection. The counterfactual efficiency up to 85% has been demonstrated in our experiment, which opens the possibility of many exciting applications of CFC, such as high-efficiency quantum integration and imaging.
Report: Superfund Interagency Agreements
Report #2001-P-00011, June 22, 2001. EPA generally had effective controls in place to ensure its Superfund IAGs achieve expected environmental results in a timely, cost-effective, and efficient manner.
Saving Energy in Historic Buildings: Balancing Efficiency and Value
ERIC Educational Resources Information Center
Cluver, John H.; Randall, Brad
2012-01-01
By now the slogan of the National Trust for Historic Preservation that "the greenest building is the one already built" is widely known. In an era of increased environmental awareness and rising fuel prices, however, the question is how can historic building stock be made more energy efficient in a manner respectful of its historic…
Laboratory hydraulic calibration of the Helley-Smith bedload sediment sampler
Druffel, Leroy; Emmett, W.W.; Schneider, V.R.; Skinner, J.V.
1976-01-01
Filling the sample bag to 40 percent capacity with a sediment larger in diameter than the mesh size of the bag had no effect on the hydraulic efficiency. Particles close to the 0.2 mm mesh size of the sample bag plugged the openings and caused the efficiency to decrease in an undetermined manner.
ERIC Educational Resources Information Center
Sweeney, William; Lee, James; Abid, Nauman; DeMeo, Stephen
2014-01-01
An experiment is described that determines the activation energy (E[subscript a]) of the iodide-catalyzed decomposition reaction of hydrogen peroxide in a much more efficient manner than previously reported in the literature. Hydrogen peroxide, spontaneously or with a catalyst, decomposes to oxygen and water. Because the decomposition reaction is…
Vadose zone flow convergence test suite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Butcher, B. T.
Performance Assessment (PA) simulations for engineered disposal systems at the Savannah River Site involve highly contrasting materials and moisture conditions at and near saturation. These conditions cause severe convergence difficulties that typically result in unacceptable convergence or long simulation times or excessive analyst effort. Adequate convergence is usually achieved in a trial-and-error manner by applying under-relaxation to the Saturation or Pressure variable, in a series of ever-decreasing relaxation values. SRNL would like a more efficient scheme implemented inside PORFLOW to achieve flow convergence in a more reliable and efficient manner. To this end, a suite of test problems that illustrate these convergence problems is provided to facilitate diagnosis and development of an improved convergence strategy. The attached files are being transmitted to you describing the test problem and proposed resolution.
ERIC Educational Resources Information Center
Baker, Catherine M.
2017-01-01
Teaching people with disabilities tech skills empowers them to create solutions to problems they encounter and prepares them for careers. However, computer science is typically taught in a highly visual manner which can present barriers for people who are blind. The goal of this dissertation is to understand and decrease those barriers. The first…
Computer Conferencing in Mathematics Classrooms: Distance Education--The Long and the Short of It.
ERIC Educational Resources Information Center
Lamb, Charles E.; Klemm, William R.
One of the major goals of mathematics education reform efforts is for students to become more confident in their abilities. This paper suggests that computer conferencing provides a way to change classroom practice so that students can work together in a self-paced manner that builds self-esteem and confidence in mathematics. A pedagogical…
An Alternative Method for Computing Unit Costs and Productivity Ratios. AIR 1984 Annual Forum Paper.
ERIC Educational Resources Information Center
Winstead, Wayland H.; And Others
An alternative measure for evaluating the performance of academic departments was studied. A comparison was made with the traditional manner for computing unit costs and productivity ratios: prorating the salary and effort of each faculty member to each course level based on the personal mix of courses taught. The alternative method used averaging…
ERIC Educational Resources Information Center
Vest, David; Tajchman, Ron
A study explained the manner in which a computer-assisted tutorial was built and assessed the utility of the courseware. The tutorial was designed to demonstrate the efficacy of good organization in informing the audience about a topic and provide appropriate models for the presentation of the well-organized informative speech. The topic of the…
Contextual classification of multispectral image data: Approximate algorithm
NASA Technical Reports Server (NTRS)
Tilton, J. C. (Principal Investigator)
1980-01-01
An approximation to a classification algorithm incorporating spatial context information in a general, statistical manner is presented which is computationally less intensive. Classifications that are nearly as accurate are produced.
Extinction from a rationalist perspective.
Gallistel, C R
2012-05-01
The merging of the computational theory of mind and evolutionary thinking leads to a kind of rationalism, in which enduring truths about the world have become implicit in the computations that enable the brain to cope with the experienced world. The dead reckoning computation, for example, is implemented within the brains of animals as one of the mechanisms that enables them to learn where they are (Gallistel, 1990, 1995). It integrates a velocity signal with respect to a time signal. Thus, the manner in which position and velocity relate to one another in the world is reflected in the manner in which signals representing those variables are processed in the brain. I use principles of information theory and Bayesian inference to derive from other simple principles explanations for: (1) the failure of partial reinforcement to increase reinforcements to acquisition; (2) the partial reinforcement extinction effect; (3) spontaneous recovery; (4) renewal; (5) reinstatement; (6) resurgence (aka facilitated reacquisition). Like the principle underlying dead-reckoning, these principles are grounded in analytic considerations. They are the kind of enduring truths about the world that are likely to have shaped the brain's computations. Copyright © 2012 Elsevier B.V. All rights reserved.
Induction Consolidation of Thermoplastic Composites Using Smart Susceptors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsen, Marc R
2012-06-14
This project has focused on the area of energy efficient consolidation and molding of fiber reinforced thermoplastic composite components as an energy efficient alternative to the conventional processing methods such as autoclave processing. The expanding application of composite materials in wind energy, automotive, and aerospace provides an attractive energy efficiency target for process development. The intent is to have this efficient processing along with the recyclable thermoplastic materials ready for large scale application before these high production volume levels are reached. Therefore, the process can be implemented in a timely manner to realize the maximum economic, energy, and environmental efficiencies. Under this project an increased understanding of the use of induction heating with smart susceptors applied to consolidation of thermoplastic has been achieved. This was done by the establishment of processing equipment and tooling and the subsequent demonstration of this fabrication technology by consolidating/molding of entry level components for each of the participating industrial segments, wind energy, aerospace, and automotive. This understanding adds to the nation's capability to affordably manufacture high quality lightweight high performance components from advanced recyclable composite materials in a lean and energy efficient manner. The use of induction heating with smart susceptors is a precisely controlled low energy method for the consolidation and molding of thermoplastic composites. The smart susceptor provides intrinsic thermal control based on the interaction with the magnetic field from the induction coil thereby producing highly repeatable processing. The low energy usage is enabled by the fact that only the smart susceptor surface of the tool is heated, not the entire tool. Therefore much less mass is heated resulting in significantly less required energy to consolidate/mold the desired composite components. This energy efficiency results in potential energy savings of approximately 75% as compared to autoclave processing in aerospace, approximately 63% as compared to compression molding in automotive, and approximately 42% energy savings as compared to convectively heated tools in wind energy. The ability to make parts in a rapid and controlled manner provides significant economic advantages for each of the industrial segments. These attributes were demonstrated during the processing of the demonstration components on this project.
NASA Astrophysics Data System (ADS)
Laforest, Martin
Quantum information processing has been the subject of countless discoveries since the early 1990s. It is believed to be the way of the future for computation: using quantum systems permits one to perform computation exponentially faster than on a regular classical computer. Unfortunately, quantum systems that are not isolated do not behave well. They tend to lose their quantum nature due to the presence of the environment. If key information is known about the noise present in the system, methods such as quantum error correction have been developed in order to reduce the errors introduced by the environment during a given quantum computation. In order to harness the quantum world and implement the theoretical ideas of quantum information processing and quantum error correction, it is imperative to understand and quantify the noise present in the quantum processor and benchmark the quality of the control over the qubits. Usual techniques to estimate the noise or the control are based on quantum process tomography (QPT), which, unfortunately, demands an exponential amount of resources. This thesis presents work towards the characterization of noisy processes in an efficient manner. The protocols are developed from a purely abstract setting with no system-dependent variables. To circumvent the exponential nature of quantum process tomography, three different efficient protocols are proposed and experimentally verified. The first protocol uses the idea of quantum error correction to extract relevant parameters about a given noise model, namely the correlation between the dephasing of two qubits. Following that is a protocol using randomization and symmetrization to extract the probability that a given number of qubits are simultaneously corrupted in a quantum memory, regardless of the specifics of the error and which qubits are affected. Finally, a last protocol, still using randomization ideas, is developed to estimate the average fidelity per computational gate for single- and multi-qubit systems. Even though liquid state NMR is argued to be unsuitable for scalable quantum information processing, it remains the best test-bed system to experimentally implement, verify and develop protocols aimed at increasing the control over general quantum information processors. For this reason, all the protocols described in this thesis have been implemented in liquid state NMR, which then led to further development of control and analysis techniques.
Multi-omics facilitated variable selection in Cox-regression model for cancer prognosis prediction.
Liu, Cong; Wang, Xujun; Genchev, Georgi Z; Lu, Hui
2017-07-15
New developments in high-throughput genomic technologies have enabled the measurement of diverse types of omics biomarkers in a cost-efficient and clinically-feasible manner. Developing computational methods and tools for analysis and translation of such genomic data into clinically-relevant information is an ongoing and active area of investigation. For example, several studies have utilized an unsupervised learning framework to cluster patients by integrating omics data. Despite such recent advances, predicting cancer prognosis using integrated omics biomarkers remains a challenge. There is also a shortage of computational tools for predicting cancer prognosis by using supervised learning methods. The current standard approach is to fit a Cox regression model by concatenating the different types of omics data in a linear manner, while penalty could be added for feature selection. A more powerful approach, however, would be to incorporate data by considering relationships among omics datatypes. Here we developed two methods: a SKI-Cox method and a wLASSO-Cox method to incorporate the association among different types of omics data. Both methods fit the Cox proportional hazards model and predict a risk score based on mRNA expression profiles. SKI-Cox borrows the information generated by these additional types of omics data to guide variable selection, while wLASSO-Cox incorporates this information as a penalty factor during model fitting. We show that SKI-Cox and wLASSO-Cox models select more true variables than a LASSO-Cox model in simulation studies. We assess the performance of SKI-Cox and wLASSO-Cox using TCGA glioblastoma multiforme and lung adenocarcinoma data. In each case, mRNA expression, methylation, and copy number variation data are integrated to predict the overall survival time of cancer patients. Our methods achieve better performance in predicting patients' survival in glioblastoma and lung adenocarcinoma. Copyright © 2017. Published by Elsevier Inc.
Code of Federal Regulations, 2011 CFR
2011-01-01
... efficient manner. High risk personal property will be managed throughout its life cycle so as to protect... transferred or disposed unless it receives a high risk assessment and is handled accordingly. ...
NASA Astrophysics Data System (ADS)
Long, M. S.; Yantosca, R.; Nielsen, J.; Linford, J. C.; Keller, C. A.; Payer Sulprizio, M.; Jacob, D. J.
2014-12-01
The GEOS-Chem global chemical transport model (CTM), used by a large atmospheric chemistry research community, has been reengineered to serve as a platform for a range of computational atmospheric chemistry science foci and applications. Development included modularization for coupling to general circulation and Earth system models (ESMs) and the adoption of co-processor capable atmospheric chemistry solvers. This was done using an Earth System Modeling Framework (ESMF) interface that operates independently of GEOS-Chem scientific code to permit seamless transition from the GEOS-Chem stand-alone serial CTM to deployment as a coupled ESM module. In this manner, the continual stream of updates contributed by the CTM user community is automatically available for broader applications, which remain state-of-science and directly referenceable to the latest version of the standard GEOS-Chem CTM. These developments are now available as part of the standard version of the GEOS-Chem CTM. The system has been implemented as an atmospheric chemistry module within the NASA GEOS-5 ESM. The coupled GEOS-5/GEOS-Chem system was tested for weak and strong scalability and performance with a tropospheric oxidant-aerosol simulation. Results confirm that the GEOS-Chem chemical operator scales efficiently for any number of processes. Although inclusion of atmospheric chemistry in ESMs is computationally expensive, the excellent scalability of the chemical operator means that the relative cost goes down with increasing number of processes, making fine-scale resolution simulations possible.
NASA Astrophysics Data System (ADS)
Tahavvor, Ali Reza
2017-03-01
In the present study an artificial neural network and fractal geometry are used to predict frost thickness and density on a cold flat plate having constant surface temperature under forced convection for different ambient conditions. These methods are well suited to this problem: phase changes such as melting and solidification can be simulated by conventional methods, but frost formation is a more complicated phase-change phenomenon consisting of coupled heat and mass transfer. Conventional mathematical techniques therefore cannot capture the effects of all parameters on its growth and development, because the process is influenced by many factors and is time dependent. For this reason, soft computing methods such as artificial neural networks and fractal geometry are used in this work. The databases for modeling are generated from experimental measurements. First, a multilayer perceptron network is used, and it is found that the back-propagation algorithm with the Levenberg-Marquardt learning rule is the best choice for estimating frost growth properties due to its accurate and faster training procedure. Second, fractal geometry based on the Von Koch curve is used to model the frost growth process, especially frost thickness and density. A comparison is performed between experimental measurements and the soft computing methods. Results show that soft computing methods can be used more efficiently to determine frost properties over a flat plate. Based on the developed models, a wide range of frost formation over flat plates can be determined for various conditions.
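A minimal sketch of the neural-network branch is given below using scikit-learn with synthetic placeholder data; since scikit-learn does not provide a Levenberg-Marquardt trainer, L-BFGS is used instead, and the input variables and the assumed frost-thickness relation are illustrative only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
# Placeholder training data: [air temperature (C), humidity (-), air velocity (m/s),
# plate temperature (C), elapsed time (min)] -> frost thickness (mm).
X = rng.uniform([5, 0.5, 1.0, -20, 0], [25, 0.9, 4.0, -5, 120], size=(400, 5))
thickness = 0.02 * X[:, 4] * (X[:, 1] - 0.4) * (1 + 0.02 * (X[:, 0] - X[:, 3]))
y = thickness + rng.normal(0, 0.05, 400)

# scikit-learn has no Levenberg-Marquardt trainer, so L-BFGS is used here instead.
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 16), solver="lbfgs",
                                   max_iter=2000, random_state=0))
model.fit(X[:320], y[:320])
print("test R^2:", round(model.score(X[320:], y[320:]), 3))
```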
A computable expression of closure to efficient causation.
Mossio, Matteo; Longo, Giuseppe; Stewart, John
2009-04-07
In this paper, we propose a mathematical expression of closure to efficient causation in terms of lambda-calculus; we argue that this opens up the perspective of developing principled computer simulations of systems closed to efficient causation in an appropriate programming language. An important implication of our formulation is that, by exhibiting an expression in lambda-calculus, which is a paradigmatic formalism for computability and programming, we show that there are no conceptual or principled problems in realizing a computer simulation or model of closure to efficient causation. We conclude with a brief discussion of the question of whether closure to efficient causation captures all relevant properties of living systems. We suggest that it might not be the case, and that more complex definitions could indeed create some crucial obstacles to computability.
Clinical and mathematical introduction to computer processing of scintigraphic images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goris, M.L.; Briandet, P.A.
The authors state in their preface: "...we believe that there is no book yet available in which computing in nuclear medicine has been approached in a reasonable manner. This book is our attempt to correct the situation." The book is divided into four sections: (1) Clinical Applications of Quantitative Scintigraphic Analysis; (2) Mathematical Derivations; (3) Processing Methods of Scintigraphic Images; and (4) The (Computer) System. Section 1 has chapters on quantitative approaches to congenital and acquired heart diseases, nephrology and urology, and pulmonary medicine.
Efficient exact motif discovery.
Marschall, Tobias; Rahmann, Sven
2009-06-15
The motif discovery problem consists of finding over-represented patterns in a collection of biosequences. It is one of the classical sequence analysis problems, but still has not been satisfactorily solved in an exact and efficient manner. This is partly due to the large number of possibilities of defining the motif search space and the notion of over-representation. Even for well-defined formalizations, the problem is frequently solved in an ad hoc manner with heuristics that do not guarantee to find the best motif. We show how to solve the motif discovery problem (almost) exactly on a practically relevant space of IUPAC generalized string patterns, using the p-value with respect to an i.i.d. model or a Markov model as the measure of over-representation. In particular, (i) we use a highly accurate compound Poisson approximation for the null distribution of the number of motif occurrences. We show how to compute the exact clump size distribution using a recently introduced device called probabilistic arithmetic automaton (PAA). (ii) We define two p-value scores for over-representation, the first one based on the total number of motif occurrences, the second one based on the number of sequences in a collection with at least one occurrence. (iii) We describe an algorithm to discover the optimal pattern with respect to either of the scores. The method exploits monotonicity properties of the compound Poisson approximation and is by orders of magnitude faster than exhaustive enumeration of IUPAC strings (11.8 h compared with an extrapolated runtime of 4.8 years). (iv) We justify the use of the proposed scores for motif discovery by showing our method to outperform other motif discovery algorithms (e.g. MEME, Weeder) on benchmark datasets. We also propose new motifs on Mycobacterium tuberculosis. The method has been implemented in Java. It can be obtained from http://ls11-www.cs.tu-dortmund.de/people/marschal/paa_md/.
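A simplified sketch of the counting and scoring step is given below: occurrences of an IUPAC generalized string are counted with overlaps, and a p-value for over-representation is computed under an i.i.d. background. A plain Poisson approximation is used here as a stand-in for the paper's compound Poisson clump-size correction, and the example pattern and sequences are placeholders.

```python
import re
from math import prod
from scipy.stats import poisson

IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T", "R": "AG", "Y": "CT",
         "S": "CG", "W": "AT", "K": "GT", "M": "AC", "B": "CGT",
         "D": "AGT", "H": "ACT", "V": "ACG", "N": "ACGT"}

def occurrences(pattern, seq):
    """Count overlapping matches of an IUPAC pattern in one sequence."""
    regex = "".join(f"[{IUPAC[c]}]" for c in pattern)
    return len(re.findall(f"(?=({regex}))", seq))

def poisson_pvalue(pattern, seqs, base_freq=0.25):
    """P(X >= observed) under an i.i.d. background, with a plain Poisson in
    place of a compound Poisson clump-size correction."""
    obs = sum(occurrences(pattern, s) for s in seqs)
    p_match = prod(len(IUPAC[c]) * base_freq for c in pattern)
    n_positions = sum(max(len(s) - len(pattern) + 1, 0) for s in seqs)
    return obs, poisson.sf(obs - 1, mu=n_positions * p_match)

seqs = ["ACGTGACGTTGACGTGCATG", "TTGACGTGAAACGTGACGTG"]
print(poisson_pvalue("RCGTG", seqs))
```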
Intelligent Controls for Net-Zero Energy Buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Haorong; Cho, Yong; Peng, Dongming
2011-10-30
The goal of this project is to develop and demonstrate enabling technologies that can empower homeowners to convert their homes into net-zero energy buildings in a cost-effective manner. The project objectives and expected outcomes are as follows: • To develop rapid and scalable building information collection and modeling technologies that can obtain and process “as-built” building information in an automated or semiautomated manner. • To identify low-cost measurements and develop low-cost virtual sensors that can monitor building operations in a plug-n-play and low-cost manner. • To integrate and demonstrate low-cost building information modeling (BIM) technologies. • To develop decision support tools which can empower building owners to perform energy auditing and retrofit analysis. • To develop and demonstrate low-cost automated diagnostics and optimal control technologies which can improve building energy efficiency in a continual manner.
Inexpensive and Highly Reproducible Cloud-Based Variant Calling of 2,535 Human Genomes
Shringarpure, Suyash S.; Carroll, Andrew; De La Vega, Francisco M.; Bustamante, Carlos D.
2015-01-01
Population scale sequencing of whole human genomes is becoming economically feasible; however, data management and analysis remains a formidable challenge for many research groups. Large sequencing studies, like the 1000 Genomes Project, have improved our understanding of human demography and the effect of rare genetic variation in disease. Variant calling on datasets of hundreds or thousands of genomes is time-consuming, expensive, and not easily reproducible given the myriad components of a variant calling pipeline. Here, we describe a cloud-based pipeline for joint variant calling in large samples using the Real Time Genomics population caller. We deployed the population caller on the Amazon cloud with the DNAnexus platform in order to achieve low-cost variant calling. Using our pipeline, we were able to identify 68.3 million variants in 2,535 samples from Phase 3 of the 1000 Genomes Project. By performing the variant calling in a parallel manner, the data was processed within 5 days at a compute cost of $7.33 per sample (a total cost of $18,590 for completed jobs and $21,805 for all jobs). Analysis of cost dependence and running time on the data size suggests that, given near linear scalability, cloud computing can be a cheap and efficient platform for analyzing even larger sequencing studies in the future. PMID:26110529
MRF energy minimization and beyond via dual decomposition.
Komodakis, Nikos; Paragios, Nikos; Tziritas, Georgios
2011-03-01
This paper introduces a new rigorous theoretical framework to address discrete MRF-based optimization in computer vision. Such a framework exploits the powerful technique of Dual Decomposition. It is based on a projected subgradient scheme that attempts to solve an MRF optimization problem by first decomposing it into a set of appropriately chosen subproblems, and then combining their solutions in a principled way. In order to determine the limits of this method, we analyze the conditions that these subproblems have to satisfy and demonstrate the extreme generality and flexibility of such an approach. We thus show that by appropriately choosing what subproblems to use, one can design novel and very powerful MRF optimization algorithms. For instance, in this manner we are able to derive algorithms that: 1) generalize and extend state-of-the-art message-passing methods, 2) optimize very tight LP-relaxations to MRF optimization, and 3) take full advantage of the special structure that may exist in particular MRFs, allowing the use of efficient inference techniques such as, e.g., graph-cut-based methods. Theoretical analysis on the bounds related with the different algorithms derived from our framework and experimental results/comparisons using synthetic and real data for a variety of tasks in computer vision demonstrate the extreme potentials of our approach.
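A toy illustration of the dual decomposition principle is sketched below: a sum of two convex terms over a shared variable is split into two slaves coupled by a multiplier, and the master performs subgradient updates on that multiplier. The quadratic terms and step sizes are assumptions for illustration, and none of the MRF-specific machinery (message passing, tight LP relaxations, graph cuts) is reproduced.

```python
def dual_decomposition(steps=200):
    """Subgradient-based dual decomposition on a toy consensus problem:
    minimize f1(x) + f2(x) with f1(x) = (x-1)^2 and f2(x) = (x-3)^2, split
    into copies x1 = x2.  Each slave minimizes its own term plus a linear
    dual term; the master moves the multiplier along the constraint
    violation x1 - x2 with a diminishing step size."""
    lam = 0.0
    for k in range(1, steps + 1):
        x1 = 1.0 - lam / 2.0          # argmin_x (x-1)^2 + lam*x
        x2 = 3.0 + lam / 2.0          # argmin_x (x-3)^2 - lam*x
        lam += (1.0 / k) * (x1 - x2)  # subgradient step on the dual
    return x1, x2, lam

print(dual_decomposition())   # both copies approach the true minimizer x = 2
```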
Distributing french seismologic data through the RESIF green IT datacentre
NASA Astrophysics Data System (ADS)
Volcke, P.; Gueguen, P.; Pequegnat, C.; Le Tanou, J.; Enderle, G.; Berthoud, F.
2012-12-01
RESIF is a nationwide French project aimed at building an excellent quality system to observe and understand the inner earth. The ultimate goal is to create a network throughout mainland France comprising 750 seismometers and geodetic measurement instruments, 250 of which will be mobile to enable the observation network to be focused on specific investigation subjects and geographic locations. This project includes the implementation of a data distribution centre hosting seismologic and geodetic data. This datacentre is operated by the Université Joseph Fourier, Grenoble, France. In the context of building the necessary computing infrastructure, the Université Joseph Fourier became the first French university to earn the status of "Participant" for the European Union "Code of Conduct for Data Centres". The University commits to energy reporting and implementing best practices for energy efficiency, in a cost-effective manner, without hampering mission-critical functions. In this context, data currently hosted at the RESIF datacentre include data from the French broadband permanent network, the strong-motion permanent network, and the mobile seismological network. These data are freely accessible as real-time streams and continuous validated data, along with instrumental metadata, delivered using widely known formats. Future developments include tight integration with local supercomputing resources, and setting up modern distribution systems such as web services.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jiangjiang; Li, Weixuan; Zeng, Lingzao
Surrogate models are commonly used in Bayesian approaches such as Markov Chain Monte Carlo (MCMC) to avoid repetitive CPU-demanding model evaluations. However, the approximation error of a surrogate may lead to biased estimations of the posterior distribution. This bias can be corrected by constructing a very accurate surrogate or implementing MCMC in a two-stage manner. Since the two-stage MCMC requires extra original model evaluations, the computational cost is still high. If the measurement information is incorporated, a locally accurate approximation of the original model can be adaptively constructed with low computational cost. Based on this idea, we propose a Gaussian process (GP) surrogate-based Bayesian experimental design and parameter estimation approach for groundwater contaminant source identification problems. A major advantage of the GP surrogate is that it provides a convenient estimation of the approximation error, which can be incorporated in the Bayesian formula to avoid over-confident estimation of the posterior distribution. The proposed approach is tested with a numerical case study. Without sacrificing the estimation accuracy, the new approach achieves about 200 times speed-up compared to our previous work using two-stage MCMC.
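A minimal sketch of the idea, with assumed stand-ins for the forward model, prior range, and proposal scale, is given below: a GP surrogate is trained on a few design points and its predictive variance is added to the observation-error variance inside a plain Metropolis sampler, so that surrogate error widens rather than biases the posterior.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(3)

# Stand-in "expensive" forward model mapping a source-strength parameter
# to an observed concentration (monotonic, for a clean illustration).
forward = lambda s: 1.5 * s + 0.3 * np.sin(2 * s)
truth, noise_sd = 1.2, 0.05
obs = forward(truth) + rng.normal(0, noise_sd)

# GP surrogate trained on a handful of design points over the prior range [0, 2].
S = np.linspace(0, 2, 12).reshape(-1, 1)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6).fit(S, forward(S.ravel()))

def log_post(s):
    """Gaussian likelihood whose variance is inflated by the surrogate's own
    predictive variance, so surrogate error is not ignored."""
    if not 0.0 <= s <= 2.0:
        return -np.inf
    mu, sd = gp.predict(np.array([[s]]), return_std=True)
    var = noise_sd**2 + sd[0]**2
    return -0.5 * (obs - mu[0])**2 / var - 0.5 * np.log(var)

# Plain Metropolis sampling on the surrogate-based posterior.
s, lp, chain = 1.0, log_post(1.0), []
for _ in range(5000):
    s_new = s + rng.normal(0, 0.1)
    lp_new = log_post(s_new)
    if np.log(rng.random()) < lp_new - lp:
        s, lp = s_new, lp_new
    chain.append(s)
print("posterior mean source strength:", round(float(np.mean(chain[1000:])), 3))
```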
High-Performance Reactive Particle Tracking with Adaptive Representation
NASA Astrophysics Data System (ADS)
Schmidt, M.; Benson, D. A.; Pankavich, S.
2017-12-01
Lagrangian particle tracking algorithms have been shown to be effective tools for modeling chemical reactions in imperfectly-mixed media. One disadvantage of these algorithms is the possible need to employ large numbers of particles in simulations, depending on the concentration covariance structure, and these large particle numbers can lead to long computation times. Two distinct approaches have recently arisen to overcome this. One method employs spatial kernels that are related to a specified, reduced particle number; however, over-wide kernels, dictated by a very low particle number, lead to an excess of reaction calculations and cause a reduction in performance. Another formulation involves hybrid particles that carry multiple species of reactant, wherein each particle is treated as its own well-mixed volume, obviating the need for large numbers of particles for each species but still requiring a fixed number of hybrid particles. Here, we combine these two approaches and demonstrate an improved method for simulating a given system in a computationally efficient manner. Additionally, the independent nature of transport and reaction calculations in this approach allows for significant gains via parallelization in an MPI or OpenMP context. For benchmarking, we choose a CO2 injection simulation with dissolution and precipitation of calcite and dolomite, allowing us to derive the proper treatment of interaction between solid and aqueous phases.
A blended continuous–discontinuous finite element method for solving the multi-fluid plasma model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sousa, E.M., E-mail: sousae@uw.edu; Shumlak, U., E-mail: shumlak@uw.edu
The multi-fluid plasma model represents electrons, multiple ion species, and multiple neutral species as separate fluids that interact through short-range collisions and long-range electromagnetic fields. The model spans a large range of temporal and spatial scales, which renders it stiff and presents numerical challenges. To address the large range of timescales, a blended continuous and discontinuous Galerkin method is proposed, in which the massive ion and neutral species are modeled using an explicit discontinuous Galerkin method while the electrons and electromagnetic fields are modeled using an implicit continuous Galerkin method. This approach is able to capture large-gradient ion and neutral physics such as shock formation, while resolving high-frequency electron dynamics in a computationally efficient manner. The details of the Blended Finite Element Method (BFEM) are presented. The numerical method is benchmarked for accuracy and tested using a two-fluid one-dimensional soliton problem and an electromagnetic shock problem. The results are compared to conventional finite volume and finite element methods, and demonstrate that the BFEM is particularly effective in resolving physics in stiff problems involving realistic physical parameters, including realistic electron mass and speed of light. The benefit is illustrated by computing a three-fluid plasma application that demonstrates species separation in multi-component plasmas.
Motion-sensor fusion-based gesture recognition and its VLSI architecture design for mobile devices
NASA Astrophysics Data System (ADS)
Zhu, Wenping; Liu, Leibo; Yin, Shouyi; Hu, Siqi; Tang, Eugene Y.; Wei, Shaojun
2014-05-01
With the rapid proliferation of smartphones and tablets, various embedded sensors are incorporated into these platforms to enable multimodal human-computer interfaces. Gesture recognition, as an intuitive interaction approach, has been extensively explored in the mobile computing community. However, most gesture recognition implementations to date are user-dependent and rely only on the accelerometer. To achieve competitive accuracy, users are required to hold the devices in a predefined manner during operation. In this paper, a high-accuracy human gesture recognition system is proposed based on multiple motion sensor fusion. Furthermore, to reduce the energy overhead resulting from frequent sensor sampling and data processing, a highly energy-efficient VLSI architecture implemented on a Xilinx Virtex-5 FPGA board is also proposed. Compared with the pure software implementation, a speed-up of approximately 45 times is achieved while operating at 20 MHz. The experiments show that the average accuracy for 10 gestures reaches 93.98% for the user-independent case and 96.14% for the user-dependent case when subjects hold the device randomly while completing the specified gestures. Although a few percent lower than the best conventional result, this still provides competitive accuracy acceptable for practical use. Most importantly, the proposed system allows users to hold the device randomly while performing the predefined gestures, which substantially enhances the user experience.
Efficient Evaluation Functions for Multi-Rover Systems
NASA Technical Reports Server (NTRS)
Agogino, Adrian; Tumer, Kagan
2004-01-01
Evolutionary computation can be a powerful tool for creating a control policy for a single agent receiving local continuous input. This paper extends single-agent evolutionary computation to multi-agent systems, where a collection of agents strives to maximize a global fitness evaluation function that rates the performance of the entire system. This problem is solved in a distributed manner, where each agent evolves its own population of neural networks that are used as its control policies. Each agent evolves its population using its own agent-specific fitness evaluation function. We propose to create these agent-specific evaluation functions using the theory of collectives, in order to avoid the coordination problem in which each agent evolves a population that maximizes its own fitness function, yet the system as a whole achieves low values of the global fitness function. Instead we ensure that each fitness evaluation function is both "aligned" with the global evaluation function and "learnable," i.e., the agents can readily see how their behavior affects their evaluation function. We then show how these agent-specific evaluation functions outperform global evaluation methods by up to 600% in a domain where a set of rovers attempts to maximize the amount of information observed while navigating through a simulated environment.
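One standard way to build agent-specific evaluations that are aligned with the global objective yet remain learnable is the difference evaluation D_i(z) = G(z) - G(z with agent i removed). The abstract does not give the exact formula used, so the sketch below is a generic illustration with a made-up rover/point-of-interest scoring function.

```python
# Hedged sketch of a difference evaluation for a multi-rover observation task.
# global_fitness() and the "remove the rover" counterfactual are illustrative
# stand-ins, not the paper's exact functions.
import numpy as np

def global_fitness(rover_positions, poi_positions):
    """Toy global evaluation: each point of interest (POI) is credited with the
    inverse distance to the closest rover."""
    total = 0.0
    for poi in poi_positions:
        dists = np.linalg.norm(rover_positions - poi, axis=1)
        total += 1.0 / max(dists.min(), 0.5)   # cap credit at close range
    return total

def difference_evaluation(i, rover_positions, poi_positions):
    """D_i = G(z) - G(z without rover i): aligned with G, but mostly sensitive
    to rover i's own contribution, hence easier for agent i to learn from."""
    g_full = global_fitness(rover_positions, poi_positions)
    without_i = np.delete(rover_positions, i, axis=0)
    return g_full - global_fitness(without_i, poi_positions)

rng = np.random.default_rng(1)
rovers = rng.uniform(0, 10, size=(5, 2))
pois = rng.uniform(0, 10, size=(8, 2))
for i in range(len(rovers)):
    print(f"rover {i}: D_i = {difference_evaluation(i, rovers, pois):.3f}")
```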
CCS_WHMS: A Congestion Control Scheme for Wearable Health Management System.
Kafi, Mohamed Amine; Ben Othman, Jalel; Bagaa, Miloud; Badache, Nadjib
2015-12-01
Wearable computing has become an increasingly attractive field in recent years thanks to the miniaturisation of electronic devices. Wearable healthcare monitoring systems (WHMS), as an important client of wearable computing technology, have gained a lot from this trend. Indeed, wearable sensors and the healthcare applications built around them bring many benefits to patients, elderly people and medical staff, improving their daily quality of life. From a research point of view, however, there is still work to be done to close the gap between the hardware and software parts. In this paper, we target the problem of congestion control when all of this sensed healthcare data must reach the destination reliably, avoiding repetitive transmissions that waste precious energy as well as the loss of important information in emergency cases. We propose a congestion control scheme, CCS_WHMS, that ensures efficient and fair data delivery whether it is used within the body-worn part of the system or in the multi-hop inter-body part on the way to the destination. As the congestion detection paradigm is very important to the control process, we perform experimental tests comparing state-of-the-art congestion detection methods, using MICAz motes, in order to choose the appropriate one for our scheme.
Multiscale high-order/low-order (HOLO) algorithms and applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chacon, Luis; Chen, Guangye; Knoll, Dana Alan
Here, we review the state of the art in the formulation, implementation, and performance of so-called high-order/low-order (HOLO) algorithms for challenging multiscale problems. HOLO algorithms attempt to couple one or several high-complexity physical models (the high-order model, HO) with low-complexity ones (the low-order model, LO). The primary goal of HOLO algorithms is to achieve nonlinear convergence between HO and LO components while minimizing memory footprint and managing the computational complexity in a practical manner. Key to the HOLO approach is the use of the LO representations to address temporal stiffness, effectively accelerating the convergence of the HO/LO coupled system. The HOLO approach is broadly underpinned by the concept of nonlinear elimination, which enables segregation of the HO and LO components in ways that can effectively use heterogeneous architectures. The accuracy and efficiency benefits of HOLO algorithms are demonstrated with specific applications to radiation transport, gas dynamics, plasmas (both Eulerian and Lagrangian formulations), and ocean modeling. Across this broad application spectrum, HOLO algorithms achieve significant accuracy improvements at a fraction of the cost compared to conventional approaches. It follows that HOLO algorithms hold significant potential for high-fidelity system-scale multiscale simulations leveraging exascale computing.
On fair, effective and efficient REDD mechanism design
2009-01-01
The issues surrounding 'Reduced Emissions from Deforestation and Forest Degradation' (REDD) have become a major component of continuing negotiations under the United Nations Framework Convention on Climate Change (UNFCCC). This paper aims to address two key requirements of any potential REDD mechanism: first, the generation of measurable, reportable and verifiable (MRV) REDD credits; and secondly, the sustainable and efficient provision of emission reductions under a robust financing regime. To ensure the supply of MRV credits, we advocate the establishment of an 'International Emission Reference Scenario Coordination Centre' (IERSCC). The IERSCC would act as a global clearing house for harmonized data to be used in implementing reference level methodologies. It would be tasked with the collection, reporting and subsequent processing of earth observation, deforestation- and degradation driver information in a globally consistent manner. The IERSCC would also assist, coordinate and supervise the computation of national reference scenarios according to rules negotiated under the UNFCCC. To overcome the threats of "market flooding" on the one hand and insufficient economic incentives for REDD on the other hand, we suggest an 'International Investment Reserve' (IIR) as REDD financing framework. In order to distribute the resources of the IIR we propose adopting an auctioning mechanism. Auctioning not only reveals the true emission reduction costs, but might also allow for incentivizing the protection of biodiversity and socio-economic values. The introduced concepts will be vital to ensure robustness, environmental integrity and economic efficiency of the future REDD mechanism. PMID:19943927
Event-driven processing for hardware-efficient neural spike sorting
NASA Astrophysics Data System (ADS)
Liu, Yan; Pereira, João L.; Constandinou, Timothy G.
2018-02-01
Objective. The prospect of real-time, on-node spike sorting provides a genuine opportunity to push the envelope of large-scale integrated neural recording systems. In such systems the hardware resources, power requirements and data bandwidth increase linearly with channel count. Event-based (or data-driven) processing can provide an efficient new means for hardware implementation that is completely activity-dependent. In this work, we investigate using continuous-time level-crossing sampling for efficient data representation and subsequent spike processing. Approach. (1) We first compare signals (synthetic neural datasets) encoded with this technique against conventional sampling. (2) We then show how such a representation can be directly exploited by extracting simple time-domain features from the bitstream to perform neural spike sorting. (3) The proposed method is implemented on a low-power FPGA platform to demonstrate its hardware viability. Main results. It is observed that considerably lower data rates are achievable when using 7 bits or fewer to represent the signals, whilst maintaining signal fidelity. Results obtained using both MATLAB and reconfigurable logic hardware (FPGA) indicate that feature extraction and spike sorting can be achieved with comparable or better accuracy than reference methods, whilst also requiring relatively low hardware resources. Significance. By effectively exploiting continuous-time data representation, neural signal processing can be achieved in a completely event-driven manner, reducing both the required resources (memory, complexity) and computations (operations). This will see future large-scale neural systems integrating on-node processing in real-time hardware.
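To make the level-crossing idea concrete, here is a hedged software sketch of the encoding principle: emit an event only when the signal crosses the next quantization level, rather than sampling at a fixed rate. The threshold spacing and test signal are illustrative choices, not the paper's parameters.

```python
# Hedged sketch of continuous-time level-crossing (LC) sampling and its
# piecewise-constant reconstruction.  Delta and the test signal are made up.
import numpy as np

def level_crossing_encode(signal, delta):
    """Return (sample_index, direction) events; direction is +1/-1 per level crossed."""
    events, ref = [], signal[0]
    for n, x in enumerate(signal[1:], start=1):
        while x - ref >= delta:     # crossed one level upward
            ref += delta
            events.append((n, +1))
        while ref - x >= delta:     # crossed one level downward
            ref -= delta
            events.append((n, -1))
    return events

def level_crossing_decode(events, length, x0, delta):
    """Piecewise-constant reconstruction from the event stream."""
    x, level = np.full(length, x0, dtype=float), x0
    for n, d in events:
        level += d * delta
        x[n:] = level
    return x

t = np.linspace(0, 1, 2000)
sig = np.sin(2 * np.pi * 5 * t) + 0.2 * np.sin(2 * np.pi * 40 * t)
ev = level_crossing_encode(sig, delta=0.1)
rec = level_crossing_decode(ev, len(sig), sig[0], 0.1)
print(f"{len(ev)} events instead of {len(sig)} uniform samples; "
      f"max error = {np.max(np.abs(rec - sig)):.3f}")
```

The event count tracks signal activity: quiet stretches produce no events at all, which is the property the abstract exploits for activity-dependent hardware.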
Code of Federal Regulations, 2010 CFR
2010-04-01
... acceptable. Also, the standards are designed to improve overall State agency performance in the disability... efficient manner. We measure the performance of a State agency in two areas—processing time and quality of...
50 CFR 29.32 - Mineral rights reserved and excepted.
Code of Federal Regulations, 2013 CFR
2013-10-01
... practicable, conduct all exploration, development, and production operations in such a manner as to prevent... to the minimum space compatible with the conduct of efficient mineral operations. Persons conducting...
50 CFR 29.32 - Mineral rights reserved and excepted.
Code of Federal Regulations, 2010 CFR
2010-10-01
... practicable, conduct all exploration, development, and production operations in such a manner as to prevent... to the minimum space compatible with the conduct of efficient mineral operations. Persons conducting...
50 CFR 29.32 - Mineral rights reserved and excepted.
Code of Federal Regulations, 2014 CFR
2014-10-01
... practicable, conduct all exploration, development, and production operations in such a manner as to prevent... to the minimum space compatible with the conduct of efficient mineral operations. Persons conducting...
50 CFR 29.32 - Mineral rights reserved and excepted.
Code of Federal Regulations, 2011 CFR
2011-10-01
... practicable, conduct all exploration, development, and production operations in such a manner as to prevent... to the minimum space compatible with the conduct of efficient mineral operations. Persons conducting...
50 CFR 29.32 - Mineral rights reserved and excepted.
Code of Federal Regulations, 2012 CFR
2012-10-01
... practicable, conduct all exploration, development, and production operations in such a manner as to prevent... to the minimum space compatible with the conduct of efficient mineral operations. Persons conducting...
45 CFR 1310.17 - Driver and bus monitor training.
Code of Federal Regulations, 2010 CFR
2010-10-01
... safe and efficient manner; (2) safely run a fixed route, including loading and unloading children... first aid in case of injury; (4) handle emergency situations, including vehicle evacuation procedures...
The impact of Docker containers on the performance of genomic pipelines
Di Tommaso, Paolo; Palumbo, Emilio; Chatzou, Maria; Prieto, Pablo; Heuer, Michael L.; Notredame, Cedric
2015-01-01
Genomic pipelines consist of several pieces of third-party software and, because of their experimental nature, frequent changes and updates are commonly necessary, raising serious deployment and reproducibility issues. Docker containers are emerging as a possible solution to many of these problems, as they allow the packaging of pipelines in an isolated and self-contained manner. This makes it easy to distribute and execute pipelines in a portable manner across a wide range of computing platforms. The question that arises, then, is to what extent the use of Docker containers might affect the performance of these pipelines. Here we address this question and conclude that Docker containers have only a minor impact on the performance of common genomic pipelines, and that this impact is negligible when the executed jobs are long in terms of computational time. PMID:26421241
Target Trailing With Safe Navigation With Colregs for Maritime Autonomous Surface Vehicles
NASA Technical Reports Server (NTRS)
Kuwata, Yoshiaki (Inventor); Aghazarian, Hrand (Inventor); Huntsberger, Terrance L. (Inventor); Howard, Andrew B. (Inventor); Wolf, Michael T. (Inventor); Zarzhitsky, Dimitri V. (Inventor)
2014-01-01
Systems and methods for operating autonomous waterborne vessels in a safe manner. The systems include hardware for identifying the locations and motions of other vessels, as well as the locations of stationary objects that represent navigation hazards. By applying to these data a computational method that uses a maritime navigation algorithm based on Velocity Obstacles to avoid hazards and obey COLREGS, the autonomous vessel computes a safe and effective path to follow in order to accomplish a desired navigational end result, while operating so as to avoid hazards and maintain compliance with standard navigational procedures defined by international agreement. The systems and methods have been successfully demonstrated on water with radar and stereo cameras as the perception sensors, and integrated with a higher-level planner for trailing a maneuvering target.
Ortiz de Solorzano, Isabel; Uson, Laura; Larrea, Ane; Miana, Mario; Sebastian, Victor; Arruebo, Manuel
2016-01-01
By using interdigital microfluidic reactors, monodisperse poly(d,l lactic-co-glycolic acid) nanoparticles (NPs) can be produced in a continuous manner and at a large scale (~10 g/h). An optimized synthesis protocol was obtained by selecting the appropriate passive mixer and fluid flow conditions to produce monodisperse NPs. A reduced NP polydispersity was obtained when using the microfluidic platform compared with that of NPs produced in a conventional discontinuous batch reactor. Cyclosporin, an immunosuppressant drug, was used as a model to validate the efficiency of the microfluidic platform in producing drug-loaded monodisperse poly(d,l lactic-co-glycolic acid) NPs. The influence of the mixer geometries and temperatures was analyzed, and the experimental results were corroborated by computational fluid dynamic three-dimensional simulations. Flow patterns, mixing times, and mixing efficiencies were calculated, and the model was supported by the experimental results. The progress of mixing in the interdigital mixer was quantified by using the volume fractions of the organic and aqueous phases used during the emulsification-evaporation process. The developed model and methods were applied to determine the time required to achieve complete mixing in each microreactor at different fluid flow conditions, temperatures, and mixing rates. PMID:27524896
Kukke, Sahana N; Paine, Rainer W; Chao, Chi-Chao; de Campos, Ana C; Hallett, Mark
2014-06-01
The purpose of this study is to develop a method to reliably characterize multiple features of the corticospinal system in a more efficient manner than typically done in transcranial magnetic stimulation studies. Forty transcranial magnetic stimulation pulses of varying intensity were given over the first dorsal interosseous motor hot spot in 10 healthy adults. The first dorsal interosseous motor-evoked potential size was recorded during rest and activation to create recruitment curves. The Boltzmann sigmoidal function was fit to the data, and parameters relating to maximal motor-evoked potential size, curve slope, and stimulus intensity leading to half-maximal motor-evoked potential size were computed from the curve fit. Good to excellent test-retest reliability was found for all corticospinal parameters at rest and during activation with 40 transcranial magnetic stimulation pulses. Through the use of curve fitting, important features of the corticospinal system can be determined with fewer stimuli than typically used for the same information. Determining the recruitment curve provides a basis to understand the state of the corticospinal system and select subject-specific parameters for transcranial magnetic stimulation testing quickly and without unnecessary exposure to magnetic stimulation. This method can be useful in individuals who have difficulty in maintaining stillness, including children and patients with motor disorders.
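The Boltzmann fit referred to above has a standard closed form, MEP(I) = MEPmax / (1 + exp((I50 - I)/k)). The sketch below shows one hedged way to recover MEPmax, I50, and the slope parameter k from a recruitment curve with scipy, using synthetic data in place of the study's measurements and conventional (not necessarily the authors') notation.

```python
# Hedged sketch: fit a Boltzmann sigmoid to a TMS recruitment curve.
# The 40 (intensity, MEP) pairs here are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(intensity, mep_max, i50, k):
    """MEP size as a function of stimulus intensity (% of stimulator output)."""
    return mep_max / (1.0 + np.exp((i50 - intensity) / k))

rng = np.random.default_rng(42)
intensity = rng.uniform(30, 90, size=40)              # 40 pulses of varying intensity
true = boltzmann(intensity, mep_max=2.5, i50=55.0, k=4.0)
mep = np.clip(true + rng.normal(0, 0.15, size=40), 0, None)

p0 = [mep.max(), np.median(intensity), 5.0]           # rough starting values
params, cov = curve_fit(boltzmann, intensity, mep, p0=p0, maxfev=10000)
mep_max, i50, k = params
print(f"MEPmax = {mep_max:.2f} mV, I50 = {i50:.1f} %MSO, slope k = {k:.2f}")
print("peak slope of the curve =", mep_max / (4 * k))  # derivative at I = I50
```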
NASA Astrophysics Data System (ADS)
Tanay, Sashwat; Haney, Maria; Gopakumar, Achamveedu
2016-03-01
Inspiraling compact binaries with non-negligible orbital eccentricities are plausible gravitational wave (GW) sources for the upcoming network of GW observatories. In this paper, we present two prescriptions to compute post-Newtonian (PN) accurate inspiral templates for such binaries. First, we adapt and extend the postcircular scheme of Yunes et al. [Phys. Rev. D 80, 084001 (2009)] to obtain a Fourier-domain inspiral approximant that incorporates the effects of PN-accurate orbital eccentricity evolution. This results in a fully analytic frequency-domain inspiral waveform with Newtonian amplitude and 2PN-order Fourier phase while incorporating eccentricity effects up to sixth order at each PN order. The importance of incorporating eccentricity evolution contributions to the Fourier phase in a PN-consistent manner is also demonstrated. Second, we present an accurate and efficient prescription to incorporate orbital eccentricity into the quasicircular time-domain TaylorT4 approximant at 2PN order. New features include the use of rational functions in orbital eccentricity to implement the 1.5PN-order tail contributions to the far-zone fluxes. This leads to closed form PN-accurate differential equations for evolving eccentric orbits, and the resulting time-domain approximant is accurate and efficient to handle initial orbital eccentricities ≤0.9 . Preliminary GW data analysis implications are probed using match estimates.
Computing Generalized Matrix Inverse on Spiking Neural Substrate
Shukla, Rohit; Khoram, Soroosh; Jorgensen, Erik; Li, Jing; Lipasti, Mikko; Wright, Stephen
2018-01-01
Emerging neural hardware substrates, such as IBM's TrueNorth Neurosynaptic System, can provide an appealing platform for deploying numerical algorithms. For example, a recurrent Hopfield neural network can be used to find the Moore-Penrose generalized inverse of a matrix, thus enabling a broad class of linear optimizations to be solved efficiently, at low energy cost. However, deploying numerical algorithms on hardware platforms that severely limit the range and precision of representation for numeric quantities can be quite challenging. This paper discusses these challenges and proposes a rigorous mathematical framework for reasoning about range and precision on such substrates. The paper derives techniques for normalizing inputs and properly quantizing synaptic weights originating from arbitrary systems of linear equations, so that solvers for those systems can be implemented in a provably correct manner on hardware-constrained neural substrates. The analytical model is empirically validated on the IBM TrueNorth platform, and results show that the guarantees provided by the framework for range and precision hold under experimental conditions. Experiments with optical flow demonstrate the energy benefits of deploying a reduced-precision and energy-efficient generalized matrix inverse engine on the IBM TrueNorth platform, reflecting 10× to 100× improvement over FPGA and ARM core baselines. PMID:29593483
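The Moore-Penrose inverse mentioned above can be obtained from a simple recurrent linear update, which makes it easy to reason about range and precision before mapping to constrained hardware. The gradient-style iteration below is a hedged stand-in of that kind (a Ben-Israel-type scheme), not the paper's TrueNorth implementation.

```python
# Hedged sketch: iterative Moore-Penrose pseudo-inverse via
#   X <- X + beta * A^T (I - A X),
# which converges to pinv(A) when X0 = beta * A^T and 0 < beta < 2 / sigma_max(A)^2.
import numpy as np

def iterative_pinv(A, n_iter=1000):
    beta = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step size (< 2 / sigma_max^2)
    X = beta * A.T                           # start in the range of A^T
    I = np.eye(A.shape[0])
    for _ in range(n_iter):
        X = X + beta * A.T @ (I - A @ X)     # recurrent linear dynamics
    return X

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 4))
X = iterative_pinv(A)
print("max |X - pinv(A)| =", np.max(np.abs(X - np.linalg.pinv(A))))
```

Because the update involves only matrix-vector-style linear operations plus a fixed gain, it is the sort of computation that can, in principle, be quantized onto limited-precision synaptic weights, which is the problem the paper's framework analyzes.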
kWIP: The k-mer weighted inner product, a de novo estimator of genetic similarity.
Murray, Kevin D; Webers, Christfried; Ong, Cheng Soon; Borevitz, Justin; Warthmann, Norman
2017-09-01
Modern genomics techniques generate overwhelming quantities of data. Extracting population genetic variation demands computationally efficient methods to determine genetic relatedness between individuals (or "samples") in an unbiased manner, preferably de novo. Rapid estimation of genetic relatedness directly from sequencing data has the potential to overcome reference genome bias, and to verify that individuals belong to the correct genetic lineage before conclusions are drawn using mislabelled or misidentified samples. We present the k-mer Weighted Inner Product (kWIP), an assembly- and alignment-free estimator of genetic similarity. kWIP combines a probabilistic data structure with a novel metric, the weighted inner product (WIP), to efficiently calculate pairwise similarity between sequencing runs from their k-mer counts. It produces a distance matrix, which can then be further analysed and visualised. Our method does not require prior knowledge of the underlying genomes, and applications include establishing sample identity, detecting mix-ups, and uncovering non-obvious genomic variation and population structure. We show that kWIP can reconstruct the true relatedness between samples from simulated populations. By re-analysing several published datasets we show that our results are consistent with marker-based analyses. kWIP is written in C++, licensed under the GNU GPL, and is available from https://github.com/kdmurray91/kwip.
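As a rough conceptual illustration of a weighted inner product over k-mer counts (the published kWIP tool uses a probabilistic counting sketch and its own normalisation, both omitted here), the sketch below weights each k-mer by the Shannon entropy of its presence/absence across samples, so that ubiquitous or absent k-mers contribute little to the similarity.

```python
# Hedged sketch of an entropy-weighted inner product over dense k-mer counts.
import numpy as np

def kmer_counts(seq, k=4):
    counts = {}
    for i in range(len(seq) - k + 1):
        counts[seq[i:i + k]] = counts.get(seq[i:i + k], 0) + 1
    return counts

def count_matrix(samples, k=4):
    vocab = sorted({km for s in samples for km in kmer_counts(s, k)})
    index = {km: j for j, km in enumerate(vocab)}
    M = np.zeros((len(samples), len(vocab)))
    for i, s in enumerate(samples):
        for km, c in kmer_counts(s, k).items():
            M[i, index[km]] = c
    return M

def wip_distance_matrix(M):
    f = (M > 0).mean(axis=0)                 # presence frequency of each k-mer
    with np.errstate(divide="ignore", invalid="ignore"):
        h = -(f * np.log2(f) + (1 - f) * np.log2(1 - f))
    w = np.nan_to_num(h)                     # zero weight when f is 0 or 1
    K = (M * w) @ M.T                        # weighted inner products
    norms = np.sqrt(np.diag(K))
    S = K / np.outer(norms, norms)           # cosine-normalised similarity
    return 1.0 - S                           # similarity -> distance

samples = ["ACGTACGTGGCA", "ACGTACGTGGAA", "TTTTGGGGCCCC"]
print(np.round(wip_distance_matrix(count_matrix(samples)), 3))
```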
An Efficient Deterministic Approach to Model-based Prediction Uncertainty Estimation
NASA Technical Reports Server (NTRS)
Daigle, Matthew J.; Saxena, Abhinav; Goebel, Kai
2012-01-01
Prognostics deals with the prediction of the end of life (EOL) of a system. EOL is a random variable, due to the presence of process noise and uncertainty in the future inputs to the system. Prognostics algorithms must account for this inherent uncertainty. In addition, these algorithms never know exactly the state of the system at the desired time of prediction, or the exact model describing the future evolution of the system, accumulating additional uncertainty into the predicted EOL. Prediction algorithms that do not account for these sources of uncertainty misrepresent the EOL and can lead to poor decisions based on their results. In this paper, we explore the impact of uncertainty in the prediction problem. We develop a general model-based prediction algorithm that incorporates these sources of uncertainty, and propose a novel approach to efficiently handle uncertainty in the future input trajectories of a system by using the unscented transform. Using this approach, we are not only able to reduce the computational load but can also estimate the bounds of uncertainty in a deterministic manner, which can be useful during decision-making. Using a lithium-ion battery as a case study, we perform several simulation-based experiments to explore these issues, and validate the overall approach using experimental data from a battery testbed.
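The unscented transform mentioned above propagates a small, deterministic set of sigma points through a nonlinearity instead of random sampling. The sketch below shows the standard scaled construction; the nonlinear map and dimensions are toy placeholders, not the battery EOL model, and the alpha/beta/kappa scaling follows the common convention rather than anything specified in the abstract.

```python
# Hedged sketch of the (scaled) unscented transform: 2n+1 sigma points capture
# a Gaussian's mean/covariance and are pushed through a nonlinear map g().
import numpy as np

def sigma_points(mean, cov, alpha=1.0, beta=2.0, kappa=0.0):
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)
    pts = [mean] + [mean + S[:, i] for i in range(n)] + [mean - S[:, i] for i in range(n)]
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    return np.array(pts), wm, wc

def unscented_transform(mean, cov, g):
    pts, wm, wc = sigma_points(mean, cov)
    ys = np.array([g(p) for p in pts])
    y_mean = wm @ ys
    diff = ys - y_mean
    y_cov = (wc[:, None] * diff).T @ diff
    return y_mean, y_cov

# Toy nonlinear map of a 2-D uncertain input (e.g. a summary of a future load profile)
g = lambda x: np.array([x[0] * np.exp(0.1 * x[1]), x[0] + x[1] ** 2])
m, P = np.array([1.0, 0.5]), np.diag([0.04, 0.09])
ym, yP = unscented_transform(m, P, g)
print("propagated mean:", ym, "\npropagated cov:\n", yP)
```

The attraction for prognostics is that the sigma-point set is fixed and small (2n+1 points), so the propagated uncertainty bounds are reproducible and cheap compared with Monte Carlo sampling of future input trajectories.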
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ye, Piaoran; Cao, Peng -Fei; Su, Zhe
Here, utilization of a flow reactor under high pressure allows highly efficient polymer synthesis via reversible addition–fragmentation chain-transfer (RAFT) polymerization in an aqueous system. Compared with the batch reaction, the flow reactor allows the RAFT polymerization to be performed in a high-efficiency manner at the same temperature. The adjustable pressure of the system allows further elevation of the reaction temperature and hence faster polymerization. Other reaction parameters, such as flow rate and initiator concentration, were also studied to tune the monomer conversion and the molar mass dispersity (Ð) of the obtained polymers. Gel permeation chromatography, nuclear magnetic resonance (NMR), and Fourier transform infrared (FTIR) spectroscopies were utilized to monitor the polymerization process. With an initiator concentration of 0.15 mmol L-1, polymerization of poly(ethylene glycol) methyl ether methacrylate with a monomer conversion of 52% at 100 °C under 73 bar can be achieved within 40 min with a narrow molar mass dispersity (Ð < 1.25). The strategy developed here provides a method to produce well-defined polymers via RAFT polymerization with high efficiency in a continuous manner.
NASA Astrophysics Data System (ADS)
Mavromoustakis, Constandinos X.; Karatza, Helen D.
2010-06-01
When resources are shared, efficiency is substantially degraded by the scarcity of the requested resources in a multi-client setting. These problems are often aggravated by factors such as temporal constraints on availability or node flooding by requested replicated file chunks. Replicated file chunks should therefore be disseminated efficiently in order to make resources available on demand to mobile users. This work considers a cross-layered middleware support system for efficient delay-sensitive streaming that exploits each device's connectivity and social interactions in a cross-layered manner. Collaborative streaming is achieved through an epidemic file chunk replication policy that uses a transition-based approach modelled on a chained infectious-disease model with susceptible, infected, recovered and dead states. The gossip-based stateful model determines whether a mobile node should host a file chunk and, when a chunk is no longer needed, purge it. The proposed model is thoroughly evaluated through experimental simulation, measuring the effective throughput Eff as a function of the packet loss parameter and contrasting it with the effectiveness of the gossip-based replication policy.
Bio-inspired multistructured conical copper wires for highly efficient liquid manipulation.
Wang, Qianbin; Meng, Qingan; Chen, Ming; Liu, Huan; Jiang, Lei
2014-09-23
Animal hairs are typical structured conical fibers, ubiquitous in nature, that enable the manipulation of low-viscosity liquids in a well-controlled manner; they serve as the fundamental structure in the Chinese brush for controllable ink delivery. Here, drawing inspiration from these structures, we developed a dynamic electrochemical method for fabricating anisotropic multiscale structured conical copper wires (SCCWs) with controllable conicity and surface morphology. The as-prepared SCCW exhibits a unique ability to manipulate liquid with significantly high efficiency: over 428 times its own volume of liquid could be handled. We propose that the boundary condition of the dynamic liquid balance behavior on conical fibers, namely the steady holding of a liquid droplet at the tip region of the SCCW, makes it an excellent fibrous medium for liquid manipulation. Moreover, we demonstrate that the tilting angle of the SCCW can also affect its liquid-manipulation efficiency by virtue of its mechanical rigidity, which is hardly achievable with flexible natural hairs. We envision that the bio-inspired SCCW could inspire the design of materials and devices that manipulate liquid in a more controllable way and with high efficiency.
Ant-like task allocation and recruitment in cooperative robots
NASA Astrophysics Data System (ADS)
Krieger, Michael J. B.; Billeter, Jean-Bernard; Keller, Laurent
2000-08-01
One of the greatest challenges in robotics is to create machines that are able to interact with unpredictable environments in real time. A possible solution may be to use swarms of robots behaving in a self-organized manner, similar to workers in an ant colony. Efficient mechanisms of division of labour, in particular series-parallel operation and transfer of information among group members, are key components of the tremendous ecological success of ants. Here we show that the general principles regulating division of labour in ant colonies indeed allow the design of flexible, robust and effective robotic systems. Groups of robots using ant-inspired algorithms of decentralized control techniques foraged more efficiently and maintained higher levels of group energy than single robots. But the benefits of group living decreased in larger groups, most probably because of interference during foraging. Intriguingly, a similar relationship between group size and efficiency has been documented in social insects. Moreover, when food items were clustered, groups where robots could recruit other robots in an ant-like manner were more efficient than groups without information transfer, suggesting that group dynamics of swarms of robots may follow rules similar to those governing social insects.
Modulation aware cluster size optimisation in wireless sensor networks
NASA Astrophysics Data System (ADS)
Sriram Naik, M.; Kumar, Vinay
2017-07-01
Wireless sensor networks (WSNs) play a great role because of their numerous advantages to mankind. The main challenge with WSNs is energy efficiency. In this paper, we focus on energy minimisation through cluster size optimisation, taking into account the effect of modulation when nodes cannot communicate using a baseband technique. Cluster size optimisation is an important technique for improving the performance of WSNs: it improves energy efficiency, network scalability, network lifetime and latency. We propose an analytical expression for cluster size optimisation using the traditional sensing model of nodes for a square sensing field, with modulation effects taken into account. Energy minimisation can be achieved by changing the modulation scheme (e.g., BPSK, QPSK, 16-QAM, 64-QAM), so we consider the effect of different modulation techniques on cluster formation. The nodes in the sensing field are deployed randomly and uniformly. It is also observed that placing the base station at the centre of the scenario allows very few modulation schemes to operate in an energy-efficient manner, whereas placing it at the corner of the sensing field allows a large number of modulation schemes to do so.
Applications of process improvement techniques to improve workflow in abdominal imaging.
Tamm, Eric Peter
2016-03-01
Major changes in the management and funding of healthcare are underway that will markedly change the way radiology studies will be reimbursed. The result will be the need to deliver radiology services in a highly efficient manner while maintaining quality. The science of process improvement provides a practical approach to improve the processes utilized in radiology. This article will address in a step-by-step manner how to implement process improvement techniques to improve workflow in abdominal imaging.
Decentralized Dimensionality Reduction for Distributed Tensor Data Across Sensor Networks.
Liang, Junli; Yu, Guoyang; Chen, Badong; Zhao, Minghua
2016-11-01
This paper develops a novel decentralized dimensionality reduction algorithm for the distributed tensor data across sensor networks. The main contributions of this paper are as follows. First, conventional centralized methods, which utilize entire data to simultaneously determine all the vectors of the projection matrix along each tensor mode, are not suitable for the network environment. Here, we relax the simultaneous processing manner into the one-vector-by-one-vector (OVBOV) manner, i.e., determining the projection vectors (PVs) related to each tensor mode one by one. Second, we prove that in the OVBOV manner each PV can be determined without modifying any tensor data, which simplifies corresponding computations. Third, we cast the decentralized PV determination problem as a set of subproblems with consensus constraints, so that it can be solved in the network environment only by local computations and information communications among neighboring nodes. Fourth, we introduce the null space and transform the PV determination problem with complex orthogonality constraints into an equivalent hidden convex one without any orthogonality constraint, which can be solved by the Lagrange multiplier method. Finally, experimental results are given to show that the proposed algorithm is an effective dimensionality reduction scheme for the distributed tensor data across the sensor networks.
Navarro-Ramirez, Rodrigo; Lang, Gernot; Lian, Xiaofeng; Berlin, Connor; Janssen, Insa; Jada, Ajit; Alimi, Marjan; Härtl, Roger
2017-04-01
Portable intraoperative computed tomography (iCT) with integrated 3-dimensional navigation (NAV) offers new opportunities for more precise navigation in spinal surgery, eliminates radiation exposure for the surgical team, and accelerates surgical workflows. We present the concept of "total navigation" using iCT NAV in spinal surgery. Therefore, we propose a step-by-step guideline demonstrating how total navigation can eliminate fluoroscopy with time-efficient workflows integrating iCT NAV into daily practice. A prospective study was conducted on collected data from patients undergoing iCT NAV-guided spine surgery. Number of scans, radiation exposure, and workflow of iCT NAV (e.g., instrumentation, cage placement, localization) were documented. Finally, the accuracy of pedicle screws and time for instrumentation were determined. iCT NAV was successfully performed in 117 cases for various indications and in all regions of the spine. More than half (61%) of cases were performed in a minimally invasive manner. Navigation was used for skin incision, localization of index level, and verification of implant position. iCT NAV was used to evaluate neural decompression achieved in spinal fusion surgeries. Total navigation eliminates fluoroscopy in 75%, thus reducing staff radiation exposure entirely. The average times for iCT NAV setup and pedicle screw insertion were 12.1 and 3.1 minutes, respectively, achieving a pedicle screw accuracy of 99%. Total navigation makes spine surgery safer and more accurate, and it enhances efficient and reproducible workflows. Fluoroscopy and radiation exposure for the surgical staff can be eliminated in the majority of cases. Copyright © 2017 Elsevier Inc. All rights reserved.
Direct AUC optimization of regulatory motifs.
Zhu, Lin; Zhang, Hong-Bo; Huang, De-Shuang
2017-07-15
The discovery of transcription factor binding site (TFBS) motifs is essential for untangling the complex mechanism of genetic variation under different developmental and environmental conditions. Among the huge amount of computational approaches for de novo identification of TFBS motifs, discriminative motif learning (DML) methods have been proven to be promising for harnessing the discovery power of accumulated huge amount of high-throughput binding data. However, they have to sacrifice accuracy for speed and could fail to fully utilize the information of the input sequences. We propose a novel algorithm called CDAUC for optimizing DML-learned motifs based on the area under the receiver-operating characteristic curve (AUC) criterion, which has been widely used in the literature to evaluate the significance of extracted motifs. We show that when the considered AUC loss function is optimized in a coordinate-wise manner, the cost function of each resultant sub-problem is a piece-wise constant function, whose optimal value can be found exactly and efficiently. Further, a key step of each iteration of CDAUC can be efficiently solved as a computational geometry problem. Experimental results on real world high-throughput datasets illustrate that CDAUC outperforms competing methods for refining DML motifs, while being one order of magnitude faster. Meanwhile, preliminary results also show that CDAUC may also be useful for improving the interpretability of convolutional kernels generated by the emerging deep learning approaches for predicting TF sequences specificities. CDAUC is available at: https://drive.google.com/drive/folders/0BxOW5MtIZbJjNFpCeHlBVWJHeW8 . dshuang@tongji.edu.cn. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
2015-01-01
Background Cellular processes are known to be modular and are realized by groups of proteins implicated in common biological functions. Such groups of proteins are called functional modules, and many community detection methods have been devised for their discovery from protein interaction network (PIN) data. In current agglomerative clustering approaches, vertices with just a very few neighbors are often classified as separate clusters, which does not make sense biologically. Also, a major limitation of agglomerative techniques is that their computational efficiency does not scale well to large PINs. Finally, PIN data obtained from large-scale experiments generally contain many false positives, which makes it hard for agglomerative clustering methods to find the correct clusters, since they are known to be sensitive to noisy data. Results We propose a local similarity premetric, the relative vertex clustering value, as a new criterion for deciding when a node can be added to a given node's cluster; it addresses the above three issues. Based on this criterion, we introduce a novel and very fast agglomerative clustering technique, FAC-PIN, for discovering functional modules and protein complexes from PIN data. Conclusions Our proposed FAC-PIN algorithm is applied to nine PIN datasets from eight different species including the yeast PIN, and the identified functional modules are validated using Gene Ontology (GO) annotations from DAVID Bioinformatics Resources. Identified protein complexes are also validated using experimentally verified complexes. Computational results show that FAC-PIN can discover functional modules or protein complexes from PINs more accurately and more efficiently than HC-PIN and CNM, the current state-of-the-art approaches for clustering PINs in an agglomerative manner. PMID:25734691
A Lightweight I/O Scheme to Facilitate Spatial and Temporal Queries of Scientific Data Analytics
NASA Technical Reports Server (NTRS)
Tian, Yuan; Liu, Zhuo; Klasky, Scott; Wang, Bin; Abbasi, Hasan; Zhou, Shujia; Podhorszki, Norbert; Clune, Tom; Logan, Jeremy; Yu, Weikuan
2013-01-01
In the era of petascale computing, more scientific applications are being deployed on leadership-scale computing platforms to enhance scientific productivity. Many I/O techniques have been designed to address the growing I/O bottleneck on large-scale systems by handling massive scientific data in a holistic manner. While such techniques have been leveraged in a wide range of applications, they have not been shown to be adequate for many mission-critical applications, particularly in the data post-processing stage. For example, some scientific applications generate datasets composed of a vast number of small data elements organized along many spatial and temporal dimensions, but require sophisticated data analytics on one or more of those dimensions. Including such dimensional knowledge in the data organization can benefit the efficiency of data post-processing, yet it is often missing from existing I/O techniques. In this study, we propose a novel I/O scheme named STAR (Spatial and Temporal AggRegation) to enable high-performance data queries for scientific analytics. STAR is able to dive into the massive data, identify the spatial and temporal relationships among data variables, and organize them accordingly into an optimized multi-dimensional data structure before writing to storage. This technique not only facilitates the common access patterns of data analytics, but also further reduces the application turnaround time. In particular, STAR enables efficient data queries along the time dimension, a practice common in scientific analytics but not yet supported by existing I/O techniques. In our case study with a critical climate modeling application, GEOS-5, the experimental results on the Jaguar supercomputer demonstrate an improvement of up to 73 times in read performance compared to the original I/O method.
Development of the PROMIS positive emotional and sensory expectancies of smoking item banks.
Tucker, Joan S; Shadel, William G; Edelen, Maria Orlando; Stucky, Brian D; Li, Zhen; Hansen, Mark; Cai, Li
2014-09-01
The positive emotional and sensory expectancies of cigarette smoking include improved cognitive abilities, positive affective states, and pleasurable sensorimotor sensations. This paper describes development of Positive Emotional and Sensory Expectancies of Smoking item banks that will serve to standardize the assessment of this construct among daily and nondaily cigarette smokers. Data came from daily (N = 4,201) and nondaily (N =1,183) smokers who completed an online survey. To identify a unidimensional set of items, we conducted item factor analyses, item response theory analyses, and differential item functioning analyses. Additionally, we evaluated the performance of fixed-item short forms (SFs) and computer adaptive tests (CATs) to efficiently assess the construct. Eighteen items were included in the item banks (15 common across daily and nondaily smokers, 1 unique to daily, 2 unique to nondaily). The item banks are strongly unidimensional, highly reliable (reliability = 0.95 for both), and perform similarly across gender, age, and race/ethnicity groups. A SF common to daily and nondaily smokers consists of 6 items (reliability = 0.86). Results from simulated CATs indicated that, on average, less than 8 items are needed to assess the construct with adequate precision using the item banks. These analyses identified a new set of items that can assess the positive emotional and sensory expectancies of smoking in a reliable and standardized manner. Considerable efficiency in assessing this construct can be achieved by using the item bank SF, employing computer adaptive tests, or selecting subsets of items tailored to specific research or clinical purposes. © The Author 2014. Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Mohiuddin, Syed; Busby, John; Savović, Jelena; Richards, Alison; Northstone, Kate; Hollingworth, William; Donovan, Jenny L; Vasilakis, Christos
2017-01-01
Objectives Overcrowding in the emergency department (ED) is common in the UK as in other countries worldwide. Computer simulation is one approach used for understanding the causes of ED overcrowding and assessing the likely impact of changes to the delivery of emergency care. However, little is known about the usefulness of computer simulation for analysis of ED patient flow. We undertook a systematic review to investigate the different computer simulation methods and their contribution for analysis of patient flow within EDs in the UK. Methods We searched eight bibliographic databases (MEDLINE, EMBASE, COCHRANE, WEB OF SCIENCE, CINAHL, INSPEC, MATHSCINET and ACM DIGITAL LIBRARY) from date of inception until 31 March 2016. Studies were included if they used a computer simulation method to capture patient progression within the ED of an established UK National Health Service hospital. Studies were summarised in terms of simulation method, key assumptions, input and output data, conclusions drawn and implementation of results. Results Twenty-one studies met the inclusion criteria. Of these, 19 used discrete event simulation and 2 used system dynamics models. The purpose of many of these studies (n=16; 76%) centred on service redesign. Seven studies (33%) provided no details about the ED being investigated. Most studies (n=18; 86%) used specific hospital models of ED patient flow. Overall, the reporting of underlying modelling assumptions was poor. Nineteen studies (90%) considered patient waiting or throughput times as the key outcome measure. Twelve studies (57%) reported some involvement of stakeholders in the simulation study. However, only three studies (14%) reported on the implementation of changes supported by the simulation. Conclusions We found that computer simulation can provide a means to pretest changes to ED care delivery before implementation in a safe and efficient manner. However, the evidence base is small and poorly developed. There are some methodological, data, stakeholder, implementation and reporting issues, which must be addressed by future studies. PMID:28487459
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiang, Patrick
2014-01-31
The research goal of this CAREER proposal is to develop energy-efficient VLSI interconnect circuits and systems that will facilitate future massively-parallel, high-performance computing. Extreme-scale computing will exhibit massive parallelism on multiple vertical levels, from thousands of computational units on a single processor to thousands of processors in a single data center. Unfortunately, the energy required to communicate between these units at every level (on-chip, off-chip, off-rack) will be the critical limitation to energy efficiency. Therefore, the PI's career goal is to become a leading researcher in the design of energy-efficient VLSI interconnect for future computing systems.
The thermodynamic efficiency of computations made in cells across the range of life
NASA Astrophysics Data System (ADS)
Kempes, Christopher P.; Wolpert, David; Cohen, Zachary; Pérez-Mercader, Juan
2017-11-01
Biological organisms must perform computation as they grow, reproduce and evolve. Moreover, ever since Landauer's bound was proposed, it has been known that all computation has some thermodynamic cost, and that the same computation can be achieved with greater or smaller thermodynamic cost depending on how it is implemented. Accordingly, an important issue concerning the evolution of life is assessing the thermodynamic efficiency of the computations performed by organisms. This issue is interesting both from the perspective of how close life has come to maximally efficient computation (presumably under the pressure of natural selection), and from the practical perspective of what efficiencies we might hope engineered biological computers to achieve, especially in comparison with current computational systems. Here we show that the computational efficiency of translation, defined as the free energy expended per amino acid operation, outperforms the best supercomputers by several orders of magnitude, and is only about an order of magnitude worse than the Landauer bound. However, this efficiency depends strongly on the size and architecture of the cell in question. In particular, we show that the useful efficiency of an amino acid operation, defined as the bulk energy per amino acid polymerization, decreases with increasing bacterial size and converges to the polymerization cost of the ribosome. This cost of the largest bacteria does not change in cells as we progress through the major evolutionary shifts to both single- and multicellular eukaryotes. However, the rates of total computation per unit mass are non-monotonic in bacteria with increasing cell size, and also change across different biological architectures, including the shift from unicellular to multicellular eukaryotes. This article is part of the themed issue 'Reconceptualizing the origins of life'.
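For orientation only (my own back-of-the-envelope arithmetic, not figures taken from the paper), the Landauer bound at physiological temperature sets the scale against which the quoted translation efficiency is compared:

```latex
% Hedged estimate: Landauer bound at T \approx 310 K
\[
  E_{\mathrm{Landauer}} \;=\; k_{B} T \ln 2
  \;\approx\; (1.38\times 10^{-23}\,\mathrm{J\,K^{-1}})\,(310\,\mathrm{K})\,(0.693)
  \;\approx\; 3\times 10^{-21}\,\mathrm{J}\ \text{per bit erased.}
\]
```

So an amino-acid operation that is "about an order of magnitude worse than the Landauer bound" would correspond to free-energy expenditure on the order of 10^-20 J per operation.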
Methods, systems, and computer program products for network firewall policy optimization
Fulp, Errin W [Winston-Salem, NC; Tarsa, Stephen J [Duxbury, MA
2011-10-18
Methods, systems, and computer program products for firewall policy optimization are disclosed. According to one method, a firewall policy including an ordered list of firewall rules is defined. For each rule, a probability indicating a likelihood of receiving a packet matching the rule is determined. The rules are sorted in order of non-increasing probability in a manner that preserves the firewall policy.
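A hedged sketch of the reordering idea described above: promote rules with a higher match probability toward the front of the list, but only swap adjacent rules whose match sets cannot intersect, so no packet can change which rule it matches first. The rule representation and overlap test below are simplified assumptions for illustration, not the patented method.

```python
# Hedged sketch: probability-driven firewall rule reordering that preserves the
# policy by never swapping two rules that could match the same packet.
from dataclasses import dataclass
from ipaddress import ip_network

@dataclass
class Rule:
    src: str            # e.g. "10.0.0.0/8"
    ports: tuple        # (low, high) destination port range
    action: str         # "accept" or "deny"
    prob: float         # estimated probability that a packet matches this rule

def overlaps(a: Rule, b: Rule) -> bool:
    """True if some packet could match both rules (their relative order matters)."""
    nets_overlap = ip_network(a.src).overlaps(ip_network(b.src))
    ports_overlap = a.ports[0] <= b.ports[1] and b.ports[0] <= a.ports[1]
    return nets_overlap and ports_overlap

def reorder(rules):
    """Bubble-sort-style pass: move more frequently hit rules past less frequently
    hit ones whenever the two rules cannot match the same packet."""
    rules = list(rules)
    changed = True
    while changed:
        changed = False
        for i in range(len(rules) - 1):
            a, b = rules[i], rules[i + 1]
            if b.prob > a.prob and not overlaps(a, b):
                rules[i], rules[i + 1] = b, a
                changed = True
    return rules

policy = [
    Rule("10.0.0.0/8",     (1, 1024),  "deny",   0.05),
    Rule("192.168.0.0/16", (80, 80),   "accept", 0.60),
    Rule("0.0.0.0/0",      (443, 443), "accept", 0.35),
]
for r in reorder(policy):
    print(r)
```

Because only the relative order of rules that can match the same packet affects first-match semantics, restricting swaps to non-overlapping pairs keeps the reordered list equivalent to the original policy while lowering the expected number of rule comparisons per packet.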
Nozzles for Focusing Aerosol Particles
2009-10-01
Fabrication of the nozzle with the desired shape was accomplished using EDM technology. First, a copper tungsten electrode was turned on a CNC lathe. The ... small (0.9-mm diameter). The external portions of the nozzles were machined in a more conventional manner using computer numerical control (CNC ... lathes and milling machines running programs written by computer-aided machining (CAM) software. The close tolerance of concentricity of the two
Fast function-on-scalar regression with penalized basis expansions.
Reiss, Philip T; Huang, Lei; Mennes, Maarten
2010-01-01
Regression models for functional responses and scalar predictors are often fitted by means of basis functions, with quadratic roughness penalties applied to avoid overfitting. The fitting approach described by Ramsay and Silverman in the 1990s amounts to a penalized ordinary least squares (P-OLS) estimator of the coefficient functions. We recast this estimator as a generalized ridge regression estimator, and present a penalized generalized least squares (P-GLS) alternative. We describe algorithms by which both estimators can be implemented, with automatic selection of optimal smoothing parameters, in a more computationally efficient manner than has heretofore been available. We discuss pointwise confidence intervals for the coefficient functions, simultaneous inference by permutation tests, and model selection, including a novel notion of pointwise model selection. P-OLS and P-GLS are compared in a simulation study. Our methods are illustrated with an analysis of age effects in a functional magnetic resonance imaging data set, as well as a reanalysis of a now-classic Canadian weather data set. An R package implementing the methods is publicly available.
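A hedged sketch of the penalized basis-expansion idea for function-on-scalar regression: each coefficient function is expanded in a basis and estimated by penalized least squares with a second-difference roughness penalty, which is the P-OLS/generalized-ridge form referred to above. The basis, penalty, data, and fixed smoothing parameter below are illustrative assumptions (the authors' implementation also covers P-GLS and automatic smoothing-parameter selection, and is distributed as an R package; Python is used here only for consistency with the other sketches in this document).

```python
# Hedged sketch of P-OLS for function-on-scalar regression:
# Y (n x T) responses on a grid, X (n x p) scalar predictors, coefficient
# functions beta_j(t) = Phi(t) b_j with a roughness penalty on the b_j.
import numpy as np

rng = np.random.default_rng(0)
n, p, T, K, lam = 60, 2, 101, 12, 1.0

t = np.linspace(0, 1, T)
centers = np.linspace(0, 1, K)
Phi = np.exp(-0.5 * ((t[:, None] - centers[None, :]) / 0.08) ** 2)   # T x K bump basis

# Toy data: intercept + one scalar covariate, smooth true coefficient functions
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.vstack([np.sin(2 * np.pi * t), np.cos(np.pi * t)])    # p x T
Y = X @ beta_true + 0.3 * rng.normal(size=(n, T))

# Second-difference penalty matrix P = D'D on the K basis coefficients
D = np.diff(np.eye(K), n=2, axis=0)
P = D.T @ D

# Closed-form P-OLS: vec(B) = (Phi'Phi (x) X'X + lam * P (x) I)^-1 vec(X'Y Phi)
A = np.kron(Phi.T @ Phi, X.T @ X) + lam * np.kron(P, np.eye(p))
rhs = (X.T @ Y @ Phi).flatten(order="F")
B = np.linalg.solve(A, rhs).reshape(p, K, order="F")

beta_hat = B @ Phi.T                                                 # p x T estimated functions
print("max abs error of estimated coefficient functions:",
      np.round(np.max(np.abs(beta_hat - beta_true)), 3))
```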
Fisch, Clifford B.; Fisch, Martin L.
1979-01-01
The Stanley S. Lamm Institute for Developmental Disabilities of The Long Island College Hospital, in conjunction with Micro-Med Systems has developed a low cost micro-computer based information system (ADDOP TRS) which monitors quality of care in outpatient settings rendering services to the developmentally disabled population. The process of conversion from paper record keeping systems to direct key-to-disk data capture at the point of service delivery is described. Data elements of the information system including identifying patient information, coded and English-grammar entry procedures for tracking elements of service as well as their delivery status are described. Project evaluation criteria are defined including improved quality of care, improved productivity for clerical and professional staff and enhanced decision making capability. These criteria are achieved in a cost effective manner as a function of more efficient information flow. Administrative applications including staff/budgeting procedures, submissions for third party reimbursement and case reporting to utilization review committees are considered.
Electronic spectra from TDDFT and machine learning in chemical space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramakrishnan, Raghunathan; Hartmann, Mia; Tapavicza, Enrico
Due to its favorable computational efficiency, time-dependent (TD) density functional theory (DFT) enables the prediction of electronic spectra in a high-throughput manner across chemical space. Its predictions, however, can be quite inaccurate. We resolve this issue with machine learning models trained on deviations of reference second-order approximate coupled-cluster (CC2) singles and doubles spectra from TDDFT counterparts, or even from DFT gap. We applied this approach to low-lying singlet-singlet vertical electronic spectra of over 20 000 synthetically feasible small organic molecules with up to eight CONF atoms. The prediction errors decay monotonously as a function of training set size. For a training set of 10 000 molecules, CC2 excitation energies can be reproduced to within +/- 0.1 eV for the remaining molecules. Analysis of our spectral database via chromophore counting suggests that even higher accuracies can be achieved. Based on the evidence collected, we discuss open challenges associated with data-driven modeling of high-lying spectra and transition intensities.
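The Delta-learning idea can be illustrated with a hedged sketch: a kernel ridge model is trained on the deviation between reference and approximate excitation energies and then used to correct new cheap predictions. The descriptors, kernel, and data below are synthetic placeholders, not the paper's setup.

```python
# Hedged sketch of the Delta-learning idea in the abstract: learn the deviation
# between reference (CC2) and approximate (TDDFT) excitation energies, then correct
# new TDDFT predictions. Data are synthetic placeholders; the descriptor and kernel
# choices are assumptions, not the authors' setup.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_mol, n_feat = 2000, 30
X = rng.normal(size=(n_mol, n_feat))            # stand-in molecular descriptors
E_tddft = 4.0 + X[:, 0] + 0.5 * X[:, 1]         # synthetic "cheap" predictions (eV)
E_cc2 = E_tddft + 0.3 * np.tanh(X[:, 2]) + 0.05 * rng.normal(size=n_mol)

X_tr, X_te, d_tr, d_te, cheap_tr, cheap_te, ref_tr, ref_te = train_test_split(
    X, E_cc2 - E_tddft, E_tddft, E_cc2, test_size=0.5, random_state=0)

model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.05)
model.fit(X_tr, d_tr)                           # learn the TDDFT -> CC2 correction

E_corrected = cheap_te + model.predict(X_te)
print("MAE of raw TDDFT   :", np.abs(cheap_te - ref_te).mean())
print("MAE after Delta-ML :", np.abs(E_corrected - ref_te).mean())
```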
Choosing Between MRI and CT Imaging in the Adult with Congenital Heart Disease.
Bonnichsen, Crystal; Ammash, Naser
2016-05-01
Improvements in the outcomes of surgical and catheter-based interventions and medical therapy have led to a growing population of adult patients with congenital heart disease. Adult patients with previously undiagnosed congenital heart disease or those previously palliated or repaired may have challenging echocardiographic examinations. Understanding the distinct anatomic and hemodynamic features of the congenital anomaly and quantifying ventricular function and valvular dysfunction play an important role in the management of these patients. Rapid advances in imaging technology with magnetic resonance imaging (MRI) and computed tomography angiography (CTA) allow for improved visualization of complex cardiac anatomy in the evaluation of this unique patient population. Although echocardiography remains the most widely used imaging tool to evaluate congenital heart disease, alternative and, at times, complementary imaging modalities should be considered. When caring for adults with congenital heart disease, it is important to choose the proper imaging study that can answer the clinical question with the highest quality images, lowest risk to the patient, and in a cost-efficient manner.
Process-based network decomposition reveals backbone motif structure
Wang, Guanyu; Du, Chenghang; Chen, Hao; Simha, Rahul; Rong, Yongwu; Xiao, Yi; Zeng, Chen
2010-01-01
A central challenge in systems biology today is to understand the network of interactions among biomolecules and, especially, the organizing principles underlying such networks. Recent analysis of known networks has identified small motifs that occur ubiquitously, suggesting that larger networks might be constructed in the manner of electronic circuits by assembling groups of these smaller modules. Using a unique process-based approach to analyzing such networks, we show for two cell-cycle networks that each of these networks contains a giant backbone motif spanning all the network nodes that provides the main functional response. The backbone is in fact the smallest network capable of providing the desired functionality. Furthermore, the remaining edges in the network form smaller motifs whose role is to confer stability properties rather than provide function. The process-based approach used in the above analysis has additional benefits: It is scalable, analytic (resulting in a single analyzable expression that describes the behavior), and computationally efficient (all possible minimal networks for a biological process can be identified and enumerated). PMID:20498084
NASA Technical Reports Server (NTRS)
Rickman, D.; Butler, K. A.; Laymon, C. A.
1994-01-01
The purpose of this document is to introduce Geographical Information System (GIS) terminology and summarize interviews conducted with scientists in the Earth Science and Applications Division (ESAD). There is a growing need in ESAD for GIS technology. With many different data sources available to the scientists comes the need to be able to process and view these data in an efficient manner. Since most of these data are stored in vastly different formats, specialized software and hardware are needed. Several ESAD scientists have been using a GIS, specifically the Man-computer Interactive Data Access System (MCIDAS). MCIDAS can solve many of the research problems that arise, but there are areas of research that need more powerful tools; one such example is the multispectral image analysis which is described in this document. Given the strong need for GIS in ESAD, we recommend that a requirements analysis and implementation plan be developed using this document as a basis for further investigation.
Efficient solution of parabolic equations by Krylov approximation methods
NASA Technical Reports Server (NTRS)
Gallopoulos, E.; Saad, Y.
1990-01-01
Numerical techniques for solving parabolic equations by the method of lines are addressed. The main motivation for the proposed approach is the possibility of exploiting a high degree of parallelism in a simple manner. The basic idea of the method is to approximate the action of the evolution operator on a given state vector by means of a projection process onto a Krylov subspace. Thus, the resulting approximation consists of applying an evolution operator of a very small dimension to a known vector which is, in turn, computed accurately by exploiting well-known rational approximations to the exponential. Because the rational approximation is only applied to a small matrix, the only operations required with the original large matrix are matrix-by-vector multiplications, and as a result the algorithm can easily be parallelized and vectorized. Some relevant approximation and stability issues are discussed. We present some numerical experiments with the method and compare its performance with a few explicit and implicit algorithms.
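The core projection step can be sketched in a few lines: an Arnoldi process builds a small Krylov basis, and only the small projected matrix is exponentiated. The test problem and subspace size below are assumptions, not the paper's experiments.

```python
# Sketch of the Krylov idea in the abstract: approximate expm(tau*A) @ v by
# projecting onto a small Krylov subspace (Arnoldi) and exponentiating the small
# Hessenberg matrix. A enters only through matrix-vector products.
import numpy as np
from scipy.linalg import expm
from scipy.sparse import diags

def krylov_expm_v(A, v, tau, m=30):
    """Return an approximation to expm(tau*A) @ v using an m-dimensional Krylov subspace."""
    n = v.size
    beta = np.linalg.norm(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / beta
    for j in range(m):                       # Arnoldi iteration
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:              # happy breakdown
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m)
    e1[0] = 1.0
    return beta * V[:, :m] @ (expm(tau * H[:m, :m]) @ e1)

# 1-D heat equation semi-discretization (method of lines) as a test problem
n = 400
A = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) * (n + 1) ** 2
x = np.linspace(0, 1, n + 2)[1:-1]
v = np.exp(-100 * (x - 0.3) ** 2)            # initial temperature profile
u = krylov_expm_v(A, v, tau=1e-3)
print(u[:5])
```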
Optimal synchronization of Kuramoto oscillators: A dimensional reduction approach
NASA Astrophysics Data System (ADS)
Pinto, Rafael S.; Saa, Alberto
2015-12-01
A recently proposed dimensional reduction approach for studying synchronization in the Kuramoto model is employed to build optimal network topologies to favor or to suppress synchronization. The approach is based on the introduction of a collective coordinate for the time evolution of the phase locked oscillators, in the spirit of the Ott-Antonsen ansatz. We show that the optimal synchronization of a Kuramoto network demands the maximization of the quadratic function ωᵀLω, where ω stands for the vector of the natural frequencies of the oscillators and L for the network Laplacian matrix. Many recently obtained numerical results can be reobtained analytically and in a simpler way from our maximization condition. A computationally efficient hill climb rewiring algorithm is proposed to generate networks with optimal synchronization properties. Our approach can be easily adapted to the case of the Kuramoto models with both attractive and repulsive interactions, and again many recent numerical results can be rederived in a simpler and clearer analytical manner.
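The maximization condition and the hill-climb rewiring can be illustrated with a short sketch; the acceptance rule, step count, and starting graph are assumptions, not the authors' algorithm settings.

```python
# Illustrative sketch of the quoted maximization condition: evaluate the quadratic
# form w^T L w for the network Laplacian L and accept random edge rewirings only
# when they increase it (a simple hill climb; parameters are assumptions).
import numpy as np
import networkx as nx

rng = np.random.default_rng(2)
N, k = 50, 4
w = rng.normal(size=N)                           # natural frequencies
G0 = nx.random_regular_graph(k, N, seed=2)

def quad_form(G, w):
    L = nx.laplacian_matrix(G).toarray()
    return w @ L @ w

def hill_climb(G, w, steps=2000):
    G = G.copy()
    best = quad_form(G, w)
    nodes = list(G.nodes())
    for _ in range(steps):
        edges = list(G.edges())
        u, v = edges[rng.integers(len(edges))]   # pick an edge to remove
        a, b = rng.choice(nodes, size=2, replace=False)
        if G.has_edge(a, b):
            continue
        G.remove_edge(u, v)
        G.add_edge(a, b)
        new = quad_form(G, w)
        if new > best:                           # keep the rewiring only if w^T L w grows
            best = new
        else:                                    # otherwise undo it
            G.remove_edge(a, b)
            G.add_edge(u, v)
    return G, best

G_opt, score = hill_climb(G0, w)
print("w^T L w before:", quad_form(G0, w), "after:", score)
```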
Smart Technology in Lung Disease Clinical Trials.
Geller, Nancy L; Kim, Dong-Yun; Tian, Xin
2016-01-01
This article describes the use of smart technology by investigators and patients to facilitate lung disease clinical trials and make them less costly and more efficient. By "smart technology" we include various electronic media, such as computer databases, the Internet, and mobile devices. We first describe the use of electronic health records for identifying potential subjects and then discuss electronic informed consent. We give several examples of using the Internet and mobile technology in clinical trials. Interventions have been delivered via the World Wide Web or via mobile devices, and both have been used to collect outcome data. We discuss examples of new electronic devices that recently have been introduced to collect health data. While use of smart technology in clinical trials is an exciting development, comparison with similar interventions applied in a conventional manner is still in its infancy. We discuss advantages and disadvantages of using this omnipresent, powerful tool in clinical trials, as well as directions for future research. Published by Elsevier Inc.
Local excitation-inhibition ratio for synfire chain propagation in feed-forward neuronal networks
NASA Astrophysics Data System (ADS)
Guo, Xinmeng; Yu, Haitao; Wang, Jiang; Liu, Jing; Cao, Yibin; Deng, Bin
2017-09-01
A leading hypothesis holds that spiking activity propagates along neuronal sub-populations which are connected in a feed-forward manner, and the propagation efficiency would be affected by the dynamics of sub-populations. In this paper, how the interaction between local excitation and inhibition affects synfire chain propagation in a feed-forward network (FFN) is investigated. The simulation results show that there is an appropriate excitation-inhibition (EI) ratio maximizing the performance of synfire chain propagation. The optimal EI ratio can significantly enhance the selectivity of FFN to synchronous signals, which thereby increases the stability to background noise. Moreover, the effect of network topology on synfire chain propagation is also investigated. It is found that synfire chain propagation can be maximized by an optimal interlayer linking probability. We also find that external noise is detrimental to synchrony propagation by inducing spiking jitter. The results presented in this paper may provide insights into the effects of network dynamics on neuronal computations.
Matsubara, Takashi
2017-01-01
Precise spike timing is considered to play a fundamental role in communications and signal processing in biological neural networks. Understanding the mechanism of spike timing adjustment would deepen our understanding of biological systems and enable advanced engineering applications such as efficient computational architectures. However, the biological mechanisms that adjust and maintain spike timing remain unclear. Existing algorithms adopt a supervised approach, which adjusts the axonal conduction delay and synaptic efficacy until the spike timings approximate the desired timings. This study proposes a spike timing-dependent learning model that adjusts the axonal conduction delay and synaptic efficacy in both unsupervised and supervised manners. The proposed learning algorithm approximates the Expectation-Maximization algorithm, and classifies the input data encoded into spatio-temporal spike patterns. Even in the supervised classification, the algorithm requires no external spikes indicating the desired spike timings unlike existing algorithms. Furthermore, because the algorithm is consistent with biological models and hypotheses found in existing biological studies, it could capture the mechanism underlying biological delay learning. PMID:29209191
NETWORK ASSISTED ANALYSIS TO REVEAL THE GENETIC BASIS OF AUTISM
Liu, Li; Lei, Jing; Roeder, Kathryn
2016-01-01
While studies show that autism is highly heritable, the nature of the genetic basis of this disorder remains elusive. Based on the idea that highly correlated genes are functionally interrelated and more likely to affect risk, we develop a novel statistical tool to find more potential autism risk genes by combining the genetic association scores with gene co-expression in specific brain regions and periods of development. The gene dependence network is estimated using a novel partial neighborhood selection (PNS) algorithm, where node specific properties are incorporated into network estimation for improved statistical and computational efficiency. Then we adopt a hidden Markov random field (HMRF) model to combine the estimated network and the genetic association scores in a systematic manner. The proposed modeling framework can be naturally extended to incorporate additional structural information concerning the dependence between genes. Using currently available genetic association data from whole exome sequencing studies and brain gene expression levels, the proposed algorithm successfully identified 333 genes that plausibly affect autism risk. PMID:27134692
NASA Technical Reports Server (NTRS)
1979-01-01
The current program had the objective to modify a discrete vortex wake method to efficiently compute the aerodynamic forces and moments on high fineness ratio bodies (f approximately 10.0). The approach is to increase computational efficiency by structuring the program to take advantage of new computer vector software and by developing new algorithms when vector software can not efficiently be used. An efficient program was written and substantial savings achieved. Several test cases were run for fineness ratios up to f = 16.0 and angles of attack up to 50 degrees.
Borresen, Jon; Lynch, Stephen
2012-01-01
In the 1940s, the first generation of modern computers used vacuum tube oscillators as their principal components; however, with the development of the transistor, such oscillator based computers quickly became obsolete. As the demand for faster and lower power computers continues, transistors are themselves approaching their theoretical limit and emerging technologies must eventually supersede them. With the development of optical oscillators and Josephson junction technology, we are again presented with the possibility of using oscillators as the basic components of computers, and it is possible that the next generation of computers will be composed almost entirely of oscillatory devices. Here, we demonstrate how coupled threshold oscillators may be used to perform binary logic in a manner entirely consistent with modern computer architectures. We describe a variety of computational circuitry and demonstrate working oscillator models of both computation and memory.
NASA Astrophysics Data System (ADS)
Anantharaj, Valentine; Norman, Matthew; Evans, Katherine; Taylor, Mark; Worley, Patrick; Hack, James; Mayer, Benjamin
2014-05-01
During 2013, high-resolution climate model simulations accounted for over 100 million "core hours" using Titan at the Oak Ridge Leadership Computing Facility (OLCF). The suite of climate modeling experiments, primarily using the Community Earth System Model (CESM) at nearly 0.25 degree horizontal resolution, generated over a petabyte of data and nearly 100,000 files, ranging in sizes from 20 MB to over 100 GB. Effective utilization of leadership class resources requires careful planning and preparation. The application software, such as CESM, needs to be ported, optimized and benchmarked for the target platform in order to meet the computational readiness requirements. The model configuration needs to be "tuned and balanced" for the experiments. This can be a complicated and resource intensive process, especially for high-resolution configurations using complex physics. The volume of I/O also increases with resolution, and new strategies may be required to manage I/O, especially for large checkpoint and restart files that may require more frequent output for resiliency. It is also essential to monitor the application performance during the course of the simulation exercises. Finally, the large volume of data needs to be analyzed to derive the scientific results, and appropriate data and information delivered to the stakeholders. Titan is currently the largest supercomputer available for open science. The computational resources, in terms of "titan core hours", are allocated primarily via the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) and ASCR Leadership Computing Challenge (ALCC) programs, both sponsored by the U.S. Department of Energy (DOE) Office of Science. Titan is a Cray XK7 system, capable of a theoretical peak performance of over 27 PFlop/s, and consists of 18,688 compute nodes, with a NVIDIA Kepler K20 GPU and a 16-core AMD Opteron CPU in every node, for a total of 299,008 Opteron cores and 18,688 GPUs offering a cumulative 560,640 equivalent cores. Scientific applications, such as CESM, are also required to demonstrate a "computational readiness capability" to efficiently scale across and utilize 20% of the entire system. The 0.25 deg configuration of the spectral element dynamical core of the Community Atmosphere Model (CAM-SE), the atmospheric component of CESM, has been demonstrated to scale efficiently across more than 5,000 nodes (80,000 CPU cores) on Titan. The tracer transport routines of CAM-SE have also been ported to take advantage of the hybrid many-core architecture of Titan using GPUs [see EGU2014-4233], yielding over 2X speedup when transporting over 100 tracers. The high throughput I/O in CESM, based on the Parallel IO Library (PIO), is being further augmented to support even higher resolutions and enhance resiliency. The application performance of the individual runs is archived in a database and routinely analyzed to identify and rectify performance degradation during the course of the experiments. The various resources available at the OLCF now support a scientific workflow to facilitate high-resolution climate modelling. A high-speed center-wide parallel file system, called ATLAS, capable of 1 TB/s, is available on Titan as well as on the clusters used for analysis (Rhea) and visualization (Lens/EVEREST). Long-term archive is facilitated by the HPSS storage system. The Earth System Grid (ESG), featuring search & discovery, is also used to deliver data.
The end-to-end workflow allows OLCF users to efficiently share data and publish results in a timely manner.
The Office of Budget and Finance (OBF) advises the NCI Office of the Director and senior NCI staff on the effective management of financial and other resources to ensure that NCI operates in an efficient and fiscally responsible manner.
Semiannual Report: Oct 1, 2009 - Mar 31, 2010
Semiannual Report #EPA-350-R-10-004, May, 2010. EPA continues to face challenges in using its funds and accomplishing its mission in an efficient and effective manner, particularly concerning Recovery Act projects.
Creating a foundation for a synergistic approach to program management
NASA Technical Reports Server (NTRS)
Knoll, Karyn T.
1992-01-01
In order to accelerate the movement of humans into space within reasonable budgetary constraints, NASA must develop an organizational structure that will allow the agency to efficiently use all the resources it has available for the development of any program the nation decides to undertake. This work considers the entire set of tasks involved in the successful development of any program. Areas that hold the greatest promise of accelerating programmatic development and/or increasing the efficiency of the use of available resources by being dealt with in a centralized manner rather than being handled by each program individually are identified. Using this information, an agency organizational structure is developed that will allow NASA to promote interprogram synergisms. In order for NASA to efficiently manage its programs in a manner that will allow programs to benefit from one another and thereby accelerate the movement of humans into space, several steps must be taken. First, NASA must develop an organizational structure that will allow potential interprogram synergisms to be identified and promoted. Key features of the organizational structure are recommended in this paper. Second, NASA must begin to develop the requirements for a program in a manner that will promote overall space program goals rather than achieving only the goals that apply to the program for which the requirements are being developed. Finally, NASA must consider organizing the agency around the functions required to support NASA's goals and objectives rather than around geographic locations.
ERIC Educational Resources Information Center
Vernooy, D. Andrew; Alter, Kevin
2001-01-01
Presents design features of the University of Texas' Applied Computational Engineering and Sciences Building and discusses how institutions can guide the character of their architecture without subverting the architects' responsibility to confront their contemporary culture in a critical manner. (GR)
Transformation of Galilean satellite parameters to J2000
NASA Astrophysics Data System (ADS)
Lieske, J. H.
1998-09-01
The so-called galsat software has the capability of computing Earth-equatorial coordinates of Jupiter's Galilean satellites in an arbitrary reference frame, not just that of B1950. The 50 parameters which define the theory of motion of the Galilean satellites (Lieske 1977, Astron. Astrophys. 56, 333--352) could also be transformed in a manner such that the same galsat computer program can be employed to compute rectangular coordinates with their values being in the J2000 system. One of the input parameters (ε₂₇) is related to the obliquity of the ecliptic and its value is normally zero in the B1950 frame. If that parameter is changed from 0 to -0.0002771, and if other input parameters are changed in a prescribed manner, then the same galsat software can be employed to produce ephemerides on the J2000 system for any of the ephemerides which employ the galsat parameters, such as those of Arlot (1982), Vasundhara (1994) and Lieske. In this paper we present the parameters whose values must be altered in order for the software to produce coordinates directly in the J2000 system.
CCOMP: An efficient algorithm for complex roots computation of determinantal equations
NASA Astrophysics Data System (ADS)
Zouros, Grigorios P.
2018-01-01
In this paper a free Python algorithm, entitled CCOMP (Complex roots COMPutation), is developed for the efficient computation of complex roots of determinantal equations inside a prescribed complex domain. The key to the method presented is the efficient determination of the candidate points inside the domain in whose close neighborhood a complex root may lie. Once these points are detected, the algorithm proceeds to a two-dimensional minimization problem with respect to the minimum modulus eigenvalue of the system matrix. In the core of CCOMP exist three sub-algorithms whose tasks are the efficient estimation of the minimum modulus eigenvalues of the system matrix inside the prescribed domain, the efficient computation of candidate points which guarantee the existence of minima, and finally, the computation of minima via bound constrained minimization algorithms. Theoretical results and heuristics support the development and the performance of the algorithm, which is discussed in detail. CCOMP supports general complex matrices, and its efficiency, applicability and validity are demonstrated on a variety of microwave applications.
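The overall strategy (a grid scan for candidate points followed by bound-constrained local minimization of the minimum-modulus eigenvalue) can be sketched as below; the toy matrix, grid, and thresholds are assumptions, not the CCOMP implementation.

```python
# Rough sketch of the strategy described for CCOMP (not the actual package):
# scan a rectangle of the complex plane for points where the minimum-modulus
# eigenvalue of M(z) is small, then refine each candidate with a bound-constrained
# 2-D minimization over (Re z, Im z). M(z) is an arbitrary stand-in with known roots.
import numpy as np
from scipy.optimize import minimize

def M(z):
    # Toy determinantal system: det M(z) = (z - (1+1j)) * (z - (2-0.5j))
    return np.array([[z - (1 + 1j), 0.0],
                     [0.0,          z - (2 - 0.5j)]])

def min_mod_eig(x):
    """Modulus of the smallest-modulus eigenvalue of M(z), z = x[0] + i*x[1]."""
    z = x[0] + 1j * x[1]
    return np.min(np.abs(np.linalg.eigvals(M(z))))

objective = lambda x: min_mod_eig(x) ** 2      # smooth near a root, easier to minimize

# 1) coarse grid scan of the prescribed domain
re = np.linspace(0, 3, 61)
im = np.linspace(-2, 2, 61)
vals = np.array([[min_mod_eig((a, b)) for a in re] for b in im])
candidates = [(re[j], im[i]) for i, j in zip(*np.where(vals < 0.1))]

# 2) local refinement of each candidate (L-BFGS-B respects the domain bounds)
roots = set()
for x0 in candidates:
    res = minimize(objective, x0, method="L-BFGS-B", bounds=[(0, 3), (-2, 2)])
    if res.fun < 1e-10:
        roots.add((round(res.x[0], 4), round(res.x[1], 4)))
print(sorted(roots))
```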
Banerjee, Amartya S; Lin, Lin; Suryanarayana, Phanish; Yang, Chao; Pask, John E
2018-06-12
We describe a novel iterative strategy for Kohn-Sham density functional theory calculations aimed at large systems (>1,000 electrons), applicable to metals and insulators alike. In lieu of explicit diagonalization of the Kohn-Sham Hamiltonian on every self-consistent field (SCF) iteration, we employ a two-level Chebyshev polynomial filter based complementary subspace strategy to (1) compute a set of vectors that span the occupied subspace of the Hamiltonian; (2) reduce subspace diagonalization to just partially occupied states; and (3) obtain those states in an efficient, scalable manner via an inner Chebyshev filter iteration. By reducing the necessary computation to just partially occupied states and obtaining these through an inner Chebyshev iteration, our approach reduces the cost of large metallic calculations significantly, while eliminating subspace diagonalization for insulating systems altogether. We describe the implementation of the method within the framework of the discontinuous Galerkin (DG) electronic structure method and show that this results in a computational scheme that can effectively tackle bulk and nano systems containing tens of thousands of electrons, with chemical accuracy, within a few minutes or less of wall clock time per SCF iteration on large-scale computing platforms. We anticipate that our method will be instrumental in pushing the envelope of large-scale ab initio molecular dynamics. As a demonstration of this, we simulate a bulk silicon system containing 8,000 atoms at finite temperature, and obtain an average SCF step wall time of 51 s on 34,560 processors; thus allowing us to carry out 1.0 ps of ab initio molecular dynamics in approximately 28 h (of wall time).
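The Chebyshev filtering step at the heart of such schemes can be sketched on a dense stand-in Hamiltonian; the spectral bounds are assumed known here (real codes estimate them), and this is not the DG code described in the abstract.

```python
# Minimal sketch of a Chebyshev filter step: a degree-m Chebyshev polynomial of H,
# scaled so the unwanted part of the spectrum [a, b] is damped, amplifies the
# occupied (low-eigenvalue) subspace spanned by a block of vectors. Filtered
# subspace iteration plus Rayleigh-Ritz then recovers the lowest states.
import numpy as np

def chebyshev_filter(H, X, m, a, b):
    """Apply the degree-m Chebyshev polynomial of H (scaled to damp [a, b]) to X."""
    e, c = (b - a) / 2.0, (b + a) / 2.0
    Y = (H @ X - c * X) / e
    for _ in range(2, m + 1):
        Y, X = 2.0 * (H @ Y - c * Y) / e - X, Y
    return Y

rng = np.random.default_rng(3)
n, n_occ = 300, 10
A = rng.normal(size=(n, n))
H = (A + A.T) / 2.0                         # stand-in symmetric "Hamiltonian"
evals = np.linalg.eigvalsh(H)
a, b = evals[n_occ], evals[-1]              # damp everything above the occupied states

X = rng.normal(size=(n, n_occ))
for _ in range(10):                         # filtered subspace iteration
    X, _ = np.linalg.qr(chebyshev_filter(H, X, m=20, a=a, b=b))
theta = np.sort(np.linalg.eigvalsh(X.T @ H @ X))   # Rayleigh-Ritz values
print("max error vs exact lowest eigenvalues:",
      np.abs(theta - evals[:n_occ]).max())
```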
Verdière, Kevin J.; Roy, Raphaëlle N.; Dehais, Frédéric
2018-01-01
Monitoring pilot's mental states is a relevant approach to mitigate human error and enhance human machine interaction. A promising brain imaging technique to perform such a continuous measure of human mental state under ecological settings is Functional Near-InfraRed Spectroscopy (fNIRS). However, to our knowledge no study has yet assessed the potential of fNIRS connectivity metrics as far as passive Brain Computer Interfaces (BCIs) are concerned. Therefore, we designed an experimental scenario in a realistic simulator in which 12 pilots had to perform landings under two contrasted levels of engagement (manual vs. automated). The collected data were used to benchmark the performance of classical oxygenation features (i.e., Average, Peak, Variance, Skewness, Kurtosis, Area Under the Curve, and Slope) and connectivity features (i.e., Covariance, Pearson's, and Spearman's Correlation, Spectral Coherence, and Wavelet Coherence) to discriminate these two landing conditions. Classification performance was obtained by using a shrinkage Linear Discriminant Analysis (sLDA) and a stratified cross validation using each feature alone or by combining them. Our findings disclosed that the connectivity features performed significantly better than the classical concentration metrics with a higher accuracy for the wavelet coherence (average: 65.3/59.9 %, min: 45.3/45.0, max: 80.5/74.7 computed for HbO/HbR signals respectively). A maximum classification performance was obtained by combining the area under the curve with the wavelet coherence (average: 66.9/61.6 %, min: 57.3/44.8, max: 80.0/81.3 computed for HbO/HbR signals respectively). In general, all connectivity measures allowed efficient classification when computed over HbO signals. Those promising results provide methodological cues for further implementation of fNIRS-based passive BCIs. PMID:29422841
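The classification pipeline (a connectivity feature fed to a shrinkage LDA under stratified cross-validation) can be sketched with synthetic signals; the channel count, window length, and frequency band below are assumptions, and spectral rather than wavelet coherence is used for brevity.

```python
# Sketch of the classification pipeline outlined in the abstract: one connectivity
# feature (band-averaged spectral coherence between two channels per trial) fed to
# a shrinkage LDA, evaluated with stratified cross-validation. Signals are synthetic.
import numpy as np
from scipy.signal import coherence
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(4)
fs, n_trials, n_samples = 10.0, 60, 600        # 1-minute trials at 10 Hz (toy fNIRS-like)

def make_trial(coupled):
    shared = rng.normal(size=n_samples)
    ch1 = shared + 0.5 * rng.normal(size=n_samples)
    ch2 = (shared if coupled else rng.normal(size=n_samples)) + 0.5 * rng.normal(size=n_samples)
    return ch1, ch2

X, y = [], []
for label in (0, 1):                           # e.g. manual vs. automated condition
    for _ in range(n_trials // 2):
        ch1, ch2 = make_trial(coupled=(label == 1))
        f, Cxy = coherence(ch1, ch2, fs=fs, nperseg=128)
        X.append([Cxy[(f > 0.05) & (f < 1.0)].mean()])   # band-averaged coherence
        y.append(label)
X, y = np.array(X), np.array(y)

clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
scores = cross_val_score(clf, X, y,
                         cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0))
print("cross-validated accuracy:", scores.mean())
```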
Switching and Rectification in Carbon-Nanotube Junctions
NASA Technical Reports Server (NTRS)
Srivastava, Deepak; Andriotis, Antonis N.; Menon, Madhu; Chernozatonskii, Leonid
2003-01-01
Multi-terminal carbon-nanotube junctions are under investigation as candidate components of nanoscale electronic devices and circuits. Three-terminal "Y" junctions of carbon nanotubes (see Figure 1) have proven to be especially interesting because (1) it is now possible to synthesize them in high yield in a controlled manner and (2) results of preliminary experimental and theoretical studies suggest that such junctions could exhibit switching and rectification properties. Following the preliminary studies, current-versus-voltage characteristics of a number of different "Y" junctions of single-wall carbon nanotubes connected to metal wires were computed. Both semiconducting and metallic nanotubes of various chiralities were considered. Most of the junctions considered were symmetric. These computations involved modeling of the quantum electrical conductivity of the carbon nanotubes and junctions, taking account of such complicating factors as the topological defects (pentagons, heptagons, and octagons) present in the hexagonal molecular structures at the junctions, and the effects of the nanotube/wire interfaces. A major component of the computational approach was the use of an efficient Green's function embedding scheme. The results of these computations showed that symmetric junctions could be expected to support both rectification and switching. The results also showed that rectification and switching properties of a junction could be expected to depend strongly on its symmetry and, to a lesser degree, on the chirality of the nanotubes. In particular, it was found that a zigzag nanotube branching at a symmetric "Y" junction could exhibit either perfect rectification or partial rectification (asymmetric current-versus-voltage characteristic, as in the example of Figure 2). It was also found that an asymmetric "Y" junction would not exhibit rectification.
CAESY - COMPUTER AIDED ENGINEERING SYSTEM
NASA Technical Reports Server (NTRS)
Wette, M. R.
1994-01-01
Many developers of software and algorithms for control system design have recognized that current tools have limits in both flexibility and efficiency. Many forces drive the development of new tools including the desire to make complex system modeling design and analysis easier and the need for quicker turnaround time in analysis and design. Other considerations include the desire to make use of advanced computer architectures to help in control system design, adopt new methodologies in control, and integrate design processes (e.g., structure, control, optics). CAESY was developed to provide a means to evaluate methods for dealing with user needs in computer-aided control system design. It is an interpreter for performing engineering calculations and incorporates features of both Ada and MATLAB. It is designed to be reasonably flexible and powerful. CAESY includes internally defined functions and procedures, as well as user defined ones. Support for matrix calculations is provided in the same manner as MATLAB. However, the development of CAESY is a research project, and while it provides some features which are not found in commercially sold tools, it does not exhibit the robustness that many commercially developed tools provide. CAESY is written in C-language for use on Sun4 series computers running SunOS 4.1.1 and later. The program is designed to optionally use the LAPACK math library. The LAPACK math routines are available through anonymous ftp from research.att.com. CAESY requires 4Mb of RAM for execution. The standard distribution medium is a .25 inch streaming magnetic tape cartridge (QIC-24) in UNIX tar format. CAESY was developed in 1993 and is a copyrighted work with all copyright vested in NASA.
Robust, Optimal Water Infrastructure Planning Under Deep Uncertainty Using Metamodels
NASA Astrophysics Data System (ADS)
Maier, H. R.; Beh, E. H. Y.; Zheng, F.; Dandy, G. C.; Kapelan, Z.
2015-12-01
Optimal long-term planning plays an important role in many water infrastructure problems. However, this task is complicated by deep uncertainty about future conditions, such as the impact of population dynamics and climate change. One way to deal with this uncertainty is by means of robustness, which aims to ensure that water infrastructure performs adequately under a range of plausible future conditions. However, as robustness calculations require computationally expensive system models to be run for a large number of scenarios, it is generally computationally intractable to include robustness as an objective in the development of optimal long-term infrastructure plans. In order to overcome this shortcoming, an approach is developed that uses metamodels instead of computationally expensive simulation models in robustness calculations. The approach is demonstrated for the optimal sequencing of water supply augmentation options for the southern portion of the water supply for Adelaide, South Australia. A 100-year planning horizon is subdivided into ten equal decision stages for the purpose of sequencing various water supply augmentation options, including desalination, stormwater harvesting and household rainwater tanks. The objectives include the minimization of average present value of supply augmentation costs, the minimization of average present value of greenhouse gas emissions and the maximization of supply robustness. The uncertain variables are rainfall, per capita water consumption and population. Decision variables are the implementation stages of the different water supply augmentation options. Artificial neural networks are used as metamodels to enable all objectives to be calculated in a computationally efficient manner at each of the decision stages. The results illustrate the importance of identifying optimal staged solutions to ensure robustness and sustainability of water supply into an uncertain long-term future.
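The metamodelling step can be illustrated conceptually: a small neural network is trained on a limited number of expensive simulation runs and then used to evaluate robustness across many sampled futures. The stand-in simulator, variable ranges, and threshold below are invented for illustration, not the paper's model.

```python
# Conceptual sketch of the metamodelling step: train a small neural network on a
# limited number of expensive simulation runs, then use it to evaluate robustness
# (fraction of sampled futures with shortfall below a threshold) cheaply.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)

def expensive_simulator(scenario):
    """Stand-in for a costly water-supply model: scenario = (rainfall factor,
    per-capita demand, population in millions) -> supply shortfall (GL/yr)."""
    rain, demand, pop = scenario
    return max(0.0, demand * pop * 0.12 - 150.0 * rain)

lo, hi = [0.5, 150.0, 1.0], [1.5, 350.0, 2.5]

# 1) a limited design of expensive runs used to train the metamodel
train_scen = rng.uniform(lo, hi, size=(300, 3))
train_out = np.array([expensive_simulator(s) for s in train_scen])
meta = make_pipeline(StandardScaler(),
                     MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0))
meta.fit(train_scen, train_out)

# 2) robustness: fraction of many sampled futures whose predicted shortfall is small
futures = rng.uniform(lo, hi, size=(20000, 3))
robustness = np.mean(meta.predict(futures) < 10.0)
print("estimated robustness:", robustness)
```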
Memristive Mixed-Signal Neuromorphic Systems: Energy-Efficient Learning at the Circuit-Level
Chakma, Gangotree; Adnan, Md Musabbir; Wyer, Austin R.; ...
2017-11-23
Neuromorphic computing is a non-von Neumann computer architecture for the post Moore’s law era of computing. Since a main focus of the post Moore’s law era is energy-efficient computing with fewer resources and less area, neuromorphic computing contributes effectively in this research. In this paper, we present a memristive neuromorphic system for improved power and area efficiency. Our particular mixed-signal approach implements neural networks with spiking events in a synchronous way. Moreover, the use of nano-scale memristive devices saves both area and power in the system. We also provide device-level considerations that make the system more energy-efficient. The proposed system additionally includes synchronous digital long term plasticity, an online learning methodology that helps the system train the neural networks during the operation phase and improves the efficiency in learning considering the power consumption and area overhead.
Energy Efficiency Building Code for Commercial Buildings in Sri Lanka
DOE Office of Scientific and Technical Information (OSTI.GOV)
Busch, John; Greenberg, Steve; Rubinstein, Francis
2000-09-30
1.1.1 To encourage energy efficient design or retrofit of commercial buildings so that they may be constructed, operated, and maintained in a manner that reduces the use of energy without constraining the building function, the comfort, health, or the productivity of the occupants and with appropriate regard for economic considerations. 1.1.2 To provide criteria and minimum standards for energy efficiency in the design or retrofit of commercial buildings and provide methods for determining compliance with them. 1.1.3 To encourage energy efficient designs that exceed these criteria and minimum standards.
ERIC Educational Resources Information Center
Moccero, D.
2008-01-01
The Chilean authorities plan to raise budgetary allocations over the medium term for a variety of social programmes, including education, health care and housing. This incremental spending will need to be carried out in a cost-efficient manner to make sure that it yields commensurate improvements in social outcomes. Chile's health indicators show…
Adalsteinsson, David; McMillen, David; Elston, Timothy C
2004-03-08
Intrinsic fluctuations due to the stochastic nature of biochemical reactions can have large effects on the response of biochemical networks. This is particularly true for pathways that involve transcriptional regulation, where generally there are two copies of each gene and the number of messenger RNA (mRNA) molecules can be small. Therefore, there is a need for computational tools for developing and investigating stochastic models of biochemical networks. We have developed the software package Biochemical Network Stochastic Simulator (BioNetS) for efficiently and accurately simulating stochastic models of biochemical networks. BioNetS has a graphical user interface that allows models to be entered in a straightforward manner, and allows the user to specify the type of random variable (discrete or continuous) for each chemical species in the network. The discrete variables are simulated using an efficient implementation of the Gillespie algorithm. For the continuous random variables, BioNetS constructs and numerically solves the appropriate chemical Langevin equations. The software package has been developed to scale efficiently with network size, thereby allowing large systems to be studied. BioNetS runs as a BioSpice agent and can be downloaded from http://www.biospice.org. BioNetS also can be run as a stand alone package. All the required files are accessible from http://x.amath.unc.edu/BioNetS. We have developed BioNetS to be a reliable tool for studying the stochastic dynamics of large biochemical networks. Important features of BioNetS are its ability to handle hybrid models that consist of both continuous and discrete random variables and its ability to model cell growth and division. We have verified the accuracy and efficiency of the numerical methods by considering several test systems.
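The discrete-variable simulation path can be illustrated with Gillespie's direct method on a toy birth-death gene-expression model; the rates and model are placeholders, not BioNetS itself.

```python
# Small illustration of the discrete stochastic simulation approach mentioned in
# the abstract (Gillespie's direct method) for a toy birth-death model: mRNA is
# produced at rate k_tx and degraded at rate k_deg * m.
import numpy as np

def gillespie_birth_death(k_tx=10.0, k_deg=0.5, t_end=50.0, seed=0):
    rng = np.random.default_rng(seed)
    t, m = 0.0, 0
    times, counts = [t], [m]
    while t < t_end:
        a1, a2 = k_tx, k_deg * m          # reaction propensities
        a0 = a1 + a2
        t += rng.exponential(1.0 / a0)    # waiting time to the next reaction
        if rng.uniform() * a0 < a1:       # choose which reaction fires
            m += 1
        else:
            m -= 1
        times.append(t)
        counts.append(m)
    return np.array(times), np.array(counts)

times, counts = gillespie_birth_death()
print("mean mRNA copy number (should approach k_tx/k_deg = 20):",
      counts[len(counts) // 2:].mean())
```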
Novel schemes for measurement-based quantum computation.
Gross, D; Eisert, J
2007-06-01
We establish a framework which allows one to construct novel schemes for measurement-based quantum computation. The technique develops tools from many-body physics, based on finitely correlated or projected entangled pair states, to go beyond the cluster-state based one-way computer. We identify resource states radically different from the cluster state, in that they exhibit nonvanishing correlations, can be prepared using nonmaximally entangling gates, or have very different local entanglement properties. In the computational models, randomness is compensated in a different manner. It is shown that there exist resource states which are locally arbitrarily close to a pure state. We comment on the possibility of tailoring computational models to specific physical systems.
WEDDS: The WITS Encrypted Data Delivery System
NASA Technical Reports Server (NTRS)
Norris, J.; Backes, P.
1999-01-01
WEDDS, the WITS Encrypted Data Delivery System, is a framework for supporting distributed mission operations by automatically transferring sensitive mission data in a secure and efficient manner to and from remote mission participants over the internet.
Code of Federal Regulations, 2010 CFR
2010-07-01
... overall energy efficient and economical manner; (b) Maintain temperatures to maximize customer satisfaction by conforming to local commercial equivalent temperature levels and operating practices; (c) Set...
Creating a provider network: fact, fantasy, and future.
Meeks, J S
1997-09-01
Integrated delivery systems should consider multiple options through which to affiliate with primary care physicians and advanced practice nurses. Caution should be employed to assure that system alignment occurs in an efficient, effective manner.
Computationally-Predicted AOPs and Systems Toxicology
The Adverse Outcome Pathway has emerged as an internationally harmonized mechanism for organizing biological information in a chemical agnostic manner. This construct is valuable for interpreting the results from high-throughput toxicity (HTT) assessment by providing a mechanisti...
Lumped Parameter Model (LPM) for Light-Duty Vehicles
EPA’s Lumped Parameter Model (LPM) is a free, desktop computer application that estimates the effectiveness (CO2 Reduction) of various technology combinations or “packages,” in a manner that accounts for synergies between technologies.
Analysis of Compton continuum measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gold, R.; Olson, I. K.
1970-01-01
Five computer programs: COMPSCAT, FEND, GABCO, DOSE, and COMPLOT, have been developed and used for the analysis and subsequent reduction of measured energy distributions of Compton recoil electrons to continuous gamma spectra. In addition to detailed descriptions of these computer programs, the relationship amongst these codes is stressed. The manner in which these programs function is illustrated by tracing a sample measurement through a complete cycle of the data-reduction process.
Asymptotic Normality of Poly-T Densities with Bayesian Applications.
1987-10-01
be extended to the case of many t-like factors in a straightforward manner. Obviously, the computational complexity will increase rapidly as the number …
Geospatial Representation, Analysis and Computing Using Bandlimited Functions
2010-02-19
navigation of aircraft and missiles require detailed representations of gravity and efficient methods for determining orbits and trajectories. However, many ... efficient on today’s computers. Under this grant, new, computationally efficient, localized representations of gravity have been developed and tested. As a ... step in developing a new approach to estimating gravitational potentials, a multiresolution representation for gravity estimation has been proposed.
Efficient Computational Prototyping of Mixed Technology Microfluidic Components and Systems
2002-08-01
Aluru, Narayan R.; White, Jacob
AFRL-IF-RS-TR-2002-190, Final Technical Report, August 2002. Computer-Aided Design (CAD) tools for microfluidic components and systems were developed in this effort. Innovative numerical methods and algorithms for mixed …
Chapnick, Douglas A.; Jacobsen, Jeremy; Liu, Xuedong
2013-01-01
Understanding how cells migrate individually and collectively during development and cancer metastasis can be significantly aided by a computational tool to accurately measure not only cellular migration speed, but also migration direction and changes in migration direction in a temporal and spatial manner. We have developed such a tool for cell migration researchers, named Pathfinder, which is capable of simultaneously measuring the migration speed, migration direction, and changes in migration directions of thousands of cells both instantaneously and over long periods of time from fluorescence microscopy data. Additionally, we demonstrate how the Pathfinder software can be used to quantify collective cell migration. The novel capability of the Pathfinder software to measure the changes in migration direction of large populations of cells in a spatiotemporal manner will aid cellular migration research by providing a robust method for determining the mechanisms of cellular guidance during individual and collective cell migration. PMID:24386097
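The trajectory metrics described (speed, direction, and change of direction) can be computed from positions sampled at regular intervals, as in the hedged sketch below (not the Pathfinder code).

```python
# Hedged sketch of the trajectory metrics described in the abstract: given (x, y)
# positions of a cell at regular time intervals, compute instantaneous speed,
# migration direction, and the frame-to-frame change in direction.
import numpy as np

def migration_metrics(xy, dt=1.0):
    """xy: (T, 2) array of positions; returns speed, direction (rad), and the
    wrapped change in direction between consecutive steps."""
    steps = np.diff(xy, axis=0)                       # displacement per frame
    speed = np.linalg.norm(steps, axis=1) / dt
    direction = np.arctan2(steps[:, 1], steps[:, 0])
    dtheta = np.diff(direction)
    dtheta = (dtheta + np.pi) % (2 * np.pi) - np.pi   # wrap to (-pi, pi]
    return speed, direction, dtheta

# toy trajectory: a gentle left turn
t = np.linspace(0, 10, 101)
xy = np.column_stack([np.cos(0.2 * t) * 10, np.sin(0.2 * t) * 10])
speed, direction, dtheta = migration_metrics(xy, dt=t[1] - t[0])
print("mean speed:", speed.mean(), "mean turn per frame (rad):", dtheta.mean())
```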
A brief perspective on computational electromagnetics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nachman, A.
1996-06-01
There is a growing interest in many quarters in acquiring the ability to predict all manner of electromagnetic (EM) effects. These effects include radar scattering attributes of objects (airplanes, missiles, tanks, ships, etc.); the mutual interference of a multitude of antennas on board a single aircraft or ship; the performance of integrated circuits (IC); the propagation of waves (radio and radar) over long distances with the help or hindrance of complicated topography and ionospheric/atmospheric ducting; and the propagation of pulses through dispersive media (soil, treetops, or concrete) to detect pollutants or hidden targets, or to assess the health of runways. All of the above require extensive computation and, despite the fact that Maxwell's equations are linear in all these cases, codes do not exist which will do the job in a timely and error-controlled manner. This report briefly discusses how this can be rectified. 16 refs.
Control of joint motion simulators for biomechanical research
NASA Technical Reports Server (NTRS)
Colbaugh, R.; Glass, K.
1992-01-01
The authors present a hierarchical adaptive algorithm for controlling upper extremity human joint motion simulators. A joint motion simulator is a computer-controlled, electromechanical system which permits the application of forces to the tendons of a human cadaver specimen in such a way that the cadaver joint under study achieves a desired motion in a physiologic manner. The proposed control scheme does not require knowledge of the cadaver specimen dynamic model, and solves on-line the indeterminate problem which arises because human joints typically possess more actuators than degrees of freedom. Computer simulation results are given for an elbow/forearm system and wrist/hand system under hierarchical control. The results demonstrate that any desired normal joint motion can be accurately tracked with the proposed algorithm. These simulation results indicate that the controller resolved the redundancy of the indeterminate problem in a physiologic manner, and show that the control scheme was robust to parameter uncertainty and to sensor noise.
Highly Efficient Vector-Inversion Pulse Generators
NASA Technical Reports Server (NTRS)
Rose, Franklin
2004-01-01
Improved transmission-line pulse generators of the vector-inversion type are being developed as lightweight sources of pulsed high voltage for diverse applications, including spacecraft thrusters, portable x-ray imaging systems, impulse radar systems, and corona-discharge systems for sterilizing gases. In this development, more than the customary attention is paid to principles of operation and details of construction so as to maximize the efficiency of the pulse-generation process while minimizing the sizes of components. An important element of this approach is segmenting a pulse generator in such a manner that the electric field in each segment is always below the threshold for electrical breakdown. One design of particular interest, a complete description of which was not available at the time of writing this article, involves two parallel-plate transmission lines that are wound on a mandrel, share a common conductor, and are switched in such a manner that the pulse generator is divided into a "fast" and a "slow" section. A major innovation in this design is the addition of ferrite to the "slow" section to reduce the size of the mandrel needed for a given efficiency.
Computer program CDCID: an automated quality control program using CDC update
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singer, G.L.; Aguilar, F.
1984-04-01
A computer program, CDCID, has been developed in coordination with a quality control program to provide a highly automated method of documenting changes to computer codes at EG and G Idaho, Inc. The method uses the standard CDC UPDATE program in such a manner that updates and their associated documentation are easily made and retrieved in various formats. The method allows each card image of a source program to point to the document which describes it, who created the card, and when it was created. The method described is applicable to the quality control of computer programs in general. The computer program described is executable only on CDC computing systems, but the program could be modified and applied to any computing system with an adequate updating program.
DIALOG: An executive computer program for linking independent programs
NASA Technical Reports Server (NTRS)
Glatt, C. R.; Hague, D. S.; Watson, D. A.
1973-01-01
A very large scale computer programming procedure called the DIALOG Executive System has been developed for the Univac 1100 series computers. The executive computer program, DIALOG, controls the sequence of execution and data management function for a library of independent computer programs. Communication of common information is accomplished by DIALOG through a dynamically constructed and maintained data base of common information. The unique feature of the DIALOG Executive System is the manner in which computer programs are linked. Each program maintains its individual identity and as such is unaware of its contribution to the large scale program. This feature makes any computer program a candidate for use with the DIALOG Executive System. The installation and use of the DIALOG Executive System are described at Johnson Space Center.
Computers in health care for the 21st century.
O'Desky, R I; Ball, M J; Ball, E E
1990-03-01
As the world enters the last decade of the 20th Century, there is a great deal of speculation about the effect of computers on the future delivery of health care. In this article, the authors attempt to identify some of the evolving computer technologies and anticipate what effect they will have by the year 2000. Rather than listing potential accomplishments, each of the affected areas (hardware, software, health care systems, and communications) is presented in an evolutionary manner so the reader can better appreciate where we have been and where we are going.
Blind topological measurement-based quantum computation.
Morimae, Tomoyuki; Fujii, Keisuke
2012-01-01
Blind quantum computation is a novel secure quantum-computing protocol that enables Alice, who does not have sufficient quantum technology at her disposal, to delegate her quantum computation to Bob, who has a fully fledged quantum computer, in such a way that Bob cannot learn anything about Alice's input, output and algorithm. A recent proof-of-principle experiment demonstrating blind quantum computation in an optical system has raised new challenges regarding the scalability of blind quantum computation in realistic noisy conditions. Here we show that fault-tolerant blind quantum computation is possible in a topologically protected manner using the Raussendorf-Harrington-Goyal scheme. The error threshold of our scheme is 4.3 × 10⁻³, which is comparable to that (7.5 × 10⁻³) of non-blind topological quantum computation. As the error per gate of the order of 10⁻³ was already achieved in some experimental systems, our result implies that secure cloud quantum computation is within reach.
Exact and efficient simulation of concordant computation
NASA Astrophysics Data System (ADS)
Cable, Hugo; Browne, Daniel E.
2015-11-01
Concordant computation is a circuit-based model of quantum computation for mixed states that assumes all correlations within the register are discord-free (i.e. the correlations are essentially classical) at every step of the computation. The question of whether concordant computation always admits efficient simulation by a classical computer was first considered by Eastin in arXiv:quant-ph/1006.4402v1, where an answer in the affirmative was given for circuits consisting only of one- and two-qubit gates. Building on this work, we develop the theory of classical simulation of concordant computation. We present a new framework for understanding such computations, argue that a larger class of concordant computations admits efficient simulation, and provide alternative proofs for the main results of arXiv:quant-ph/1006.4402v1 with an emphasis on the exactness of simulation which is crucial for this model. We include detailed analysis of the arithmetic complexity for solving equations in the simulation, as well as extensions to larger gates and qudits. We explore the limitations of our approach, and discuss the challenges faced in developing efficient classical simulation algorithms for all concordant computations.