ERIC Educational Resources Information Center
Larsson, Ken
2014-01-01
This paper looks at the process of managing large numbers of exams efficiently and securely with the use of dedicated IT support. The system integrates regulations on different levels, from national to local (even down to departments), and ensures that the rules are applied at all stages of handling the exams. The system has a proven record of…
NASA Astrophysics Data System (ADS)
Seha, S.; Zamberi, J.; Fairu, A. J.
2017-10-01
A material handling system (MHS) is an important contributor to plant productivity and has been recognized as an integral part of today's manufacturing systems. MHS technology and equipment types have grown tremendously. Based on the case study observation, issues involving the material handling system contribute to reduced production efficiency. This paper aims to propose a new design integrating material handling and manufacturing layout by investigating the influences of the layout and the material handling system. An approach using Delmia Quest simulation software is introduced, and the simulation results are used to assess the influence of the integration between the material handling system and the manufacturing layout on the performance of an automotive assembly line. The results show that assembly line output increases by more than 31% over the current system. The average source throughput rate rises to 252 units per working hour in model 3, showing the effectiveness of the pick-to-light system as efficient storage equipment. Overall, the results show that the application of the AGV and the pick-to-light system has a large, significant effect on the automotive assembly line. Moreover, the change of layout also shows a large, significant improvement in performance.
Uvf - Unified Volume Format: A General System for Efficient Handling of Large Volumetric Datasets.
Krüger, Jens; Potter, Kristin; Macleod, Rob S; Johnson, Christopher
2008-01-01
With the continual increase in computing power, volumetric datasets with sizes ranging from only a few megabytes to petascale are generated thousands of times per day. Such data may come from an ordinary source such as simple everyday medical imaging procedures, while larger datasets may be generated from cluster-based scientific simulations or measurements of large scale experiments. In computer science an incredible amount of work worldwide is put into the efficient visualization of these datasets. As researchers in the field of scientific visualization, we often have to face the task of handling very large data from various sources. This data usually comes in many different data formats. In medical imaging, the DICOM standard is well established; however, most research labs use their own data formats to store and process data. To simplify the task of reading the many different formats used with all of the different visualization programs, we present a system for the efficient handling of many types of large scientific datasets (see Figure 1 for just a few examples). While primarily targeted at structured volumetric data, UVF can store just about any type of structured and unstructured data. The system is composed of a file format specification with a reference implementation of a reader. It is not only a common, easy to implement format but also allows for efficient rendering of most datasets without the need to convert the data in memory.
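UVF itself is a dedicated container format with its own reference reader, which is not reproduced here. As a hedged illustration of the general principle the abstract points to (accessing a large structured volume without converting or loading it wholesale into memory), the Python sketch below memory-maps a hypothetical raw volume file and extracts a single brick; the file name, shape, and dtype are assumptions for the example only.

```python
import numpy as np

# Hypothetical raw volume: 2048^3 voxels of uint16 stored as a flat binary file.
# np.memmap exposes it with array semantics without reading it all into RAM.
shape = (2048, 2048, 2048)
vol = np.memmap("volume_2048.raw", dtype=np.uint16, mode="r", shape=shape)

def read_brick(volume, origin, size=64):
    """Copy one cubic brick of the volume into an ordinary in-memory array."""
    z, y, x = origin
    return np.asarray(volume[z:z + size, y:y + size, x:x + size])

brick = read_brick(vol, origin=(512, 512, 512))
print(brick.shape, brick.mean())
```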
NASA Astrophysics Data System (ADS)
Fortmann, C. M.; Farley, M. V.; Smoot, M. A.; Fieselmann, B. F.
1988-07-01
Solarex is one of the leaders in amorphous silicon based photovoltaic production and research. The large-scale production environment presents unique safety concerns related to the quantity of dangerous materials as well as the number of personnel handling these materials. The safety measures explored in this work include gas detection systems, training, and failure-resistant gas handling systems. Our experiences with flow-restricting orifices in the CGA connections and the use of steel cylinders are reviewed. The hazards and efficiency of wet scrubbers for silane exhausts are examined. We have found it useful to provide the scrubber with temperature alarms.
Diazo compounds in continuous-flow technology.
Müller, Simon T R; Wirth, Thomas
2015-01-01
Diazo compounds are very versatile reagents in organic chemistry and meet the challenge of selective assembly of structurally complex molecules. Their leaving group is dinitrogen; therefore, they are very clean and atom-efficient reagents. However, diazo compounds are potentially explosive and extremely difficult to handle on an industrial scale. In this review, it is discussed how continuous-flow technology can help to make these powerful reagents accessible on a large scale. Microstructured devices can improve heat transfer greatly and help with the safe handling of dangerous reagents. The in situ formation and subsequent consumption of diazo compounds are discussed along with advances in handling diazomethane and ethyl diazoacetate. The potential large-scale applications of a given methodology are emphasized. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Joint classification and contour extraction of large 3D point clouds
NASA Astrophysics Data System (ADS)
Hackel, Timo; Wegner, Jan D.; Schindler, Konrad
2017-08-01
We present an effective and efficient method for point-wise semantic classification and extraction of object contours of large-scale 3D point clouds. What makes point cloud interpretation challenging is the sheer size of several millions of points per scan and the non-grid, sparse, and uneven distribution of points. Standard image processing tools like texture filters, for example, cannot handle such data efficiently, which calls for dedicated point cloud labeling methods. It turns out that one of the major drivers for efficient computation and handling of strong variations in point density is a careful formulation of per-point neighborhoods at multiple scales. This allows both defining an expressive feature set and extracting topologically meaningful object contours. Semantic classification and contour extraction are interlaced problems: point-wise semantic classification enables extracting a meaningful candidate set of contour points, while contours help generate a rich feature representation that benefits point-wise classification. These methods are tailored to have fast run time and a small memory footprint for processing large-scale, unstructured, and inhomogeneous point clouds, while still achieving high classification accuracy. We evaluate our methods on the semantic3d.net benchmark for terrestrial laser scans with >10^9 points.
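As a hedged sketch of the multi-scale neighborhood idea described above (not the authors' exact pipeline, which builds neighborhoods on a downsampled scale pyramid), the following Python snippet computes simple covariance-based geometric descriptors at several radii using a k-d tree; the radii and descriptor choices are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def multiscale_features(points, radii=(0.25, 0.5, 1.0)):
    """Per-point eigenvalue features of the local covariance at several scales."""
    tree = cKDTree(points)
    blocks = []
    for r in radii:
        neighborhoods = tree.query_ball_point(points, r)
        f = np.zeros((len(points), 3))
        for i, idx in enumerate(neighborhoods):
            if len(idx) < 3:
                continue
            w = np.sort(np.linalg.eigvalsh(np.cov(points[idx].T)))[::-1]
            l1 = w[0] + 1e-12
            # linearity, planarity, scattering: common local shape descriptors
            f[i] = [(w[0] - w[1]) / l1, (w[1] - w[2]) / l1, w[2] / l1]
        blocks.append(f)
    return np.hstack(blocks)

pts = np.random.rand(2000, 3)
X = multiscale_features(pts)
print(X.shape)   # (2000, 9): three descriptors at three scales
```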
Perspectives in astrophysical databases
NASA Astrophysics Data System (ADS)
Frailis, Marco; de Angelis, Alessandro; Roberto, Vito
2004-07-01
Astrophysics has become a domain extremely rich in scientific data. Data mining tools are needed for information extraction from such large data sets. This calls for an approach to data management emphasizing the efficiency and simplicity of data access; efficiency is obtained using multidimensional access methods, and simplicity is achieved by properly handling metadata. Moreover, clustering and classification techniques on large data sets pose additional requirements in terms of computation and memory scalability and interpretability of results. In this study we review some possible solutions.
Efficient Power Network Analysis with Modeling of Inductive Effects
NASA Astrophysics Data System (ADS)
Zeng, Shan; Yu, Wenjian; Hong, Xianlong; Cheng, Chung-Kuan
In this paper, an efficient method is proposed to accurately analyze large-scale power/ground (P/G) networks, where inductive parasitics are modeled with the partial reluctance. The method is based on frequency-domain circuit analysis and the technique of vector fitting [14], and obtains the time-domain voltage response at given P/G nodes. The frequency-domain circuit equation including partial reluctances is derived, and then solved with the GMRES algorithm with rescaling, preconditioning and recycling techniques. With the merit of sparsified reluctance matrix and iterative solving techniques for the frequency-domain circuit equations, the proposed method is able to handle large-scale P/G networks with complete inductive modeling. Numerical results show that the proposed method is orders of magnitude faster than HSPICE, several times faster than INDUCTWISE [4], and capable of handling the inductive P/G structures with more than 100,000 wire segments.
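The sketch below is not the paper's method (vector fitting, rescaling, and recycled Krylov subspaces are omitted); it only illustrates the core step of solving one sparse frequency-domain nodal equation with a preconditioned GMRES iteration, using an illustrative random system built in SciPy.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy frequency-domain nodal equation (G + j*omega*C) v = i at one frequency.
# The matrices and values are illustrative, not an actual P/G extraction.
n = 2000
G = sp.diags(np.full(n, 4.0)) + 0.1 * sp.random(n, n, density=1e-3, format="csr")
G = (G + G.T) * 0.5                        # symmetrize the toy conductance matrix
C = sp.diags(np.full(n, 1e-12))            # node capacitances
i_src = np.zeros(n, dtype=complex)
i_src[0] = 1e-3                            # 1 mA injected at node 0
omega = 2 * np.pi * 1e9

A = (G + 1j * omega * C).tocsr()
# Simple Jacobi (diagonal) preconditioner as a stand-in for the paper's scheme.
M = spla.LinearOperator(A.shape, matvec=lambda x: x / A.diagonal(), dtype=complex)
v, info = spla.gmres(A, i_src, M=M, restart=50)
print("converged" if info == 0 else f"gmres info={info}", abs(v[0]))
```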
Efficient bulk-loading of gridfiles
NASA Technical Reports Server (NTRS)
Leutenegger, Scott T.; Nicol, David M.
1994-01-01
This paper considers the problem of bulk-loading large data sets for the gridfile multiattribute indexing technique. We propose a rectilinear partitioning algorithm that heuristically seeks to minimize the size of the gridfile needed to ensure no bucket overflows. Empirical studies on both synthetic data sets and on data sets drawn from computational fluid dynamics applications demonstrate that our algorithm is very efficient, and is able to handle large data sets. In addition, we present an algorithm for bulk-loading data sets too large to fit in main memory. By sorting the entire data set, it creates a gridfile without incurring any overflows.
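The paper's partitioning heuristic is not reproduced here; as a loose, hedged sketch of the general idea of rectilinear partitioning, the snippet below derives quantile-based split points per attribute from the data and assigns records to grid cells with one binary search per axis. The axis resolutions and the synthetic data are assumptions.

```python
import numpy as np

def rectilinear_scales(data, buckets_per_axis):
    """Quantile-based split points per attribute (a greatly simplified stand-in
    for a heuristic that sizes the gridfile to avoid bucket overflows)."""
    scales = []
    for d in range(data.shape[1]):
        qs = np.linspace(0, 1, buckets_per_axis[d] + 1)[1:-1]
        scales.append(np.quantile(data[:, d], qs))
    return scales

def assign_cell(record, scales):
    """Map a record to its grid cell via one binary search per attribute."""
    return tuple(int(np.searchsorted(s, v, side="right")) for v, s in zip(record, scales))

rng = np.random.default_rng(0)
data = rng.normal(size=(100_000, 2))
scales = rectilinear_scales(data, buckets_per_axis=(8, 8))
cells = {}
for rec in data:
    cells.setdefault(assign_cell(rec, scales), []).append(rec)
print(len(cells), max(len(v) for v in cells.values()))   # cell count, fullest bucket
```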
Multiresource inventories incorporating GIS, GPS, and database management systems
Loukas G. Arvanitis; Balaji Ramachandran; Daniel P. Brackett; Hesham Abd-El Rasol; Xuesong Du
2000-01-01
Large-scale natural resource inventories generate enormous data sets. Their effective handling requires a sophisticated database management system. Such a system must be robust enough to efficiently store large amounts of data and flexible enough to allow users to manipulate a wide variety of information. In a pilot project, related to a multiresource inventory of the...
Rao, Vatturi Venkata Satya Prabhakar; Manthri, Ranadheer; Hemalatha, Pottumuthu; Kumar, Vuyyuru Navin; Azhar, Mohammad
2016-01-01
Hot lab dispensing of large doses of fluorine-18 fluorodeoxyglucose in master vials supplied from the cyclotrons requires a high degree of skill to handle high doses. The presently practiced conventional method of fractionating from an inverted, tiltable vial pig mounted on a metal frame has its own limitations, such as increased isotope handling times and exposure to the technologist. The innovative technique devised markedly improves the fractionating efficiency along with speed, precision, and reduced dose exposure. PMID:27095872
Medical-Information-Management System
NASA Technical Reports Server (NTRS)
Alterescu, Sidney; Friedman, Carl A.; Frankowski, James W.
1989-01-01
Medical Information Management System (MIMS) computer program interactive, general-purpose software system for storage and retrieval of information. Offers immediate assistance where manipulation of large data bases required. User quickly and efficiently extracts, displays, and analyzes data. Used in management of medical data and handling all aspects of data related to care of patients. Other applications include management of data on occupational safety in public and private sectors, handling judicial information, systemizing purchasing and procurement systems, and analyses of cost structures of organizations. Written in Microsoft FORTRAN 77.
NASA Technical Reports Server (NTRS)
Platt, M. E.; Lewis, E. E.; Boehm, F.
1991-01-01
A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques for computing system reliability, applicable to solving very large, highly reliable, fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code, which employs behavioral decomposition and complex fault-error handling models. This new capability, called MC-HARP, efficiently solves reliability models with non-constant failure rates (Weibull). Common-mode failure modeling is also supported.
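MC-HARP itself is a Fortran code with variance reduction and detailed fault/error handling models, none of which is shown here. The hedged Python sketch below only illustrates the underlying idea of Monte Carlo reliability estimation with Weibull (non-constant) failure rates, for an assumed 2-out-of-3 redundant system with made-up parameters.

```python
import numpy as np

def mc_unreliability(n_trials=200_000, mission_t=10.0, shape=1.5, scale=100.0, seed=1):
    """Crude Monte Carlo estimate of the unreliability of a 2-out-of-3 system
    whose components have independent Weibull(shape, scale) lifetimes."""
    rng = np.random.default_rng(seed)
    lifetimes = scale * rng.weibull(shape, size=(n_trials, 3))
    failed_components = (lifetimes < mission_t).sum(axis=1)
    system_failed = failed_components >= 2    # system fails once 2 of 3 components fail
    p = system_failed.mean()
    se = np.sqrt(p * (1 - p) / n_trials)      # standard error of the plain estimator
    return p, se

p, se = mc_unreliability()
print(f"unreliability ~ {p:.2e} +/- {se:.1e}")
```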
The long-term performance of electrically charged filters in a ventilation system.
Raynor, Peter C; Chae, Soo Jae
2004-07-01
The efficiency and pressure drop of filters made from polyolefin fibers carrying electrical charges were compared with efficiency and pressure drop for filters made from uncharged glass fibers to determine if the efficiency of the charged filters changed with use. Thirty glass fiber filters and 30 polyolefin fiber filters were placed in different, but nearly identical, air-handling units that supplied outside air to a large building. Using two kinds of real-time aerosol counting and sizing instruments, the efficiency of both sets of filters was measured repeatedly for more than 19 weeks while the air-handling units operated almost continuously. Pressure drop was recorded by the ventilation system's computer control. Measurements showed that the efficiency of the glass fiber filters remained almost constant with time. However, the charged polyolefin fiber filters exhibited large efficiency reductions with time before the efficiency began to increase again toward the end of the test. For particles 0.6 µm in diameter, the efficiency of the polyolefin fiber filters declined from 85% to 45% after 11 weeks before recovering to 65% at the end of the test. The pressure drops of the glass fiber filters increased by about 0.40 in. H2O, whereas the pressure drop of the polyolefin fiber filters increased by only 0.28 in. H2O. The results indicate that dust loading reduces the effectiveness of electrical charges on filter fibers. Copyright 2004 JOEH, LLC
Multicategory Composite Least Squares Classifiers
Park, Seo Young; Liu, Yufeng; Liu, Dacheng; Scholl, Paul
2010-01-01
Classification is a very useful statistical tool for information extraction. In particular, multicategory classification is commonly seen in various applications. Although binary classification problems are heavily studied, extensions to the multicategory case are much less so. In view of the increased complexity and volume of modern statistical problems, it is desirable to have multicategory classifiers that are able to handle problems with high dimensions and with a large number of classes. Moreover, it is necessary to have sound theoretical properties for the multicategory classifiers. In the literature, there exist several different versions of simultaneous multicategory Support Vector Machines (SVMs). However, the computation of the SVM can be difficult for large scale problems, especially for problems with a large number of classes. Furthermore, the SVM cannot produce class probability estimation directly. In this article, we propose a novel efficient multicategory composite least squares classifier (CLS classifier), which utilizes a new composite squared loss function. The proposed CLS classifier has several important merits: efficient computation for problems with a large number of classes, asymptotic consistency, ability to handle high dimensional data, and simple conditional class probability estimation. Our simulated and real examples demonstrate competitive performance of the proposed approach. PMID:21218128
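The composite squared loss of the CLS classifier is not reproduced here; as a hedged, generic stand-in, the sketch below fits a one-vs-rest regularized least-squares classifier and converts scores to crude class probabilities, illustrating why squared-loss classifiers remain cheap when the number of classes grows (one linear solve shared across all classes).

```python
import numpy as np

def fit_ls_classifier(X, y, n_classes, lam=1e-2):
    """One-vs-rest ridge least squares; returns a weight matrix of shape (d+1, K)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])          # append a bias column
    Y = 2.0 * np.eye(n_classes)[y] - 1.0               # +/-1 class indicator targets
    A = Xb.T @ Xb + lam * np.eye(Xb.shape[1])
    return np.linalg.solve(A, Xb.T @ Y)                # one solve, K right-hand sides

def predict_proba(X, W):
    """Softmax over least-squares scores as a rough conditional probability estimate."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    s = Xb @ W
    e = np.exp(s - s.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = rng.integers(0, 4, size=300)
W = fit_ls_classifier(X, y, n_classes=4)
print(predict_proba(X[:3], W).round(2))
```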
A General-purpose Framework for Parallel Processing of Large-scale LiDAR Data
NASA Astrophysics Data System (ADS)
Li, Z.; Hodgson, M.; Li, W.
2016-12-01
Light detection and ranging (LiDAR) technologies have proven efficient for quickly obtaining very detailed Earth surface data over large spatial extents. Such data are important for scientific discoveries in the Earth and ecological sciences and for natural disaster and environmental applications. However, handling LiDAR data poses grand geoprocessing challenges due to data intensity and computational intensity. Previous studies achieved notable success in parallel processing of LiDAR data to address these challenges. However, these studies either relied on high performance computers and specialized hardware (GPUs) or focused mostly on finding customized solutions for specific algorithms. We developed a general-purpose scalable framework coupled with a sophisticated data decomposition and parallelization strategy to efficiently handle big LiDAR data. Specifically, 1) a tile-based spatial index is proposed to manage big LiDAR data in the scalable and fault-tolerant Hadoop distributed file system, 2) two spatial decomposition techniques are developed to enable efficient parallelization of different types of LiDAR processing tasks, and 3) by coupling existing LiDAR processing tools with Hadoop, this framework is able to conduct a variety of LiDAR data processing tasks in parallel in a highly scalable distributed computing environment. The performance and scalability of the framework are evaluated with a series of experiments conducted on a real LiDAR dataset using a proof-of-concept prototype system. The results show that the proposed framework 1) is able to handle massive LiDAR data more efficiently than standalone tools, and 2) provides almost linear scalability in terms of either increased workload (data volume) or increased computing nodes with both spatial decomposition strategies. We believe that the proposed framework provides valuable references for developing a collaborative cyberinfrastructure for processing big earth science data in a highly scalable environment.
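As a hedged sketch of point 1) above, the snippet below builds a simple tile-based index by snapping each point's x/y coordinate to a fixed tile size; the tile size, the synthetic coordinates, and the mapping of tiles to HDFS objects are assumptions, not details taken from the paper.

```python
import numpy as np
from collections import defaultdict

def build_tile_index(points, tile_size=100.0):
    """Group LiDAR points into fixed-size x/y tiles; in a Hadoop setting each tile
    could then be stored as one HDFS file and processed by one independent task."""
    keys = np.floor(points[:, :2] / tile_size).astype(int)
    index = defaultdict(list)
    for i, (tx, ty) in enumerate(keys):
        index[(tx, ty)].append(i)                 # store point indices per tile
    return index

pts = np.random.rand(1_000_000, 3) * [5000.0, 5000.0, 100.0]   # synthetic x, y, z (m)
index = build_tile_index(pts)
print(len(index), "tiles; largest holds", max(map(len, index.values())), "points")
```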
Storage and Retrieval of Large RDF Graph Using Hadoop and MapReduce
NASA Astrophysics Data System (ADS)
Farhan Husain, Mohammad; Doshi, Pankil; Khan, Latifur; Thuraisingham, Bhavani
Handling huge amounts of data scalably has been a matter of concern for a long time. The same is true for semantic web data. Current semantic web frameworks lack this ability. In this paper, we describe a framework that we built using Hadoop to store and retrieve large numbers of RDF triples. We describe our schema to store RDF data in the Hadoop Distributed File System. We also present our algorithms to answer a SPARQL query. We make use of Hadoop's MapReduce framework to actually answer the queries. Our results reveal that we can store huge amounts of semantic web data in Hadoop clusters built mostly from cheap commodity-class hardware and still answer queries fast enough. We conclude that ours is a scalable framework, able to handle large amounts of RDF data efficiently.
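The snippet below is only a local, hedged imitation of how one MapReduce job could answer a two-pattern SPARQL query by joining triples on a shared subject; the authors' actual Hadoop schema and query-answering algorithms are not shown, and the tiny triple set and query are invented for illustration.

```python
from collections import defaultdict

# Toy triples; the real system lays them out in HDFS files according to its schema.
triples = [
    ("alice", "worksAt", "acme"),
    ("bob",   "worksAt", "acme"),
    ("alice", "knows",   "bob"),
]

# SPARQL-like query:  SELECT ?x ?y WHERE { ?x worksAt acme . ?x knows ?y }
def map_phase(triple):
    s, p, o = triple
    if p == "worksAt" and o == "acme":
        yield s, ("worksAt", o)
    if p == "knows":
        yield s, ("knows", o)

def reduce_phase(subject, values):
    """Join on the shared subject: emit bindings satisfying both patterns."""
    jobs = [o for tag, o in values if tag == "worksAt"]
    friends = [o for tag, o in values if tag == "knows"]
    for _ in jobs:
        for y in friends:
            yield {"x": subject, "y": y}

shuffled = defaultdict(list)                      # stand-in for the shuffle stage
for t in triples:
    for k, v in map_phase(t):
        shuffled[k].append(v)
print([b for k, vs in shuffled.items() for b in reduce_phase(k, vs)])
# [{'x': 'alice', 'y': 'bob'}]
```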
APPHi: Automated Photometry Pipeline for High Cadence Large Volume Data
NASA Astrophysics Data System (ADS)
Sánchez, E.; Castro, J.; Silva, J.; Hernández, J.; Reyes, M.; Hernández, B.; Alvarez, F.; García T.
2018-04-01
APPHi (Automated Photometry Pipeline) carries out aperture and differential photometry of TAOS-II project data. It is computationally efficient and can also be used with other astronomical wide-field image data. APPHi works with large volumes of data and handles both FITS and HDF5 formats. Due to the large number of stars that the software has to handle in an enormous number of frames, it is optimized to automatically find the best parameter values for carrying out the photometry, such as the mask size for the aperture, the size of the window for extraction of a single star, and the number of counts for the threshold for detecting a faint star. Although intended to work with TAOS-II data, APPHi can analyze any set of astronomical images and is a robust and versatile tool for performing stellar aperture and differential photometry.
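As a hedged, minimal illustration of the aperture photometry that such a pipeline automates (this is not APPHi's actual code; the aperture and annulus radii and the synthetic frame are assumptions), the snippet below measures one star's background-subtracted flux with NumPy.

```python
import numpy as np

def aperture_photometry(image, x0, y0, r_ap=4.0, r_in=7.0, r_out=10.0):
    """Background-subtracted aperture sum for a single star."""
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - x0, yy - y0)
    aperture = r <= r_ap
    annulus = (r >= r_in) & (r <= r_out)
    sky = np.median(image[annulus])               # local background per pixel
    return image[aperture].sum() - sky * aperture.sum()

frame = np.random.poisson(100, size=(64, 64)).astype(float)
frame[30:33, 30:33] += 500.0                      # plant a synthetic star
print(aperture_photometry(frame, x0=31, y0=31))
```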
Pilot line report: Development of a high efficiency thin silicon solar cell
NASA Technical Reports Server (NTRS)
1978-01-01
Experimental technology advances were implemented to increase the conversion efficiency of ultrathin 2 cm x 2 cm cells, to demonstrate a capability for fabricating such cells at a rate of 10,000 per month, and to fabricate 200 large-area ultrathin cells to determine their feasibility of manufacture. A production rate of 10,000 50-µm cells per month with lot average AM0 efficiencies of 11.5% was demonstrated, with peak efficiencies of 13.5% obtained. Losses in most stages of the processing were minimized, the remaining exceptions being in the photolithography and metallization steps for front contact generation and breakage handling. The 5 cm x 5 cm cells were fabricated with a peak yield in excess of 40% for over 10% AM0 efficiency. Greater fabrication volume is needed to fully evaluate the expected yield and efficiency levels for large cells.
Low-Capital Systems for Thinning Pine Plantations
John Wilhoit; Qingyue Ling; Robert Rummer
1999-01-01
Highly mechanized systems utilizing rubber-tired skidders, feller-bunchers, and knuckleboom loaders are the predominant type of timber harvesting operation in the southern United States. These systems, which handle the wood in tree-length form, are highly productive and very efficient, especially for large tracts of timber. Thinnings constitute an increasing proportion...
NASA Astrophysics Data System (ADS)
Chen, Siyao; Zhang, Jun; Bai, Zhen
2017-10-01
A 57 GHz overmoded relativistic backward wave oscillator (RBWO) operating on the quasi-TEM mode with pure TM01 mode output is presented in this paper, using an outer trapezoidal slow wave structure (SWS) with a large distance between the inner and outer conductors. A large overmoded ratio can be obtained in coaxial devices to improve the power handling capacity, while the large distance between the inner and outer conductors ensures that the electron beam is transmitted effectively. The 8π/9 mode of the quasi-TEM synchronously interacts with the electron beam, while the TM01 mode, diffracted from the quasi-TEM mode, is output. The existence of the TM01 6π/9 mode in the SWS can extract energy from the quasi-TEM mode (which has a high value of Qe), thus increasing the power handling capacity. Particle-in-cell simulation shows that a high output power of 560 MW and an efficiency of 43.5% are obtained at a diode voltage of 520 kV and a current of 2.47 kA. The microwave has a pure frequency spectrum at 56.8 GHz and radiates in the pure TM01 mode (about 98%).
Linder, Stig
2015-12-15
Scientific misconduct constitutes a severe threat to research. Procedures to handle misconduct must therefore be both efficient and precise. In Sweden, suspected cases of misconduct are handled by the universities themselves. Investigations are generally performed by appointed scientists, leading to unnecessary discussions of the validity of the conclusions made. Sweden has a Central Ethical Review Board but this is infrequently used by the universities. It is an absolute requirement for a university to withdraw incorrect publications from the literature but regulations in this area are lacking in Sweden. The extraordinarily strong legal status of graduate students at Swedish universities leads to slow and costly investigations. Even when found to be guilty of misconduct, students are allowed to defend their PhD theses. In conclusion, there is a large potential for improvement of the regulations and routines for handling scientific misconduct in Sweden.
Ljungquist, Bengt; Petersson, Per; Johansson, Anders J; Schouenborg, Jens; Garwicz, Martin
2018-04-01
Recent neuroscientific and technical developments of brain machine interfaces have put increasing demands on neuroinformatic databases and data handling software, especially when managing data in real time from large numbers of neurons. Extrapolating these developments we here set out to construct a scalable software architecture that would enable near-future massive parallel recording, organization and analysis of neurophysiological data on a standard computer. To this end we combined, for the first time in the present context, bit-encoding of spike data with a specific communication format for real time transfer and storage of neuronal data, synchronized by a common time base across all unit sources. We demonstrate that our architecture can simultaneously handle data from more than one million neurons and provide, in real time (< 25 ms), feedback based on analysis of previously recorded data. In addition to managing recordings from very large numbers of neurons in real time, it also has the capacity to handle the extensive periods of recording time necessary in certain scientific and clinical applications. Furthermore, the bit-encoding proposed has the additional advantage of allowing an extremely fast analysis of spatiotemporal spike patterns in a large number of neurons. Thus, we conclude that this architecture is well suited to support current and near-future Brain Machine Interface requirements.
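The authors' communication format and real-time feedback loop are not reproduced here; the hedged sketch below only shows the bit-encoding idea itself, packing binned spike trains into one bit per (unit, time bin) with NumPy, so that a million units over a thousand 1 ms bins occupy roughly 125 MB. The bin width and the toy spike times are assumptions.

```python
import numpy as np

def encode_spikes(spike_times_per_unit, t_start, t_stop, bin_ms=1.0):
    """Bit-encode spike trains: one bit per (unit, time bin), packed into bytes."""
    n_bins = int(round((t_stop - t_start) * 1000.0 / bin_ms))
    raster = np.zeros((len(spike_times_per_unit), n_bins), dtype=np.uint8)
    for u, times in enumerate(spike_times_per_unit):
        idx = ((np.asarray(times) - t_start) * 1000.0 / bin_ms).astype(int)
        raster[u, idx[(idx >= 0) & (idx < n_bins)]] = 1
    return np.packbits(raster, axis=1)            # 8 time bins per byte

packed = encode_spikes([[0.001, 0.015], [0.004]], t_start=0.0, t_stop=0.025)
print(packed.shape)                               # (2 units, 4 bytes)
print(np.unpackbits(packed, axis=1)[:, :25])      # recover the raster
```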
Jeong, Heon-Ho; Lee, Byungjin; Jin, Si Hyung; Jeong, Seong-Geun; Lee, Chang-Soo
2016-04-26
Droplet-based microfluidics enabling exquisite liquid handling has been developed for diagnosis, drug discovery, and quantitative biology. Compartmentalization of samples into a large number of tiny droplets is a great approach to perform multiplex assays and to improve reliability and accuracy using a limited volume of samples. Despite significant advances in microfluidic technology, individual droplet handling at pico-volume resolution is still a challenge for obtaining more efficient and varied multiplex assays. We present a highly addressable static droplet array (SDA) enabling individual digital manipulation of a single droplet using a microvalve system. In a conventional single-layer microvalve system, the number of microvalves required is dictated by the number of operation objects; thus, individual trap-and-release on a large-scale 2D array format is highly challenging. By integrating double-layer microvalves, we achieve a "balloon" valve that preserves the pressure-on state under released pressure; this valve can allow the selective releasing and trapping of 7200 multiplexed pico-droplets using only 1 μL of sample without volume loss. This selectivity and addressability allowed droplets encapsulating single cells to be completely sorted out of a mixture of droplet compositions via repeated selective trapping and releasing. Thus, it will be useful for efficient handling of minuscule volumes of rare or clinical samples in multiplex or combinatory assays, and for the selective collection of samples.
Handling Data Skew in MapReduce Cluster by Using Partition Tuning
Gao, Yufei; Zhou, Yanjie; Zhou, Bing; Shi, Lei; Zhang, Jiacai
2017-01-01
The healthcare industry has generated large amounts of data, and analyzing these has emerged as an important problem in recent years. The MapReduce programming model has been successfully used for big data analytics. However, data skew invariably occurs in big data analytics and seriously affects efficiency. To overcome the data skew problem in MapReduce, we have in the past proposed a data processing algorithm called Partition Tuning-based Skew Handling (PTSH). In comparison with the one-stage partitioning strategy used in the traditional MapReduce model, PTSH uses a two-stage strategy and the partition tuning method to disperse key-value pairs in virtual partitions and recombines each partition in case of data skew. The robustness and efficiency of the proposed algorithm were tested on a wide variety of simulated datasets and real healthcare datasets. The results showed that the PTSH algorithm can handle data skew in MapReduce efficiently and improve the performance of MapReduce jobs in comparison with native Hadoop, Closer, and locality-aware and fairness-aware key partitioning (LEEN). We also found that the time needed for rule extraction can be reduced significantly by adopting the PTSH algorithm, since it is more suitable for association rule mining (ARM) on healthcare data. © 2017 Yufei Gao et al.
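The snippet below is a loose, hedged interpretation of the two-stage idea (it is not the published PTSH algorithm): key-value pairs are spread over many virtual partitions so that a hot key does not land on a single reducer, and the virtual partitions are then packed greedily onto reducers by load. The virtual-partition factor and the toy key distribution are assumptions.

```python
from collections import Counter

def tune_partitions(pairs, n_reducers, virtual_factor=4):
    """Spread pairs over virtual partitions, then pack partitions onto reducers."""
    n_virtual = n_reducers * virtual_factor
    seen = Counter()
    vp_load = Counter()
    for key, _ in pairs:
        vp = (hash(key) + seen[key]) % n_virtual   # hot keys fan out over partitions
        seen[key] += 1
        vp_load[vp] += 1
    load = [0] * n_reducers
    assignment = {}
    for vp, cnt in vp_load.most_common():          # heaviest virtual partitions first
        target = min(range(n_reducers), key=load.__getitem__)
        assignment[vp] = target
        load[target] += cnt
    return assignment, load

pairs = [("flu", 1)] * 5000 + [("cold", 1)] * 100 + [(f"k{i}", 1) for i in range(900)]
_, load = tune_partitions(pairs, n_reducers=4)
print(load)   # far more balanced than a plain hash(key) % 4 on skewed keys
```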
An Efficient Algorithm for TUCKALS3 on Data with Large Numbers of Observation Units.
ERIC Educational Resources Information Center
Kiers, Henk A. L.; And Others
1992-01-01
A modification of the TUCKALS3 algorithm is proposed that handles three-way arrays of order I x J x K for any I. The reduced work space needed for storing data and increased execution speed make the modified algorithm very suitable for use on personal computers. (SLD)
Advanced techniques for the storage and use of very large, heterogeneous spatial databases
NASA Technical Reports Server (NTRS)
Peuquet, Donna J.
1987-01-01
Progress is reported in the development of a prototype knowledge-based geographic information system. The overall purpose of this project is to investigate and demonstrate the use of advanced methods in order to greatly improve the capabilities of geographic information system technology in the handling of large, multi-source collections of spatial data in an efficient manner, and to make these collections of data more accessible and usable for the Earth scientist.
Research on the control of large space structures
NASA Technical Reports Server (NTRS)
Denman, E. D.
1983-01-01
The research effort on the control of large space structures at the University of Houston has concentrated on the mathematical theory of finite-element models; identification of the mass, damping, and stiffness matrix; assignment of damping to structures; and decoupling of structure dynamics. The objective of the work has been and will continue to be the development of efficient numerical algorithms for analysis, control, and identification of large space structures. The major consideration in the development of the algorithms has been the large number of equations that must be handled by the algorithm as well as sensitivity of the algorithms to numerical errors.
Lichen elements as pollution indicators: evaluation of methods for large monitoring programmes
Susan Will-Wolf; Sarah Jovan; Michael C. Amacher
2017-01-01
Lichen element content is a reliable indicator for relative air pollution load in research and monitoring programmes requiring both efficiency and representation of many sites. We tested the value of costly rigorous field and handling protocols for sample element analysis using five lichen species. No relaxation of rigour was supported; four relaxed protocols generated...
Review of the harvesting and extraction of advanced biofuels and bioproducts
Babette L. Marrone; Ronald E. Lacey; Daniel B. Anderson; James Bonner; Jim Coons; Taraka Dale; Cara Meghan Downes; Sandun Fernando; Christopher Fuller; Brian Goodall; Johnathan E. Holladay; Kiran Kadam; Daniel Kalb; Wei Liu; John B. Mott; Zivko Nikolov; Kimberly L. Ogden; Richard T. Sayre; Brian G. Trewyn; José A. Olivares
2017-01-01
Energy-efficient and scalable harvesting and lipid extraction processes must be developed in order for the algal biofuels and bioproducts industry to thrive. The major challenge for harvesting is the handling of large volumes of cultivation water to concentrate low amounts of biomass. For lipid extraction, the major energy and cost drivers are associated with...
Stirred suspension bioreactors as a novel method to enrich germ cells from pre-pubertal pig testis.
Dores, C; Rancourt, D; Dobrinski, I
2015-05-01
To study spermatogonial stem cells the heterogeneous testicular cell population first needs to be enriched for undifferentiated spermatogonia, which contain the stem cell population. When working with non-rodent models, this step requires working with large numbers of cells. Available cell separation methods rely on differential properties of testicular cell types such as expression of specific cell surface proteins, size, density, or differential adhesion to substrates to separate germ cells from somatic cells. The objective of this study was to develop an approach that allowed germ cell enrichment while providing efficiency of handling large cell numbers. Here, we report the use of stirred suspension bioreactors (SSB) to exploit the adhesion properties of Sertoli cells to enrich cells obtained from pre-pubertal porcine testes for undifferentiated spermatogonia. We also compared the bioreactor approach with an established differential plating method and the combination of both: SSB followed by differential plating. After 66 h of culture, germ cell enrichment in SSBs provided 7.3 ± 1.0-fold (n = 9), differential plating 9.8 ± 2.4-fold (n = 6) and combination of both methods resulted in 9.1 ± 0.3-fold enrichment of germ cells from the initial germ cell population (n = 3). To document functionality of cells recovered from the bioreactor, we demonstrated that cells retained their functional ability to reassemble seminiferous tubules de novo after grafting to mouse hosts and to support spermatogenesis. These results demonstrate that the SSB allows enrichment of germ cells in a controlled and scalable environment providing an efficient method when handling large cell numbers while reducing variability owing to handling. © 2015 American Society of Andrology and European Academy of Andrology.
New Insights into Handling Missing Values in Environmental Epidemiological Studies
Roda, Célina; Nicolis, Ioannis; Momas, Isabelle; Guihenneuc, Chantal
2014-01-01
Missing data are unavoidable in environmental epidemiologic surveys. The aim of this study was to compare methods for handling large amounts of missing values: omission of missing values, single and multiple imputations (through linear regression or partial least squares regression), and a fully Bayesian approach. These methods were applied to the PARIS birth cohort, where indoor domestic pollutant measurements were performed in a random sample of babies' dwellings. A simulation study was conducted to assess the performance of the different approaches with a high proportion of missing values (from 50% to 95%). Different simulation scenarios were carried out, controlling the true value of the association (odds ratio of 1.0, 1.2, and 1.4), and varying the health outcome prevalence. When a large amount of data is missing, omitting these missing data reduced statistical power and inflated standard errors, which affected the significance of the association. Single imputation underestimated the variability and considerably increased the risk of type I error. All approaches were conservative, except the Bayesian joint model. In the case of a common health outcome, the fully Bayesian approach is the most efficient approach (low root mean square error, reasonable type I error, and high statistical power). Nevertheless, for a less prevalent event, the type I error is increased and the statistical power is reduced. The estimated posterior distribution of the OR is useful to refine the conclusion. Among the methods handling missing values, no approach is absolutely the best, but when usual approaches (e.g. single imputation) are not sufficient, a joint modelling approach of the missingness process and the health association is more efficient when large amounts of data are missing. PMID:25226278
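As a hedged illustration of the difference between single and multiple imputation on simulated data (this is not the PARIS cohort analysis, nor the fully Bayesian joint model; scikit-learn is assumed to be available, and the data, missingness rate, and pooling are simplified), the snippet below draws several stochastic imputations and averages the fitted coefficients.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)
X_mis = X.copy()
X_mis[rng.random(X.shape) < 0.6] = np.nan        # heavy missingness in the exposures

# Multiple imputation: several stochastic imputations, one model fit per imputation,
# then a crude pooling of coefficients (Rubin's rules are simplified away here).
coefs = []
for m in range(10):
    imputer = IterativeImputer(sample_posterior=True, random_state=m)
    X_imp = imputer.fit_transform(X_mis)
    coefs.append(LogisticRegression().fit(X_imp, y).coef_[0])
coefs = np.array(coefs)
print("pooled estimate:", coefs.mean(axis=0).round(2))
print("between-imputation spread:", coefs.std(axis=0).round(2))
```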
Research on the precise positioning of customers in large data environment
NASA Astrophysics Data System (ADS)
Zhou, Xu; He, Lili
2018-04-01
Customer positioning has always been a problem that enterprises focus on. In this paper, the FCM clustering algorithm is used to cluster customer groups. However, the traditional FCM clustering algorithm is susceptible to the influence of the initial clustering centers and easily falls into local optima; this shortcoming of FCM is addressed with the grey wolf optimization algorithm (GWO) to achieve efficient and accurate handling of large amounts of retailer data.
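A minimal plain fuzzy c-means loop is sketched below to make the clustering step concrete; it initializes centers at random, which is exactly the weakness the paper targets, and it does not include the grey wolf optimizer. The cluster count, fuzzifier, and synthetic data are assumptions.

```python
import numpy as np

def fcm(X, n_clusters, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means with random initial centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_clusters, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-10
        u = d ** (-2.0 / (m - 1.0))                  # memberships before normalization
        u /= u.sum(axis=1, keepdims=True)
        um = u ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
    return centers, u

X = np.vstack([np.random.randn(200, 2) + c for c in ([0, 0], [5, 5], [0, 6])])
centers, memberships = fcm(X, n_clusters=3)
print(centers.round(2))                              # approximate cluster centers
```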
A Human Systems Integration Approach to Energy Efficiency in Ground Transportation
2015-12-01
…Caterpillar Corporation and the implementation and use of their telematics systems within a company called Granite Construction. Granite Construction… profit over 250 million dollars annually. In addition, similar to the USMC, Granite Construction handles both large and small scale projects in a…
Proposal for a Web Encoding Service (WES) for Spatial Data Transaction
NASA Astrophysics Data System (ADS)
Siew, C. B.; Peters, S.; Rahman, A. A.
2015-10-01
Web service utilization in Spatial Data Infrastructures (SDI) is well established and standardized by the Open Geospatial Consortium (OGC). Similar web services for 3D SDI have also been established in recent years, with extended capabilities to handle 3D spatial data. The increasing popularity of using the City Geography Markup Language (CityGML) for 3D city modelling applications leads to the need for handling large spatial data for data delivery. This paper revisits the available web services in the OGC Web Services (OWS) and proposes the background concepts and requirements for encoding spatial data via a Web Encoding Service (WES). Furthermore, the paper discusses the data flow of the encoder within the web service, e.g. possible integration with the Web Processing Service (WPS) or Web 3D Service (W3DS). The integration could be extended to other available web services for efficient handling of spatial data, especially 3D spatial data.
PLATSIM: An efficient linear simulation and analysis package for large-order flexible systems
NASA Technical Reports Server (NTRS)
Maghami, Periman; Kenny, Sean P.; Giesy, Daniel P.
1995-01-01
PLATSIM is a software package designed to provide efficient time and frequency domain analysis of large-order generic space platforms implemented with any linear time-invariant control system. Time domain analysis provides simulations of the overall spacecraft response levels due to either onboard or external disturbances. The time domain results can then be processed by the jitter analysis module to assess the spacecraft's pointing performance in a computationally efficient manner. The resulting jitter analysis algorithms have produced an increase in speed of several orders of magnitude over the brute force approach of sweeping minima and maxima. Frequency domain analysis produces frequency response functions for uncontrolled and controlled platform configurations. The latter represents an enabling technology for large-order flexible systems. PLATSIM uses a sparse matrix formulation for the spacecraft dynamics model which makes both the time and frequency domain operations quite efficient, particularly when a large number of modes are required to capture the true dynamics of the spacecraft. The package is written in MATLAB script language. A graphical user interface (GUI) is included in the PLATSIM software package. This GUI uses MATLAB's Handle graphics to provide a convenient way for setting simulation and analysis parameters.
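PLATSIM is a MATLAB package; the hedged Python sketch below only spells out the modal frequency-response formula that such a frequency-domain analysis evaluates, H(w) = sum_i q_out,i * q_in,i / (w_i^2 - w^2 + 2j*z_i*w_i*w), using invented modal data. A large-order model would differ only in having far longer (sparse) modal arrays.

```python
import numpy as np

def modal_frf(freqs_hz, wn, zeta, q_in, q_out):
    """Input/output frequency response of a flexible structure from modal data."""
    w = 2.0 * np.pi * np.asarray(freqs_hz)
    H = np.zeros(len(w), dtype=complex)
    for wi, zi, qi, qo in zip(wn, zeta, q_in, q_out):
        H += qo * qi / (wi**2 - w**2 + 2j * zi * wi * w)
    return H

freqs = np.linspace(0.1, 20.0, 500)                    # Hz
wn = 2.0 * np.pi * np.array([1.2, 4.7, 9.3])           # modal frequencies (rad/s)
zeta = np.array([0.005, 0.005, 0.01])                  # modal damping ratios
H = modal_frf(freqs, wn, zeta, q_in=[0.8, 0.3, 0.1], q_out=[0.5, 0.4, 0.2])
print(np.abs(H).max())                                 # peak response magnitude
```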
Wipe-rinse technique for quantitating microbial contamination on large surfaces.
Kirschner, L E; Puleo, J R
1979-01-01
The evaluation of an improved wipe-rinse technique for the bioassay of large areas was undertaken due to inherent inadequacies in the cotton swab-rinse technique to which assay of spacecraft is currently restricted. Four types of contamination control cloths were initially tested. A polyester-bonded cloth (PBC) was selected for further evaluation because of its superior efficiency and handling characteristics. Results from comparative tests with PBC and cotton swabs on simulated spacecraft surfaces indicated a significantly higher recovery efficiency for the PBC than for the cotton (90.4 versus 75.2%). Of the sampling area sites studied, PBC was found to be most effective on surface areas not exceeding 0.74 m² (8.0 ft²). PMID:394682
NASA Technical Reports Server (NTRS)
Youngquist, Robert; Starr, Stanley; Krenn, Angela; Captain, Janine; Williams, Martha
2016-01-01
The National Aeronautics and Space Administration (NASA) is a major user of liquid hydrogen. In particular, NASA's John F. Kennedy Space Center (KSC) has operated facilities for handling and storing very large quantities of liquid hydrogen (LH2) since the early 1960s. Safe operations pose unique challenges, and as a result NASA has invested in technology development to improve operational efficiency and safety. This paper reviews recent innovations, including methods of leak and fire detection and aspects of large storage tank health and integrity. We also discuss the use of liquid hydrogen in space and issues we are addressing to ensure safe and efficient operations should hydrogen be used as a propellant derived from in-situ volatiles.
Fast Semantic Segmentation of 3d Point Clouds with Strongly Varying Density
NASA Astrophysics Data System (ADS)
Hackel, Timo; Wegner, Jan D.; Schindler, Konrad
2016-06-01
We describe an effective and efficient method for point-wise semantic classification of 3D point clouds. The method can handle unstructured and inhomogeneous point clouds such as those derived from static terrestrial LiDAR or photogrammetric reconstruction; and it is computationally efficient, making it possible to process point clouds with many millions of points in a matter of minutes. The key issue, both to cope with strong variations in point density and to bring down computation time, turns out to be careful handling of neighborhood relations. By choosing appropriate definitions of a point's (multi-scale) neighborhood, we obtain a feature set that is both expressive and fast to compute. We evaluate our classification method both on benchmark data from a mobile mapping platform and on a variety of large, terrestrial laser scans with greatly varying point density. The proposed feature set outperforms the state of the art with respect to per-point classification accuracy, while at the same time being much faster to compute.
The Use of Molecular Oxygen in Pharmaceutical Manufacturing: Is Flow the Way to Go?
Hone, Christopher A; Roberge, Dominique M; Kappe, C Oliver
2017-01-10
Molecular oxygen is arguably the greenest reagent available to the organic chemist. Most commonly, a diluted form of oxygen gas, consisting of less than 10% O2 in N2 ("synthetic air"), is used in pharmaceutical and fine chemical batch manufacturing to effectively address safety concerns when handling molecular oxygen. Concentrations of O2 in N2 below 10% are generally required to prevent the risk of combustions in the presence of flammable organic solvents ("limiting oxygen concentration"). Nonetheless, the use of pure oxygen is more efficient than using O2 diluted with N2 and can often provide enhanced reaction rates, resulting in significant improvements in product quality and process efficiency. This Concept takes into account recent studies to make the argument that, for liquid-phase aerobic oxidations, pure oxygen can indeed be handled safely on large scale by employing continuous-flow reactors, while also providing highly convincing synthetic and manufacturing benefits. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Ridge Waveguide Structures in Magnesium-Doped Lithium Niobate
NASA Technical Reports Server (NTRS)
Himmer, Phillip; Battle, Philip; Suckow, William; Switzer, Greg
2011-01-01
This work proposes to establish the feasibility of fabricating isolated ridge waveguides in 5% MgO:LN. Ridge waveguides in MgO:LN will significantly improve power handling and conversion efficiency, increase photonic component integration, and be well suited to spacebased applications. The key innovation in this effort is to combine recently available large, high-photorefractive-damage-threshold, z-cut 5% MgO:LN with novel ridge fabrication techniques to achieve high-optical power, low-cost, high-volume manufacturing of frequency conversion structures. The proposed ridge waveguide structure should maintain the characteristics of the periodically poled bulk substrate, allowing for the efficient frequency conversion typical of waveguides and the high optical damage threshold and long lifetimes typical of the 5% doped bulk substrate. The low cost and large area of 5% MgO:LN wafers, and the improved performance of the proposed ridge waveguide structure, will enhance existing measurement capabilities as well as reduce the resources required to achieve high-performance specifications. The purpose of the ridge waveguides in MgO:LN is to provide platform technology that will improve optical power handling and conversion efficiency compared to existing waveguide technology. The proposed ridge waveguide is produced using standard microfabrication techniques. The approach is enabled by recent advances in inductively coupled plasma etchers and chemical mechanical planarization techniques. In conjunction with wafer bonding, this fabrication methodology can be used to create arbitrarily shaped waveguides allowing complex optical circuits to be engineered in nonlinear optical materials such as magnesium doped lithium niobate. Researchers here have identified NLO (nonlinear optical) ridge waveguide structures as having suitable value to be the leading frequency conversion structures. Its value is based on having the low-cost fabrication necessary to satisfy the challenging pricing requirements as well as achieve the power handling and other specifications in a suitably compact package.
Durham, Erin-Elizabeth A; Yu, Xiaxia; Harrison, Robert W
2014-12-01
Effective machine-learning handles large datasets efficiently. One key feature of handling large data is the use of databases such as MySQL. The freeware fuzzy decision tree induction tool, FDT, is a scalable supervised-classification software tool implementing fuzzy decision trees. It is based on an optimized fuzzy ID3 (FID3) algorithm. FDT 2.0 improves upon FDT 1.0 by bridging the gap between data science and data engineering: it combines a robust decisioning tool with data retention for future decisions, so that the tool does not need to be recalibrated from scratch every time a new decision is required. In this paper we briefly review the analytical capabilities of the freeware FDT tool and its major features and functionalities; examples of large biological datasets from HIV, microRNAs and sRNAs are included. This work shows how to integrate fuzzy decision algorithms with modern database technology. In addition, we show that integrating the fuzzy decision tree induction tool with database storage allows for optimal user satisfaction in today's Data Analytics world.
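FDT is a compiled tool with MySQL-backed data retention, which is not shown here; as a hedged sketch of the fuzzy ID3 core it is built on, the snippet below computes a fuzzy entropy and the membership-weighted information gain of one candidate split. The membership values and labels are invented.

```python
import numpy as np

def fuzzy_entropy(mu, labels):
    """Entropy of a fuzzy node: class proportions weighted by membership degrees."""
    total = mu.sum() + 1e-12
    ent = 0.0
    for c in np.unique(labels):
        p = mu[labels == c].sum() / total
        if p > 0:
            ent -= p * np.log2(p)
    return ent

def fuzzy_information_gain(node_mu, branch_mus, labels):
    """FID3-style gain: parent entropy minus membership-weighted child entropies."""
    total = node_mu.sum() + 1e-12
    children = sum((mu.sum() / total) * fuzzy_entropy(mu, labels) for mu in branch_mus)
    return fuzzy_entropy(node_mu, labels) - children

labels = np.array([0, 0, 1, 1, 1])
node_mu = np.ones(5)                               # full membership at the root
low = np.array([0.9, 0.8, 0.2, 0.1, 0.0])          # fuzzy "low" branch of one attribute
high = 1.0 - low                                   # complementary "high" branch
print(fuzzy_information_gain(node_mu, [node_mu * low, node_mu * high], labels))
```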
Accelerating root system phenotyping of seedlings through a computer-assisted processing pipeline.
Dupuy, Lionel X; Wright, Gladys; Thompson, Jacqueline A; Taylor, Anna; Dekeyser, Sebastien; White, Christopher P; Thomas, William T B; Nightingale, Mark; Hammond, John P; Graham, Neil S; Thomas, Catherine L; Broadley, Martin R; White, Philip J
2017-01-01
There are numerous systems and techniques to measure the growth of plant roots. However, phenotyping large numbers of plant roots for breeding and genetic analyses remains challenging. One major difficulty is to achieve high throughput and resolution at a reasonable cost per plant sample. Here we describe a cost-effective root phenotyping pipeline, on which we perform time and accuracy benchmarking to identify bottlenecks in such pipelines and strategies for their acceleration. Our root phenotyping pipeline was assembled with custom software and low-cost materials and equipment. Results show that sample preparation and handling of samples during screening are the most time-consuming tasks in root phenotyping. Algorithms can be used to speed up the extraction of root traits from image data, but when applied to large numbers of images, there is a trade-off between time of processing the data and errors contained in the database. Scaling-up root phenotyping to large numbers of genotypes will require not only automation of sample preparation and sample handling, but also efficient algorithms for error detection for more reliable replacement of manual interventions.
Sarment: Python modules for HMM analysis and partitioning of sequences.
Guéguen, Laurent
2005-08-15
Sarment is a package of Python modules for easy building and manipulation of sequence segmentations. It provides efficient implementation of usual algorithms for hidden Markov Model computation, as well as for maximal predictive partitioning. Owing to its very large variety of criteria for computing segmentations, Sarment can handle many kinds of models. Because of object-oriented programming, the results of the segmentation are very easy to manipulate.
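Sarment ships its own optimized implementations and partitioning criteria; purely as a hedged reminder of the kind of computation involved, the snippet below is a generic log-space Viterbi decoder for a two-state HMM over a DNA-like alphabet, with made-up parameters.

```python
import numpy as np

def viterbi(obs, log_start, log_trans, log_emit):
    """Most likely hidden-state path for a sequence of observation indices."""
    T, n_states = len(obs), log_start.shape[0]
    delta = np.full((T, n_states), -np.inf)
    back = np.zeros((T, n_states), dtype=int)
    delta[0] = log_start + log_emit[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_trans      # scores[i, j]: from i to j
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_emit[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Two hidden states (e.g. AT-rich vs GC-rich); observations coded 0..3 for A, C, G, T.
log_start = np.log([0.5, 0.5])
log_trans = np.log([[0.95, 0.05], [0.05, 0.95]])
log_emit = np.log([[0.4, 0.1, 0.1, 0.4], [0.1, 0.4, 0.4, 0.1]])
print(viterbi([0, 3, 0, 3, 1, 2, 2, 1, 2, 1], log_start, log_trans, log_emit))
```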
Dynamic stability and handling qualities tests on a highly augmented, statically unstable airplane
NASA Technical Reports Server (NTRS)
Gera, Joseph; Bosworth, John T.
1987-01-01
Initial envelope clearance and subsequent flight testing of a new, fully augmented airplane with an extremely high degree of static instability can place unusual demands on the flight test approach. Previous flight test experience with these kinds of airplanes is very limited or nonexistent. The safe and efficient flight testing may be further complicated by a multiplicity of control effectors that may be present on this class of airplanes. This paper describes some novel flight test and analysis techniques in the flight dynamics and handling qualities area. These techniques were utilized during the initial flight envelope clearance of the X-29A aircraft and were largely responsible for the completion of the flight controls clearance program without any incidents or significant delays.
Mynodbcsv: lightweight zero-config database solution for handling very large CSV files.
Adaszewski, Stanisław
2014-01-01
Volumes of data used in science and industry are growing rapidly. When researchers face the challenge of analyzing them, their format is often the first obstacle. Lack of standardized ways of exploring different data layouts requires an effort each time to solve the problem from scratch. Possibility to access data in a rich, uniform manner, e.g. using Structured Query Language (SQL) would offer expressiveness and user-friendliness. Comma-separated values (CSV) are one of the most common data storage formats. Despite its simplicity, with growing file size handling it becomes non-trivial. Importing CSVs into existing databases is time-consuming and troublesome, or even impossible if its horizontal dimension reaches thousands of columns. Most databases are optimized for handling large number of rows rather than columns, therefore, performance for datasets with non-typical layouts is often unacceptable. Other challenges include schema creation, updates and repeated data imports. To address the above-mentioned problems, I present a system for accessing very large CSV-based datasets by means of SQL. It's characterized by: "no copy" approach--data stay mostly in the CSV files; "zero configuration"--no need to specify database schema; written in C++, with boost [1], SQLite [2] and Qt [3], doesn't require installation and has very small size; query rewriting, dynamic creation of indices for appropriate columns and static data retrieval directly from CSV files ensure efficient plan execution; effortless support for millions of columns; due to per-value typing, using mixed text/numbers data is easy; very simple network protocol provides efficient interface for MATLAB and reduces implementation time for other languages. The software is available as freeware along with educational videos on its website [4]. It doesn't need any prerequisites to run, as all of the libraries are included in the distribution package. I test it against existing database solutions using a battery of benchmarks and discuss the results.
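Mynodbcsv serves SQL straight from the CSV files without copying them; the hedged Python sketch below does make a copy (into an in-memory SQLite table), so it only illustrates the convenience of an SQL interface over CSV data, not the tool's no-copy design. The file name, table name, and column names are assumptions.

```python
import csv
import sqlite3

def query_csv(path, sql, table="t"):
    """Load a CSV into an in-memory SQLite table and run one SQL query against it."""
    con = sqlite3.connect(":memory:")
    with open(path, newline="") as fh:
        reader = csv.reader(fh)
        header = next(reader)
        cols = ", ".join(f'"{c}"' for c in header)
        con.execute(f"CREATE TABLE {table} ({cols})")
        marks = ", ".join("?" * len(header))
        con.executemany(f"INSERT INTO {table} VALUES ({marks})", reader)
    return con.execute(sql).fetchall()

# Hypothetical file and columns:
rows = query_csv("measurements.csv",
                 'SELECT "subject", AVG("value") FROM t GROUP BY "subject"')
print(rows)
```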
Method of preparing and handling chopped plant materials
Bransby, David I.
2002-11-26
The method improves efficiency of harvesting, storage, transport, and feeding of dry plant material to animals, and is a more efficient method for harvesting, handling and transporting dry plant material for industrial purposes, such as for production of bioenergy, and composite panels.
Large-Scale Compute-Intensive Analysis via a Combined In-situ and Co-scheduling Workflow Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Messer, Bronson; Sewell, Christopher; Heitmann, Katrin
2015-01-01
Large-scale simulations can produce tens of terabytes of data per analysis cycle, complicating and limiting the efficiency of workflows. Traditionally, outputs are stored on the file system and analyzed in post-processing. With the rapidly increasing size and complexity of simulations, this approach faces an uncertain future. Trending techniques consist of performing the analysis in situ, utilizing the same resources as the simulation, and/or off-loading subsets of the data to a compute-intensive analysis system. We introduce an analysis framework developed for HACC, a cosmological N-body code, that uses both in situ and co-scheduling approaches for handling petabyte-size outputs. An initial in situ step is used to reduce the amount of data to be analyzed, and to separate out the data-intensive tasks handled off-line. The analysis routines are implemented using the PISTON/VTK-m framework, allowing a single implementation of an algorithm that simultaneously targets a variety of GPU, multi-core, and many-core architectures.
Traffic sharing algorithms for hybrid mobile networks
NASA Technical Reports Server (NTRS)
Arcand, S.; Murthy, K. M. S.; Hafez, R.
1995-01-01
In a hybrid (terrestrial + satellite) mobile personal communications network environment, a large satellite footprint (supercell) overlays a large number of smaller, contiguous terrestrial cells. We assume that the users have either a terrestrial-only single mode terminal (SMT) or a terrestrial/satellite dual mode terminal (DMT), and the ratio of DMTs to the total number of terminals is defined as gamma. It is assumed that the call assignments to, and handovers between, terrestrial cells and satellite supercells take place in a dynamic fashion when necessary. The objectives of this paper are twofold: (1) to propose and define a class of traffic sharing algorithms to manage terrestrial and satellite network resources efficiently by handling call handovers dynamically, and (2) to analyze and evaluate the algorithms by maximizing the traffic load handling capability (defined in erl/cell) over a wide range of terminal ratios (gamma) given an acceptable range of blocking probabilities. Two of the algorithms (G & S) in the proposed class perform extremely well for a wide range of gamma.
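The capacity figures quoted in erl/cell relate offered traffic to an acceptable blocking probability through the standard Erlang B formula (a textbook result, not specific to this paper); a small Python sketch of the numerically stable recursion is:

    def erlang_b(traffic_erlangs, channels):
        """Blocking probability for 'traffic_erlangs' of offered load on
        'channels' servers, computed with the stable Erlang B recursion."""
        b = 1.0
        for m in range(1, channels + 1):
            b = (traffic_erlangs * b) / (m + traffic_erlangs * b)
        return b

    # Example: how much load a 30-channel cell can carry near 2% blocking.
    for load in (18.0, 20.0, 22.0):
        print(load, round(erlang_b(load, 30), 4))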
Ding, Xiuhua; Su, Shaoyong; Nandakumar, Kannabiran; Wang, Xiaoling; Fardo, David W
2014-01-01
Large-scale genetic studies are often composed of related participants, and utilizing familial relationships can be cumbersome and computationally challenging. We present an approach to efficiently handle sequencing data from complex pedigrees that incorporates information from rare variants as well as common variants. Our method employs a 2-step procedure that sequentially regresses out correlation from familial relatedness and then uses the resulting phenotypic residuals in a penalized regression framework to test for associations with variants within genetic units. The operating characteristics of this approach are detailed using simulation data based on a large, multigenerational cohort.
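A minimal sketch of that two-step structure is shown below in Python: step 1 regresses the phenotype on covariates standing in for the relatedness adjustment and keeps the residuals (the published method uses a proper mixed-model/kinship adjustment here), and step 2 runs a penalized (lasso) regression of those residuals on all variants in a genetic unit. All data and variable names are synthetic.

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n, p = 200, 500                       # individuals, variants in one genetic unit
    G = rng.binomial(2, 0.05, size=(n, p)).astype(float)   # hypothetical genotypes
    C = rng.normal(size=(n, 3))           # covariates standing in for relatedness
    y = G[:, 0] * 0.8 + C @ np.array([0.5, -0.3, 0.2]) + rng.normal(size=n)

    # Step 1: regress the phenotype on the relatedness covariates, keep residuals.
    # (The published method uses a mixed model with a pedigree/kinship matrix;
    #  plain least squares is only a simplified stand-in.)
    X1 = np.column_stack([np.ones(n), C])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    residuals = y - X1 @ beta

    # Step 2: penalized regression of the residuals on the variants in the unit.
    model = Lasso(alpha=0.05).fit(G, residuals)
    print("variants with non-zero effect:", np.flatnonzero(model.coef_))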
NASA Astrophysics Data System (ADS)
Thakur, Jay Krishna; Singh, Sudhir Kumar; Ekanthalu, Vicky Shettigondahalli
2017-07-01
Integration of remote sensing (RS), geographic information systems (GIS) and global positioning system (GPS) are emerging research areas in the field of groundwater hydrology, resource management, environmental monitoring and during emergency response. Recent advancements in the fields of RS, GIS, GPS and higher level of computation will help in providing and handling a range of data simultaneously in a time- and cost-efficient manner. This review paper deals with hydrological modeling, uses of remote sensing and GIS in hydrological modeling, models of integrations and their need and in last the conclusion. After dealing with these issues conceptually and technically, we can develop better methods and novel approaches to handle large data sets and in a better way to communicate information related with rapidly decreasing societal resources, i.e. groundwater.
2017-10-26
1 FINAL REPORT Converting Constant Volume, Multizone Air Handling Systems to Energy Efficient Variable Air Volume Multizone...Systems Energy and Water Projects Project Number: EW-201152 ERDC-CERL 26 October 2017 2 TABLE OF CONTENTS ACKNOWLEDGEMENTS...16 3.2.1 Energy Usage (Quantitative
BLESS 2: accurate, memory-efficient and fast error correction method.
Heo, Yun; Ramachandran, Anand; Hwu, Wen-Mei; Ma, Jian; Chen, Deming
2016-08-01
The most important features of error correction tools for sequencing data are accuracy, memory efficiency and fast runtime. The previous version of BLESS was highly memory-efficient and accurate, but it was too slow to handle reads from large genomes. We have developed a new version of BLESS to improve runtime and accuracy while maintaining a small memory usage. The new version, called BLESS 2, has an error correction algorithm that is more accurate than BLESS, and the algorithm has been parallelized using hybrid MPI and OpenMP programming. BLESS 2 was compared with five top-performing tools, and it was found to be the fastest when it was executed on two computing nodes using MPI, with each node containing twelve cores. Also, BLESS 2 showed at least 11% higher gain while retaining the memory efficiency of the previous version for large genomes. Freely available at https://sourceforge.net/projects/bless-ec. Contact: dchen@illinois.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
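BLESS-family correctors decide whether a k-mer is "solid" (observed often enough to be trusted) with a Bloom filter; the Python sketch below shows only that membership structure, with illustrative sizes and hash choices rather than the actual BLESS 2 implementation.

    import hashlib

    class BloomFilter:
        """Tiny Bloom filter for k-mer membership; sizes are illustrative."""
        def __init__(self, n_bits=1 << 20, n_hashes=4):
            self.n_bits = n_bits
            self.n_hashes = n_hashes
            self.bits = bytearray(n_bits // 8)

        def _positions(self, item):
            for i in range(self.n_hashes):
                h = hashlib.blake2b(item.encode(), digest_size=8, salt=bytes([i])).digest()
                yield int.from_bytes(h, "little") % self.n_bits

        def add(self, item):
            for pos in self._positions(item):
                self.bits[pos // 8] |= 1 << (pos % 8)

        def __contains__(self, item):
            return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

    def solid_kmers(reads, k=5, min_count=2):
        """Count k-mers exactly here for brevity; a real corrector streams them
        through probabilistic structures to stay memory-efficient."""
        counts = {}
        for read in reads:
            for i in range(len(read) - k + 1):
                kmer = read[i:i + k]
                counts[kmer] = counts.get(kmer, 0) + 1
        bf = BloomFilter()
        for kmer, c in counts.items():
            if c >= min_count:
                bf.add(kmer)
        return bf

    bf = solid_kmers(["ACGTACGTAC", "ACGTACGTAC", "ACGTACGAAC"])
    print("ACGTA" in bf, "AAAAA" in bf)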
Design and deployment of a large brain-image database for clinical and nonclinical research
NASA Astrophysics Data System (ADS)
Yang, Guo Liang; Lim, Choie Cheio Tchoyoson; Banukumar, Narayanaswami; Aziz, Aamer; Hui, Francis; Nowinski, Wieslaw L.
2004-04-01
An efficient database is an essential component of organizing diverse information on image metadata and patient information for research in medical imaging. This paper describes the design, development and deployment of a large database system serving as a brain image repository that can be used across different platforms in various medical researches. It forms the infrastructure that links hospitals and institutions together and shares data among them. The database contains patient-, pathology-, image-, research- and management-specific data. The functionalities of the database system include image uploading, storage, indexing, downloading and sharing as well as database querying and management with security and data anonymization concerns well taken care of. The structure of database is multi-tier client-server architecture with Relational Database Management System, Security Layer, Application Layer and User Interface. Image source adapter has been developed to handle most of the popular image formats. The database has a user interface based on web browsers and is easy to handle. We have used Java programming language for its platform independency and vast function libraries. The brain image database can sort data according to clinically relevant information. This can be effectively used in research from the clinicians' points of view. The database is suitable for validation of algorithms on large population of cases. Medical images for processing could be identified and organized based on information in image metadata. Clinical research in various pathologies can thus be performed with greater efficiency and large image repositories can be managed more effectively. The prototype of the system has been installed in a few hospitals and is working to the satisfaction of the clinicians.
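The separation into patient-, pathology-, image- and management-specific data suggests a conventional relational layout; a toy version (SQLite driven from Python, with hypothetical table and column names rather than the deployed schema) might look like this:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE patient (
        patient_id   INTEGER PRIMARY KEY,
        anon_code    TEXT UNIQUE NOT NULL,    -- anonymized identifier, no names stored
        birth_year   INTEGER,
        sex          TEXT
    );
    CREATE TABLE pathology (
        pathology_id INTEGER PRIMARY KEY,
        patient_id   INTEGER REFERENCES patient(patient_id),
        diagnosis    TEXT,
        icd_code     TEXT
    );
    CREATE TABLE image (
        image_id     INTEGER PRIMARY KEY,
        patient_id   INTEGER REFERENCES patient(patient_id),
        modality     TEXT,                    -- e.g. MR, CT
        acquired_on  TEXT,
        file_uri     TEXT                     -- pointer to the external image file
    );
    CREATE INDEX idx_image_modality ON image(modality, acquired_on);
    """)

    # A clinically oriented query: all MR images for patients with a given diagnosis.
    rows = conn.execute("""
        SELECT i.file_uri FROM image i
        JOIN pathology p ON p.patient_id = i.patient_id
        WHERE p.diagnosis = ? AND i.modality = 'MR'
    """, ("glioma",)).fetchall()
    print(rows)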
eCOMPAGT – efficient Combination and Management of Phenotypes and Genotypes for Genetic Epidemiology
Schönherr, Sebastian; Weißensteiner, Hansi; Coassin, Stefan; Specht, Günther; Kronenberg, Florian; Brandstätter, Anita
2009-01-01
Background High-throughput genotyping and phenotyping projects of large epidemiological study populations require sophisticated laboratory information management systems. Most epidemiological studies include subject-related personal information, which needs to be handled with care by following data privacy protection guidelines. In addition, genotyping core facilities handling cooperative projects require a straightforward solution to monitor the status and financial resources of the different projects. Description We developed a database system for an efficient combination and management of phenotypes and genotypes (eCOMPAGT) deriving from genetic epidemiological studies. eCOMPAGT securely stores and manages genotype and phenotype data and enables different user modes with different rights. Special attention was paid to the import of data deriving from TaqMan and SNPlex genotyping assays. However, the database solution is adjustable to other genotyping systems by programming additional interfaces. Further important features are the scalability of the database and an export interface to statistical software. Conclusion eCOMPAGT can store, administer and connect phenotype data with all kinds of genotype data and is available as a downloadable version at . PMID:19432954
HIGH EFFICIENCY STRUCTURAL FLOWTHROUGH ROTOR WITH ACTIVE FLAP CONTROL: VOLUME THREE: MARKET & TEAM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zuteck, Michael D.; Jackson, Kevin L.; Santos, Richard A.
The Zimitar one-piece rotor primary structure is integrated, so balanced thrust and gravity loads flow through the hub region without transferring out of its composite material. Large inner rotor geometry is used since there is no need to neck down to a blade root region and pitch bearing. Rotor control is provided by a highly redundant, five flap system on each blade, sized so that easily handled standard electric linear actuators are sufficient.
Comparative visualization of genetic and physical maps with Strudel.
Bayer, Micha; Milne, Iain; Stephen, Gordon; Shaw, Paul; Cardle, Linda; Wright, Frank; Marshall, David
2011-05-01
Data visualization can play a key role in comparative genomics, for example, underpinning the investigation of conserved synteny patterns. Strudel is a desktop application that allows users to easily compare both genetic and physical maps interactively and efficiently. It can handle large datasets from several genomes simultaneously, and allows all-by-all comparisons between these. Installers for Strudel are available for Windows, Linux, Solaris and Mac OS X at http://bioinf.scri.ac.uk/strudel/.
Security and Efficiency Concerns With Distributed Collaborative Networking Environments
2003-09-01
Users have the ability to access Web communications services of the WebEx MediaTone Network from a single login [24]. WebEx provides a range of secure services over the Web: WebEx services enable secure data, voice and video communications through the browser and are supported by the WebEx MediaTone Network, a global network designed to host large-scale, structured events and conferences, featuring a Q&A Manager that allows multiple moderators to handle questions.
Delegating work in primary care: a false ideal?
Dobson, Gregory; Pinker, Edieal J
2009-01-01
Primary care physicians are advised to delegate as much work as possible to support staff enabling them to serve larger patient panels and handle more patient visits, and thus generate more revenue. We explain that this advice is based on several fallacies and show evidence that dividing work processes among different types of support staff actually reduces productivity and profitability of primary care practices. We conclude that the efficient operation of large practices requires sophisticated practice management skills.
Large Margin Multi-Modal Multi-Task Feature Extraction for Image Classification.
Yong Luo; Yonggang Wen; Dacheng Tao; Jie Gui; Chao Xu
2016-01-01
The features used in many image analysis-based applications are frequently of very high dimension. Feature extraction offers several advantages in high-dimensional cases, and many recent studies have used multi-task feature extraction approaches, which often outperform single-task feature extraction approaches. However, most of these methods are limited in that they only consider data represented by a single type of feature, even though features usually represent images from multiple modalities. We, therefore, propose a novel large margin multi-modal multi-task feature extraction (LM3FE) framework for handling multi-modal features for image classification. In particular, LM3FE simultaneously learns the feature extraction matrix for each modality and the modality combination coefficients. In this way, LM3FE not only handles correlated and noisy features, but also utilizes the complementarity of different modalities to further help reduce feature redundancy in each modality. The large margin principle employed also helps to extract strongly predictive features, so that they are more suitable for prediction (e.g., classification). An alternating algorithm is developed for problem optimization, and each subproblem can be efficiently solved. Experiments on two challenging real-world image data sets demonstrate the effectiveness and superiority of the proposed method.
2017-10-26
Front-matter fragment from the final report Converting Constant Volume, Multizone Air Handling Systems to Energy Efficient Variable Air Volume Multizone Systems: Figure 30, Energy Information Agency Natural Gas Price Data, by market sector (residential, commercial, and industrial).
FastMag: Fast micromagnetic simulator for complex magnetic structures (invited)
NASA Astrophysics Data System (ADS)
Chang, R.; Li, S.; Lubarda, M. V.; Livshitz, B.; Lomakin, V.
2011-04-01
A fast micromagnetic simulator (FastMag) for general problems is presented. FastMag solves the Landau-Lifshitz-Gilbert equation and can handle multiscale problems with a high computational efficiency. The simulator derives its high performance from efficient methods for evaluating the effective field and from implementations on massively parallel graphics processing unit (GPU) architectures. FastMag discretizes the computational domain into tetrahedral elements and therefore is highly flexible for general problems. The magnetostatic field is computed via the superposition principle for both volume and surface parts of the computational domain. This is accomplished by implementing efficient quadrature rules and analytical integration for overlapping elements in which the integral kernel is singular. Thus, discretized superposition integrals are computed using a nonuniform grid interpolation method, which evaluates the field from N sources at N collocated observers in O(N) operations. This approach allows handling objects of arbitrary shape, allows easy calculation of the field outside the magnetized domains, does not require solving a linear system of equations, and requires little memory. FastMag is implemented on GPUs, with GPU over central processing unit (CPU) speed-ups of two orders of magnitude. Simulations are shown of a large array of magnetic dots and a recording head fully discretized down to the exchange length, with over a hundred million tetrahedral elements on an inexpensive desktop computer.
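For reference, the Landau-Lifshitz-Gilbert equation that FastMag integrates can be written in its standard Gilbert form (with gyromagnetic ratio gamma, damping constant alpha, saturation magnetization M_s and effective field H_eff):

    \frac{\partial \mathbf{M}}{\partial t} = -\gamma\, \mathbf{M} \times \mathbf{H}_{\mathrm{eff}} + \frac{\alpha}{M_s}\, \mathbf{M} \times \frac{\partial \mathbf{M}}{\partial t}

The effective field H_eff collects the exchange, anisotropy, magnetostatic and applied-field contributions, and the magnetostatic part is what the nonuniform grid interpolation method above evaluates in O(N) operations.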
DOE Office of Scientific and Technical Information (OSTI.GOV)
T. Burgess; M. Noakes; P. Spampinato
This paper presents an evaluation of robotics and remote handling technologies that have the potential to increase the efficiency of handling waste packages at the proposed Yucca Mountain High-Level Nuclear Waste Repository. It is expected that increased efficiency will reduce the cost of operations. The goal of this work was to identify technologies for consideration as potential projects that the U.S. Department of Energy Office of Civilian Radioactive Waste Management, Office of Science and Technology International Programs, could support in the near future, and to assess their ''payback'' value. The evaluation took into account the robotics and remote handling capabilities planned for incorporation into the current baseline design for the repository, for both surface and subsurface operations. The evaluation, completed at the end of fiscal year 2004, identified where significant advantages in operating efficiencies could accrue by implementing any given robotics technology or approach, and included a road map for a multiyear R&D program for improvements to remote handling technology that support operating enhancements.
High-frequency asymptotic methods for analyzing the EM scattering by open-ended waveguide cavities
NASA Technical Reports Server (NTRS)
Burkholder, R. J.; Pathak, P. H.
1989-01-01
Four high-frequency methods are described for analyzing the electromagnetic (EM) scattering by electrically large open-ended cavities. They are: (1) a hybrid combination of waveguide modal analysis and high-frequency asymptotics, (2) geometrical optics (GO) ray shooting, (3) Gaussian beam (GB) shooting, and (4) the generalized ray expansion (GRE) method. The hybrid modal method gives very accurate results but is limited to cavities which are made up of sections of uniform waveguides for which the modal fields are known. The GO ray shooting method can be applied to much more arbitrary cavity geometries and can handle absorber treated interior walls, but it generally only predicts the major trends of the RCS pattern and not the details. Also, a very large number of rays need to be tracked for each new incidence angle. Like the GO ray shooting method, the GB shooting method can handle more arbitrary cavities, but it is much more efficient and generally more accurate than the GO method because it includes the fields diffracted by the rim at the open end which enter the cavity. However, due to beam divergence effects the GB method is limited to cavities which are not very long compared to their width. The GRE method overcomes the length-to-width limitation of the GB method by replacing the GB's with GO ray tubes which are launched in the same manner as the GB's to include the interior rim diffracted field. This method gives good accuracy and is generally more efficient than the GO method, but a large number of ray tubes needs to be tracked.
A method for data handling numerical results in parallel OpenFOAM simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anton, Alin; Muntean, Sebastian
Parallel computational fluid dynamics simulations produce vast amounts of numerical result data. This paper introduces a method for reducing the size of the data by replaying the interprocessor traffic. The results are recovered only in certain regions of interest configured by the user. A known test case is used for several mesh partitioning scenarios using the OpenFOAM® toolkit [1]. The space savings obtained with classic algorithms remain constant for more than 60 Gb of floating point data. Our method is most efficient on large simulation meshes and is much better suited for compressing large-scale simulation results than the regular algorithms.
Technology Assessment for Large Vertical-Lift Transport Tiltrotors
NASA Technical Reports Server (NTRS)
Germanowski, Peter J.; Stille, Brandon L.; Strauss, Michael P.
2010-01-01
The technical community has identified rotor efficiency as a critical enabling technology for large vertical-lift transport (LVLT) rotorcraft. The size and performance of LVLT aircraft will be far beyond current aircraft capabilities, enabling a transformational change in cargo transport effectiveness. Two candidate approaches for achieving high efficiency were considered for LVLT applications: a variable-diameter tiltrotor (VDTR) and a variable-speed tiltrotor (VSTR); the former utilizes variable-rotor geometry and the latter utilizes variable-rotor speed. Conceptual aircraft designs were synthesized for the VDTR and VSTR and compared to a conventional tiltrotor (CTR). The aircraft were optimized to a common objective function and bounded by a set of physical- and requirements-driven constraints. The resulting aircraft were compared for weight, size, performance, handling qualities, and other attributes. These comparisons established a measure of the relative merits of the variable-diameter and -speed rotor systems as enabling technologies for LVLT capability.
An Adjoint-Based Approach to Study a Flexible Flapping Wing in Pitching-Rolling Motion
NASA Astrophysics Data System (ADS)
Jia, Kun; Wei, Mingjun; Xu, Min; Li, Chengyu; Dong, Haibo
2017-11-01
Flapping-wing aerodynamics, with advantages in agility, efficiency, and hovering capability, has been the choice of many flyers in nature. However, the study of bio-inspired flapping-wing propulsion is often hindered by the problem's large control space with different wing kinematics and deformation. The adjoint-based approach reduces largely the computational cost to a feasible level by solving an inverse problem. Facing the complication from moving boundaries, non-cylindrical calculus provides an easy extension of traditional adjoint-based approach to handle the optimization involving moving boundaries. The improved adjoint method with non-cylindrical calculus for boundary treatment is first applied on a rigid pitching-rolling plate, then extended to a flexible one with active deformation to further increase its propulsion efficiency. The comparison of flow dynamics with the initial and optimal kinematics and deformation provides a unique opportunity to understand the flapping-wing mechanism. Supported by AFOSR and ARL.
Report of the ultraviolet and visible sensors panel
NASA Technical Reports Server (NTRS)
Timothy, J. Gethyn; Blouke, M.; Bredthauer, R.; Kimble, R.; Lee, T.-H.; Lesser, M.; Siegmund, O.; Weckler, G.
1991-01-01
In order to meet the science objectives of the Astrotech 21 mission set, the Ultraviolet (UV) and Visible Sensors Panel made a number of recommendations. In the UV wavelength range of 0.01 to 0.3 μm the focus is on the need for large-format, high-quantum-efficiency, radiation-hard 'solar-blind' detectors. Options recommended for support include Si and non-Si charge coupled devices (CCDs) as well as photocathodes with improved microchannel plate readouts. For the 0.3 to 0.9 μm range, it was felt that Si CCDs offer the best option for high quantum efficiencies at these wavelengths. In the 0.9 to 2.5 μm range the panel recommended support for the investigation of monolithic arrays. Finally, the panel noted that the implementation of very large arrays will require new data transmission, data recording, and data handling technologies.
Vogel, Curtis R; Yang, Qiang
2006-08-21
We present two different implementations of the Fourier domain preconditioned conjugate gradient algorithm (FD-PCG) to efficiently solve the large structured linear systems that arise in optimal volume turbulence estimation, or tomography, for multi-conjugate adaptive optics (MCAO). We describe how to deal with several critical technical issues, including the cone coordinate transformation problem and sensor subaperture grid spacing. We also extend the FD-PCG approach to handle the deformable mirror fitting problem for MCAO.
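As a generic illustration of the preconditioned conjugate gradient structure only (not the MCAO reconstructor itself), the Python sketch below solves a circulant-plus-diagonal system and applies a Fourier-domain preconditioner with the FFT; the operators and sizes are invented for the example.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 256
    kernel = np.exp(-0.1 * np.minimum(np.arange(n), n - np.arange(n)))  # circulant part
    diag = 0.05 * (1.0 + np.linspace(0.0, 1.0, n))                      # non-circulant part
    kernel_hat = np.fft.fft(kernel)

    def apply_A(x):
        """A @ x for A = C + D, with the circulant C applied via the FFT."""
        return np.real(np.fft.ifft(kernel_hat * np.fft.fft(x))) + diag * x

    def apply_M_inv(r):
        """Fourier-domain preconditioner: spectral inverse of C plus an averaged diagonal."""
        return np.real(np.fft.ifft(np.fft.fft(r) / (kernel_hat + diag.mean())))

    def pcg(b, tol=1e-8, max_iter=200):
        x = np.zeros_like(b)
        r = b - apply_A(x)
        z = apply_M_inv(r)
        p = z.copy()
        rz = r @ z
        for it in range(max_iter):
            Ap = apply_A(p)
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol * np.linalg.norm(b):
                break
            z = apply_M_inv(r)
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x, it + 1

    b = rng.normal(size=n)
    x, iters = pcg(b)
    print("iterations:", iters, " residual:", np.linalg.norm(apply_A(x) - b))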
Spectral Collocation Time-Domain Modeling of Diffractive Optical Elements
NASA Astrophysics Data System (ADS)
Hesthaven, J. S.; Dinesen, P. G.; Lynov, J. P.
1999-11-01
A spectral collocation multi-domain scheme is developed for the accurate and efficient time-domain solution of Maxwell's equations within multi-layered diffractive optical elements. Special attention is being paid to the modeling of out-of-plane waveguide couplers. Emphasis is given to the proper construction of high-order schemes with the ability to handle very general problems of considerable geometric and material complexity. Central questions regarding efficient absorbing boundary conditions and time-stepping issues are also addressed. The efficacy of the overall scheme for the time-domain modeling of electrically large, and computationally challenging, problems is illustrated by solving a number of plane as well as non-plane waveguide problems.
Efficient halal bleeding, animal handling, and welfare: A holistic approach for meat quality.
Aghwan, Z A; Bello, A U; Abubakar, A A; Imlan, J C; Sazili, A Q
2016-11-01
Traditional halal slaughter and other forms of religious slaughter are still an issue of debate. Opposing arguments related to pre-slaughter handling, stress and pain associated with restraint, whether the incision is painful or not, and the onset of unconsciousness have been put forward, but no consensus has been achieved. There is a need to strike a balance between halal bleeding in the light of science and animal welfare. There is a paucity of scientific data with respect to animal welfare, particularly the use of restraining devices, animal handling, and efficient halal bleeding. However, this review found that competent handling of animals, proper use of restraining devices, and the efficient bleeding process that follows halal slaughter maintains meat eating quality. In conclusion, halal bleeding, when carried out in accordance with recommended animal welfare procedures, will not only maintain the quality and wholesomeness of meat but could also potentially reduce suffering and pain. Maintained meat quality increases consumer satisfaction and food safety. Copyright © 2016. Published by Elsevier Ltd.
An abstraction layer for efficient memory management of tabulated chemistry and flamelet solutions
NASA Astrophysics Data System (ADS)
Weise, Steffen; Messig, Danny; Meyer, Bernd; Hasse, Christian
2013-06-01
A large number of methods for simulating reactive flows exist, some of them, for example, directly use detailed chemical kinetics or use precomputed and tabulated flame solutions. Both approaches couple the research fields computational fluid dynamics and chemistry tightly together using either an online or offline approach to solve the chemistry domain. The offline approach usually involves a method of generating databases or so-called Lookup-Tables (LUTs). As these LUTs are extended to not only contain material properties but interactions between chemistry and turbulent flow, the number of parameters and thus dimensions increases. Given a reasonable discretisation, file sizes can increase drastically. The main goal of this work is to provide methods that handle large database files efficiently. A Memory Abstraction Layer (MAL) has been developed that handles requested LUT entries efficiently by splitting the database file into several smaller blocks. It keeps the total memory usage at a minimum using thin allocation methods and compression to minimise filesystem operations. The MAL has been evaluated using three different test cases. The first rather generic one is a sequential reading operation on an LUT to evaluate the runtime behaviour as well as the memory consumption of the MAL. The second test case is a simulation of a non-premixed turbulent flame, the so-called HM1 flame, which is a well-known test case in the turbulent combustion community. The third test case is a simulation of a non-premixed laminar flame as described by McEnally in 1996 and Bennett in 2000. Using the previously developed solver 'flameletFoam' in conjunction with the MAL, memory consumption and the performance penalty introduced were studied. The total memory used while running a parallel simulation was reduced significantly while the CPU time overhead associated with the MAL remained low.
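The MAL implementation is not given in the abstract; the Python sketch below illustrates its central idea of splitting a large table into fixed-size blocks that are loaded lazily and kept in a small least-recently-used cache, so only the requested blocks occupy memory. The file layout, sizes and names are invented for the example.

    import numpy as np
    from collections import OrderedDict

    class BlockedLUT:
        """Lazily loads fixed-size blocks of a large table stored row-wise on disk."""
        def __init__(self, path, n_rows, n_cols, rows_per_block=4096, max_cached=8):
            self.path, self.n_rows, self.n_cols = path, n_rows, n_cols
            self.rows_per_block = rows_per_block
            self.max_cached = max_cached
            self.cache = OrderedDict()          # block index -> ndarray

        def _load_block(self, b):
            if b in self.cache:
                self.cache.move_to_end(b)
                return self.cache[b]
            start = b * self.rows_per_block
            count = min(self.rows_per_block, self.n_rows - start)
            block = np.fromfile(self.path, dtype=np.float64,
                                count=count * self.n_cols,
                                offset=start * self.n_cols * 8).reshape(count, self.n_cols)
            self.cache[b] = block
            if len(self.cache) > self.max_cached:    # evict least recently used block
                self.cache.popitem(last=False)
            return block

        def row(self, i):
            b, local = divmod(i, self.rows_per_block)
            return self._load_block(b)[local]

    # Hypothetical usage for a table with 10^7 rows of 8 columns:
    #   lut = BlockedLUT("flamelet_table.bin", n_rows=10_000_000, n_cols=8)
    #   props = lut.row(1_234_567)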
Efficient Planning of Wind-Optimal Routes in North Atlantic Oceanic Airspace
NASA Technical Reports Server (NTRS)
Rodionova, Olga; Sridhar, Banavar
2017-01-01
The North Atlantic oceanic airspace (NAT) is crossed daily by more than a thousand flights, which are greatly affected by strong jet stream air currents. Several studies devoted to generating wind-optimal (WO) aircraft trajectories in the NAT demonstrated great efficiency of such an approach for individual flights. However, because of the large separation norms imposed in the NAT, previously proposed WO trajectories induce a large number of potential conflicts. Much work has been done on strategic conflict detection and resolution (CDR) in the NAT. The work presented here extends previous methods and attempts to take advantage of the NAT traffic structure to simplify the problem and improve the results of CDR. Four approaches are studied in this work: 1) subdividing the existing CDR problem into sub-problems of smaller sizes, which are easier to handle; 2) more efficient data reorganization within the considered time period; 3) problem localization, i.e. concentrating the resolution effort in the most conflicted regions; 4) applying CDR to the pre-tactical decision horizon (a couple of hours in advance). Obtained results show that these methods efficiently resolve potential conflicts at the strategic and pre-tactical levels by keeping the resulting trajectories close to the initial WO ones.
Automatic sequential fluid handling with multilayer microfluidic sample isolated pumping
Liu, Jixiao; Fu, Hai; Yang, Tianhang; Li, Songjing
2015-01-01
To sequentially handle fluids is of great significance in quantitative biology, analytical chemistry, and bioassays. However, the technological options are limited when building such microfluidic sequential processing systems, and one of the encountered challenges is the need for reliable, efficient, and mass-production available microfluidic pumping methods. Herein, we present a bubble-free and pumping-control unified liquid handling method that is compatible with large-scale manufacture, termed multilayer microfluidic sample isolated pumping (mμSIP). The core part of the mμSIP is the selective permeable membrane that isolates the fluidic layer from the pneumatic layer. The air diffusion from the fluidic channel network into the degassing pneumatic channel network leads to fluidic channel pressure variation, which further results in consistent bubble-free liquid pumping into the channels and the dead-end chambers. We characterize the mμSIP by comparing the fluidic actuation processes with different parameters and a flow rate range of 0.013 μl/s to 0.097 μl/s is observed in the experiments. As the proof of concept, we demonstrate an automatic sequential fluid handling system aiming at digital assays and immunoassays, which further proves the unified pumping-control and suggests that the mμSIP is suitable for functional microfluidic assays with minimal operations. We believe that the mμSIP technology and demonstrated automatic sequential fluid handling system would enrich the microfluidic toolbox and benefit further inventions. PMID:26487904
Efficiently modeling neural networks on massively parallel computers
NASA Technical Reports Server (NTRS)
Farber, Robert M.
1993-01-01
Neural networks are a very useful tool for analyzing and modeling complex real world systems. Applying neural network simulations to real world problems generally involves large amounts of data and massive amounts of computation. To efficiently handle the computational requirements of large problems, we have implemented at Los Alamos a highly efficient neural network compiler for serial computers, vector computers, vector parallel computers, and fine grain SIMD computers such as the CM-2 connection machine. This paper describes the mapping used by the compiler to implement feed-forward backpropagation neural networks for a SIMD (Single Instruction Multiple Data) architecture parallel computer. Thinking Machines Corporation has benchmarked our code at 1.3 billion interconnects per second (approximately 3 gigaflops) on a 64,000 processor CM-2 connection machine (Singer 1990). This mapping is applicable to other SIMD computers and can be implemented on MIMD computers such as the CM-5 connection machine. Our mapping has virtually no communications overhead with the exception of the communications required for a global summation across the processors (which has a sub-linear runtime growth on the order of O(log(number of processors))). We can efficiently model very large neural networks which have many neurons and interconnects, and our mapping can extend to arbitrarily large networks (within memory limitations) by merging the memory space of separate processors with fast adjacent-processor interprocessor communications. This paper considers the simulation of only feed-forward neural networks, although the method is extendable to recurrent networks.
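The sub-linear cost of that global summation comes from a tree (pairwise) reduction; the toy Python sketch below, with one partial sum per "processor", makes the O(log P) step count explicit.

    import math

    def tree_reduce(values):
        """Pairwise (tree) summation: the number of combining steps grows as
        log2(number of processors), which is the sub-linear cost quoted above."""
        steps = 0
        while len(values) > 1:
            values = [values[i] + values[i + 1] if i + 1 < len(values) else values[i]
                      for i in range(0, len(values), 2)]
            steps += 1
        return values[0], steps

    partial_sums = [float(p) for p in range(64)]   # one partial sum per "processor"
    total, steps = tree_reduce(partial_sums)
    print(total, steps, math.ceil(math.log2(64)))  # 2016.0, 6 steps, log2(64) = 6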
Scalable domain decomposition solvers for stochastic PDEs in high performance computing
Desai, Ajit; Khalil, Mohammad; Pettit, Chris; ...
2017-09-21
Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. And though these algorithms exhibit excellent scalabilities, significant algorithmic and implementational challenges exist to extend them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.
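The intrusive formulation mentioned above expands the stochastic solution in a polynomial chaos basis; in standard textbook notation (not specific to this implementation),

    u(x, \xi) \approx \sum_{i=0}^{P} u_i(x)\, \Psi_i(\xi),

where the \Psi_i are orthogonal polynomials in the random variables \xi and the deterministic coefficient fields u_i(x) become the unknowns of the coupled system that the domain decomposition solver handles.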
MetReS, an Efficient Database for Genomic Applications.
Vilaplana, Jordi; Alves, Rui; Solsona, Francesc; Mateo, Jordi; Teixidó, Ivan; Pifarré, Marc
2018-02-01
MetReS (Metabolic Reconstruction Server) is a genomic database that is shared between two software applications that address important biological problems. Biblio-MetReS is a data-mining tool that enables the reconstruction of molecular networks based on automated text-mining analysis of published scientific literature. Homol-MetReS allows functional (re)annotation of proteomes, to properly identify both the individual proteins involved in the processes of interest and their function. The main goal of this work was to identify the areas where the performance of the MetReS database could be improved and to test whether this improvement would scale to larger datasets and more complex types of analysis. The study was started with a relational database, MySQL, which is the current database server used by the applications. We also tested the performance of an alternative data-handling framework, Apache Hadoop. Hadoop is currently used for large-scale data processing. We found that this data handling framework is likely to greatly improve the efficiency of the MetReS applications as the dataset and the processing needs increase by several orders of magnitude, as expected to happen in the near future.
Dong, Xin; Zhang, Xinyi; Zeng, Siyu
2017-04-01
In the context of sustainable development, there has been an increasing requirement for an eco-efficiency assessment of wastewater treatment plants (WWTPs). Data envelopment analysis (DEA), a technique that is widely applied for relative efficiency assessment, is used in combination with the tolerances approach to handle WWTPs' multiple inputs and outputs as well as their uncertainty. The economic cost, energy consumption, contaminant removal, and global warming effect during the treatment processes are integrated to interpret the eco-efficiency of WWTPs. A total of 736 sample plants from across China are assessed, and large sensitivities to variations in inputs and outputs are observed for most samples, with only three WWTPs identified as being stably efficient. Size of plant, overcapacity, climate type, and influent characteristics are proven to have a significant influence on both the mean efficiency and performance sensitivity of WWTPs, while no clear relationships were found between eco-efficiency and technology under the framework of uncertainty analysis. The incorporation of uncertainty quantification and environmental impact consideration has improved the liability and applicability of the assessment. Copyright © 2017 Elsevier Ltd. All rights reserved.
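The DEA score of a single plant comes from a small linear program; an input-oriented CCR sketch in Python with SciPy (toy numbers, not the 736-plant dataset, and without the tolerances-based uncertainty analysis) is:

    import numpy as np
    from scipy.optimize import linprog

    # Toy data: rows are plants (DMUs), columns are inputs / outputs.
    X = np.array([[1.2, 300.0], [0.9, 260.0], [1.5, 410.0]])   # cost, energy
    Y = np.array([[0.95, 0.80], [0.90, 0.85], [0.97, 0.75]])   # COD removal, N removal

    def ccr_efficiency(k):
        """Input-oriented CCR efficiency of plant k: minimise theta over (theta, lambda)."""
        n, m = X.shape[0], X.shape[1]
        s = Y.shape[1]
        c = np.concatenate(([1.0], np.zeros(n)))                 # minimise theta
        # Inputs:  sum_j lambda_j * x_ij - theta * x_ik <= 0
        A_in = np.hstack([-X[k].reshape(m, 1), X.T])
        b_in = np.zeros(m)
        # Outputs: -sum_j lambda_j * y_rj <= -y_rk
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])
        b_out = -Y[k]
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.concatenate([b_in, b_out]),
                      bounds=[(None, None)] + [(0, None)] * n, method="highs")
        return res.fun

    for k in range(3):
        print(f"plant {k}: efficiency = {ccr_efficiency(k):.3f}")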
Mooring and ground handling rigid airships
NASA Technical Reports Server (NTRS)
Walker, H., Jr.
1975-01-01
The problems of mooring and ground handling rigid airships are discussed. A brief history of Mooring and Ground Handling Rigid Airships from July 2, 1900 through September 1, 1939 is included. Also a brief history of ground handling developments with large U. S. Navy nonrigid airships between September 1, 1939 and August 31, 1962 is included wherein developed equipment and techniques appear applicable to future large rigid airships. Finally recommendations are made pertaining to equipment and procedures which appear desirable and feasible for future rigid airship programs.
Systems identification technology development for large space systems
NASA Technical Reports Server (NTRS)
Armstrong, E. S.
1982-01-01
A methodology for synthesizing systems identification (both parameter and state estimation) and related control schemes for flexible aerospace structures is developed, with emphasis on the Maypole hoop column antenna as a real-world application. Modeling studies of the Maypole cable hoop membrane type antenna are conducted using a transfer matrix numerical analysis approach. This methodology was chosen as particularly well suited for handling a large number of antenna configurations of a generic type. A dedicated transfer matrix analysis, both by virtue of its specialization and the inherently easy compartmentalization of the formulation and numerical procedures, is significantly more efficient not only in the computer time required but, more importantly, in the time needed to review and interpret the results.
Aerodynamic Limits on Large Civil Tiltrotor Sizing and Efficiency
NASA Technical Reports Server (NTRS)
Acree, C W., Jr.
2014-01-01
The NASA Large Civil Tiltrotor (2nd generation, or LCTR2) has been the reference design for a variety of NASA studies of design optimization, engine and gearbox technology, handling qualities, and other areas, with contributions from NASA Ames, Glenn and Langley Centers, plus academic and industry studies. Ongoing work includes airfoil design, 3D blade optimization, engine technology studies, and wing-rotor aerodynamic interference. The proposed paper will bring the design up to date with the latest results of such studies, then explore the limits of what aerodynamic improvements might hope to accomplish. The purpose is two-fold: 1) determine where future technology studies might have the greatest payoff, and 2) establish a stronger basis of comparison for studies of other vehicle configurations and missions.
Study of a hybrid multispectral processor
NASA Technical Reports Server (NTRS)
Marshall, R. E.; Kriegler, F. J.
1973-01-01
A hybrid processor is described offering enough handling capacity and speed to process efficiently the large quantities of multispectral data that can be gathered by scanner systems such as MSDS, SKYLAB, ERTS, and ERIM M-7. Combinations of general-purpose and special-purpose hybrid computers were examined to include both analog and digital types as well as all-digital configurations. The current trend toward lower costs for medium-scale digital circuitry suggests that the all-digital approach may offer the better solution within the time frame of the next few years. The study recommends and defines such a hybrid digital computing system in which both special-purpose and general-purpose digital computers would be employed. The tasks of recognizing surface objects would be performed in a parallel, pipeline digital system while the tasks of control and monitoring would be handled by a medium-scale minicomputer system. A program to design and construct a small, prototype, all-digital system has been started.
SeqCompress: an algorithm for biological sequence compression.
Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz; Bajwa, Hassan
2014-10-01
The growth of Next Generation Sequencing technologies presents significant research challenges, specifically to design bioinformatics tools that handle massive amounts of data efficiently. The cost of storing biological sequence data has become a noticeable proportion of the total cost of generation and analysis. In particular, the increase in DNA sequencing rate is significantly outstripping the rate of increase in disk storage capacity, so data volumes may exceed available storage. It is essential to develop algorithms that handle large data sets via better memory management. This article presents a DNA sequence compression algorithm, SeqCompress, that copes with the space complexity of biological sequences. The algorithm is based on lossless data compression and uses a statistical model as well as arithmetic coding to compress DNA sequences. The proposed algorithm is compared with recent specialized compression tools for biological sequences. Experimental results show that the proposed algorithm has better compression gain as compared to other existing algorithms. Copyright © 2014 Elsevier Inc. All rights reserved.
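SeqCompress itself couples a statistical model with arithmetic coding; as a much simpler baseline that shows why DNA sequences compress well, the Python sketch below just packs each base into 2 bits, a fourfold reduction before any modelling.

    BASE_TO_BITS = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
    BITS_TO_BASE = {v: k for k, v in BASE_TO_BITS.items()}

    def pack(seq):
        """Pack an ACGT-only sequence into bytes, 4 bases per byte."""
        out = bytearray()
        for i in range(0, len(seq), 4):
            byte = 0
            for j, base in enumerate(seq[i:i + 4]):
                byte |= BASE_TO_BITS[base] << (2 * j)
            out.append(byte)
        return bytes(out), len(seq)

    def unpack(packed, n_bases):
        seq = []
        for i, byte in enumerate(packed):
            for j in range(4):
                if 4 * i + j < n_bases:
                    seq.append(BITS_TO_BASE[(byte >> (2 * j)) & 0b11])
        return "".join(seq)

    packed, n = pack("ACGTACGTTTGACA")
    assert unpack(packed, n) == "ACGTACGTTTGACA"
    print(len(packed), "bytes for", n, "bases")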
NASA Astrophysics Data System (ADS)
Ramachandran, R.; Murphy, K. J.; Baynes, K.; Lynnes, C.
2016-12-01
With the volume of Earth observation data expanding rapidly, cloud computing is quickly changing the way Earth observation data is processed, analyzed, and visualized. The cloud infrastructure provides the flexibility to scale up to large volumes of data and handle high velocity data streams efficiently. Having freely available Earth observation data collocated on a cloud infrastructure creates opportunities for innovation and value-added data re-use in ways unforeseen by the original data provider. These innovations spur new industries and applications and spawn new scientific pathways that were previously limited due to data volume and computational infrastructure issues. NASA, in collaboration with Amazon, Google, and Microsoft, have jointly developed a set of recommendations to enable efficient transfer of Earth observation data from existing data systems to a cloud computing infrastructure. The purpose of these recommendations is to provide guidelines against which all data providers can evaluate existing data systems and be used to improve any issues uncovered to enable efficient search, access, and use of large volumes of data. Additionally, these guidelines ensure that all cloud providers utilize a common methodology for bulk-downloading data from data providers thus preventing the data providers from building custom capabilities to meet the needs of individual cloud providers. The intent is to share these recommendations with other Federal agencies and organizations that serve Earth observation to enable efficient search, access, and use of large volumes of data. Additionally, the adoption of these recommendations will benefit data users interested in moving large volumes of data from data systems to any other location. These data users include the cloud providers, cloud users such as scientists, and other users working in a high performance computing environment who need to move large volumes of data.
NASA Astrophysics Data System (ADS)
Hotaling, Nathan A.; Khristov, Vladimir; Maminishkis, Arvydas; Bharti, Kapil; Simon, Carl G.
2017-10-01
A scaffold handling device (SHD) has been designed that can switch from gentle suction to positive pressure to lift and place nanofiber scaffolds. In tissue engineering laboratories, delicate fibrous scaffolds, such as electrospun nanofiber scaffolds, are often used as substrates for cell culture. Typical scaffold handling procedures include lifting the scaffolds, moving them from one container to another, sterilization, and loading scaffolds into cell culture plates. Using tweezers to handle the scaffolds can be slow, can damage the scaffolds, and can cause them to wrinkle or fold. Scaffolds may also acquire a static charge which makes them difficult to put down as they cling to tweezers. An SHD has been designed that enables more efficient, gentle lifting, and placement of delicate scaffolds. Most of the parts to make the SHD can be purchased, except for the tip which can be 3D-printed. The SHD enables more reliable handling of nanofiber scaffolds that may improve the consistency of biomanufacturing processes.
Ergonomic material-handling device
Barsnick, Lance E.; Zalk, David M.; Perry, Catherine M.; Biggs, Terry; Tageson, Robert E.
2004-08-24
A hand-held ergonomic material-handling device capable of moving heavy objects, such as large waste containers and other large objects requiring mechanical assistance. The ergonomic material-handling device can be used with neutral postures of the back, shoulders, wrists and knees, thereby reducing potential injury to the user. The device involves two key features: 1) gives the user the ability to adjust the height of the handles of the device to ergonomically fit the needs of the user's back, wrists and shoulders; and 2) has a rounded handlebar shape, as well as the size and configuration of the handles which keep the user's wrists in a neutral posture during manipulation of the device.
Data handling and representation of freeform surfaces
NASA Astrophysics Data System (ADS)
Steinkopf, Ralf; Dick, Lars; Kopf, Tino; Gebhardt, Andreas; Risse, Stefan; Eberhardt, Ramona
2011-10-01
Freeform surfaces enable innovative optics. They are not limited by axis symmetry and hence are almost unconstrained in design. They are used to reduce the installation space and enhance the performance of optical elements. State-of-the-art optical design tools use powerful algorithms to simulate freeform surfaces, and new mathematical approaches are under development /1/. In consequence, new optical designs /2/ are pushing the development of manufacturing processes, and novel types of datasets have to proceed through the process chain /3/. The complexity of these data is the central challenge for data handling. Because of the asymmetrical and 3-dimensional surfaces of freeforms, large data volumes have to be created, trimmed, extended and fitted. All these processes must be performed without losing the accuracy of the original design data. Additionally, the manifold types of geometries result in different kinds of mathematical representations of freeform surfaces, and the CAD/CAM tools in use deal with a set of spatial transport formats. These are all reasons why manufacture-oriented approaches to freeform data handling are not yet sufficiently developed. This paper suggests a classification of freeform surfaces based on the manufacturing methods offered by diamond machining. The different manufacturing technologies, ranging from servo-turning to shaping, require a differentiated approach to data handling. The usage of analytical descriptions in the form of splines and polynomials, as well as the application of discrete descriptions such as point clouds, is shown in relation to the previously made classification. Advantages and disadvantages of freeform representations are discussed. Aspects of data handling between different process steps are pointed out, and suitable exchange formats for freeform data are proposed. The described approach offers the possibility of efficient data handling from optical design to systems in novel optics.
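As a small example of the analytical-description route mentioned above, a low-order polynomial sag z(x, y) can be fitted to a measured point cloud by linear least squares; the NumPy sketch below uses a synthetic surface and synthetic data.

    import numpy as np

    rng = np.random.default_rng(7)
    x, y = rng.uniform(-1, 1, 400), rng.uniform(-1, 1, 400)
    z = 0.5 * x**2 + 0.2 * y**2 + 0.1 * x * y - 0.05 * x**3 + 0.002 * rng.normal(size=400)

    # Design matrix of monomials up to 3rd order (one analytic freeform representation).
    def monomials(x, y):
        return np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2,
                                x**3, x**2 * y, x * y**2, y**3])

    coeffs, *_ = np.linalg.lstsq(monomials(x, y), z, rcond=None)
    residual = z - monomials(x, y) @ coeffs
    print("fitted coefficients:", np.round(coeffs, 3))
    print("rms fit error:", np.sqrt(np.mean(residual**2)))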
NASA Astrophysics Data System (ADS)
Hino, Hisato; Hoshino, Satoshi; Fujisawa, Tomoharu; Maruyama, Shigehisa; Ota, Jun
Currently, container ships move cargo with minimal participation from external trucks. However, there is slack time between the departure of container ships and the completion of cargo handling by container ships without the participation of external trucks; therefore, external trucks can be used to move cargo without delaying the departure time. In this paper, we propose a solution involving the control algorithms of transfer cranes (TCs) because the efficiency of yard operations depends largely on the productivity of TCs. TCs work according to heuristic rules using the forecasted arrival times of internal and external trucks. Simulation results show that the proposed method can reduce the waiting time of external trucks and meet the departure time of container ships.
Earth resources sensor data handling system: NASA JSC version
NASA Technical Reports Server (NTRS)
1974-01-01
The design of the NASA JSC data handling system is presented. Data acquisition parameters, computer display formats, and the flow of image data through the system are discussed, together with recommendations for improving system efficiency and modifications to existing data handling procedures that will allow the use of data duplication techniques and the accurate identification of imagery.
Gogoi, Parikshit; Zhang, Zhe; Geng, Zhishuai; Liu, Wei; Hu, Weize; Deng, Yulin
2018-03-22
The pretreatment of lignocellulosic biomass plays a vital role in the conversion of cellulosic biomass to bioethanol, especially for softwoods and hardwoods. Although many pretreatment technologies have been reported so far, only a few pretreatment methods can handle large woodchips directly. To improve the efficiency of pretreatment, existing technologies require the grinding of the wood into small particles, which is an energy-consuming process. Herein, for the first time, we report a simple, effective, and low-temperature (≈100 °C) process for the pretreatment of hardwood (HW) and softwood (SW) chips directly by using a catalytic system of FeCl 3 /NaNO 3 (FCSNRC). The pretreatment experiments were conducted systematically, and a conversion of 71.53 and 70.66 % of cellulose to sugar could be obtained for the direct use of large HW and SW chips. The new method reported here overcomes one of the critical barriers in biomass-to-biofuel conversion, and both grinding and thermal energies can be reduced significantly. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Vieira, J; Cunha, M C
2011-01-01
This article describes a method for solving large nonlinear problems in two steps. The two-step approach takes advantage of handling smaller, simpler models and of having better starting points, which improves solution efficiency. The set of nonlinear constraints that makes the solution of the model complex and time consuming (referred to as the complicating constraints) is left out of step one; the complicating constraints are added only in the second step, in which a solution of the complete model is found. The method is applied to a large-scale problem of conjunctive use of surface water and groundwater resources, and the results are compared with solutions obtained by solving the complete model directly in a single step. In all examples the two-step approach allowed a significant reduction of the computation time. This gain in efficiency can be extremely important for work in progress, and it can be particularly useful in cases where the computation time is a critical factor for obtaining an optimized solution in due time.
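To make the two-step idea concrete, the following minimal Python sketch solves a toy constrained problem in the same spirit: the complicating constraint is dropped in step one, and the step-one solution warm-starts the full model in step two. The objective and constraints are invented for illustration only and have nothing to do with the water-resources model of the article.

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

# Toy objective and constraints standing in for a large nonlinear model.
obj = lambda x: (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2 + x[0] * x[1]
simple = NonlinearConstraint(lambda x: x[0] + x[1], -5.0, 5.0)                  # kept in step one
complicating = NonlinearConstraint(lambda x: x[0] ** 2 + x[1] ** 2, 0.0, 4.0)   # added only in step two

x0 = np.zeros(2)

# Step one: solve the simplified model without the complicating constraint.
step1 = minimize(obj, x0, method="trust-constr", constraints=[simple])

# Step two: solve the complete model, warm-started from the step-one solution.
step2 = minimize(obj, step1.x, method="trust-constr", constraints=[simple, complicating])

print("step one solution:", step1.x)
print("step two solution:", step2.x, "objective:", step2.fun)
```

The warm start is what carries the efficiency gain: the full model starts close to a feasible, near-optimal point instead of from scratch.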
Reinforced dynamics for enhanced sampling in large atomic and molecular systems
NASA Astrophysics Data System (ADS)
Zhang, Linfeng; Wang, Han; E, Weinan
2018-03-01
A new approach for efficiently exploring the configuration space and computing the free energy of large atomic and molecular systems is proposed, motivated by an analogy with reinforcement learning. There are two major components in this new approach. Like metadynamics, it allows for an efficient exploration of the configuration space by adding an adaptively computed biasing potential to the original dynamics. Like deep reinforcement learning, this biasing potential is trained on the fly using deep neural networks, with data collected judiciously from the exploration and an uncertainty indicator from the neural network model playing the role of the reward function. Parameterization using neural networks makes it feasible to handle cases with a large set of collective variables. This has the potential advantage that selecting precisely the right set of collective variables has now become less critical for capturing the structural transformations of the system. The method is illustrated by studying the full-atom explicit solvent models of alanine dipeptide and tripeptide, as well as the system of a polyalanine-10 molecule with 20 collective variables.
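A loose one-dimensional sketch of the uncertainty-gated biasing idea is given below, under heavy simplifying assumptions: a bootstrap ensemble of polynomial force models stands in for the deep neural networks, a double-well force stands in for the molecular system, and the bias simply cancels the learned mean force wherever the ensemble spread (the uncertainty indicator) is small. This is not the authors' reinforced-dynamics implementation, only an illustration of the adaptive train-and-bias loop.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_force(x):                      # toy double-well landscape standing in for the real system
    return -4.0 * x * (x ** 2 - 1.0)

def fit_ensemble(xs, fs, n_models=4, deg=5):
    """Fit several polynomial force models on bootstrap resamples of the visited data."""
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(xs), len(xs))
        models.append(np.polyfit(xs[idx], fs[idx], deg))
    return models

def bias_force(x, models, tol=0.5):
    """Apply a compensating bias only where the ensemble agrees (low uncertainty)."""
    preds = np.array([np.polyval(m, x) for m in models])
    return -preds.mean() if preds.std() < tol else 0.0

x, xs, fs, models = 0.9, [], [], None
for step in range(20000):
    f = true_force(x)
    xs.append(x)
    fs.append(f)
    if (step + 1) % 2000 == 0:          # periodically refit the models on the data collected so far
        models = fit_ensemble(np.array(xs), np.array(fs))
    fb = bias_force(x, models) if models is not None else 0.0
    # Overdamped Langevin step with the adaptive bias added to the physical force.
    x += 0.001 * (f + fb) + np.sqrt(2 * 0.001) * rng.normal()

print("fraction of time spent in the left well:", np.mean(np.array(xs) < 0.0))
```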
Data Processing Factory for the Sloan Digital Sky Survey
NASA Astrophysics Data System (ADS)
Stoughton, Christopher; Adelman, Jennifer; Annis, James T.; Hendry, John; Inkmann, John; Jester, Sebastian; Kent, Steven M.; Kuropatkin, Nickolai; Lee, Brian; Lin, Huan; Peoples, John, Jr.; Sparks, Robert; Tucker, Douglas; Vanden Berk, Dan; Yanny, Brian; Yocum, Dan
2002-12-01
The Sloan Digital Sky Survey (SDSS) data handling presents two challenges: large data volume and timely production of spectroscopic plates from imaging data. A data processing factory, using technologies both old and new, handles this flow. Distribution to end users is via disk farms, to serve corrected images and calibrated spectra, and a database, to efficiently process catalog queries. For distribution of modest amounts of data from Apache Point Observatory to Fermilab, scripts use rsync to update files, while larger data transfers are accomplished by shipping magnetic tapes commercially. All data processing pipelines are wrapped in scripts to address consecutive phases: preparation, submission, checking, and quality control. We constructed the factory by chaining these pipelines together while using an operational database to hold processed imaging catalogs. The science database catalogs all imaging and spectroscopic objects, with pointers to the various external files associated with them. Diverse computing systems address particular processing phases. UNIX computers handle tape reading and writing, as well as calibration steps that require access to a large amount of data with relatively modest computational demands. Commodity CPUs process steps that require access to a limited amount of data but have more demanding computational requirements. Disk servers optimized for cost per Gbyte serve terabytes of processed data, while servers optimized for disk read speed run SQLServer software to process queries on the catalogs. This factory produced data for the SDSS Early Data Release in June 2001, and it is currently producing Data Release One, scheduled for January 2003.
Ergonomics of disposable handles for minimally invasive surgery.
Büchel, D; Mårvik, R; Hallabrin, B; Matern, U
2010-05-01
The ergonomic deficiencies of currently available minimally invasive surgery (MIS) instrument handles have been addressed in many studies. In this study, a new ergonomic pistol handle concept, realized as a prototype, and two disposable ring handles were investigated according to ergonomic properties set by new European standards. Twenty-five volunteers performed four practical tasks to evaluate the ergonomics of the handles used in standard operating procedures (e.g., measuring a suture and cutting to length, precise maneuvering and targeting, and dissection of a gallbladder). Moreover, 20 participants underwent electromyography (EMG) tests to measure the muscle strain they experienced while carrying out the basic functions (grasp, rotate, and maneuver) in the x, y, and z axes. The data measured included the number of errors, the time required for task completion, perception of pressure areas, and EMG data. The usability measures in the test were effectiveness, efficiency, and user satisfaction. Surveys relating to the subjective rating were completed after each task for each of the three handles tested. Each handle except the new prototype caused pressure areas and pain. Extreme differences in muscle strain could not be observed for any of the three handles. Experienced surgeons worked more quickly with the prototype when measuring and cutting a suture (approximately 20%) and during precise maneuvering and targeting (approximately 20%). On the other hand, they completed the dissection task faster with the handle manufactured by Ethicon. Fewer errors were made with the prototype in dissection of the gallbladder. In contrast to the handles available on the market, the prototype was always rated as positive by the volunteers in the subjective surveys. None of the handles could fulfil all of the requirements with top scores. Each handle had its advantages and disadvantages. In contrast to the ring handles, the volunteers could complete most of the tasks more efficiently using the prototype handle, without any remarkable pressure areas, cramps, or pain.
NASA Astrophysics Data System (ADS)
Yen, Yi-Fen; Bowman, J. D.; Bolton, R. D.; Crawford, B. E.; Delheij, P. P. J.; Hart, G. W.; Haseyama, T.; Frankle, C. M.; Iinuma, M.; Knudson, J. N.; Masaike, A.; Masuda, Y.; Matsuda, Y.; Mitchell, G. E.; Penttilä, S. I.; Roberson, N. R.; Seestrom, S. J.; Sharapov, E.; Shimizu, H. M.; Smith, D. A.; Stephenson, S. L.; Szymanski, J. J.; Yoo, S. H.; Yuan, V. W.
2000-06-01
We have developed a large-area ¹⁰B-loaded liquid scintillation detector for parity-violation studies in neutron resonances with high instantaneous neutron fluxes from the LANSCE short-pulse spallation source. The detector has an efficiency of 95%, 85% and 71% at neutron energies of 10, 100 and 1000 eV, respectively. The neutron mean capture time in the detector is (416±5) ns. We describe the detector and the current-mode signal processing system, which can handle neutron rates up to 500 MHz.
Comparative visualization of genetic and physical maps with Strudel
Bayer, Micha; Milne, Iain; Stephen, Gordon; Shaw, Paul; Cardle, Linda; Wright, Frank; Marshall, David
2011-01-01
Summary: Data visualization can play a key role in comparative genomics, for example, underpinning the investigation of conserved synteny patterns. Strudel is a desktop application that allows users to easily compare both genetic and physical maps interactively and efficiently. It can handle large datasets from several genomes simultaneously, and allows all-by-all comparisons between these. Availability and implementation: Installers for Strudel are available for Windows, Linux, Solaris and Mac OS X at http://bioinf.scri.ac.uk/strudel/. Contact: strudel@scri.ac.uk; micha.bayer@scri.ac.uk PMID:21372085
jmzML, an open-source Java API for mzML, the PSI standard for MS data.
Côté, Richard G; Reisinger, Florian; Martens, Lennart
2010-04-01
We here present jmzML, a Java API for the Proteomics Standards Initiative mzML data standard. Based on the Java Architecture for XML Binding and an XPath-based random-access XML indexer, jmzML can handle arbitrarily large files in minimal memory, allowing easy and efficient processing of mzML files using the Java programming language. jmzML also automatically resolves internal XML references on-the-fly. The library (which includes a viewer) can be downloaded from http://jmzml.googlecode.com.
NASA Technical Reports Server (NTRS)
Doggett, William R.; Dorsey, John T.; Collins, Timothy J.; King, Bruce D.; Mikulas, Martin M., Jr.
2008-01-01
Devices for lifting and transporting payloads and material are critical for efficient Earth-based construction operations. Devices with similar functionality will be needed to support lunar-outpost construction, servicing, inspection, regolith excavation, grading and payload placement. Past studies have proposed that only a few carefully selected devices are required for a lunar outpost. One particular set of operations involves lifting and manipulating payloads in the 100 kg to 3,000 kg range, which are too large or massive to be handled by unassisted astronauts. This paper will review historical devices used for payload handling in space and on Earth to derive a set of desirable features for a device that can be used on planetary surfaces. Next, an innovative concept for a lifting device is introduced, which includes many of the desirable features. The versatility of the device is discussed, including its application to lander unloading, servicing, inspection, regolith excavation and site preparation. Approximate rules, which can be used to size the device for specific payload mass and reach requirements, are provided. Finally, details of a test-bed implementation of the innovative concept, which will be used to validate the structural design and develop operational procedures, are provided.
A quick and effective method of limb preparation with health, safety and efficiency benefits.
Naderi, N; Maw, K; Thomas, M; Boyce, D E; Shokrollahi, K
2012-03-01
Pre-operative limb preparation (PLP) usually involves lifting the limb and holding it in a fixed 'static' posture for several minutes. This is hazardous to theatre staff. Furthermore, 'painting' the limb can be time consuming and difficult areas such as between toes and fingers may remain unsterile. We demonstrate the time efficiency and asepsis achieved using the 'sterile bag' preparation technique. An additional advantage is the ability to prepare and anaesthetise a limb prior to theatre, increasing efficiency substantially for units with a large throughput of cases, such as day-case hand surgery lists. We monitored the duration of PLP in 20 patients using the 'sterile bag' technique compared to 20 patients using a conventional 'painting' method. Additionally, microbiology samples acquired from prepared upper limbs of 27 sequential patients operated on by a single surgeon over a two-month period were sent for culture immediately prior to commencement of surgery. The mean duration of the 'sterile bag' PLP was significantly lower than that of the conventional method (24 seconds vs 85 seconds, p=0.045). The technique can take as little as ten seconds (n=1). Final microbiology reports showed no growth for any of the 27 patients from whom a culture sample was taken. The sterile bag technique is effective in achieving asepsis, has the potential to increase theatre efficiency and reduces manual handling hazards compared to the conventional method. It is now taught to all theatre staff in our hospital during manual handling training. It can be undertaken in approximately ten seconds with practice for the upper limb.
VOP memory management in MPEG-4
NASA Astrophysics Data System (ADS)
Vaithianathan, Karthikeyan; Panchanathan, Sethuraman
2001-03-01
MPEG-4 is a multimedia standard that requires Video Object Planes (VOPs). Generation of VOPs for arbitrary video sequences is still a challenging problem that largely remains unsolved. Nevertheless, if the problem is treated by imposing certain constraints, solutions for specific application domains can be found. MPEG-4 applications in mobile devices are one such domain, where the opposing goals of low power and high throughput must both be met. Efficient memory management plays a major role in reducing power consumption. Specifically, efficient memory management for VOPs is difficult because the lifetimes of these objects vary and may overlap. Varying object lifetimes require dynamic memory management, where memory fragmentation is a key problem that needs to be addressed. In general, memory management systems address this problem with a combination of strategy, policy and mechanism. For MPEG-4 based mobile devices that lack instruction processors, a hardware-based memory management solution is necessary. In MPEG-4 based mobile devices that have a RISC processor, using a real-time operating system (RTOS) for this memory management task is not expected to be efficient, because the strategies and policies used by the RTOS are often tuned for handling memory segments of smaller sizes than the object sizes. Hence, a memory management scheme specifically tuned for VOPs is important. In this paper, different strategies, policies and mechanisms for memory management are considered, and an efficient combination is proposed for VOP memory management, along with a hardware architecture that can handle the proposed combination.
Blakey, John D; Guy, Debbie; Simpson, Carl; Fearn, Andrew; Cannaby, Sharon; Wilson, Petra
2012-01-01
Objectives: The authors investigated if a wireless system of call handling and task management for out of hours care could replace a standard pager-based system and improve markers of efficiency, patient safety and staff satisfaction. Design: Prospective assessment using both quantitative and qualitative methods, including interviews with staff, a standard satisfaction questionnaire, independent observation, data extraction from work logs and incident reporting systems and analysis of hospital committee reports. Setting: A large teaching hospital in the UK. Participants: Hospital at night co-ordinators, clinical support workers and junior doctors handling approximately 10 000 tasks requested out of hours per month. Outcome measures: Length of hospital stay, incidents reported, co-ordinator call logging activity, user satisfaction questionnaire, staff interviews. Results: Users were more satisfied with the new system (satisfaction score 62/90 vs 82/90, p=0.0080). With the new system over 70 h/week of co-ordinator time was released, and there were fewer untoward incidents related to handover and medical response (OR=0.30, p=0.02). Broad clinical measures (cardiac arrest calls for peri-arrest situations and length of hospital stay) improved significantly in the areas covered by the new system. Conclusions: The introduction of call handling software and mobile technology over a medical-grade wireless network improved staff satisfaction with the Hospital at Night system. Improvements in efficiency and information flow have been accompanied by a reduction in untoward incidents, length of stay and peri-arrest calls. PMID:22466035
Faulds, M C; Bauchmuller, K; Miller, D; Rosser, J H; Shuker, K; Wrench, I; Wilson, P; Mills, G H
2016-01-01
Large-scale audit and research projects demand robust, efficient systems for accurate data collection, handling and analysis. We utilised a multiplatform 'bring your own device' (BYOD) electronic data collection app to capture observational audit data on theatre efficiency across seven hospital Trusts in South Yorkshire in June-August 2013. None of the participating hospitals had a dedicated information governance policy for bring your own device. Data were collected by 17 investigators for 392 individual theatre lists, capturing 14,148 individual data points, 12,852 (91%) of which were transmitted to a central database on the day of collection without any loss of data. BYOD technology enabled accurate collection of a large volume of secure data across multiple NHS organisations over a short period of time. Bring your own device technology provides a method for collecting real-time audit, research and quality improvement data within healthcare systems without compromising patient data protection. © 2015 The Association of Anaesthetists of Great Britain and Ireland.
An Efficient Pipeline Wavefront Phase Recovery for the CAFADIS Camera for Extremely Large Telescopes
Magdaleno, Eduardo; Rodríguez, Manuel; Rodríguez-Ramos, José Manuel
2010-01-01
In this paper we show a fast, specialized hardware implementation of the wavefront phase recovery algorithm using the CAFADIS camera. The CAFADIS camera is a new plenoptic sensor patented by the Universidad de La Laguna (Canary Islands, Spain): international patent PCT/ES2007/000046 (WIPO publication number WO/2007/082975). It can simultaneously measure the wavefront phase and the distance to the light source in a real-time process. The pipeline algorithm is implemented using Field Programmable Gate Arrays (FPGAs). These devices present an architecture capable of handling the sensor output stream using a massively parallel approach, and they are efficient enough to resolve several Adaptive Optics (AO) problems in Extremely Large Telescopes (ELTs) in terms of processing time requirements. The FPGA implementation of the wavefront phase recovery algorithm using the CAFADIS camera is based on the very fast computation of two-dimensional fast Fourier Transforms (FFTs). Thus we have carried out a comparison between our novel FPGA 2D-FFT and other implementations. PMID:22315523
Fall, Mandiaye; Boutami, Salim; Glière, Alain; Stout, Brian; Hazart, Jerome
2013-06-01
A combination of the multilevel fast multipole method (MLFMM) and the boundary element method (BEM) can solve large-scale photonics problems of arbitrary geometry. Here, an MLFMM-BEM algorithm based on a scalar and vector potential formulation, instead of the more conventional electric and magnetic field formulations, is described. The method can deal with multiple lossy or lossless dielectric objects of arbitrary geometry, be they nested, in contact, or dispersed. Several examples are used to demonstrate that this method is able to efficiently handle 3D photonic scatterers involving large numbers of unknowns. Absorption, scattering, and extinction efficiencies of gold nanoparticle spheres, calculated by the MLFMM, are compared with Mie's theory. MLFMM calculations of the bistatic radar cross section (RCS) of a gold sphere near the plasmon resonance and of a silica-coated gold sphere are also compared with Mie theory predictions. Finally, the bistatic RCS of a nanoparticle gold-silver heterodimer calculated with MLFMM is compared with unmodified BEM calculations.
Parallelization of Nullspace Algorithm for the computation of metabolic pathways
Jevremović, Dimitrije; Trinh, Cong T.; Srienc, Friedrich; Sosa, Carlos P.; Boley, Daniel
2011-01-01
Elementary mode analysis is a useful metabolic pathway analysis tool for understanding and analyzing cellular metabolism, since elementary modes can represent metabolic pathways with unique and minimal sets of enzyme-catalyzed reactions of a metabolic network under steady state conditions. However, computation of the elementary modes of a genome-scale metabolic network with 100–1000 reactions is very expensive and sometimes not feasible with the commonly used serial Nullspace Algorithm. In this work, we develop a distributed-memory parallelization of the Nullspace Algorithm to handle efficiently the computation of the elementary modes of a large metabolic network. We give an implementation in the C++ language with the support of MPI library functions for parallel communication. Our proposed algorithm is accompanied by an analysis of the complexity and an identification of major bottlenecks during computation of all possible pathways of a large metabolic network. The algorithm includes methods to achieve load balancing among the compute nodes and specific communication patterns to reduce the communication overhead and improve efficiency. PMID:22058581
Tensor Factorization for Low-Rank Tensor Completion.
Zhou, Pan; Lu, Canyi; Lin, Zhouchen; Zhang, Chao
2018-03-01
Recently, a tensor nuclear norm (TNN) based method was proposed to solve the tensor completion problem, and it has achieved state-of-the-art performance on image and video inpainting tasks. However, it requires computing the tensor singular value decomposition (t-SVD), which is computationally expensive and thus cannot efficiently handle tensor data, which are naturally large scale. Motivated by TNN, we propose a novel low-rank tensor factorization method for efficiently solving the 3-way tensor completion problem. Our method preserves the low-rank structure of a tensor by factorizing it into the product of two tensors of smaller sizes. In the optimization process, our method only needs to update two smaller tensors, which can be done more efficiently than computing the t-SVD. Furthermore, we prove that the proposed alternating minimization algorithm converges to a Karush-Kuhn-Tucker point. Experimental results on synthetic data recovery and on image and video inpainting tasks clearly demonstrate the superior performance and efficiency of our method over state-of-the-art approaches, including the TNN and matricization methods.
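The factorize-and-impute idea behind such methods can be sketched with plain numpy, as below. This is a generic sketch, not the authors' algorithm: it alternates least-squares updates of two small factors in the Fourier domain along the third mode (the usual t-product construction) and re-imputes the unobserved entries from the current factorization, without any of the convergence safeguards analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n2, n3, r = 30, 30, 10, 3

def tprod(A, B):
    """Tensor-tensor product: slice-wise matrix multiplication in the FFT domain."""
    Af, Bf = np.fft.fft(A, axis=2), np.fft.fft(B, axis=2)
    return np.fft.ifft(np.einsum('irk,rjk->ijk', Af, Bf), axis=2).real

# Synthetic low-tubal-rank tensor and a 50% observation mask.
T = tprod(rng.normal(size=(n1, r, n3)), rng.normal(size=(r, n2, n3)))
mask = rng.random((n1, n2, n3)) < 0.5

A = rng.normal(size=(n1, r, n3))
B = rng.normal(size=(r, n2, n3))

for _ in range(100):
    # Impute missing entries with the current factorization, keep the observed data.
    X = np.where(mask, T, tprod(A, B))
    # Alternating least-squares update of the two small factors, frontal slice by slice.
    Xf = np.fft.fft(X, axis=2)
    Af, Bf = np.fft.fft(A, axis=2), np.fft.fft(B, axis=2)
    for k in range(n3):
        Af[:, :, k] = Xf[:, :, k] @ np.linalg.pinv(Bf[:, :, k])
        Bf[:, :, k] = np.linalg.pinv(Af[:, :, k]) @ Xf[:, :, k]
    A, B = np.fft.ifft(Af, axis=2).real, np.fft.ifft(Bf, axis=2).real

rel_err = np.linalg.norm((tprod(A, B) - T)[~mask]) / np.linalg.norm(T[~mask])
print("relative error on the missing entries:", rel_err)
```

Only the two small factors are updated in each pass, which is the source of the efficiency gain over repeatedly computing a full t-SVD.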
A New Algorithm Using the Non-Dominated Tree to Improve Non-Dominated Sorting.
Gustavsson, Patrik; Syberfeldt, Anna
2018-01-01
Non-dominated sorting is a technique often used in evolutionary algorithms to determine the quality of solutions in a population. The most common algorithm is the Fast Non-dominated Sort (FNS). This algorithm, however, has the drawback that its performance deteriorates when the population size grows. The same drawback applies also to other non-dominating sorting algorithms such as the Efficient Non-dominated Sort with Binary Strategy (ENS-BS). An algorithm suggested to overcome this drawback is the Divide-and-Conquer Non-dominated Sort (DCNS) which works well on a limited number of objectives but deteriorates when the number of objectives grows. This article presents a new, more efficient algorithm called the Efficient Non-dominated Sort with Non-Dominated Tree (ENS-NDT). ENS-NDT is an extension of the ENS-BS algorithm and uses a novel Non-Dominated Tree (NDTree) to speed up the non-dominated sorting. ENS-NDT is able to handle large population sizes and a large number of objectives more efficiently than existing algorithms for non-dominated sorting. In the article, it is shown that with ENS-NDT the runtime of multi-objective optimization algorithms such as the Non-Dominated Sorting Genetic Algorithm II (NSGA-II) can be substantially reduced.
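For context, the baseline that such algorithms improve on, the Fast Non-dominated Sort, can be written in a few lines; ENS-NDT itself and its NDTree are considerably more involved and are not reproduced here. A minimal sketch for minimization problems:

```python
import numpy as np

def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in at least one (minimization)."""
    return np.all(a <= b) and np.any(a < b)

def fast_nondominated_sort(F):
    """Return a list of fronts (lists of indices) for an (n_points, n_objectives) array."""
    n = len(F)
    S = [[] for _ in range(n)]          # solutions dominated by i
    counts = np.zeros(n, dtype=int)     # number of solutions dominating i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(F[i], F[j]):
                S[i].append(j)
            elif dominates(F[j], F[i]):
                counts[i] += 1
        if counts[i] == 0:
            fronts[0].append(i)
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in S[i]:
                counts[j] -= 1
                if counts[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]

F = np.random.rand(200, 3)              # 200 candidate solutions, 3 objectives
print([len(front) for front in fast_nondominated_sort(F)])
```

The pairwise dominance comparisons make this quadratic in the population size, which is exactly the cost that tree-based schemes such as ENS-NDT aim to avoid.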
Set processing in a network environment. [data bases and magnetic disks and tapes
NASA Technical Reports Server (NTRS)
Hardgrave, W. T.
1975-01-01
A combination of a local network, a mass storage system, and an autonomous set processor serving as a data/storage management machine is described. Its characteristics include: content-accessible data bases usable from all connected devices; efficient storage/access of large data bases; simple and direct programming with data manipulation and storage management handled by the set processor; simple data base design and entry from source representation to set processor representation with no predefinition necessary; capability available for user sort/order specification; significant reduction in tape/disk pack storage and mounts; flexible environment that allows upgrading hardware/software configuration without causing major interruptions in service; minimal traffic on data communications network; and improved central memory usage on large processors.
Zhang, Zhifei; Song, Yang; Cui, Haochen; Wu, Jayne; Schwartz, Fernando; Qi, Hairong
2017-09-01
Bucking the trend of big data, in microdevice engineering, small sample size is common, especially when the device is still at the proof-of-concept stage. The small sample size, small interclass variation, and large intraclass variation have brought biosignal analysis new challenges. Novel representation and classification approaches need to be developed to effectively recognize targets of interest in the absence of a large training set. Moving away from traditional signal analysis in the spatiotemporal domain, we exploit the biosignal representation in the topological domain, which reveals the intrinsic structure of point clouds generated from the biosignal. Additionally, we propose a Gaussian-based decision tree (GDT), which can efficiently classify the biosignals even when the sample size is extremely small. This study is motivated by the application of mastitis detection using low-voltage alternating current electrokinetics (ACEK), where five categories of biosignals need to be recognized with only two samples in each class. Experimental results demonstrate the robustness of the topological features as well as the advantage of GDT over some conventional classifiers in handling small datasets. Our method reduces the voltage of ACEK to a safe level and still yields high-fidelity results with a short assay time. This paper makes two distinctive contributions to the field of biosignal analysis: performing signal processing in the topological domain and handling extremely small datasets. Currently, there have been no related works that can efficiently tackle the dilemma between avoiding electrochemical reactions and accelerating the assay process using ACEK.
C3, A Command-line Catalog Cross-match Tool for Large Astrophysical Catalogs
NASA Astrophysics Data System (ADS)
Riccio, Giuseppe; Brescia, Massimo; Cavuoti, Stefano; Mercurio, Amata; di Giorgio, Anna Maria; Molinari, Sergio
2017-02-01
Modern Astrophysics is based on multi-wavelength data organized into large and heterogeneous catalogs. Hence, the need for efficient, reliable and scalable catalog cross-matching methods plays a crucial role in the era of the petabyte scale. Furthermore, multi-band data often have very different angular resolutions, requiring the highest generality of cross-matching features, mainly in terms of region shape and resolution. In this work we present C3 (Command-line Catalog Cross-match), a multi-platform application designed to efficiently cross-match massive catalogs. It is based on a multi-core parallel processing paradigm and conceived to be executed as a stand-alone command-line process or integrated within any generic data reduction/analysis pipeline, providing the maximum flexibility to the end user in terms of portability, parameter configuration, catalog formats, angular resolution, region shapes, coordinate units and cross-matching types. Using real data, extracted from public surveys, we discuss the cross-matching capabilities and computing time efficiency, also through a direct comparison with some publicly available tools, chosen among the most used within the community and representative of different interface paradigms. We verified that the C3 tool has excellent capabilities to perform an efficient and reliable cross-matching between large data sets. Although the elliptical cross-match and the parametric handling of angular orientation and offset are known concepts in the astrophysical context, their availability in the presented command-line tool makes C3 competitive in the context of public astronomical tools.
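The core operation that any such tool performs, a positional cross-match within a small angular radius, can be sketched with a k-d tree on unit vectors, as below. This sketch is not C3 and ignores its elliptical regions, multi-core partitioning and format handling; the radius and catalogs are invented for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def radec_to_xyz(ra_deg, dec_deg):
    """Convert sky coordinates in degrees to unit vectors on the celestial sphere."""
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    return np.column_stack([np.cos(dec) * np.cos(ra),
                            np.cos(dec) * np.sin(ra),
                            np.sin(dec)])

def crossmatch(ra1, dec1, ra2, dec2, radius_arcsec=1.0):
    """Nearest-neighbour sky cross-match via a k-d tree on unit vectors."""
    xyz1, xyz2 = radec_to_xyz(ra1, dec1), radec_to_xyz(ra2, dec2)
    # A small angle theta corresponds to a chord length of 2*sin(theta/2).
    chord = 2.0 * np.sin(np.radians(radius_arcsec / 3600.0) / 2.0)
    dist, idx = cKDTree(xyz2).query(xyz1, distance_upper_bound=chord)
    matched = np.isfinite(dist)          # unmatched sources come back with infinite distance
    return np.flatnonzero(matched), idx[matched]

rng = np.random.default_rng(2)
ra = rng.uniform(0.0, 360.0, 100000)
dec = rng.uniform(-90.0, 90.0, 100000)
i1, i2 = crossmatch(ra, dec, ra + 1e-5, dec, radius_arcsec=1.0)
print(len(i1), "matches between the two mock catalogs")
```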
A Cost Effective Block Framing Scheme for Underwater Communication
Shin, Soo-Young; Park, Soo-Hyun
2011-01-01
In this paper, the Selective Multiple Acknowledgement (SMA) method, based on Multiple Acknowledgement (MA), is proposed to efficiently reduce the amount of data transmission by redesigning the transmission frame structure and taking into consideration underwater transmission characteristics. The method is suited to integrated underwater system models, as the proposed method can handle the same amount of data in a much more compact frame structure without any appreciable loss of reliability. Herein, the performance of the proposed SMA method was analyzed and compared to those of the conventional Automatic Repeat-reQuest (ARQ), Block Acknowledgement (BA), block response, and MA methods. The efficiency of the underwater sensor network, which forms a large cluster and mostly contains uplink data, is expected to be improved by the proposed method. PMID:22247689
Efficient Analysis of Systems Biology Markup Language Models of Cellular Populations Using Arrays.
Watanabe, Leandro; Myers, Chris J
2016-08-19
The Systems Biology Markup Language (SBML) has been widely used for modeling biological systems. Although SBML has been successful in representing a wide variety of biochemical models, the core standard lacks the structure for representing large complex regular systems in a standard way, such as whole-cell and cellular population models. These models require a large number of variables to represent certain aspects of these types of models, such as the chromosome in the whole-cell model and the many identical cell models in a cellular population. While SBML core is not designed to handle these types of models efficiently, the proposed SBML arrays package can represent such regular structures more easily. However, in order to take full advantage of the package, analysis needs to be aware of the arrays structure. When expanding the array constructs within a model, some of the advantages of using arrays are lost. This paper describes a more efficient way to simulate arrayed models. To illustrate the proposed method, this paper uses a population of repressilator and genetic toggle switch circuits as examples. Results show that there are memory benefits using this approach with a modest cost in runtime.
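The efficiency argument for array-aware analysis can be illustrated outside SBML with a small vectorized simulation: a population of identical genetic toggle switches is integrated with the per-cell states held in arrays rather than expanded into one variable per cell. The model and parameters below are arbitrary illustrative choices, not the circuits or the simulator used in the paper.

```python
import numpy as np

# Vectorized Euler integration of N identical genetic toggle switches.
# The whole population is two arrays of length N, so adding cells does not
# add model variables one by one, loosely mirroring an arrays-aware simulator.
N, dt, steps = 10000, 0.01, 5000
alpha, beta = 10.0, 2.0                  # illustrative promoter strength and cooperativity

rng = np.random.default_rng(3)
u = rng.uniform(0.0, 5.0, N)             # repressor 1 concentration in each cell
v = rng.uniform(0.0, 5.0, N)             # repressor 2 concentration in each cell

for _ in range(steps):
    du = alpha / (1.0 + v ** beta) - u
    dv = alpha / (1.0 + u ** beta) - v
    u, v = u + dt * du, v + dt * dv

print("cells settled in the u-high state:", int(np.sum(u > v)))
```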
Llama medicine. Physical examination, restraint and handling.
Fowler, M
1989-03-01
Llamas are generally docile, easily handled domestic animals. Special chutes have been designed for safe, efficient restraint for examination and diagnostic procedures, most of which are commonly performed on other species. Anatomic differences make some of these procedures unique to the llama.
Zhu, Xiaoning
2014-01-01
Rail mounted gantry crane (RMGC) scheduling is important in reducing the makespan of handling operations and improving container handling efficiency. In this paper, we present an RMGC scheduling optimization model whose objective is to determine an optimized handling sequence that minimizes RMGC idle load time in handling tasks. An ant colony optimization algorithm is proposed to obtain near-optimal solutions. Computational experiments on a specific railway container terminal are conducted to illustrate the proposed model and solution algorithm. The results show that the proposed method is effective in reducing the idle load time of the RMGC. PMID:25538768
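A minimal ant colony optimization for a task-sequencing toy of this kind is sketched below: tasks sit at positions along the crane rail, and pheromone-guided tours try to minimize the empty (idle-load) travel between consecutive tasks. The real model in the paper additionally handles trains, trucks and timing constraints, which are omitted here.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 12                                        # handling tasks along the rail
pos = rng.uniform(0.0, 100.0, size=n)         # task positions (bay coordinates)
dist = np.abs(pos[:, None] - pos[None, :])    # idle-load travel between tasks

n_ants, n_iter, alpha, beta, rho = 20, 200, 1.0, 2.0, 0.1
tau = np.ones((n, n))                         # pheromone matrix

def tour_length(t):
    return sum(dist[t[i], t[i + 1]] for i in range(len(t) - 1))

best, best_len = None, np.inf
for _ in range(n_iter):
    for _ in range(n_ants):
        tour = [rng.integers(n)]
        while len(tour) < n:
            i = tour[-1]
            cand = [j for j in range(n) if j not in tour]
            # Probabilistic next-task choice weighted by pheromone and closeness.
            w = np.array([tau[i, j] ** alpha / (dist[i, j] + 1e-9) ** beta for j in cand])
            tour.append(cand[rng.choice(len(cand), p=w / w.sum())])
        L = tour_length(tour)
        if L < best_len:
            best, best_len = tour, L
    tau *= (1.0 - rho)                        # pheromone evaporation
    for i in range(n - 1):                    # reinforce the best-so-far sequence
        tau[best[i], best[i + 1]] += 1.0 / best_len

print("handling sequence:", best, "idle-load travel:", round(best_len, 1))
```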
Fuzzy Relational Compression Applied on Feature Vectors for Infant Cry Recognition
NASA Astrophysics Data System (ADS)
Reyes-Galaviz, Orion Fausto; Reyes-García, Carlos Alberto
Data compression is always advisable when it comes to handling and processing information quickly and efficiently. There are two main problems that need to be solved when handling data: storing information in less space and processing it in the shortest possible time. In infant cry analysis (ICA), there is always the need to build large sound repositories from crying babies; these samples have to be analyzed and used to train and test pattern recognition algorithms, which makes this a time-consuming task when working with uncompressed feature vectors. In this work, we show a simple but efficient method that uses the Fuzzy Relational Product (FRP) to compress the information inside a feature vector, building a compressed matrix that helps us recognize two kinds of pathologies in infants: asphyxia and deafness. We describe the sound analysis, which consists of the extraction of Mel Frequency Cepstral Coefficients to generate vectors that are later compressed using FRP. There is also a description of the infant cry database used in this work, along with the training and testing of a Time Delay Neural Network on the compressed features, which shows a performance of 96.44% with our proposed feature vector compression.
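The compression step can be illustrated with one common fuzzy relational composition, the max-min product, applied to a normalized feature matrix; the specific FRP operator and the construction of the relation used by the authors may differ, so the sketch below is only indicative.

```python
import numpy as np

def maxmin_compose(V, R):
    """Max-min fuzzy composition: out[i, k] = max_j min(V[i, j], R[j, k])."""
    return np.max(np.minimum(V[:, :, None], R[None, :, :]), axis=1)

rng = np.random.default_rng(5)
frames, n_mfcc, k = 300, 16, 4               # compress 16 coefficients per frame down to 4
V = rng.normal(size=(frames, n_mfcc))        # stand-in for an MFCC feature matrix

# Fuzzy sets live in [0, 1], so the coefficients are min-max normalized first.
V01 = (V - V.min()) / (V.max() - V.min())
R = rng.uniform(size=(n_mfcc, k))            # fuzzy relation used for the compression

compressed = maxmin_compose(V01, R)
print(V.shape, "->", compressed.shape)
```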
Neutron/gamma pulse shape discrimination (PSD) in plastic scintillators with digital PSD electronics
NASA Astrophysics Data System (ADS)
Hutcheson, Anthony L.; Simonson, Duane L.; Christophersen, Marc; Phlips, Bernard F.; Charipar, Nicholas A.; Piqué, Alberto
2013-05-01
Pulse shape discrimination (PSD) is a common method to distinguish between pulses produced by gamma rays and neutrons in scintillator detectors. This technique takes advantage of the property of many scintillators that excitations by recoil protons and electrons produce pulses with different characteristic shapes. Unfortunately, many scintillating materials with good PSD properties have other, undesirable properties such as flammability, toxicity, low availability, high cost, and/or limited size. In contrast, plastic scintillator detectors are relatively low-cost, and easily handled and mass-produced. Recent studies have demonstrated efficient PSD in plastic scintillators using a high concentration of fluorescent dyes. To further investigate the PSD properties of such systems, mixed plastic scintillator samples were produced and tested. The addition of up to 30 wt. % diphenyloxazole (DPO) and other chromophores in polyvinyltoluene (PVT) results in efficient detection with commercial detectors. These plastic scintillators are produced in large diameters up to 4 inches by melt blending directly in a container suitable for in-line detector use. This allows recycling and reuse of materials while varying the compositions. This strategy also avoids additional sample handling and polishing steps required when using removable molds. In this presentation, results are presented for different mixed-plastic compositions and compared with known scintillating materials.
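The underlying discrimination principle, charge comparison between the slow tail and the total integral of each pulse, can be sketched on synthetic pulses as below; the decay constants, integration windows and slow-component fraction are illustrative values, not measurements of the DPO/PVT formulations described here.

```python
import numpy as np

def psd_ratio(pulse, baseline_end=20, tail_start=40):
    """Charge-comparison PSD: fraction of the integrated pulse carried by the tail."""
    p = pulse - pulse[:baseline_end].mean()   # baseline subtraction
    total = p[baseline_end:].sum()
    tail = p[tail_start:].sum()
    return tail / total if total > 0 else 0.0

# Toy pulses: neutron events (recoil protons) carry more light in the slow component.
t = np.arange(200)
gamma_pulse = np.exp(-(t - 25) / 5.0) * (t >= 25)
neutron_pulse = (0.8 * np.exp(-(t - 25) / 5.0) + 0.2 * np.exp(-(t - 25) / 60.0)) * (t >= 25)

print("gamma PSD ratio:  ", round(psd_ratio(gamma_pulse), 3))
print("neutron PSD ratio:", round(psd_ratio(neutron_pulse), 3))
```

Plotting the ratio against total charge for many pulses yields the familiar two-band PSD scatter plot from which gamma and neutron events are separated.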
The efficiency of geophysical adjoint codes generated by automatic differentiation tools
NASA Astrophysics Data System (ADS)
Vlasenko, A. V.; Köhl, A.; Stammer, D.
2016-02-01
The accuracy of numerical models that describe complex physical or chemical processes depends on the choice of model parameters. Estimating an optimal set of parameters by optimization algorithms requires knowledge of the sensitivity of the process of interest to model parameters. Typically the sensitivity computation involves differentiation of the model, which can be performed by applying algorithmic differentiation (AD) tools to the underlying numerical code. However, existing AD tools differ substantially in design, legibility and computational efficiency. In this study we show that, for geophysical data assimilation problems of varying complexity, the performance of adjoint codes generated by the existing AD tools (i) Open_AD, (ii) Tapenade, (iii) NAGWare and (iv) Transformation of Algorithms in Fortran (TAF) can be vastly different. Based on simple test problems, we evaluate the efficiency of each AD tool with respect to computational speed, accuracy of the adjoint, the efficiency of memory usage, and the capability of each AD tool to handle modern FORTRAN 90-95 elements such as structures and pointers, which are new elements that either combine groups of variables or provide aliases to memory addresses, respectively. We show that, while operator overloading tools are the only ones suitable for modern codes written in object-oriented programming languages, their computational efficiency lags behind source transformation by orders of magnitude, rendering the application of these modern tools to practical assimilation problems prohibitive. In contrast, the application of source transformation tools appears to be the most efficient choice, allowing handling even large geophysical data assimilation problems. However, they can only be applied to numerical models written in earlier generations of programming languages. Our study indicates that applying existing AD tools to realistic geophysical problems faces limitations that urgently need to be solved to allow the continuous use of AD tools for solving geophysical problems on modern computer architectures.
PIMS sequencing extension: a laboratory information management system for DNA sequencing facilities.
Troshin, Peter V; Postis, Vincent Lg; Ashworth, Denise; Baldwin, Stephen A; McPherson, Michael J; Barton, Geoffrey J
2011-03-07
Facilities that provide a service for DNA sequencing typically support large numbers of users and experiment types. The cost of services is often reduced by the use of liquid handling robots but the efficiency of such facilities is hampered because the software for such robots does not usually integrate well with the systems that run the sequencing machines. Accordingly, there is a need for software systems capable of integrating different robotic systems and managing sample information for DNA sequencing services. In this paper, we describe an extension to the Protein Information Management System (PIMS) that is designed for DNA sequencing facilities. The new version of PIMS has a user-friendly web interface and integrates all aspects of the sequencing process, including sample submission, handling and tracking, together with capture and management of the data. The PIMS sequencing extension has been in production since July 2009 at the University of Leeds DNA Sequencing Facility. It has completely replaced manual data handling and simplified the tasks of data management and user communication. Samples from 45 groups have been processed with an average throughput of 10000 samples per month. The current version of the PIMS sequencing extension works with Applied Biosystems 3130XL 96-well plate sequencer and MWG 4204 or Aviso Theonyx liquid handling robots, but is readily adaptable for use with other combinations of robots. PIMS has been extended to provide a user-friendly and integrated data management solution for DNA sequencing facilities that is accessed through a normal web browser and allows simultaneous access by multiple users as well as facility managers. The system integrates sequencing and liquid handling robots, manages the data flow, and provides remote access to the sequencing results. The software is freely available, for academic users, from http://www.pims-lims.org/.
Improved thermodynamic modeling of the no-vent fill process and correlation with experimental data
NASA Technical Reports Server (NTRS)
Taylor, William J.; Chato, David J.
1991-01-01
The United States' plans to establish a permanent manned presence in space and to explore the Solar System created the need to efficiently handle large quantities of subcritical cryogenic fluids, particularly propellants such as liquid hydrogen and liquid oxygen, in low- to zero-gravity environments. One of the key technologies to be developed for fluid handling is the ability to transfer the cryogens between storage and spacecraft tanks. The no-vent fill method was identified as one way to perform this transfer. In order to understand how to apply this method, a model of the no-vent fill process is being developed and correlated with experimental data. The verified models can then be used to design and analyze configurations for tankage and subcritical fluid depots. The development of an improved macroscopic thermodynamic model of the no-vent fill process is discussed, and the analytical results from the computer program implementation of the model are correlated with experimental results for two different test tanks.
Real-time analysis of healthcare using big data analytics
NASA Astrophysics Data System (ADS)
Basco, J. Antony; Senthilkumar, N. C.
2017-11-01
Big Data Analytics (BDA) provides a tremendous advantage where there is a need for revolutionary performance in handling large amounts of data covering the four characteristics of volume, velocity, variety and veracity. BDA has the ability to handle such dynamic data, providing operational effectiveness and exceptionally beneficial output in several day-to-day applications for various organizations. Healthcare is one of the sectors that generates data constantly, covering all four characteristics, with outstanding growth. There are several challenges in processing patient records, which involve a variety of structured and unstructured formats. Introducing BDA into healthcare (HBDA) means dealing with sensitive patient-driven information, mostly in unstructured formats comprising prescriptions, reports, data from imaging systems, etc.; these challenges can be overcome with big data through enhanced efficiency in fetching and storing data. In this project, datasets similar to Electronic Medical Records (EMR), produced from numerous medical devices and mobile applications, are loaded into MongoDB using the Hadoop framework with an improved processing technique to improve the outcome of processing patient records.
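A minimal sketch of the storage side of such a pipeline is shown below using pymongo; the connection string, database and collection names, and the example records are hypothetical, and the Hadoop-based processing layer described in the paper is not shown.

```python
from pymongo import MongoClient

# Hypothetical connection string and database/collection names, for illustration only.
client = MongoClient("mongodb://localhost:27017/")
emr = client["hospital"]["emr_records"]

# Heterogeneous, partly unstructured records can be stored without a fixed schema.
emr.insert_many([
    {"patient_id": "P001", "type": "prescription", "drug": "metformin", "dose_mg": 500},
    {"patient_id": "P001", "type": "imaging", "modality": "MRI", "report": "no acute findings"},
    {"patient_id": "P002", "type": "vitals", "heart_rate": 72, "spo2": 98},
])

# Simple query in the style of a real-time lookup: all imaging reports for one patient.
for doc in emr.find({"patient_id": "P001", "type": "imaging"}):
    print(doc["modality"], "-", doc["report"])
```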
NASA Technical Reports Server (NTRS)
1969-01-01
Commercially available roller type desk pad provides an efficient and orderly manner of handling rolled paper tapes for proofreading. The fixture, which is modified to accept Flex-O-Writer or similar tapes and roll them in either direction, reduces the chance of damaging or soiling the tapes through repeated handling.
Handling Qualities of Large Flexible Aircraft. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Poopaka, S.
1980-01-01
The effects on handling qualities of the interaction of elastic modes with the rigid-body dynamics of a large flexible aircraft are studied by mathematical computer simulation. An analytical method is developed to predict pilot ratings when there is severe mode interaction. This is done by extending the optimal control model of the human pilot response to include a mode decomposition mechanism. The handling qualities are determined for a longitudinal tracking task using a large flexible aircraft, with parametric variations in the undamped natural frequencies of the two lowest-frequency symmetric elastic modes made to induce varying amounts of mode interaction.
SAR correlation technique - An algorithm for processing data with large range walk
NASA Technical Reports Server (NTRS)
Jin, M.; Wu, C.
1983-01-01
This paper presents an algorithm for synthetic aperture radar (SAR) azimuth correlation with an exceptionally large range migration effect that cannot be accommodated by the existing frequency-domain interpolation approach used in current SEASAT SAR processing. A mathematical model is first provided for the SAR point-target response in both the space (or time) and the frequency domain. A simple and efficient processing algorithm derived from the hybrid algorithm is then given. This processing algorithm performs azimuth correlation in two steps. The first step is a secondary range compression that handles the dispersion of the spectra of the azimuth response along range. The second step is the well-known frequency-domain range migration correction approach for the azimuth compression. The secondary range compression can be processed simultaneously with range pulse compression. Simulation results provided here indicate that this processing algorithm yields a satisfactory compressed impulse response for SAR data with large range migration.
Bellman Ford algorithm - in Routing Information Protocol (RIP)
NASA Astrophysics Data System (ADS)
Krianto Sulaiman, Oris; Mahmud Siregar, Amir; Nasution, Khairuddin; Haramaini, Tasliyah
2018-04-01
A large-scale network needs routing that can handle a large number of users, and one solution for coping with such networks is to use a routing protocol. There are two types of routing, static and dynamic: static routes are entered manually by the network administrator, while dynamic routes are formed automatically based on the existing network. Dynamic routing is efficient for extensive networks precisely because the routes are formed automatically. The Routing Information Protocol (RIP) is a dynamic routing protocol that uses the Bellman-Ford algorithm, which searches for the best path through the network by leveraging the cost of each link; with the Bellman-Ford algorithm, RIP can thus optimize routing in existing networks.
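The algorithm itself is compact. The sketch below computes shortest path costs by repeatedly relaxing every link, which is the computation RIP performs in distributed form by exchanging distance vectors between neighbouring routers; the five-router topology is invented for illustration, and RIP additionally caps the hop-count metric at 15.

```python
import math

def bellman_ford(n_nodes, edges, source):
    """Shortest path costs from source; edges are (u, v, cost) pairs."""
    dist = [math.inf] * n_nodes
    dist[source] = 0
    for _ in range(n_nodes - 1):            # each edge needs at most n-1 relaxation rounds
        updated = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                updated = True
        if not updated:                     # converged early
            break
    return dist

# Toy 5-router topology with symmetric unit-cost (hop-count) links.
links = [(0, 1, 1), (1, 0, 1), (1, 2, 1), (2, 1, 1),
         (2, 3, 1), (3, 2, 1), (1, 3, 1), (3, 1, 1),
         (3, 4, 1), (4, 3, 1)]
print(bellman_ford(5, links, source=0))
```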
Code of Federal Regulations, 2011 CFR
2011-01-01
... Primates 2 Transportation Standards § 3.92 Handling. (a) Any person subject to the Animal Welfare regulations (9 CFR parts 1, 2, and 3) who moves (including loading and unloading) nonhuman primates within, to... and efficiently as possible, and must provide the following during movement of the nonhuman primate...
Code of Federal Regulations, 2010 CFR
2010-01-01
... Primates 2 Transportation Standards § 3.92 Handling. (a) Any person subject to the Animal Welfare regulations (9 CFR parts 1, 2, and 3) who moves (including loading and unloading) nonhuman primates within, to... and efficiently as possible, and must provide the following during movement of the nonhuman primate...
Large-Scale Residential Demolition
The EPA provides resources for handling residential demolitions or renovations. This includes planning, handling harmful materials, recycling, funding, compliance assistance, good practices and regulations.
BLOOM: BLoom filter based oblivious outsourced matchings.
Ziegeldorf, Jan Henrik; Pennekamp, Jan; Hellmanns, David; Schwinger, Felix; Kunze, Ike; Henze, Martin; Hiller, Jens; Matzutt, Roman; Wehrle, Klaus
2017-07-26
Whole genome sequencing has become fast, accurate, and cheap, paving the way towards the large-scale collection and processing of human genome data. Unfortunately, this dawning genome era does not only promise tremendous advances in biomedical research but also causes unprecedented privacy risks for the many. Handling storage and processing of large genome datasets through cloud services greatly aggravates these concerns. Current research efforts thus investigate the use of strong cryptographic methods and protocols to implement privacy-preserving genomic computations. We propose FHE-BLOOM and PHE-BLOOM, two efficient approaches for genetic disease testing using homomorphically encrypted Bloom filters. Both approaches allow the data owner to securely outsource storage and computation to an untrusted cloud. FHE-BLOOM is fully secure in the semi-honest model while PHE-BLOOM slightly relaxes security guarantees in a trade-off for highly improved performance. We implement and evaluate both approaches on a large dataset of up to 50 patient genomes each with up to 1000000 variations (single nucleotide polymorphisms). For both implementations, overheads scale linearly in the number of patients and variations, while PHE-BLOOM is faster by at least three orders of magnitude. For example, testing disease susceptibility of 50 patients with 100000 variations requires only a total of 308.31 s (σ=8.73 s) with our first approach and a mere 0.07 s (σ=0.00 s) with the second. We additionally discuss security guarantees of both approaches and their limitations as well as possible extensions towards more complex query types, e.g., fuzzy or range queries. Both approaches handle practical problem sizes efficiently and are easily parallelized to scale with the elastic resources available in the cloud. The fully homomorphic scheme, FHE-BLOOM, realizes a comprehensive outsourcing to the cloud, while the partially homomorphic scheme, PHE-BLOOM, trades a slight relaxation of security guarantees against performance improvements by at least three orders of magnitude.
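The plaintext building block of both schemes, a Bloom filter encoding a patient's variant set, can be sketched as below; the homomorphic encryption of the filter, which is the actual contribution of FHE-BLOOM and PHE-BLOOM, is not shown, and the variant identifiers are illustrative.

```python
import hashlib

class BloomFilter:
    """Plain (unencrypted) Bloom filter; membership tests may give false positives
    but never false negatives."""
    def __init__(self, n_bits=1 << 20, n_hashes=7):
        self.n_bits, self.n_hashes = n_bits, n_hashes
        self.bits = bytearray(n_bits // 8)

    def _positions(self, item):
        # Derive k bit positions from salted SHA-256 digests of the item.
        for i in range(self.n_hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.n_bits

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

# Encode one patient's variants, then test candidate variants against the filter.
bf = BloomFilter()
for snp in ["rs429358:C", "rs7412:T", "rs1801133:A"]:   # illustrative identifiers
    bf.add(snp)
print("rs429358:C" in bf, "rs0000001:G" in bf)
```

In the paper's setting the filter bits are additionally encrypted homomorphically before being outsourced, so the cloud can evaluate such membership queries without ever seeing the genome data in the clear.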
Dairy cow handling facilities and the perception of Beef Quality Assurance on Colorado dairies.
Adams, A E; Olea-Popelka, F J; Grandin, T; Woerner, D R; Roman-Muniz, I N
2014-02-01
A survey was conducted on Colorado dairies to assess attitudes and practices regarding Dairy Beef Quality Assurance (DBQA). The objectives were to (1) assess the need for a new handling facility that would allow all injections to be administered via DBQA standards; (2) establish if Colorado dairy producers are concerned with DBQA; and (3) assess differences in responses between dairy owners and herdsmen. Of the 95 dairies contacted, 20 (21%) agreed to participate, with a median herd size of 1,178. When asked to rank the following 7 traits--efficiency, animal safety, human safety, ease of animal handling, ease of operation, inject per Beef Quality Assurance (BQA) procedures, and cost--in order of priority when designing a new handling facility, human and animal safety were ranked highest in priority (first or second) by the majority of participants, with ease of animal handling and efficiency ranked next. Interestingly, the administration of injections per BQA standards was ranked sixth or seventh by most participants. Respondents estimated the average annual income from the sale of cull cows to be 4.6% of all dairy income, with 50% receiving at least one carcass discount or condemnation in the past 12 mo. Although almost all of the participating dairy farmers stated that the preferred injection site for medications was the neck region, a significant number admitted to using alternate injection sites. In contrast, no difference was found between responses regarding the preferred and actual location for intravenous injections. Although most participating producers are aware of BQA injection guidelines, they perceive efficiency as more important, which could result in injections being administered in locations not promoted by BQA. Dairy owners and herdsmen disagreed on whether or not workers had been injured in the animal handling area in the last 12 mo. Handling facilities that allow for an efficient and safe way to administer drugs according to BQA guidelines and educational opportunities that highlight the effect of improved DBQA on profitability could prove useful. Dairy producers play a key role in ensuring that dairy beef is safe and high quality, and just as they are committed to producing safe and nutritious milk for their customers, they should be committed to producing the best quality beef. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
An Analysis of Performance Enhancement Techniques for Overset Grid Applications
NASA Technical Reports Server (NTRS)
Djomehri, J. J.; Biswas, R.; Potsdam, M.; Strawn, R. C.; Biegel, Bryan (Technical Monitor)
2002-01-01
The overset grid methodology has significantly reduced time-to-solution of high-fidelity computational fluid dynamics (CFD) simulations about complex aerospace configurations. The solution process resolves the geometrical complexity of the problem domain by using separately generated but overlapping structured discretization grids that periodically exchange information through interpolation. However, high performance computations of such large-scale realistic applications must be handled efficiently on state-of-the-art parallel supercomputers. This paper analyzes the effects of various performance enhancement techniques on the parallel efficiency of an overset grid Navier-Stokes CFD application running on an SGI Origin2000 machine. Specifically, the role of asynchronous communication, grid splitting, and grid grouping strategies are presented and discussed. Results indicate that performance depends critically on the level of latency hiding and the quality of load balancing across the processors.
Efficient full wave code for the coupling of large multirow multijunction LH grills
NASA Astrophysics Data System (ADS)
Preinhaelter, Josef; Hillairet, Julien; Milanesio, Daniele; Maggiora, Riccardo; Urban, Jakub; Vahala, Linda; Vahala, George
2017-11-01
The full wave code OLGA, for determining the coupling of a single-row lower hybrid launcher (waveguide grill) to the plasma, is extended to handle multirow multijunction active-passive structures (like the C3 and C4 launchers on TORE SUPRA) by implementing the scattering matrix formalism. The extended code is still computationally fast because of the use of (i) 2D splines of the plasma surface admittance in the accessibility region of the k-space, (ii) high-order Gaussian quadrature rules for the integration of the coupling elements and (iii) the symmetries of the coupling elements in the multiperiodic structures. The extended OLGA code is benchmarked against the ALOHA-1D, ALOHA-2D and TOPLHA codes for the coupling of the C3 and C4 TORE SUPRA launchers for several plasma configurations derived from reflectometry and interferometry. Unlike nearly all codes (except the ALOHA-1D code), OLGA does not require large computational resources and can be used routinely in planning experimental runs. In particular, it is shown that the OLGA code correctly handles the coupling of the C3 and C4 launchers over a very wide range of plasma densities in front of the grill.
SUBGRADE MONOLITHIC ENCASEMENT STABILIZATION OF CATEGORY 3 LOW LEVEL WASTE (LLW)
DOE Office of Scientific and Technical Information (OSTI.GOV)
PHILLIPS, S.J.
2004-02-03
A highly efficient and effective technology has been developed and is being used for stabilization of Hazard Category 3 low-level waste at the U.S. Department of Energy's Hanford Site. Using large, structurally interconnected monoliths, which form one large monolith that fills a waste disposal trench, the patented technology can be used for final internment of almost any hazardous, radioactive, or toxic waste or combinations of these waste materials packaged in a variety of sizes, shapes, and volumes within governmental regulatory limits. The technology increases waste volumetric loading by 100 percent, area use efficiency by 200 percent, and volumetric configuration efficiency by more than 500 percent over past practices. To date, in excess of 2,010 m³ of contact-handled and remote-handled low-level radioactive waste have been interned using this patented technology. Additionally, in excess of 120 m³ of low-level radioactive waste requiring stabilization in a low-diffusion-coefficient waste encasement matrix has been disposed of using this technology. Greater than five orders of magnitude in radiation exposure reduction have been noted using this method of encasement of Hazard Category 3 waste. Additionally, exposure monitored at all monolith locations produced by the slip-form technology is less than 1.29 × 10⁻⁷ C·kg⁻¹. Monolithic encasement of Hazard Category 3 low-level waste and other waste category materials may be successfully accomplished using this technology at nominally any governmental or private sector waste disposal facility. Additionally, other waste materials consisting of hazardous, radioactive, toxic, or mixed waste materials can be disposed of using the monolithic slip-form encasement technology.
Volk, Carol J; Lucero, Yasmin; Barnas, Katie
2014-05-01
Increasingly, research and management in natural resource science rely on very large datasets compiled from multiple sources. While it is generally good to have more data, utilizing large, complex datasets has introduced challenges in data sharing, especially for collaborating researchers in disparate locations ("distributed research teams"). We surveyed natural resource scientists about common data-sharing problems. The major issues identified by our survey respondents (n = 118) when providing data were lack of clarity in the data request (including format of data requested). When receiving data, survey respondents reported various insufficiencies in documentation describing the data (e.g., no data collection description/no protocol, data aggregated, or summarized without explanation). Since metadata, or "information about the data," is a central obstacle in efficient data handling, we suggest documenting metadata through data dictionaries, protocols, read-me files, explicit null value documentation, and process metadata as essential to any large-scale research program. We advocate for all researchers, but especially those involved in distributed teams to alleviate these problems with the use of several readily available communication strategies including the use of organizational charts to define roles, data flow diagrams to outline procedures and timelines, and data update cycles to guide data-handling expectations. In particular, we argue that distributed research teams magnify data-sharing challenges making data management training even more crucial for natural resource scientists. If natural resource scientists fail to overcome communication and metadata documentation issues, then negative data-sharing experiences will likely continue to undermine the success of many large-scale collaborative projects.
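The data dictionaries the authors recommend can be very lightweight. The sketch below is not taken from the survey itself; it shows one hypothetical form such documentation might take in Python, with field names, units, and explicit null values that are illustrative assumptions only.

```python
# Minimal illustrative data dictionary for a shared dataset; field names,
# units, and null conventions are hypothetical examples, not from the survey.
data_dictionary = {
    "site_id":  {"type": "str",   "description": "Unique monitoring site code", "null": "NA"},
    "date":     {"type": "date",  "description": "Sampling date (YYYY-MM-DD)",  "null": "NA"},
    "flow_cms": {"type": "float", "description": "Stream discharge", "units": "m^3/s", "null": -999},
    "protocol": {"type": "str",   "description": "Field protocol version",      "null": "unknown"},
}

def check_record(record):
    """Flag fields that are missing or that use the documented explicit null value."""
    problems = []
    for field, spec in data_dictionary.items():
        if field not in record:
            problems.append(f"missing field: {field}")
        elif record[field] == spec["null"]:
            problems.append(f"explicit null in: {field}")
    return problems

print(check_record({"site_id": "WS-01", "date": "2013-06-01", "flow_cms": -999}))
```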
Code of Federal Regulations, 2013 CFR
2013-01-01
... WELFARE STANDARDS Specifications for the Humane Handling, Care, Treatment, and Transportation of Dogs and... regulations (9 CFR parts 1, 2, and 3) who moves (including loading and unloading) dogs or cats within, to, or... efficiently as possible and must provide the following during movement of the dog or cat: (1) Shelter from...
Code of Federal Regulations, 2014 CFR
2014-01-01
... WELFARE STANDARDS Specifications for the Humane Handling, Care, Treatment, and Transportation of Dogs and... regulations (9 CFR parts 1, 2, and 3) who moves (including loading and unloading) dogs or cats within, to, or... efficiently as possible and must provide the following during movement of the dog or cat: (1) Shelter from...
Code of Federal Regulations, 2012 CFR
2012-01-01
... WELFARE STANDARDS Specifications for the Humane Handling, Care, Treatment, and Transportation of Dogs and... regulations (9 CFR parts 1, 2, and 3) who moves (including loading and unloading) dogs or cats within, to, or... efficiently as possible and must provide the following during movement of the dog or cat: (1) Shelter from...
Code of Federal Regulations, 2011 CFR
2011-01-01
... WELFARE STANDARDS Specifications for the Humane Handling, Care, Treatment, and Transportation of Dogs and... regulations (9 CFR parts 1, 2, and 3) who moves (including loading and unloading) dogs or cats within, to, or... efficiently as possible and must provide the following during movement of the dog or cat: (1) Shelter from...
Code of Federal Regulations, 2010 CFR
2010-01-01
... WELFARE STANDARDS Specifications for the Humane Handling, Care, Treatment, and Transportation of Dogs and... regulations (9 CFR parts 1, 2, and 3) who moves (including loading and unloading) dogs or cats within, to, or... efficiently as possible and must provide the following during movement of the dog or cat: (1) Shelter from...
Defluoridation potential of jute fibers grafted with fatty acyl chain
NASA Astrophysics Data System (ADS)
Manna, Suvendu; Saha, Prosenjit; Roy, Debasis; Sen, Ramkrishna; Adhikari, Basudam
2015-11-01
Waterborne fluoride is usually removed from water by coagulation, adsorption, ion exchange, electrodialysis or reverse osmosis. These processes are often effective over narrow pH ranges, release ions considered hazardous to human health or produce large volumes of toxic sludge that are difficult to handle and dispose of. Although plant matter has been shown to remove waterborne fluoride, it suffers from poor removal efficiency. Following from the insight that interaction between microbial carbohydrate biopolymers and anionic surfaces is often facilitated by lipids, an attempt has been made to enhance the fluoride adsorption efficiency of jute by grafting the lignocellulosic fiber with fatty acyl chains found in vegetable oils. The fluoride removal efficiency of grafted jute was found to be comparable to or higher than those of alternative defluoridation processes. Infrared and X-ray photoelectron spectroscopic evidence indicated that hydrogen bonding, protonation and C-F bonding were responsible for fluoride accumulation on grafted jute. Adsorption based on grafted jute fibers appears to be an economical, sustainable and eco-friendly alternative technique for removing waterborne fluoride.
An Implicit Algorithm for the Numerical Simulation of Shape-Memory Alloys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Becker, R; Stolken, J; Jannetti, C
Shape-memory alloys (SMA) have the potential to be used in a variety of interesting applications due to their unique properties of pseudoelasticity and the shape-memory effect. However, in order to design SMA devices efficiently, a physics-based constitutive model is required to accurately simulate the behavior of shape-memory alloys. The scope of this work is to extend the numerical capabilities of the SMA constitutive model developed by Jannetti et al. (2003) to handle large-scale polycrystalline simulations. The constitutive model is implemented within the finite-element software ABAQUS/Standard using a user-defined material subroutine, or UMAT. To improve the efficiency of the numerical simulations, so that polycrystalline specimens of shape-memory alloys can be modeled, a fully implicit algorithm has been implemented to integrate the constitutive equations. Using an implicit integration scheme increases the efficiency of the UMAT over the previously implemented explicit integration method by a factor of more than 100 for single crystal simulations.
Projected role of advanced computational aerodynamic methods at the Lockheed-Georgia company
NASA Technical Reports Server (NTRS)
Lores, M. E.
1978-01-01
Experience with advanced computational methods being used at the Lockheed-Georgia Company to aid in the evaluation and design of new and modified aircraft indicates that large and specialized computers will be needed to make advanced three-dimensional viscous aerodynamic computations practical. The Numerical Aerodynamic Simulation Facility should be used to provide a tool for designing better aerospace vehicles while at the same time reducing development costs, by performing computations using Navier-Stokes equation solution algorithms and by permitting less sophisticated but nevertheless complex calculations to be made efficiently. Configuration definition procedures and data output formats can probably best be defined in cooperation with industry; therefore, the computer should handle many remote terminals efficiently. The capability of transferring data to and from other computers needs to be provided. Because of the significant amount of input and output associated with 3-D viscous flow calculations and because of the exceedingly fast computation speed envisioned for the computer, special attention should be paid to providing rapid, diversified, and efficient input and output.
Perfect blind restoration of images blurred by multiple filters: theory and efficient algorithms.
Harikumar, G; Bresler, Y
1999-01-01
We address the problem of restoring an image from its noisy convolutions with two or more unknown finite impulse response (FIR) filters. We develop theoretical results about the existence and uniqueness of solutions, and show that under some generically true assumptions, both the filters and the image can be determined exactly in the absence of noise, and stably estimated in its presence. We present efficient algorithms to estimate the blur functions and their sizes. These algorithms are of two types, subspace-based and likelihood-based, and are extensions of techniques proposed for the solution of the multichannel blind deconvolution problem in one dimension. We present memory and computation-efficient techniques to handle the very large matrices arising in the two-dimensional (2-D) case. Once the blur functions are determined, they are used in a multichannel deconvolution step to reconstruct the unknown image. The theoretical and practical implications of edge effects, and "weakly exciting" images are examined. Finally, the algorithms are demonstrated on synthetic and real data.
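The subspace algorithms referenced above are not reproduced here. As a rough illustration of the underlying idea, the following sketch uses the one-dimensional cross-relation property of two noise-free channels (y2*h1 = y1*h2) to recover two unknown FIR blurs, up to a common scale factor, from the null space of a data matrix; the filter length and signals are made-up test data.

```python
import numpy as np

def conv_matrix(y, L):
    """Full-convolution matrix C such that C @ h == np.convolve(y, h) for len(h) == L."""
    C = np.zeros((len(y) + L - 1, L))
    for j in range(L):
        C[j:j + len(y), j] = y
    return C

rng = np.random.default_rng(0)
L = 4                                   # assumed (known) blur length
x  = rng.standard_normal(200)           # unknown source signal
h1 = rng.standard_normal(L)             # unknown blur 1
h2 = rng.standard_normal(L)             # unknown blur 2
y1 = np.convolve(x, h1)                 # two noise-free observations
y2 = np.convolve(x, h2)

# Cross-relation: y2*h1 - y1*h2 = 0, so [h1; h2] spans the null space of A.
A = np.hstack([conv_matrix(y2, L), -conv_matrix(y1, L)])
_, _, Vt = np.linalg.svd(A, full_matrices=False)
h_est = Vt[-1]                          # right singular vector of the smallest singular value
h1_est, h2_est = h_est[:L], h_est[L:]

# The estimate is exact up to a common scale factor in the noise-free case.
scale = h1[0] / h1_est[0]
print(np.allclose(h1, h1_est * scale), np.allclose(h2, h2_est * scale))
```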
Hampton, Paul M
2018-02-01
As body size increases, some predators eliminate small prey from their diet exhibiting an ontogenetic shift toward larger prey. In contrast, some predators show a telescoping pattern of prey size in which both large and small prey are consumed with increasing predator size. To explore a functional explanation for the two feeding patterns, I examined feeding effort as both handling time and number of upper jaw movements during ingestion of fish of consistent size. I used a range of body sizes from two snake species that exhibit ontogenetic shifts in prey size (Nerodia fasciata and N. rhombifer) and a species that exhibits telescoping prey size with increased body size (Thamnophis proximus). For the two Nerodia species, individuals with small or large heads exhibited greater difficulty in feeding effort compared to snakes of intermediate size. However, for T. proximus measures of feeding effort were negatively correlated with head length and snout-vent length (SVL). These data indicate that ontogenetic shifters of prey size develop trophic morphology large enough that feeding effort increases for disproportionately small prey. I also compared changes in body size among the two diet strategies for active foraging snake species using data gleaned from the literature to determine if increased change in body size and thereby feeding morphology is observable in snakes regardless of prey type or foraging habitat. Of the 30 species sampled from literature, snakes that exhibit ontogenetic shifts in prey size have a greater magnitude of change in SVL than species that have telescoping prey size patterns. Based upon the results of the two data sets above, I conclude that ontogenetic shifts away from small prey occur in snakes due, in part, to growth of body size and feeding structures beyond what is efficient for handling small prey. Copyright © 2017. Published by Elsevier GmbH.
NASA Technical Reports Server (NTRS)
Belytschko, Ted; Wing, Kam Liu
1987-01-01
In the Probabilistic Finite Element Method (PFEM), finite element methods have been efficiently combined with second-order perturbation techniques to provide an effective method for informing the designer of the range of response which is likely in a given problem. The designer must provide as input the statistical character of the input variables, such as yield strength, load magnitude, and Young's modulus, by specifying their mean values and their variances. The output then consists of the mean response and the variance in the response. Thus the designer is given a much broader picture of the predicted performance than with simply a single response curve. These methods are applicable to a wide class of problems, provided that the scale of randomness is not too large and the probabilistic density functions possess decaying tails. By incorporating the computational techniques we have developed in the past 3 years for efficiency, the probabilistic finite element methods are capable of handling large systems with many sources of uncertainties. Sample results for an elastic-plastic ten-bar structure and an elastic-plastic plane continuum with a circular hole subject to cyclic loadings with the yield stress on the random field are given.
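The PFEM formulation itself is not given here. The toy below only sketches the general second-moment perturbation idea for a single spring, u = F/k, with random stiffness: the mean and variance of the input are propagated to the response and the estimates are checked by Monte Carlo. All numbers are illustrative assumptions, not from the report.

```python
import numpy as np

# Toy second-moment perturbation for a single spring, u = F / k (assumed example).
F      = 10.0       # deterministic load
k_mean = 200.0      # mean stiffness
k_var  = 15.0**2    # variance of stiffness

u_mean     = F / k_mean                                    # zeroth-order (mean-value) response
du_dk      = -F / k_mean**2                                # sensitivity evaluated at the mean
u_var      = du_dk**2 * k_var                              # first-order variance propagation
u_mean_2nd = u_mean + 0.5 * (2 * F / k_mean**3) * k_var    # second-order mean correction

print(f"perturbation: mean ~ {u_mean_2nd:.5f}, std dev ~ {np.sqrt(u_var):.5f}")

# Monte Carlo check of the perturbation estimates (normal stiffness, far from zero).
k_samples = np.random.default_rng(1).normal(k_mean, np.sqrt(k_var), 200_000)
u_samples = F / k_samples
print(f"Monte Carlo : mean ~ {u_samples.mean():.5f}, std dev ~ {u_samples.std():.5f}")
```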
A multi-user real time inventorying system for radioactive materials: a networking approach.
Mehta, S; Bandyopadhyay, D; Hoory, S
1998-01-01
A computerized system for radioisotope management and real time inventory coordinated across a large organization is reported. It handles hundreds of individual users and their separate inventory records. Use of highly efficient computer network and database technologies makes it possible to accept, maintain, and furnish all records related to receipt, usage, and disposal of the radioactive materials for the users separately and collectively. The system's central processor is an HP-9000/800 G60 RISC server and users from across the organization use their personal computers to login to this server using the TCP/IP networking protocol, which makes distributed use of the system possible. Radioisotope decay is automatically calculated by the program, so that it can make the up-to-date radioisotope inventory data of an entire institution available immediately. The system is specifically designed to allow use by large numbers of users (about 300) and accommodates high volumes of data input and retrieval without compromising simplicity and accuracy. Overall, it is an example of a true multi-user, on-line, relational database information system that makes the functioning of a radiation safety department efficient.
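The automatic decay correction such an inventory system applies is the standard half-life law A(t) = A0 · 2^(−Δt/T½). A minimal sketch, using commonly tabulated half-lives and a hypothetical inventory record, might look like this:

```python
from datetime import date

# Illustrative decay correction for an inventory record: A(t) = A0 * 2 ** (-dt / T_half).
HALF_LIFE_DAYS = {"P-32": 14.3, "I-125": 59.4, "S-35": 87.4}   # commonly tabulated values

def current_activity(activity_mci, assay_date, isotope, today=None):
    """Decay-correct a recorded activity (mCi) from its assay date to today."""
    today = today or date.today()
    dt_days = (today - assay_date).days
    return activity_mci * 2.0 ** (-dt_days / HALF_LIFE_DAYS[isotope])

# Hypothetical inventory entry: 5 mCi of P-32 assayed on 2024-01-01.
print(round(current_activity(5.0, date(2024, 1, 1), "P-32", today=date(2024, 1, 29)), 3))
```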
Evaluation of on-line pulse control for vibration suppression in flexible spacecraft
NASA Technical Reports Server (NTRS)
Masri, Sami F.
1987-01-01
A numerical simulation was performed, by means of a large-scale finite element code capable of handling large deformations and/or nonlinear behavior, to investigate the suitability of the nonlinear pulse-control algorithm to suppress the vibrations induced in the Spacecraft Control Laboratory Experiment (SCOLE) components under realistic maneuvers. Among the topics investigated were the effects of various control parameters on the efficiency and robustness of the vibration control algorithm. Advanced nonlinear control techniques were applied to an idealized model of some of the SCOLE components to develop an efficient algorithm to determine the optimal locations of point actuators, considering the hardware on the SCOLE project as distributed in nature. The control was obtained from a quadratic optimization criterion, given in terms of the state variables of the distributed system. An experimental investigation was performed on a model flexible structure resembling the essential features of the SCOLE components, and electrodynamic and electrohydraulic actuators were used to investigate the applicability of the control algorithm with such devices in addition to mass-ejection pulse generators using compressed air.
A Look at Technologies Vis-a-vis Information Handling Techniques.
ERIC Educational Resources Information Center
Swanson, Rowena W.
The paper examines several ideas for information handling implemented with new technologies that suggest directions for future development. These are grouped under the topic headings: Handling Large Data Banks, Providing Personalized Information Packages, Providing Information Specialist Services, and Expanding Man-Machine Interaction. Guides in…
Energy-efficient container handling using hybrid model predictive control
NASA Astrophysics Data System (ADS)
Xin, Jianbin; Negenborn, Rudy R.; Lodewijks, Gabriel
2015-11-01
The performance of container terminals needs to be improved to accommodate the growth in container volumes while maintaining sustainability. This paper provides a methodology for determining the trajectory of three key interacting machines for carrying out the so-called bay handling task, which involves transporting containers between a vessel and the stacking area in an automated container terminal. The behaviours of the interacting machines are modelled as a collection of interconnected hybrid systems. Hybrid model predictive control (MPC) is proposed to achieve optimal performance, balancing handling capacity against energy consumption. The underlying control problem is hereby formulated as a mixed-integer linear programming problem. Simulation studies illustrate that a higher penalty on energy consumption indeed leads to improved sustainability using less energy. Moreover, simulations illustrate how the proposed energy-efficient hybrid MPC controller performs under different types of uncertainties.
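The paper's hybrid MPC formulation is not reproduced here. The toy below only illustrates the kind of mixed-integer linear trade-off between handling time and energy consumption that such a controller would solve at each step, using SciPy's MILP interface; the mode costs and time budget are invented numbers.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Three container moves, each done either in a fast mode (x_i = 1) or a slow mode (x_i = 0).
# Fast: 2 time units, 5 energy units.  Slow: 4 time units, 2 energy units.  (Assumed values.)
# Minimize added energy subject to finishing all moves within a 10-time-unit horizon.
n = 3
energy = np.full(n, 3.0)        # extra energy of choosing fast: 5 - 2 = 3 per move
time_saving = np.full(n, 2.0)   # time saved by choosing fast: 4 - 2 = 2 per move
time_budget = 10.0
base_time = 4.0 * n             # all-slow completion time

# Constraint: base_time - sum(time_saving * x) <= time_budget.
cons = LinearConstraint(time_saving[None, :], lb=base_time - time_budget, ub=np.inf)
res = milp(c=energy,                 # minimize added energy (constant base energy dropped)
           constraints=cons,
           integrality=np.ones(n),   # x_i binary
           bounds=Bounds(0, 1))

print(res.x, "added energy:", res.fun)   # e.g. one move runs fast, two run slow
```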
NASA Astrophysics Data System (ADS)
Curletti, F.; Gandiglio, M.; Lanzini, A.; Santarelli, M.; Maréchal, F.
2015-10-01
This article investigates the techno-economic performance of large integrated biogas Solid Oxide Fuel Cell (SOFC) power plants. Both atmospheric and pressurized operation is analysed, with CO2 vented or captured. The SOFC module produces a constant electrical power of 1 MWe. Sensitivity analysis and multi-objective optimization are the mathematical tools used to investigate the effects of Fuel Utilization (FU), SOFC operating temperature and pressure on the plant energy and economic performances. FU is the design variable that most affects the plant performance. Pressurized SOFC operation with hybridization with a gas turbine provides a notable boost in electrical efficiency. For most of the proposed plant configurations, the electrical efficiency ranges in the interval 50-62% (LHV biogas) when a trade-off between energy and economic performances is applied based on Pareto charts obtained from multi-objective plant optimization. The hybrid SOFC is potentially able to reach an efficiency above 70% when FU is 90%. Carbon capture entails a penalty of more than 10 percentage points in pressurized configurations, mainly due to the extra energy burdens of pressurizing the captured CO2 and producing oxygen, and of separately handling the anode and cathode exhausts and recovering power from them.
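The plant models behind the Pareto charts are not available from the abstract. The snippet below only sketches how a Pareto front for an efficiency-versus-cost trade-off can be extracted from a set of candidate designs; the candidate values are random placeholders.

```python
import numpy as np

def pareto_front(points):
    """Return the non-dominated points when maximizing efficiency and minimizing cost.

    points: array of shape (n, 2) with columns (efficiency, specific_cost).
    """
    keep = []
    for i, (eff_i, cost_i) in enumerate(points):
        dominated = any(
            (eff_j >= eff_i and cost_j <= cost_i) and (eff_j > eff_i or cost_j < cost_i)
            for j, (eff_j, cost_j) in enumerate(points) if j != i
        )
        if not dominated:
            keep.append(i)
    return points[keep]

rng = np.random.default_rng(3)
candidates = np.column_stack([rng.uniform(0.50, 0.62, 200),    # efficiency (LHV), placeholder
                              rng.uniform(2000, 6000, 200)])   # specific cost, arbitrary units
print(pareto_front(candidates))
```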
Programming strategy for efficient modeling of dynamics in a population of heterogeneous cells.
Hald, Bjørn Olav; Garkier Hendriksen, Morten; Sørensen, Preben Graae
2013-05-15
Heterogeneity is a ubiquitous property of biological systems. Even in a genetically identical population of a single cell type, cell-to-cell differences are observed. Although the functional behavior of a given population is generally robust, the consequences of heterogeneity are fairly unpredictable. In heterogeneous populations, synchronization of events becomes a cardinal problem, particularly for phase coherence in oscillating systems. This article presents a novel strategy for the construction of large-scale simulation programs of heterogeneous biological entities. The strategy is designed to be tractable, to handle heterogeneity and to handle computational cost issues simultaneously, primarily by writing a generator of the 'model to be simulated'. We apply the strategy to model glycolytic oscillations among thousands of yeast cells coupled through the extracellular medium. The usefulness is illustrated through (i) benchmarking, showing an almost linear relationship between model size and run time, and (ii) analysis of the resulting simulations, showing that contrary to the experimental situation, synchronous oscillations are surprisingly hard to achieve, underpinning the need for tools to study heterogeneity. Thus, we present an efficient strategy to model the biological heterogeneity neglected by ordinary mean-field models. This tool is well suited to facilitate the elucidation of the physiologically vital problem of synchronization. The complete Python code is available as Supplementary Information. bjornhald@gmail.com or pgs@kiku.dk Supplementary data are available at Bioinformatics online.
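The glycolytic model itself is not reproduced here. The sketch below only illustrates the generator idea described above: programmatically building the right-hand side of a large ODE system for N cells with cell-to-cell parameter heterogeneity and coupling through a shared extracellular pool. The toy kinetics and parameter spread are assumptions, not the authors' model.

```python
import numpy as np
from scipy.integrate import solve_ivp

def build_population_rhs(n_cells, k, coupling, rng):
    """Generate the ODE right-hand side for n_cells toy cells.

    Each cell i has state x_i with a heterogeneous rate k_i and exchanges material
    with a shared extracellular pool E (last state variable).  The kinetics are
    deliberately simple placeholders, not the glycolytic model of the paper.
    """
    k_i = k * (1.0 + 0.1 * rng.standard_normal(n_cells))   # cell-to-cell heterogeneity

    def rhs(t, y):
        x, E = y[:-1], y[-1]
        dx = -k_i * x + coupling * (E - x)                  # intracellular dynamics + exchange
        dE = -coupling * np.sum(E - x) / n_cells            # shared medium balances the exchange
        return np.append(dx, dE)

    return rhs

rng = np.random.default_rng(42)
n = 200
rhs = build_population_rhs(n, k=1.0, coupling=0.5, rng=rng)
y0 = np.append(rng.uniform(0.5, 1.5, n), 1.0)
sol = solve_ivp(rhs, (0.0, 10.0), y0)
print(sol.y.shape)          # (n + 1 states, number of time points)
```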
To repair or not to repair: with FAVOR there is no question
NASA Astrophysics Data System (ADS)
Garetto, Anthony; Schulz, Kristian; Tabbone, Gilles; Himmelhaus, Michael; Scheruebl, Thomas
2016-10-01
In the mask shop the challenges associated with today's advanced technology nodes, both technical and economic, are becoming increasingly difficult. The constant drive to continue shrinking features means more masks per device, smaller manufacturing tolerances and more complexity along the manufacturing line with respect to the number of manufacturing steps required. Furthermore, the extremely competitive nature of the industry makes it critical for mask shops to optimize asset utilization and processes in order to maximize their competitive advantage and, in the end, profitability. Full maximization of profitability in such a complex and technologically sophisticated environment simply cannot be achieved without the use of smart automation. Smart automation allows productivity to be maximized through better asset utilization and process optimization. Reliability is improved through the minimization of manual interactions leading to fewer human error contributions and a more efficient manufacturing line. In addition to these improvements in productivity and reliability, extra value can be added through the collection and cross-verification of data from multiple sources which provides more information about our products and processes. When it comes to handling mask defects, for instance, the process consists largely of time consuming manual interactions that are error prone and often require quick decisions from operators and engineers who are under pressure. The handling of defects itself is a multiple step process consisting of several iterations of inspection, disposition, repair, review and cleaning steps. Smaller manufacturing tolerances and features with higher complexity contribute to a higher number of defects which must be handled as well as a higher level of complexity. In this paper the recent efforts undertaken by ZEISS to provide solutions which address these challenges, particularly those associated with defectivity, will be presented. From automation of aerial image analysis to the use of data driven decision making to predict and propose the optimized back end of line process flow, productivity and reliability improvements are targeted by smart automation. Additionally the generation of the ideal aerial image from the design and several repair enhancement features offer additional capabilities to improve the efficiency and yield associated with defect handling.
Advanced Natural Gas Reciprocating Engine(s)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pike, Edward
The objective of the Cummins ARES program, in partnership with the US Department of Energy (DOE), is to develop advanced natural gas engine technologies that increase engine system efficiency at lower emissions levels while attaining lower cost of ownership. The goals of the project are to demonstrate an engine system achieving 50% Brake Thermal Efficiency (BTE) in three phases, 44%, 47% and 50% (starting from a baseline efficiency of 36% BTE), and 0.1 g/bhp-hr system-out NOx emissions (starting from baseline NOx emissions of 2-4 g/bhp-hr). The primary path toward these goals includes high Brake Mean Effective Pressure (BMEP), improved closed-cycle efficiency, increased air handling efficiency and optimized engine subsystems. Cummins has successfully demonstrated each of the phases of this program. All targets have been achieved through application of a combined set of advanced base engine technologies and waste heat recovery from the charge air and exhaust streams, optimized and validated on the demonstration engine and other large engines. The following architectures were selected for each phase. Phase 1: Lean Burn Spark Ignited (SI). Key technologies: high-efficiency turbocharging and a higher-efficiency combustion system; in production on the 60/91L engines, with over 500 MW of ARES Phase 1 technology sold. Phase 2: Lean Burn Technology with Exhaust Waste Heat Recovery (WHR) System. Key technologies: advanced ignition system, combustion improvement, and an integrated waste heat recovery system; base engine technologies intended for production within 2 to 3 years. Phase 3: Lean Burn Technology with Exhaust and Charge Air Waste Heat Recovery System. Key technologies: lower friction, new cylinder head designs, and an improved integrated waste heat recovery system; intended for production within 5 to 6 years. Cummins is committed to the launch of the next generation of large advanced NG engines based on ARES technology to be commercialized worldwide.
Unitizing goods on pallets and slipsheets
J. F. Laundrie
1986-01-01
Packaging, handling, and shipping methods and facilities have changed drastically since World War II. Today, most products are individually packaged and then combined into unitized loads for more efficient handling, storage, and shipping. The purpose of this manual is to promote the most effective use of wood and wood fiber in current packaging and shipping practices...
Oxygen-controlled automated neural differentiation of mouse embryonic stem cells.
Mondragon-Teran, Paul; Tostoes, Rui; Mason, Chris; Lye, Gary J; Veraitch, Farlan S
2013-03-01
Automation and oxygen tension control are two tools that provide significant improvements to the reproducibility and efficiency of stem cell production processes. The aim of this study was to establish a novel automation platform capable of controlling oxygen tension during both the cell-culture and liquid-handling steps of neural differentiation processes. We built a bespoke automation platform, which enclosed a liquid-handling platform in a sterile, oxygen-controlled environment. An airtight connection was used to transfer cell culture plates to and from an automated oxygen-controlled incubator. Our results demonstrate that our system yielded comparable cell numbers, viabilities, metabolism profiles and differentiation efficiencies when compared with traditional manual processes. Interestingly, eliminating exposure to ambient conditions during the liquid-handling stage resulted in significant improvements in the yield of MAP2-positive neural cells, indicating that this level of control can improve differentiation processes. This article describes, for the first time, an automation platform capable of maintaining oxygen tension control during both the cell-culture and liquid-handling stages of a 2D embryonic stem cell differentiation process.
VPipe: Virtual Pipelining for Scheduling of DAG Stream Query Plans
NASA Astrophysics Data System (ADS)
Wang, Song; Gupta, Chetan; Mehta, Abhay
There are data streams all around us that can be harnessed for tremendous business and personal advantage. For an enterprise-level stream processing system such as CHAOS [1] (Continuous, Heterogeneous Analytic Over Streams), handling of complex query plans with resource constraints is challenging. While several scheduling strategies exist for stream processing, efficient scheduling of complex DAG query plans is still largely unsolved. In this paper, we propose a novel execution scheme for scheduling complex directed acyclic graph (DAG) query plans with meta-data enriched stream tuples. Our solution, called Virtual Pipelined Chain (or VPipe Chain for short), effectively extends the "Chain" pipelining scheduling approach to complex DAG query plans.
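The VPipe scheduling algorithm is not spelled out in the abstract. As a generic illustration of the structure such a scheduler works on, the sketch below decomposes a hypothetical DAG query plan into linear pipeline chains after a topological sort; it is not the VPipe Chain algorithm itself.

```python
from collections import defaultdict, deque

def decompose_into_chains(nodes, edges):
    """Split a DAG query plan into linear pipeline chains.

    A chain is extended while an operator has exactly one consumer and that
    consumer has exactly one producer, so each chain can be scheduled as one
    pipelined unit.  This is a generic illustration, not the VPipe algorithm.
    """
    succ, indeg = defaultdict(list), {n: 0 for n in nodes}
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1

    # Kahn's algorithm gives a dependency-respecting order of chain heads.
    order, queue, remaining = [], deque(n for n in nodes if indeg[n] == 0), dict(indeg)
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in succ[u]:
            remaining[v] -= 1
            if remaining[v] == 0:
                queue.append(v)

    assigned, chains = set(), []
    for head in order:
        if head in assigned:
            continue
        chain, u = [head], head
        assigned.add(head)
        while len(succ[u]) == 1 and indeg[succ[u][0]] == 1 and succ[u][0] not in assigned:
            u = succ[u][0]
            chain.append(u)
            assigned.add(u)
        chains.append(chain)
    return chains

# Hypothetical DAG plan: two scans feed a join, followed by an aggregate and a sink.
nodes = ["scan1", "scan2", "filter1", "join", "agg", "sink"]
edges = [("scan1", "filter1"), ("filter1", "join"), ("scan2", "join"),
         ("join", "agg"), ("agg", "sink")]
print(decompose_into_chains(nodes, edges))   # [['scan1', 'filter1'], ['scan2'], ['join', 'agg', 'sink']]
```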
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-06-01
meraculous2 is a whole genome shotgun assembler for short reads that is capable of assembling large, polymorphic genomes with modest computational requirements. Meraculous relies on an efficient and conservative traversal of the subgraph of the k-mer (deBruijn) graph of oligonucleotides with unique high quality extensions in the dataset, avoiding an explicit error correction step as used in other short-read assemblers. Additional features include (1) handling of allelic variation using "bubble" structures within the deBruijn graph, (2) gap closing of repetitive and low quality regions using localized assemblies, and (3) an improved scaffolding algorithm that produces more complete assemblies without compromising on scaffolding accuracy.
Overcoming limitations of model-based diagnostic reasoning systems
NASA Technical Reports Server (NTRS)
Holtzblatt, Lester J.; Marcotte, Richard A.; Piazza, Richard L.
1989-01-01
The development of a model-based diagnostic system to overcome the limitations of model-based reasoning systems is discussed. It is noted that model-based reasoning techniques can be used to analyze the failure behavior and diagnosability of system and circuit designs as part of the system process itself. One goal of current research is the development of a diagnostic algorithm which can reason efficiently about large numbers of diagnostic suspects and can handle both combinational and sequential circuits. A second goal is to address the model-creation problem by developing an approach for using design models to construct the GMODS model in an automated fashion.
The impact of image storage organization on the effectiveness of PACS.
Hindel, R
1990-11-01
Picture archiving communication system (PACS) requires efficient handling of large amounts of data. Mass storage systems are cost effective but slow, while very fast systems, like frame buffers and parallel transfer disks, are expensive. The image traffic can be divided into inbound traffic generated by diagnostic modalities and outbound traffic into workstations. At the contact points with medical professionals, the responses must be fast. Archiving, on the other hand, can employ slower but less expensive storage systems, provided that the primary activities are not impeded. This article illustrates a segmentation architecture meeting these requirements based on a clearly defined PACS concept.
Handling Qualities of Large Rotorcraft in Hover and Low Speed
NASA Technical Reports Server (NTRS)
Malpica, Carlos; Theodore, Colin R.; Lawrence, Ben; Blanken, Chris L.
2015-01-01
According to a number of system studies, large capacity advanced rotorcraft with a capability of high cruise speeds (approximately 350 mph) as well as vertical and/or short take-off and landing (V/STOL) flight could alleviate anticipated air transportation capacity issues by making use of non-primary runways, taxiways, and aprons. These advanced aircraft pose a number of design challenges, as well as unknown issues in the flight control and handling qualities domains. A series of piloted simulation experiments have been conducted on the NASA Ames Research Center Vertical Motion Simulator (VMS) in recent years to systematically investigate the fundamental flight control and handling qualities issues associated with the characteristics of large rotorcraft, including tiltrotors, in hover and low-speed maneuvering.
Liu, Saifei; Newland, Richard F; Tully, Phillip J; Tuble, Sigrid C; Baker, Robert A
2011-09-01
The delivery of gaseous microemboli (GME) by the cardiopulmonary bypass circuit should be minimized whenever possible. Innovations in components, such as the integration of arterial line filter (ALF) and ALFs with reduced priming volumes, have provided clinicians with circuit design options. However, before adopting these components clinically, their GME handling ability should be assessed. This study aims to compare the GME handling ability of different oxygenator/ALF combinations with our currently utilized combination. Five commercially available oxygenator/ALF combinations were evaluated in vitro: Terumo Capiox SX25RX and Dideco D734 (SX/D734), Terumo Capiox RX25R and AF125 (RX/AF125), Terumo FX25R (FX), Sorin Synthesis with 102 µm reservoir filter (SYN102), and Sorin Synthesis with 40 µm reservoir filter (SYN40). GME handling was studied by introducing air into the venous return at 100 mL/min for 60 seconds under two flow/pressure combinations: 3.5 L/min, 150 mmHg and 5 L/min, 200 mmHg. Emboli were measured at three positions in the circuit using the Emboli Detection and Classification (EDAC) Quantifier and analyzed with the General Linear Model. All circuits significantly reduced GME. The SX/D734 and SYN40 circuits were most efficient in GME removal whilst the SYN102 handled embolic load (count and volume) least efficiently (p < .001). A greater number of emboli <70 µm were observed for the SYN102, FX and RX/AF125 circuits (p < .001). An increase in embolic load occurred with higher flow/pressure in all circuits (p < .001). The venous reservoir significantly influences embolic load delivered to the oxygenator (p < .001). The majority of introduced venous air was removed; however, significant variation existed in the ability of the different circuits to handle GME. Venous reservoir design influenced the overall GME handling ability. GME removal was less efficient at higher flow and pressure, and for smaller sized emboli. The clinical significance of reducing GME requires further investigation.
NASA Astrophysics Data System (ADS)
Nagai, Toshiki; Mitsutake, Ayori; Takano, Hiroshi
2013-02-01
A new relaxation mode analysis method, which is referred to as the principal component relaxation mode analysis method, has been proposed to handle a large number of degrees of freedom of protein systems. In this method, principal component analysis is carried out first and then relaxation mode analysis is applied to a small number of principal components with large fluctuations. To reduce the contribution of fast relaxation modes in these principal components efficiently, we have also proposed a relaxation mode analysis method using multiple evolution times. The principal component relaxation mode analysis method using two evolution times has been applied to an all-atom molecular dynamics simulation of human lysozyme in aqueous solution. Slow relaxation modes and corresponding relaxation times have been appropriately estimated, demonstrating that the method is applicable to protein systems.
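The relaxation-mode step requires time-lagged correlation matrices that are not reproduced here. The snippet below only sketches the first stage described above, principal component analysis of a (here synthetic) coordinate trajectory, keeping the few large-fluctuation components to which relaxation mode analysis would then be applied.

```python
import numpy as np

# Synthetic "trajectory": n_frames snapshots of n_coords coordinates (placeholder data).
rng = np.random.default_rng(7)
n_frames, n_coords = 5000, 300
traj = rng.standard_normal((n_frames, n_coords)) @ rng.standard_normal((n_coords, n_coords)) * 0.01

# Stage 1 (PCA): diagonalize the covariance of mean-free coordinates and keep the
# few principal components with the largest fluctuations.
X = traj - traj.mean(axis=0)
cov = X.T @ X / (n_frames - 1)
eigvals, eigvecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]
n_keep = 10
pcs = X @ eigvecs[:, order[:n_keep]]            # projections onto the large-fluctuation modes

# Stage 2 (relaxation mode analysis) would now diagonalize time-lagged correlation
# matrices of `pcs` at one or two evolution times; that step is not reproduced here.
print(pcs.shape, eigvals[order[:3]])
```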
Algorithms for elasto-plastic-creep postbuckling
NASA Technical Reports Server (NTRS)
Padovan, J.; Tovichakchaikul, S.
1984-01-01
This paper considers the development of an improved constrained time stepping scheme which can efficiently and stably handle the pre- and post-buckling behavior of general structures subjected to high-temperature environments. Due to the generality of the scheme, the combined influence of elastic-plastic behavior can be handled in addition to time-dependent creep effects. This includes structural problems exhibiting indefinite tangent properties. To illustrate the capability of the procedure, several benchmark problems employing finite element analyses are presented. These demonstrate the numerical efficiency and stability of the scheme. Additionally, the potential influence of complex creep histories on the buckling characteristics is considered.
Classified and clustered data constellation: An efficient approach of 3D urban data management
NASA Astrophysics Data System (ADS)
Azri, Suhaibah; Ujang, Uznir; Castro, Francesc Antón; Rahman, Alias Abdul; Mioc, Darka
2016-03-01
The growth of urban areas has resulted in massive urban datasets and difficulties in handling and managing issues related to urban areas. Huge datasets can degrade data retrieval and information analysis performance. In addition, the urban environment is very difficult to manage because it involves various types of data, such as multiple types of zoning themes in the case of urban mixed-use development. Thus, a special technique for efficient handling and management of urban data is necessary. This paper proposes a structure called Classified and Clustered Data Constellation (CCDC) for urban data management. CCDC operates on the basis of two filters: classification and clustering. To boost the performance of information retrieval, CCDC offers a minimal percentage of overlap among nodes and coverage area to avoid repetitive data entry and multipath queries. The results of tests conducted on several urban mixed-use development datasets using CCDC verify that it efficiently retrieves their semantic and spatial information. Further, comparisons conducted between CCDC and existing clustering and data constellation techniques, with respect to preservation of minimal overlap and coverage, confirm that the proposed structure is capable of preserving the minimum overlap and coverage area among nodes. Our overall results indicate that CCDC is efficient in handling and managing urban data, especially urban mixed-use development applications.
Lin, Chin Jung; Yang, Wen-Ta; Chou, Chen-Yi; Liou, Sofia Ya Hsuan
2016-06-01
Hollow core-shell mesoporous TiO2 microspheres were synthesized by a template-free solvothermal route for efficient photocatalytic degradation of acetaminophen. X-ray diffraction, scanning electron microscopy, transmission electron microscopy, and Barrett-Joyner-Halenda data revealed a micrometer-sized mesoporous anatase TiO2 hollow sphere with large surface area and efficient light harvesting. For the photocatalytic degradation of acetaminophen in 60 min, the conversion fraction of the drug increased from 88% over commercial Degussa P25 TiO2 to 94% over hollow spheres with about 25% increase in the initial reaction rate. Even after 10 repeated runs, the recycled hollow spheres showed good photodegradation activity. The intermediates generated in the photocatalytic reactions were eventually converted into molecules that are easier to handle. The simple fabrication route would facilitate the development of photocatalysts for the decomposition of environmental contaminants. Copyright © 2016 Elsevier Ltd. All rights reserved.
A parallel computational model for GATE simulations.
Rannou, F R; Vega-Acevedo, N; El Bitar, Z
2013-12-01
GATE/Geant4 Monte Carlo simulations are computationally demanding applications, requiring thousands of processor hours to produce realistic results. The classical strategy of distributing the simulation of individual events does not apply efficiently for Positron Emission Tomography (PET) experiments, because it requires a centralized coincidence processing and large communication overheads. We propose a parallel computational model for GATE that handles event generation and coincidence processing in a simple and efficient way by decentralizing event generation and processing but maintaining a centralized event and time coordinator. The model is implemented with the inclusion of a new set of factory classes that can run the same executable in sequential or parallel mode. A Mann-Whitney test shows that the output produced by this parallel model in terms of number of tallies is equivalent (but not equal) to its sequential counterpart. Computational performance evaluation shows that the software is scalable and well balanced. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Regional agriculture surveys using ERTS-1 data
NASA Technical Reports Server (NTRS)
Draeger, W. C.; Nichols, J. D.; Benson, A. S.; Larrabee, D. G.; Jenkus, W. M.; Hay, C. M.
1974-01-01
The Center for Remote Sensing Research has conducted studies designed to evaluate the potential application of ERTS data in performing agricultural inventories, and to develop efficient methods of data handling and analysis useful in the operational context for performing large area surveys. This work has resulted in the development of an integrated system utilizing both human and computer analysis of ground, aerial, and space imagery, which has been shown to be very efficient for regional crop acreage inventories. The technique involves: (1) the delineation of ERTS images into relatively homogeneous strata by human interpreters, (2) the point-by-point classification of the area within each strata on the basis of crop type using a human/machine interactive digital image processing system; and (3) a multistage sampling procedure for the collection of supporting aerial and ground data used in the adjustment and verification of the classification results.
Optimization-based additive decomposition of weakly coercive problems with applications
Bochev, Pavel B.; Ridzal, Denis
2016-01-27
In this study, we present an abstract mathematical framework for an optimization-based additive decomposition of a large class of variational problems into a collection of concurrent subproblems. The framework replaces a given monolithic problem by an equivalent constrained optimization formulation in which the subproblems define the optimization constraints and the objective is to minimize the mismatch between their solutions. The significance of this reformulation stems from the fact that one can solve the resulting optimality system by an iterative process involving only solutions of the subproblems. Consequently, assuming that stable numerical methods and efficient solvers are available for every subproblem, our reformulation leads to robust and efficient numerical algorithms for a given monolithic problem by breaking it into subproblems that can be handled more easily. An application of the framework to the Oseen equations illustrates its potential.
Performance Enhancement Strategies for Multi-Block Overset Grid CFD Applications
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Biswas, Rupak
2003-01-01
The overset grid methodology has significantly reduced time-to-solution of high-fidelity computational fluid dynamics (CFD) simulations about complex aerospace configurations. The solution process resolves the geometrical complexity of the problem domain by using separately generated but overlapping structured discretization grids that periodically exchange information through interpolation. However, high performance computations of such large-scale realistic applications must be handled efficiently on state-of-the-art parallel supercomputers. This paper analyzes the effects of various performance enhancement strategies on the parallel efficiency of an overset grid Navier-Stokes CFD application running on an SGI Origin2000 machine. Specifically, the roles of asynchronous communication, grid splitting, and grid grouping strategies are presented and discussed. Details of a sophisticated graph partitioning technique for grid grouping are also provided. Results indicate that performance depends critically on the level of latency hiding and the quality of load balancing across the processors.
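The graph-partitioning technique itself is not detailed in the abstract. The sketch below shows a simpler greedy grouping, assigning each grid (largest first) to the least-loaded processor, which is a common baseline for balancing overset-grid work; the grid names and sizes are hypothetical.

```python
import heapq

def group_grids(grid_sizes, n_procs):
    """Greedy load balancing: assign each grid (largest first) to the least-loaded processor."""
    heap = [(0, p, []) for p in range(n_procs)]      # (load, processor id, assigned grids)
    heapq.heapify(heap)
    for gid, size in sorted(grid_sizes.items(), key=lambda kv: -kv[1]):
        load, p, assigned = heapq.heappop(heap)
        assigned.append(gid)
        heapq.heappush(heap, (load + size, p, assigned))
    return sorted(heap)                              # [(load, processor id, assigned grids), ...]

# Hypothetical overset grids with point counts (a very large grid would first be split
# into smaller blocks so that no single grid exceeds the target load per processor).
grids = {"wing": 2_400_000, "fuselage": 1_800_000, "nacelle": 900_000,
         "flap": 700_000, "background": 3_100_000}
for load, proc, assigned in group_grids(grids, 3):
    print(f"proc {proc}: {load:>9,d} points  {assigned}")
```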
Gradia, Scott D; Ishida, Justin P; Tsai, Miaw-Sheue; Jeans, Chris; Tainer, John A; Fuss, Jill O
2017-01-01
Recombinant expression of large, multiprotein complexes is essential and often rate limiting for determining structural, biophysical, and biochemical properties of DNA repair, replication, transcription, and other key cellular processes. Baculovirus-infected insect cell expression systems are especially well suited for producing large, human proteins recombinantly, and multigene baculovirus systems have facilitated studies of multiprotein complexes. In this chapter, we describe a multigene baculovirus system called MacroBac that uses a Biobricks-type assembly method based on restriction and ligation (Series 11) or ligation-independent cloning (Series 438). MacroBac cloning and assembly is efficient and equally well suited for either single subcloning reactions or high-throughput cloning using 96-well plates and liquid handling robotics. MacroBac vectors are polypromoter with each gene flanked by a strong polyhedrin promoter and an SV40 poly(A) termination signal that minimize gene order expression level effects seen in many polycistronic assemblies. Large assemblies are robustly achievable, and we have successfully assembled as many as 10 genes into a single MacroBac vector. Importantly, we have observed significant increases in expression levels and quality of large, multiprotein complexes using a single, multigene, polypromoter virus rather than coinfection with multiple, single-gene viruses. Given the importance of characterizing functional complexes, we believe that MacroBac provides a critical enabling technology that may change the way that structural, biophysical, and biochemical research is done. © 2017 Elsevier Inc. All rights reserved.
Detailed performance analysis of the A.A.D. - concept B
NASA Technical Reports Server (NTRS)
Sekar, R.; Tozzi, L.
1983-01-01
New concepts for engine performance improvement are seen through the adoption of heat regeneration techniques; advanced methods to enhance combustion; and higher-efficiency air handling machinery, such as the positive displacement helical screw expander and compressor. Each of these concepts plays a particular role in engine performance improvement. First, regeneration has great potential for achieving higher engine thermal efficiency through the recovery of waste energy. Although the concept itself is not new (the technique is used in the gas turbine), the application to reciprocating internal combustion engines is quite unusual and presents conceptual difficulties. The second important area is better control of the combustion process in terms of heat transfer characteristics, combustion products, and heat release rate. The third area for performance improvement is the adoption of high-efficiency air handling machinery. In particular, the positive displacement helical expander and compressor exhibit extremely high efficiency over a wide range of operating conditions.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-28
... SOCIAL SECURITY ADMINISTRATION [Docket No. SSA-2011-0057] Social Security Ruling 11-1p; Titles II... AGENCY: Social Security Administration. ACTION: Notice of Social Security Ruling (SSR) SUMMARY: We are... administrative review process. This change will allow us to more efficiently use our limited resources to handle...
NASA Astrophysics Data System (ADS)
Gross-Camp, Nicole D.; Kaplin, Beth A.
2011-11-01
We examined the influence of seed handling by two semi-terrestrial African forest primates, chimpanzees (Pan troglodytes) and l'Hoest's monkeys (Cercopithecus lhoesti), on the fate of large-seeded tree species in an afromontane forest. Chimpanzees and l'Hoest's monkeys dispersed eleven seed species over one year, with the quantity and quality of dispersal varying through time. The primates differed in their seed handling behaviors, with chimpanzees defecating large seeds (>0.5 cm) significantly more than l'Hoest's monkeys. Furthermore, they exhibited different oral-processing techniques, with chimpanzees discarding wadges containing many seeds and l'Hoest's monkeys spitting single seeds. A PCA examined the relationship between microhabitat characteristics and the sites where primates deposited seeds. The first two components explained almost half of the observed variation. Microhabitat characteristics associated with sites where seeds were defecated had little overlap with those describing where spit seeds arrived, suggesting that seed handling in part determines the location where seeds are deposited. We monitored a total of 552 seed depositions through time, recording seed persistence, germination, and establishment. Defecated seeds were deposited significantly farther from an adult conspecific than orally-discarded seeds and experienced the greatest persistence but poorest establishment. In contrast, spit seeds were deposited closest to an adult conspecific but experienced the highest seed establishment rates. We used experimental plots to examine the relationship between seed handling, deposition site, and seed fate. We found a significant difference in seed handling and fate, with undispersed seeds in whole fruits experiencing the lowest establishment rates. Seed germination differed by habitat type, with open forest experiencing the highest rates of germination. Our results highlight the relationship between primate seed handling, deposition site, and seed fate, and may be helpful in developing models to predict seed shadows and recruitment patterns of large-seeded trees.
Ride quality sensitivity to SAS control law and to handling quality variations
NASA Technical Reports Server (NTRS)
Roberts, P. A.; Schmidt, D. K.; Swaim, R. L.
1976-01-01
The RQ trends which large flexible aircraft exhibit under various parameterizations of control laws and handling qualities are discussed. A summary of the assumptions and solution technique, a control law parameterization review, a discussion of ride sensitivity to handling qualities, and the RQ effects generated by implementing relaxed static stability configurations are included.
Christodoulou, Nikolaos A; Tousert, Nikolaos E; Georgiadi, Eleni Ch; Argyri, Katerina D; Misichroni, Fay D; Stamatakos, Georgios S
2016-01-01
The plethora of available disease prediction models and the ongoing process of their application into clinical practice, following their clinical validation, have created new needs regarding their efficient handling and exploitation. Consolidation of software implementations, descriptive information, and supportive tools in a single place, offering persistent storage as well as proper management of execution results, is a priority, especially with respect to the needs of large healthcare providers. At the same time, modelers should be able to access these storage facilities under special rights, in order to upgrade and maintain their work. In addition, the end users should be provided with all the necessary interfaces for model execution and effortless result retrieval. We therefore propose a software infrastructure, based on a tool, model and data repository that handles the storage of models and pertinent execution-related data, along with functionalities for execution management, communication with third-party applications, user-friendly interfaces to access and use the infrastructure with minimal effort, and basic security features.
Chapter 21: Programmatic Interfaces - STILTS
NASA Astrophysics Data System (ADS)
Fitzpatrick, M. J.
STILTS is the Starlink Tables Infrastructure Library Tool Set developed by Mark Taylor of the former Starlink Project. STILTS is a command-line tool (see the NVOSS_HOME/bin/stilts command) providing access to the same functionality driving the TOPCAT application and can be run using either the STILTS-specific jar file, or the more general TOPCAT jar file (both are available in the NVOSS_HOME/java/lib directory and are included in the default software environment classpath). The heart of both STILTS and TOPCAT is the STIL Java library. STIL is designed to efficiently handle the input, output and processing of very large tabular datasets and the STILTS task interface makes it an ideal tool for the scripting environment. Multiple formats are supported (including FITS Binary Tables, VOTable, CSV, SQL databases and ASCII, amongst others) and while some tools will generically handle all supported formats, others are specific to the VOTable format. Converting a VOTable to a more script-friendly format is the first thing most users will encounter, but there are many other useful tools as well.
Research of GIS-services applicability for solution of spatial analysis tasks.
NASA Astrophysics Data System (ADS)
Terekhin, D. A.; Botygin, I. A.; Sherstneva, A. I.; Sherstnev, V. S.
2017-01-01
Experiments to determine the areas of applicability of various GIS services to spatial analysis tasks are discussed in this paper. Google Maps, Yandex Maps, and Microsoft SQL Server are used as spatial analysis services. All services showed comparable speed in analyzing spatial data when executing elementary spatial queries (building a buffer zone around a point object), while Microsoft SQL Server proved preferable for more complicated spatial queries. For elementary spatial queries, the internet services show higher efficiency because data are processed on the client side by JavaScript subprograms. A weak point of the public internet services is that they cannot process data on the server side and offer only a limited set of spatial analysis functions. Microsoft SQL Server, by contrast, offers a large variety of server-side spatial analysis functions. The authors conclude that practical problems are best solved by combining the route-building and other capabilities of the internet services with the spatial analysis functions of Microsoft SQL Server.
An Incremental High-Utility Mining Algorithm with Transaction Insertion
Gan, Wensheng; Zhang, Binbin
2015-01-01
Association-rule mining is commonly used to discover useful and meaningful patterns from a very large database. It only considers the occurrence frequencies of items to reveal the relationships among itemsets. Traditional association-rule mining is, however, not suitable in real-world applications since the items purchased by a customer carry other attributes, such as profit or quantity. High-utility mining was designed to solve the limitations of association-rule mining by considering both the quantity and profit measures. Most high-utility mining algorithms are designed to handle static databases. Few studies handle dynamic high-utility mining with transaction insertion, which requires database rescans and suffers from the combinatorial explosion of the pattern-growth mechanism. In this paper, an efficient incremental algorithm with transaction insertion is designed to reduce computations without candidate generation, based on utility-list structures. The enumeration tree and the relationships between 2-itemsets are also adopted in the proposed algorithm to speed up the computations. Several experiments are conducted to show the performance of the proposed algorithm in terms of runtime, memory consumption, and number of generated patterns. PMID:25811038
New Parallel Algorithms for Landscape Evolution Model
NASA Astrophysics Data System (ADS)
Jin, Y.; Zhang, H.; Shi, Y.
2017-12-01
Most landscape evolution models (LEM) developed in the last two decades solve the diffusion equation to simulate the transportation of surface sediments. This numerical approach is difficult to parallelize because of the computation of the drainage area for each node, which requires a huge amount of communication when run in parallel. In order to overcome this difficulty, we developed two parallel algorithms for an LEM with a stream net. One algorithm partitions the grid with traditional methods and applies an efficient global reduction algorithm to compute drainage areas and transport rates for the stream net; the other algorithm is based on a new partition scheme, which first partitions the nodes in catchments between processes and then partitions the cells according to the partition of nodes. Both methods focus on decreasing communication between processes and take advantage of massively parallel computing, and numerical experiments show that they are both adequate to handle large-scale problems with millions of cells. We implemented the two algorithms in our program based on the widely used finite element library deal.II, so that it can be easily coupled with ASPECT.
Adalsteinsson, David; McMillen, David; Elston, Timothy C
2004-03-08
Intrinsic fluctuations due to the stochastic nature of biochemical reactions can have large effects on the response of biochemical networks. This is particularly true for pathways that involve transcriptional regulation, where generally there are two copies of each gene and the number of messenger RNA (mRNA) molecules can be small. Therefore, there is a need for computational tools for developing and investigating stochastic models of biochemical networks. We have developed the software package Biochemical Network Stochastic Simulator (BioNetS) for efficiently and accurately simulating stochastic models of biochemical networks. BioNetS has a graphical user interface that allows models to be entered in a straightforward manner, and allows the user to specify the type of random variable (discrete or continuous) for each chemical species in the network. The discrete variables are simulated using an efficient implementation of the Gillespie algorithm. For the continuous random variables, BioNetS constructs and numerically solves the appropriate chemical Langevin equations. The software package has been developed to scale efficiently with network size, thereby allowing large systems to be studied. BioNetS runs as a BioSpice agent and can be downloaded from http://www.biospice.org. BioNetS also can be run as a stand alone package. All the required files are accessible from http://x.amath.unc.edu/BioNetS. We have developed BioNetS to be a reliable tool for studying the stochastic dynamics of large biochemical networks. Important features of BioNetS are its ability to handle hybrid models that consist of both continuous and discrete random variables and its ability to model cell growth and division. We have verified the accuracy and efficiency of the numerical methods by considering several test systems.
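As an illustration of the discrete-variable simulation described above, here is a minimal Gillespie stochastic simulation of a birth-death mRNA model (production and degradation). This is a generic textbook sketch with invented rate constants, not code from BioNetS.

```python
# Minimal Gillespie (stochastic simulation algorithm) sketch for a
# birth-death mRNA model: 0 -> mRNA at rate k, mRNA -> 0 at rate g*m.
# Generic illustration only, not BioNetS code; rate constants are invented.
import random

def gillespie_birth_death(k=2.0, g=0.1, m0=0, t_end=100.0, seed=1):
    random.seed(seed)
    t, m = 0.0, m0
    trajectory = [(t, m)]
    while t < t_end:
        a_birth = k              # propensity of mRNA production
        a_death = g * m          # propensity of mRNA degradation
        a_total = a_birth + a_death
        # Time to the next reaction is exponentially distributed.
        t += random.expovariate(a_total)
        # Choose which reaction fires, proportionally to its propensity.
        if random.random() < a_birth / a_total:
            m += 1
        else:
            m -= 1
        trajectory.append((t, m))
    return trajectory

traj = gillespie_birth_death()
print(traj[-1])   # (time, copy number); mean copy number approaches k/g = 20
```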
Li, J L; Deng, H; Lai, D B; Xu, F; Chen, J; Gao, G; Recker, R R; Deng, H W
2001-07-01
To efficiently manipulate large amounts of genotype data generated with fluorescently labeled dinucleotide markers, we developed a Microsoft-based database management system. The system offers several advantages. First, it accommodates the dynamic nature of the accumulation of genotype data during the genotyping process; some data need to be confirmed or replaced by repeat lab procedures. Using the system, raw genotype data can be imported easily and continuously and incorporated into the database during a genotyping process that may continue over an extended period in large projects. Second, almost all of the procedures are automatic, including auto-comparison of the raw data read by different technicians from the same gel, auto-adjustment among allele fragment-size data from cross-runs or cross-platforms, auto-binning of alleles, and auto-compilation of genotype data for suitable programs to perform inheritance checks in pedigrees. Third, the system provides functions to track electrophoresis gel files and locate the gel or sample sources for any resultant genotype data, which is extremely helpful for double-checking the consistency of raw and final data and for directing repeat experiments. In addition, the user-friendly graphic interface renders processing of large amounts of data much less labor-intensive. Furthermore, built-in mechanisms detect some genotyping errors and assess the quality of genotype data, which are then summarized in automatically generated statistical reports. The system can easily handle more than 500,000 genotype data entries, a number more than sufficient for typical whole-genome linkage studies. The modules and programs we developed for the system can be extended to other database platforms, such as Microsoft SQL Server, if the capability to handle still greater quantities of genotype data simultaneously is desired.
Fleet Sizing of Automated Material Handling Using Simulation Approach
NASA Astrophysics Data System (ADS)
Wibisono, Radinal; Ai, The Jin; Ratna Yuniartha, Deny
2018-03-01
Automated material handling tends to be chosen over human power for material handling on the production floor of manufacturing companies. A critical issue in implementing automated material handling is the design phase, which must ensure that material handling becomes more cost-efficient. Fleet sizing is one of the topics in this design phase. In this research, a simulation approach is used to solve the fleet sizing problem in flow shop production and to find the optimum configuration. Optimum here means minimum flow time and maximum capacity on the production floor. A simulation approach is used because the flow shop can be modelled as a queuing network and the inter-arrival times do not follow an exponential distribution. The contribution of this research is therefore the solution of a multi-objective fleet sizing problem in flow shop production using a simulation approach with the ARENA software.
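The fleet-sizing-by-simulation idea can be sketched with a small discrete-event model in which jobs request one of N AGVs and the mean flow time is compared across fleet sizes. The sketch below uses the SimPy library rather than ARENA (which the study used), and all arrival and handling times are invented.

```python
# Hedged discrete-event sketch of fleet sizing: compare mean job flow time
# for different numbers of AGVs. SimPy stands in for the ARENA model used in
# the paper; inter-arrival and transport times below are invented.
import random
import simpy

def run(fleet_size, n_jobs=500, seed=0):
    random.seed(seed)
    env = simpy.Environment()
    agvs = simpy.Resource(env, capacity=fleet_size)
    flow_times = []

    def job(env):
        arrival = env.now
        with agvs.request() as req:
            yield req                                   # wait for a free AGV
            yield env.timeout(random.uniform(3, 7))     # transport/handling time
        flow_times.append(env.now - arrival)

    def source(env):
        for _ in range(n_jobs):
            yield env.timeout(random.expovariate(1 / 5.0))  # job arrivals
            env.process(job(env))

    env.process(source(env))
    env.run()
    return sum(flow_times) / len(flow_times)

for n in (1, 2, 3):
    print(n, "AGVs -> mean flow time", round(run(n), 2))
```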
Computational modeling of magnetic particle margination within blood flow through LAMMPS
NASA Astrophysics Data System (ADS)
Ye, Huilin; Shen, Zhiqiang; Li, Ying
2017-11-01
We develop a multiscale and multiphysics computational method to investigate the transport of magnetic particles as drug carriers in blood flow under the influence of hydrodynamic interactions and an external magnetic field. A hybrid coupling method is proposed to handle the red blood cell (RBC)-fluid interface (CFI) and the magnetic particle-fluid interface (PFI), respectively. Immersed boundary method (IBM)-based velocity coupling is used to account for the CFI, and is validated by the tank-treading and tumbling behaviors of a single RBC in simple shear flow. The PFI is captured by IBM-based force coupling, which is verified through the movement of a single magnetic particle under a non-uniform external magnetic field and the breakup of a magnetic chain in a rotating magnetic field. These two components are seamlessly integrated within the LAMMPS framework, a highly parallelized molecular dynamics solver. In addition, we implement a parallelized lattice Boltzmann simulator within LAMMPS to handle the fluid flow simulation. Based on the proposed method, we explore the margination behaviors of magnetic particles and magnetic chains within blood flow. We find that the external magnetic field can be used to guide the motion of these magnetic materials and promote their margination to the vascular wall region. Moreover, scaling performance and speedup tests further confirm the high efficiency and robustness of the proposed computational method. It therefore provides an efficient way to simulate the transport of nanoparticle-based drug carriers within blood flow at large scale. The simulation results can be applied in the design of efficient drug delivery vehicles that optimally accumulate within diseased tissue, providing better imaging sensitivity, higher therapeutic efficacy and lower toxicity.
GRASS GIS: The first Open Source Temporal GIS
NASA Astrophysics Data System (ADS)
Gebbert, Sören; Leppelt, Thomas
2015-04-01
GRASS GIS is a full-featured, general purpose Open Source geographic information system (GIS) with raster, 3D raster and vector processing support[1]. Recently, time was introduced as a new dimension that transformed GRASS GIS into the first Open Source temporal GIS with comprehensive spatio-temporal analysis, processing and visualization capabilities[2]. New spatio-temporal data types were introduced in GRASS GIS version 7 to manage raster, 3D raster and vector time series. These new data types are called space time datasets. They are designed to efficiently handle hundreds of thousands of time stamped raster, 3D raster and vector map layers of any size. Time stamps can be defined as time intervals or time instances in Gregorian calendar time or relative time. Space time datasets simplify the processing and analysis of large time series in GRASS GIS, since these new data types are used as input and output parameters in temporal modules. The handling of space time datasets is therefore the same as the handling of raster, 3D raster and vector map layers in GRASS GIS. A new dedicated Python library, the GRASS GIS Temporal Framework, was designed to implement the spatio-temporal data types and their management. The framework provides the functionality to efficiently handle hundreds of thousands of time stamped map layers and their spatio-temporal topological relations. The framework supports reasoning based on the temporal granularity of space time datasets as well as their temporal topology. It was designed in conjunction with the PyGRASS [3] library to support parallel processing of large datasets, which has a long tradition in GRASS GIS [4,5]. We will present a subset of more than 40 temporal modules that were implemented based on the GRASS GIS Temporal Framework, PyGRASS and the GRASS GIS Python scripting library. These modules provide a comprehensive temporal GIS tool set. The functionality ranges from space time dataset and time stamped map layer management, through temporal aggregation, temporal accumulation, spatio-temporal statistics, spatio-temporal sampling, temporal algebra, temporal topology analysis, time series animation and temporal topology visualization, to time series import and export capabilities with support for NetCDF and VTK data formats. We will present several temporal modules that support parallel processing of raster and 3D raster time series. [1] Neteler, M., Beaudette, D., Cavallini, P., Lami, L., Cepicky, J., 2008. GRASS GIS. In: Hall, G.B., Leahy, M.G. (Eds.), Open Source Approaches in Spatial Data Handling, Vol. 2, pp. 171-199. doi:10.1007/978-3-540-74831-1_9 [2] Gebbert, S., Pebesma, E., 2014. A temporal GIS for field based environmental modeling. Environ. Model. Softw. 53, 1-12. [3] Zambelli, P., Gebbert, S., Ciolli, M., 2013. Pygrass: An Object Oriented Python Application Programming Interface (API) for Geographic Resources Analysis Support System (GRASS) Geographic Information System (GIS). ISPRS International Journal of Geo-Information 2, 201-219. [4] Löwe, P., Klump, J., Thaler, J., 2012. The FOSS GIS Workbench on the GFZ Load Sharing Facility compute cluster. Geophysical Research Abstracts Vol. 14, EGU2012-4491, General Assembly European Geosciences Union, Vienna, Austria. [5] Akhter, S., Aida, K., Chemin, Y., 2010. GRASS GIS on High Performance Computing with MPI, OpenMP and Ninf-G Programming Framework. ISPRS Conference, Kyoto, 9-12 August 2010.
Hydra: a scalable proteomic search engine which utilizes the Hadoop distributed computing framework
2012-01-01
Background For shotgun mass spectrometry based proteomics the most computationally expensive step is in matching the spectra against an increasingly large database of sequences and their post-translational modifications with known masses. Each mass spectrometer can generate data at an astonishingly high rate, and the scope of what is searched for is continually increasing. Therefore solutions for improving our ability to perform these searches are needed. Results We present a sequence database search engine that is specifically designed to run efficiently on the Hadoop MapReduce distributed computing framework. The search engine implements the K-score algorithm, generating comparable output for the same input files as the original implementation. The scalability of the system is shown, and the architecture required for the development of such distributed processing is discussed. Conclusion The software is scalable in its ability to handle a large peptide database, numerous modifications and large numbers of spectra. Performance scales with the number of processors in the cluster, allowing throughput to expand with the available resources. PMID:23216909
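The map/reduce decomposition described above can be illustrated abstractly: mappers score (spectrum, candidate peptide) pairs and reducers keep the best match per spectrum. The plain-Python sketch below mimics only the pattern; it is not Hydra code, and the scoring function is a placeholder rather than the K-score algorithm.

```python
# Abstract map/reduce sketch of spectrum-to-peptide matching. This mimics the
# MapReduce pattern only; it is NOT Hydra code, and score() is a placeholder
# rather than the K-score algorithm. All data below are invented.
from collections import defaultdict

def score(spectrum, peptide):
    # Placeholder similarity: count shared fragment masses (invented logic).
    return len(set(spectrum["peaks"]) & set(peptide["fragments"]))

def mapper(spectrum, peptide_db):
    # Emit (spectrum_id, (score, peptide_id)) for every candidate peptide.
    for pep in peptide_db:
        yield spectrum["id"], (score(spectrum, pep), pep["id"])

def reducer(spectrum_id, scored_candidates):
    # Keep the best-scoring peptide for each spectrum.
    return spectrum_id, max(scored_candidates)

spectra = [{"id": "s1", "peaks": {101, 203, 330}},
           {"id": "s2", "peaks": {150, 203, 410}}]
peptides = [{"id": "PEPTIDEA", "fragments": {101, 203, 500}},
            {"id": "PEPTIDEB", "fragments": {150, 410, 700}}]

shuffle = defaultdict(list)
for s in spectra:
    for key, value in mapper(s, peptides):
        shuffle[key].append(value)          # the framework's shuffle phase

print([reducer(k, v) for k, v in shuffle.items()])
```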
Efficiency in pathology laboratories: a survey of operations management in NHS bacteriology.
Szczepura, A K
1991-01-01
In recent years pathology laboratory services in the U.K. have experienced large increases in demand. But the extent to which U.K. laboratories have introduced controls to limit unnecessary procedures within the laboratory was previously unclear. This paper presents the results of a survey of all 343 NHS bacteriology laboratories which records the extent to which such operations management controls are now in place. The survey shows large differences between laboratories. Quality controls over inputs, the use of screening tests as a culture substitute, the use of direct susceptibility testing, controls over routine antibiotic susceptibility testing, and controls over reporting of results all vary widely. The survey also records the prevalence of hospital antibiotic policies, the extent to which laboratories produce antibiograms for user clinicians, the degree of computerisation in data handling, and the degree of automation in processing specimens. Finally, the survey uncovers a large variation between NHS labs in the percentage of bacteriology samples which prove positive and lead to antibiotic susceptibility tests being carried out.
Hydra: a scalable proteomic search engine which utilizes the Hadoop distributed computing framework.
Lewis, Steven; Csordas, Attila; Killcoyne, Sarah; Hermjakob, Henning; Hoopmann, Michael R; Moritz, Robert L; Deutsch, Eric W; Boyle, John
2012-12-05
For shotgun mass spectrometry based proteomics the most computationally expensive step is in matching the spectra against an increasingly large database of sequences and their post-translational modifications with known masses. Each mass spectrometer can generate data at an astonishingly high rate, and the scope of what is searched for is continually increasing. Therefore solutions for improving our ability to perform these searches are needed. We present a sequence database search engine that is specifically designed to run efficiently on the Hadoop MapReduce distributed computing framework. The search engine implements the K-score algorithm, generating comparable output for the same input files as the original implementation. The scalability of the system is shown, and the architecture required for the development of such distributed processing is discussed. The software is scalable in its ability to handle a large peptide database, numerous modifications and large numbers of spectra. Performance scales with the number of processors in the cluster, allowing throughput to expand with the available resources.
PIMS sequencing extension: a laboratory information management system for DNA sequencing facilities
2011-01-01
Background Facilities that provide a service for DNA sequencing typically support large numbers of users and experiment types. The cost of services is often reduced by the use of liquid handling robots but the efficiency of such facilities is hampered because the software for such robots does not usually integrate well with the systems that run the sequencing machines. Accordingly, there is a need for software systems capable of integrating different robotic systems and managing sample information for DNA sequencing services. In this paper, we describe an extension to the Protein Information Management System (PIMS) that is designed for DNA sequencing facilities. The new version of PIMS has a user-friendly web interface and integrates all aspects of the sequencing process, including sample submission, handling and tracking, together with capture and management of the data. Results The PIMS sequencing extension has been in production since July 2009 at the University of Leeds DNA Sequencing Facility. It has completely replaced manual data handling and simplified the tasks of data management and user communication. Samples from 45 groups have been processed with an average throughput of 10000 samples per month. The current version of the PIMS sequencing extension works with Applied Biosystems 3130XL 96-well plate sequencer and MWG 4204 or Aviso Theonyx liquid handling robots, but is readily adaptable for use with other combinations of robots. Conclusions PIMS has been extended to provide a user-friendly and integrated data management solution for DNA sequencing facilities that is accessed through a normal web browser and allows simultaneous access by multiple users as well as facility managers. The system integrates sequencing and liquid handling robots, manages the data flow, and provides remote access to the sequencing results. The software is freely available, for academic users, from http://www.pims-lims.org/. PMID:21385349
Recent developments in user-job management with Ganga
NASA Astrophysics Data System (ADS)
Currie, R.; Elmsheuser, J.; Fay, R.; Owen, P. H.; Richards, A.; Slater, M.; Sutcliffe, W.; Williams, M.
2015-12-01
The Ganga project was originally developed for use by LHC experiments and has been used extensively throughout Run 1 in both LHCb and ATLAS. This document describes some of the most recent developments within the Ganga project. The handling of large-scale computational tasks has been improved through the new GangaTasks infrastructure. Improvements in file handling through a new IGangaFile interface make handling files largely transparent to the end user. In addition, the performance and usability of Ganga have both been addressed through the development of a new queues system that allows for parallel processing of job-related tasks.
An Investigation of Large Tilt-Rotor Hover and Low Speed Handling Qualities
NASA Technical Reports Server (NTRS)
Malpica, Carlos A.; Decker, William A.; Theodore, Colin R.; Lindsey, James E.; Lawrence, Ben; Blanken, Chris L.
2011-01-01
A piloted simulation experiment conducted on the NASA-Ames Vertical Motion Simulator evaluated the hover and low speed handling qualities of a large tilt-rotor concept, with particular emphasis on longitudinal and lateral position control. Ten experimental test pilots evaluated different combinations of Attitude Command-Attitude Hold (ACAH) and Translational Rate Command (TRC) response types, nacelle conversion actuator authority limits and inceptor choices. Pilots performed evaluations in revised versions of the ADS-33 Hover, Lateral Reposition and Depart/Abort MTEs and moderate turbulence conditions. Level 2 handling qualities ratings were primarily recorded using ACAH response type in all three of the evaluation maneuvers. The baseline TRC conferred Level 1 handling qualities in the Hover MTE, but there was a tendency to enter into a PIO associated with nacelle actuator rate limiting when employing large, aggressive control inputs. Interestingly, increasing rate limits also led to a reduction in the handling qualities ratings. This led to the identification of a nacelle rate to rotor longitudinal flapping coupling effect that induced undesired, pitching motions proportional to the allowable amount of nacelle rate. A modification that counteracted this effect significantly improved the handling qualities. Evaluation of the different response type variants showed that inclusion of TRC response could provide Level 1 handling qualities in the Lateral Reposition maneuver by reducing coupled pitch and heave off axis responses that otherwise manifest with ACAH. Finally, evaluations in the Depart/Abort maneuver showed that uncertainty about commanded nacelle position and ensuing aircraft response, when manually controlling the nacelle, demanded high levels of attention from the pilot. Additional requirements to maintain pitch attitude within 5 deg compounded the necessary workload.
MotionExplorer: exploratory search in human motion capture data based on hierarchical aggregation.
Bernard, Jürgen; Wilhelm, Nils; Krüger, Björn; May, Thorsten; Schreck, Tobias; Kohlhammer, Jörn
2013-12-01
We present MotionExplorer, an exploratory search and analysis system for sequences of human motion in large motion capture data collections. This special type of multivariate time series data is relevant in many research fields including medicine, sports and animation. Key tasks in working with motion data include analysis of motion states and transitions, and synthesis of motion vectors by interpolation and combination. In the practice of research and application of human motion data, challenges exist in providing visual summaries and drill-down functionality for handling large motion data collections. We find that this domain can benefit from appropriate visual retrieval and analysis support to handle these tasks in presence of large motion data. To address this need, we developed MotionExplorer together with domain experts as an exploratory search system based on interactive aggregation and visualization of motion states as a basis for data navigation, exploration, and search. Based on an overview-first type visualization, users are able to search for interesting sub-sequences of motion based on a query-by-example metaphor, and explore search results by details on demand. We developed MotionExplorer in close collaboration with the targeted users who are researchers working on human motion synthesis and analysis, including a summative field study. Additionally, we conducted a laboratory design study to substantially improve MotionExplorer towards an intuitive, usable and robust design. MotionExplorer enables the search in human motion capture data with only a few mouse clicks. The researchers unanimously confirm that the system can efficiently support their work.
Forage Handling. An Instructional Unit for Teachers of Adult Vocational Education in Agriculture.
ERIC Educational Resources Information Center
Greer, Jerry W.; Iverson, Maynard J.
The unit of instruction is designed for use by teachers in planning and conducting young farmer and adult farmer classes. The purpose of this course is to develop the effective ability of farmers to efficiently handle forages for economic livestock feed on Kentucky farms. The unit is divided into five lessons. The lessons deal with the following…
Sivakumar, B; Bhalaji, N; Sivakumar, D
2014-01-01
In mobile ad hoc networks connectivity is always an issue of concern. Due to dynamism in the behavior of mobile nodes, efficiency shall be achieved only with the assumption of good network infrastructure. Presence of critical links results in deterioration which should be detected in advance to retain the prevailing communication setup. This paper discusses a short survey on the specialized algorithms and protocols related to energy efficient load balancing for critical link detection in the recent literature. This paper also suggests a machine learning based hybrid power-aware approach for handling critical nodes via load balancing.
Sivakumar, B.; Bhalaji, N.; Sivakumar, D.
2014-01-01
In mobile ad hoc networks connectivity is always an issue of concern. Due to dynamism in the behavior of mobile nodes, efficiency shall be achieved only with the assumption of good network infrastructure. Presence of critical links results in deterioration which should be detected in advance to retain the prevailing communication setup. This paper discusses a short survey on the specialized algorithms and protocols related to energy efficient load balancing for critical link detection in the recent literature. This paper also suggests a machine learning based hybrid power-aware approach for handling critical nodes via load balancing. PMID:24790546
The Challenges of Using Horses for Practical Teaching Purposes in Veterinary Programmes
Gronqvist, Gabriella; Rogers, Chris; Gee, Erica; Bolwell, Charlotte; Gordon, Stuart
2016-01-01
Simple Summary Veterinary students often lack previous experience in handling horses and other large animals. This article discusses the challenges of using horses for veterinary teaching purposes and the potential consequences to student and equine welfare. The article proposes a conceptual model to optimise equine welfare, and subsequently student safety, during practical equine handling classes. Abstract Students enrolled in veterinary degrees often come from an urban background with little previous experience in handling horses and other large animals. Many veterinary degree programmes place importance on the teaching of appropriate equine handling skills, yet within the literature it is commonly reported that time allocated for practical classes often suffers due to time constraint pressure from other elements of the curriculum. The effect of this pressure on animal handling teaching time is reflected in the self-reported low level of animal handling competency, particularly equine, in students with limited prior experience with horses. This is a concern as a naive student is potentially at higher risk of injury to themselves when interacting with horses. Additionally, a naive student with limited understanding of equine behaviour may, through inconsistent or improper handling, increase the anxiety and compromise the welfare of these horses. There is a lack of literature investigating the welfare of horses in university teaching facilities, appropriate handling procedures, and student safety. This article focuses on the importance for students to be able to interpret equine behaviour and the potential consequences of poor handling skills to equine and student welfare. Lastly, the authors suggest a conceptual model to optimise equine welfare, and subsequently student safety, during practical equine handling classes. PMID:27845702
Development of longitudinal handling qualities criteria for large advanced supersonic aircraft
NASA Technical Reports Server (NTRS)
Sudderth, R. W.; Bohn, J. G.; Caniff, M. A.; Bennett, G. R.
1975-01-01
Longitudinal handling qualities criteria in terms of airplane response characteristics were developed. The criteria cover high speed cruise maneuvering, landing approach, and stall recovery. Data substantiating the study results are reported.
Software engineering the mixed model for genome-wide association studies on large samples.
Zhang, Zhiwu; Buckler, Edward S; Casstevens, Terry M; Bradbury, Peter J
2009-11-01
Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample size and number of markers used for GWAS is increasing dramatically, resulting in greater statistical power to detect those associations. The use of mixed models with increasingly large data sets depends on the availability of software for analyzing those models. While multiple software packages implement the mixed model method, no single package provides the best combination of fast computation, ability to handle large samples, flexible modeling and ease of use. Key elements of association analysis with mixed models are reviewed, including modeling phenotype-genotype associations using mixed models, population stratification, kinship and its estimation, variance component estimation, use of best linear unbiased predictors or residuals in place of raw phenotype, improving efficiency and software-user interaction. The available software packages are evaluated, and suggestions made for future software development.
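The core mixed-model computation can be sketched directly: given a kinship matrix K and variance components, each marker is tested by generalized least squares under V = σg²K + σe²I. The sketch below uses NumPy with fixed (assumed) variance components and random toy data; real packages estimate the components (e.g., by REML) and add the efficiency measures discussed in the text.

```python
# Hedged sketch of the single-marker mixed-model (GLS) test used in GWAS:
#   y = X*beta + u + e,  u ~ N(0, sg2*K),  e ~ N(0, se2*I)
# Variance components sg2, se2 are fixed here for illustration; real software
# estimates them (e.g. by REML) and adds further computational shortcuts.
import numpy as np

rng = np.random.default_rng(0)
n = 200
K = np.eye(n)                      # toy kinship matrix (identity = unrelated)
sg2, se2 = 0.4, 0.6                # assumed variance components
V = sg2 * K + se2 * np.eye(n)
Vinv = np.linalg.inv(V)

snp = rng.integers(0, 3, size=n).astype(float)   # toy genotype coded 0/1/2
y = 0.3 * snp + rng.normal(size=n)               # toy phenotype

X = np.column_stack([np.ones(n), snp])           # intercept + marker
XtVinv = X.T @ Vinv
beta = np.linalg.solve(XtVinv @ X, XtVinv @ y)   # GLS estimate
cov_beta = np.linalg.inv(XtVinv @ X)
z = beta[1] / np.sqrt(cov_beta[1, 1])            # Wald statistic for the SNP

print("beta_snp = %.3f, z = %.2f" % (beta[1], z))
```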
Efficient feature extraction from wide-area motion imagery by MapReduce in Hadoop
NASA Astrophysics Data System (ADS)
Cheng, Erkang; Ma, Liya; Blaisse, Adam; Blasch, Erik; Sheaff, Carolyn; Chen, Genshe; Wu, Jie; Ling, Haibin
2014-06-01
Wide-Area Motion Imagery (WAMI) feature extraction is important for applications such as target tracking, traffic management and accident discovery. With the increasing amount of WAMI collections and feature extraction from the data, a scalable framework is needed to handle the large amount of information. Cloud computing is one of the approaches recently applied in large scale or big data. In this paper, MapReduce in Hadoop is investigated for large scale feature extraction tasks for WAMI. Specifically, a large dataset of WAMI images is divided into several splits. Each split has a small subset of WAMI images. The feature extractions of WAMI images in each split are distributed to slave nodes in the Hadoop system. Feature extraction of each image is performed individually in the assigned slave node. Finally, the feature extraction results are sent to the Hadoop File System (HDFS) to aggregate the feature information over the collected imagery. Experiments of feature extraction with and without MapReduce are conducted to illustrate the effectiveness of our proposed Cloud-Enabled WAMI Exploitation (CAWE) approach.
Interactive exploration of coastal restoration modeling in virtual environments
NASA Astrophysics Data System (ADS)
Gerndt, Andreas; Miller, Robert; Su, Simon; Meselhe, Ehab; Cruz-Neira, Carolina
2009-02-01
Over the last decades, Louisiana has lost a substantial part of its coastal region to the Gulf of Mexico. The goal of the project described in this paper is to investigate the complex ecological and geophysical system, not only to find solutions that reverse this development but also to protect the southern landscape of Louisiana from the disastrous impacts of natural hazards such as hurricanes. This paper focuses on the interactive data handling of the Chenier Plain, which is only one scenario of the overall project. The challenge addressed is the interactive exploration of large-scale, time-dependent 2D simulation results together with the high-resolution terrain data available for this region. Besides data preparation, efficient visualization approaches optimized for use in virtual environments are presented. These are embedded in a complex framework for scientific visualization of time-dependent large-scale datasets. To provide a straightforward interface for rapid application development, a software layer called VRFlowVis has been developed. Several architectural aspects of encapsulating complex virtual reality concerns, such as multi-pipe versus cluster-based rendering, are discussed. Moreover, the distributed post-processing architecture is investigated to prove its efficiency for the geophysical domain. Runtime measurements conclude this paper.
Efficient processing of fluorescence images using directional multiscale representations.
Labate, D; Laezza, F; Negi, P; Ozcan, B; Papadakis, M
2014-01-01
Recent advances in high-resolution fluorescence microscopy have enabled the systematic study of morphological changes in large populations of cells induced by chemical and genetic perturbations, facilitating the discovery of signaling pathways underlying diseases and the development of new pharmacological treatments. In these studies, though, due to the complexity of the data, quantification and analysis of morphological features are for the vast majority handled manually, slowing significantly data processing and limiting often the information gained to a descriptive level. Thus, there is an urgent need for developing highly efficient automated analysis and processing tools for fluorescent images. In this paper, we present the application of a method based on the shearlet representation for confocal image analysis of neurons. The shearlet representation is a newly emerged method designed to combine multiscale data analysis with superior directional sensitivity, making this approach particularly effective for the representation of objects defined over a wide range of scales and with highly anisotropic features. Here, we apply the shearlet representation to problems of soma detection of neurons in culture and extraction of geometrical features of neuronal processes in brain tissue, and propose it as a new framework for large-scale fluorescent image analysis of biomedical data.
An Efficient Method to Detect Mutual Overlap of a Large Set of Unordered Images for Structure-From
NASA Astrophysics Data System (ADS)
Wang, X.; Zhan, Z. Q.; Heipke, C.
2017-05-01
Recently, low-cost 3D reconstruction based on images has become a popular focus of photogrammetry and computer vision research. Methods which can handle an arbitrary geometric setup of a large number of unordered and convergent images are of particular interest. However, determining the mutual overlap poses a considerable challenge. We propose a new method which was inspired by and improves upon methods employing random k-d forests for this task. Specifically, we first derive features from the images and then a random k-d forest is used to find the nearest neighbours in feature space. Subsequently, the degree of similarity between individual images, the image overlaps and thus images belonging to a common block are calculated as input to a structure-from-motion (sfm) pipeline. In our experiments we show the general applicability of the new method and compare it with other methods by analyzing the time efficiency. Orientations and 3D reconstructions were successfully conducted with our overlap graphs by sfm. The results show a speed-up of a factor of 80 compared to conventional pairwise matching, and of 8 and 2 compared to the VocMatch approach using 1 and 4 CPU, respectively.
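The nearest-neighbour search in feature space can be illustrated with a single k-d tree (the paper uses a forest of randomized k-d trees); in the sketch below, random vectors stand in for real image descriptors, and cross-image nearest-neighbour counts serve as a crude similarity measure.

```python
# Hedged sketch of the core step: nearest-neighbour search of image feature
# vectors to estimate pairwise image similarity. A single SciPy k-d tree
# stands in for the randomized k-d forest of the paper; the "descriptors"
# are random vectors instead of real image features.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(42)
n_images, feats_per_image, dim = 5, 100, 64

descriptors = rng.normal(size=(n_images * feats_per_image, dim))
image_of = np.repeat(np.arange(n_images), feats_per_image)

tree = cKDTree(descriptors)
# For every feature, find its 2 nearest neighbours (the first is itself).
dist, idx = tree.query(descriptors, k=2)
nn_image = image_of[idx[:, 1]]

# Count cross-image nearest-neighbour links as a crude similarity measure.
similarity = np.zeros((n_images, n_images), dtype=int)
for a, b in zip(image_of, nn_image):
    if a != b:
        similarity[a, b] += 1

print(similarity)   # candidate overlapping image pairs have large counts
```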
Efficient processing of fluorescence images using directional multiscale representations
Labate, D.; Laezza, F.; Negi, P.; Ozcan, B.; Papadakis, M.
2017-01-01
Recent advances in high-resolution fluorescence microscopy have enabled the systematic study of morphological changes in large populations of cells induced by chemical and genetic perturbations, facilitating the discovery of signaling pathways underlying diseases and the development of new pharmacological treatments. In these studies, though, due to the complexity of the data, quantification and analysis of morphological features are for the vast majority handled manually, slowing significantly data processing and limiting often the information gained to a descriptive level. Thus, there is an urgent need for developing highly efficient automated analysis and processing tools for fluorescent images. In this paper, we present the application of a method based on the shearlet representation for confocal image analysis of neurons. The shearlet representation is a newly emerged method designed to combine multiscale data analysis with superior directional sensitivity, making this approach particularly effective for the representation of objects defined over a wide range of scales and with highly anisotropic features. Here, we apply the shearlet representation to problems of soma detection of neurons in culture and extraction of geometrical features of neuronal processes in brain tissue, and propose it as a new framework for large-scale fluorescent image analysis of biomedical data. PMID:28804225
NASA Astrophysics Data System (ADS)
Safaei, S.; Haghnegahdar, A.; Razavi, S.
2016-12-01
Complex environmental models are now the primary tool to inform decision makers about the current or future management of environmental resources under climate and environmental change. These complex models often contain a large number of parameters that need to be determined by a computationally intensive calibration procedure. Sensitivity analysis (SA) is a very useful tool that not only allows for understanding model behavior but also helps in reducing the number of calibration parameters by identifying unimportant ones. The issue is that most global sensitivity techniques are themselves highly computationally demanding when generating robust and stable sensitivity metrics over the entire model response surface. Recently, a novel global sensitivity analysis method, Variogram Analysis of Response Surfaces (VARS), was introduced that can efficiently provide a comprehensive assessment of global sensitivity using the variogram concept. In this work, we aim to evaluate the effectiveness of this highly efficient GSA method in reducing the computational burden when applied to systems with an extra-large number of input factors (on the order of 100). We use a test function and a hydrological modelling case study to demonstrate the capability of the VARS method to reduce problem dimensionality by identifying important versus unimportant input factors.
BatMis: a fast algorithm for k-mismatch mapping.
Tennakoon, Chandana; Purbojati, Rikky W; Sung, Wing-Kin
2012-08-15
Second-generation sequencing (SGS) generates millions of reads that need to be aligned to a reference genome allowing errors. Although current aligners can efficiently map reads allowing a small number of mismatches, they are not well suited for handling a large number of mismatches. The efficiency of aligners can be improved using various heuristics, but the sensitivity and accuracy of the alignments are sacrificed. In this article, we introduce Basic Alignment tool for Mismatches (BatMis)--an efficient method to align short reads to a reference allowing k mismatches. BatMis is a Burrows-Wheeler transformation based aligner that uses a seed and extend approach, and it is an exact method. Benchmark tests show that BatMis performs better than competing aligners in solving the k-mismatch problem. Furthermore, it can compete favorably even when compared with the heuristic modes of the other aligners. BatMis is a useful alternative for applications where fast k-mismatch mappings, unique mappings or multiple mappings of SGS data are required. BatMis is written in C/C++ and is freely available from http://code.google.com/p/batmis/
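For reference, the k-mismatch criterion itself is simple to state: a read maps to a position if its Hamming distance to the reference substring is at most k. The brute-force sketch below only states the problem; BatMis solves the same problem exactly but with a Burrows-Wheeler index and a seed-and-extend search, not this O(n·m) scan.

```python
# Brute-force illustration of the k-mismatch mapping criterion (Hamming
# distance <= k). This only states the problem BatMis solves; BatMis itself
# uses a Burrows-Wheeler index with seed-and-extend, not this naive scan.
def hamming_at_most(a, b, k):
    mismatches = 0
    for x, y in zip(a, b):
        if x != y:
            mismatches += 1
            if mismatches > k:
                return None
    return mismatches

def k_mismatch_map(read, reference, k):
    hits = []
    for pos in range(len(reference) - len(read) + 1):
        d = hamming_at_most(read, reference[pos:pos + len(read)], k)
        if d is not None:
            hits.append((pos, d))
    return hits

reference = "ACGTACGTTACGGACGT"
print(k_mismatch_map("ACGG", reference, k=1))   # exact hit at 9, 1-mismatch hits elsewhere
```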
NASA Astrophysics Data System (ADS)
Sun, HongGuang; Liu, Xiaoting; Zhang, Yong; Pang, Guofei; Garrard, Rhiannon
2017-09-01
Fractional-order diffusion equations (FDEs) extend classical diffusion equations by quantifying anomalous diffusion frequently observed in heterogeneous media. Real-world diffusion can be multi-dimensional, requiring efficient numerical solvers that can handle long-term memory embedded in mass transport. To address this challenge, a semi-discrete Kansa method is developed to approximate the two-dimensional spatiotemporal FDE, where the Kansa approach first discretizes the FDE, then the Gauss-Jacobi quadrature rule solves the corresponding matrix, and finally the Mittag-Leffler function provides an analytical solution for the resultant time-fractional ordinary differential equation. Numerical experiments are then conducted to check how the accuracy and convergence rate of the numerical solution are affected by the distribution mode and number of spatial discretization nodes. Applications further show that the numerical method can efficiently solve two-dimensional spatiotemporal FDE models with either a continuous or discrete mixing measure. Hence this study provides an efficient and fast computational method for modeling super-diffusive, sub-diffusive, and mixed diffusive processes in large, two-dimensional domains with irregular shapes.
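For orientation, a representative two-dimensional time-fractional diffusion equation of the type targeted by such solvers can be written as below, with a Caputo time derivative of order 0 < γ ≤ 1. This is only an indicative single-order form; the model in the paper may differ, for example in its space-fractional terms and in the continuous or discrete mixing measure over fractional orders.

```latex
% Indicative form of a 2-D time-fractional diffusion equation (Caputo
% derivative of order 0 < \gamma \le 1); the paper's model may differ in its
% space-fractional terms and in the mixing measure over orders.
\frac{\partial^{\gamma} u(x,y,t)}{\partial t^{\gamma}}
  = K_x \frac{\partial^{2} u}{\partial x^{2}}
  + K_y \frac{\partial^{2} u}{\partial y^{2}}
  + f(x,y,t),
\qquad (x,y)\in\Omega,\; t>0.
```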
Logistic regression for dichotomized counts.
Preisser, John S; Das, Kalyan; Benecha, Habtamu; Stamm, John W
2016-12-01
Sometimes there is interest in a dichotomized outcome indicating whether a count variable is positive or zero. Under this scenario, the application of ordinary logistic regression may result in efficiency loss, which is quantifiable under an assumed model for the counts. In such situations, a shared-parameter hurdle model is investigated for more efficient estimation of regression parameters relating to overall effects of covariates on the dichotomous outcome, while handling count data with many zeroes. One model part provides a logistic regression containing marginal log odds ratio effects of primary interest, while an ancillary model part describes the mean count of a Poisson or negative binomial process in terms of nuisance regression parameters. Asymptotic efficiency of the logistic model parameter estimators of the two-part models is evaluated with respect to ordinary logistic regression. Simulations are used to assess the properties of the models with respect to power and Type I error, the latter investigated under both misspecified and correctly specified models. The methods are applied to data from a randomized clinical trial of three toothpaste formulations to prevent incident dental caries in a large population of Scottish schoolchildren. © The Author(s) 2014.
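For readers unfamiliar with the hurdle formulation, a generic two-part hurdle model can be written as below (standard textbook form); the shared-parameter variant studied in the paper additionally links the two parts so that covariate effects on the dichotomous outcome remain of primary interest.

```latex
% Generic hurdle model for a count Y with covariates x (textbook form; the
% paper's shared-parameter variant links the two parts).
\Pr(Y = 0 \mid x) = 1 - \pi(x), \qquad
\operatorname{logit}\,\pi(x) = x^{\top}\beta,
\\[4pt]
\Pr(Y = y \mid Y > 0, x) = \frac{f(y;\,\mu(x))}{1 - f(0;\,\mu(x))},
\quad y = 1, 2, \dots, \qquad \log \mu(x) = x^{\top}\gamma,
```

where f(·; μ) is a Poisson or negative binomial probability mass function with mean μ.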
Relations between basic and specific motor abilities and player quality of young basketball players.
Marić, Kristijan; Katić, Ratko; Jelicić, Mario
2013-05-01
Subjects from 5 first league clubs from Herzegovina were tested with the purpose of determining the relations of basic and specific motor abilities, as well as the effect of specific abilities on player efficiency in young basketball players (cadets). A battery of 12 tests assessing basic motor abilities and 5 specific tests assessing basketball efficiency were used on a sample of 83 basketball players. Two significant canonical correlations, i.e. linear combinations explained the relation between the set of twelve variables of basic motor space and five variables of situational motor abilities. Underlying the first canonical linear combination is the positive effect of the general motor factor, predominantly defined by jumping explosive power, movement speed of the arms, static strength of the arms and coordination, on specific basketball abilities: movement efficiency, the power of the overarm throw, shooting and passing precision, and the skill of handling the ball. The impact of basic motor abilities of precision and balance on specific abilities of passing and shooting precision and ball handling is underlying the second linear combination. The results of regression correlation analysis between the variable set of specific motor abilities and game efficiency have shown that the ability of ball handling has the largest impact on player quality in basketball cadets, followed by shooting precision and passing precision, and the power of the overarm throw.
NASA Technical Reports Server (NTRS)
Padovan, J.; Lackney, J.
1986-01-01
The current paper develops a constrained hierarchical least square nonlinear equation solver. The procedure can handle the response behavior of systems which possess indefinite tangent stiffness characteristics. Due to the generality of the scheme, this can be achieved at various hierarchical application levels. For instance, in the case of finite element simulations, various combinations of degree-of-freedom, nodal, elemental, substructural, and global level iterations are possible. Overall, this enables a solution methodology which is highly stable and storage efficient. To demonstrate the capability of the constrained hierarchical least square methodology, benchmarking examples are presented which treat structures exhibiting highly nonlinear pre- and postbuckling behavior wherein several indefinite stiffness transitions occur.
The R-Shell approach - Using scheduling agents in complex distributed real-time systems
NASA Technical Reports Server (NTRS)
Natarajan, Swaminathan; Zhao, Wei; Goforth, Andre
1993-01-01
Large, complex real-time systems such as space and avionics systems are extremely demanding in their scheduling requirements. The current OS design approaches are quite limited in the capabilities they provide for task scheduling. Typically, they simply implement a particular uniprocessor scheduling strategy and do not provide any special support for network scheduling, overload handling, fault tolerance, distributed processing, etc. Our design of the R-Shell real-time environment facilitates the implementation of a variety of sophisticated but efficient scheduling strategies, including incorporation of all these capabilities. This is accomplished by the use of scheduling agents which reside in the application run-time environment and are responsible for coordinating the scheduling of the application.
Gold nanoparticles: enhanced optical trapping and sensitivity coupled with significant heating.
Seol, Yeonee; Carpenter, Amanda E; Perkins, Thomas T
2006-08-15
Gold nanoparticles appear to be superior handles in optical trapping assays. We demonstrate that relatively large gold particles (R(b)=50 nm) indeed yield a sixfold enhancement in trapping efficiency and detection sensitivity as compared to similar-sized polystyrene particles. However, optical absorption by gold at the most common trapping wavelength (1064 nm) induces dramatic heating (266 degrees C/W). We determined this heating by comparing trap stiffness from three different methods in conjunction with detailed modeling. Due to this heating, gold nanoparticles are not useful for temperature-sensitive optical-trapping experiments, but may serve as local molecular heaters. Also, such particles, with their increased detection sensitivity, make excellent probes for certain zero-force biophysical assays.
GAPIT: genome association and prediction integrated tool.
Lipka, Alexander E; Tian, Feng; Wang, Qishan; Peiffer, Jason; Li, Meng; Bradbury, Peter J; Gore, Michael A; Buckler, Edward S; Zhang, Zhiwu
2012-09-15
Software programs that conduct genome-wide association studies and genomic prediction and selection need to use methodologies that maximize statistical power, provide high prediction accuracy and run in a computationally efficient manner. We developed an R package called Genome Association and Prediction Integrated Tool (GAPIT) that implements advanced statistical methods including the compressed mixed linear model (CMLM) and CMLM-based genomic prediction and selection. The GAPIT package can handle large datasets in excess of 10 000 individuals and 1 million single-nucleotide polymorphisms with minimal computational time, while providing user-friendly access and concise tables and graphs to interpret results. http://www.maizegenetics.net/GAPIT. zhiwu.zhang@cornell.edu Supplementary data are available at Bioinformatics online.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duan, Sisi; Nicely, Lucas D; Zhang, Haibin
Modern large-scale networks require the ability to withstand arbitrary failures (i.e., Byzantine failures). Byzantine reliable broadcast algorithms can be used to reliably disseminate information in the presence of Byzantine failures. We design a novel Byzantine reliable broadcast protocol for loosely connected and synchronous networks. While previous such protocols all assume correct senders, our protocol is the first to handle Byzantine senders. To achieve this goal, we have developed new techniques for fault detection and fault tolerance. Our protocol is efficient, and under normal circumstances, no expensive public-key cryptographic operations are used. We implement and evaluate our protocol, demonstrating that our protocol has high throughput and is superior to the existing protocols in uncivil executions.
NASA Astrophysics Data System (ADS)
Zhileykin, M. M.; Kotiev, G. O.; Nagatsev, M. V.
2018-02-01
In order to meet the growing mobility requirements for wheeled vehicles on all types of terrain, engineers have to develop a large number of specialized control algorithms for the multi-axle wheeled vehicle (MWV) suspension, improving such qualities as ride comfort, handling and stability. The authors have developed an adaptive algorithm for dynamic damping of MWV body oscillations. The algorithm provides high ride comfort and high mobility of the vehicle. The article presents a method for the synthesis of an adaptive, continuous dynamic damping algorithm for MWV body oscillations and provides simulation results demonstrating the high efficiency of the developed control algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Ray -Bing; Wang, Weichung; Jeff Wu, C. F.
A numerical method, called OBSM, was recently proposed which employs overcomplete basis functions to achieve sparse representations. While the method can handle non-stationary response without the need of inverting large covariance matrices, it lacks the capability to quantify uncertainty in predictions. We address this issue by proposing a Bayesian approach which first imposes a normal prior on the large space of linear coefficients, then applies the MCMC algorithm to generate posterior samples for predictions. From these samples, Bayesian credible intervals can then be obtained to assess prediction uncertainty. A key application for the proposed method is the efficient construction of sequential designs. Several sequential design procedures with different infill criteria are proposed based on the generated posterior samples. As a result, numerical studies show that the proposed schemes are capable of solving problems of positive point identification, optimization, and surrogate fitting.
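The Bayesian ingredient described above (a normal prior on the linear coefficients, posterior samples, credible intervals on predictions) can be sketched for an ordinary linear basis expansion. In the sketch below, the conjugate normal posterior is sampled directly instead of running a full MCMC, and the basis functions, noise level and prior scale are all invented.

```python
# Hedged sketch of the Bayesian step: normal prior on linear basis
# coefficients, posterior samples, and pointwise credible intervals for
# predictions. A conjugate normal posterior (known noise variance) replaces
# the MCMC of the paper; basis functions, tau and sigma are invented.
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)

# Overcomplete-ish basis: Gaussian bumps at many centres (illustrative only).
centres = np.linspace(0, 1, 25)
Phi = np.exp(-0.5 * ((x[:, None] - centres[None, :]) / 0.08) ** 2)

tau, sigma = 1.0, 0.2                      # prior scale, noise std (assumed)
A = Phi.T @ Phi / sigma**2 + np.eye(centres.size) / tau**2
cov = np.linalg.inv(A)                     # posterior covariance of coefficients
mean = cov @ Phi.T @ y / sigma**2          # posterior mean

samples = rng.multivariate_normal(mean, cov, size=2000)    # posterior draws
pred = samples @ Phi.T                                      # predictions per draw
lower, upper = np.percentile(pred, [2.5, 97.5], axis=0)     # 95% credible band

print(lower[:3], upper[:3])
```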
Computational Issues in Damping Identification for Large Scale Problems
NASA Technical Reports Server (NTRS)
Pilkey, Deborah L.; Roe, Kevin P.; Inman, Daniel J.
1997-01-01
Two damping identification methods are tested for efficiency in large-scale applications. One is an iterative routine, and the other a least squares method. Numerical simulations have been performed on multiple degree-of-freedom models to test the effectiveness of the algorithms and the usefulness of parallel computation for the problems. High Performance Fortran is used to parallelize the algorithm. Tests were performed using the IBM-SP2 at NASA Ames Research Center. The least squares method tested incurs high communication costs, which reduces the benefit of high performance computing. This method's memory requirement grows at a very rapid rate, meaning that larger problems can quickly exceed available computer memory. The iterative method's memory requirement grows at a much slower pace and is able to handle problems with 500+ degrees of freedom on a single processor. This method benefits from parallelization, and significant speedup can be seen for problems of 100+ degrees of freedom.
Scanning holographic optical tweezers.
Shaw, L A; Panas, Robert M; Spadaccini, C M; Hopkins, J B
2017-08-01
The aim of this Letter is to introduce a new optical tweezers approach, called scanning holographic optical tweezers (SHOT), which drastically increases the working area (WA) of the holographic-optical tweezers (HOT) approach, while maintaining tightly focused laser traps. A 12-fold increase in the WA is demonstrated. The SHOT approach achieves its utility by combining the large WA of the scanning optical tweezers (SOT) approach with the flexibility of the HOT approach for simultaneously moving differently structured optical traps in and out of the focal plane. This Letter also demonstrates a new heuristic control algorithm for combining the functionality of the SOT and HOT approaches to efficiently allocate the available laser power among a large number of traps. The proposed approach shows promise for substantially increasing the number of particles that can be handled simultaneously, which would enable optical tweezers additive fabrication technologies to rapidly assemble microgranular materials and structures in reasonable build times.
Chen, Ray -Bing; Wang, Weichung; Jeff Wu, C. F.
2017-04-12
A numerical method, called OBSM, was recently proposed which employs overcomplete basis functions to achieve sparse representations. While the method can handle non-stationary response without the need of inverting large covariance matrices, it lacks the capability to quantify uncertainty in predictions. We address this issue by proposing a Bayesian approach which first imposes a normal prior on the large space of linear coefficients, then applies the MCMC algorithm to generate posterior samples for predictions. From these samples, Bayesian credible intervals can then be obtained to assess prediction uncertainty. A key application for the proposed method is the efficient construction of sequential designs. Several sequential design procedures with different infill criteria are proposed based on the generated posterior samples. As a result, numerical studies show that the proposed schemes are capable of solving problems of positive point identification, optimization, and surrogate fitting.
Xie, Hongtu; Shi, Shaoying; Xiao, Hui; Xie, Chao; Wang, Feng; Fang, Qunle
2016-01-01
With the rapid development of the one-stationary bistatic forward-looking synthetic aperture radar (OS-BFSAR) technology, the huge amount of the remote sensing data presents challenges for real-time imaging processing. In this paper, an efficient time-domain algorithm (ETDA) considering the motion errors for the OS-BFSAR imaging processing, is presented. This method can not only precisely handle the large spatial variances, serious range-azimuth coupling and motion errors, but can also greatly improve the imaging efficiency compared with the direct time-domain algorithm (DTDA). Besides, it represents the subimages on polar grids in the ground plane instead of the slant-range plane, and derives the sampling requirements considering motion errors for the polar grids to offer a near-optimum tradeoff between the imaging precision and efficiency. First, OS-BFSAR imaging geometry is built, and the DTDA for the OS-BFSAR imaging is provided. Second, the polar grids of subimages are defined, and the subaperture imaging in the ETDA is derived. The sampling requirements for polar grids are derived from the point of view of the bandwidth. Finally, the implementation and computational load of the proposed ETDA are analyzed. Experimental results based on simulated and measured data validate that the proposed ETDA outperforms the DTDA in terms of the efficiency improvement. PMID:27845757
Jeon, Hyeonjae; Park, Kwangjin; Hwang, Dae-Joon; Choo, Hyunseung
2009-01-01
Sensor nodes transmit the sensed information to the sink through wireless sensor networks (WSNs). They have limited power, computational capacities and memory. Portable wireless devices are increasing in popularity. Mechanisms that allow information to be efficiently obtained through mobile WSNs are of significant interest. However, a mobile sink introduces many challenges to data dissemination in large WSNs. For example, it is important to efficiently identify the locations of mobile sinks and disseminate information from multi-source nodes to the multi-mobile sinks. In particular, a stationary dissemination path may no longer be effective in mobile sink applications, due to sink mobility. In this paper, we propose a Sink-oriented Dynamic Location Service (SDLS) approach to handle sink mobility. In SDLS, we propose an Eight-Direction Anchor (EDA) system that acts as a location service server. EDA prevents intensive energy consumption at the border sensor nodes and thus provides energy balancing to all the sensor nodes. Then we propose a Location-based Shortest Relay (LSR) that efficiently forwards (or relays) data from a source node to a sink with minimal delay path. Our results demonstrate that SDLS not only provides an efficient and scalable location service, but also reduces the average data communication overhead in scenarios with multiple and moving sinks and sources.
Redefining NHS complaint handling--the real challenge.
Seelos, L; Adamson, C
1994-01-01
More and more organizations find that a constructive and open dialogue with their customers can be an effective strategy for building long-term customer relations. In this context, it has been recognized that effective complaint-contact handling can make a significant contribution to organizations' attempts to maximize customer satisfaction and loyalty. Within the NHS, an intellectual awareness exists that effective complaint/contact handling can contribute to making services more efficient and cost-effective by developing customer-oriented improvement initiatives. Recent efforts have focused on redefining NHS complaint-handling procedures to make them more user-friendly and effective for both NHS employees and customers. Discusses the challenges associated with opening up the NHS to customer feedback. Highlights potential weaknesses in the current approach and argues that the real challenge is for NHS managers to facilitate a culture change that moves the NHS away from a long-established defensive complaint handling practice.
Efficient and Scalable Cross-Matching of (Very) Large Catalogs
NASA Astrophysics Data System (ADS)
Pineau, F.-X.; Boch, T.; Derriere, S.
2011-07-01
Whether it be for building multi-wavelength datasets from independent surveys, studying changes in objects' luminosities, or detecting moving objects (stellar proper motions, asteroids), cross-catalog matching is a technique widely used in astronomy. The need for efficient, reliable and scalable cross-catalog matching is becoming even more pressing with forthcoming projects which will produce huge catalogs in which astronomers will dig for rare objects, perform statistical analysis and classification, or carry out real-time transient detection. We have developed a formalism and the corresponding technical framework to address the challenge of fast cross-catalog matching. Our formalism supports more than simple nearest-neighbor search, and handles elliptical positional errors. Scalability is improved by partitioning the sky using the HEALPix scheme, and processing each sky cell independently. The use of multi-threaded two-dimensional kd-trees adapted to managing equatorial coordinates enables efficient neighbor search. The whole process can run on a single computer, but could also use clusters of machines to cross-match future very large surveys such as GAIA or LSST in reasonable times. We already achieve performance levels at which the 2MASS (˜470M sources) and SDSS DR7 (˜350M sources) catalogs can be matched on a single machine in less than 10 minutes. We aim at providing astronomers with a catalog cross-matching service, available on-line and leveraging the catalogs present in the VizieR database. This service will allow users both to access pre-computed cross-matches across some very large catalogs, and to run customized cross-matching operations. It will also support VO protocols for synchronous or asynchronous queries.
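As a rough illustration of the kd-tree-based neighbour search described above, the sketch below cross-matches two catalogs by converting equatorial coordinates to unit vectors and querying a scipy cKDTree within a fixed angular radius. It omits the HEALPix partitioning, elliptical error handling and multi-threading of the actual framework; the function names and the 1-arcsecond default radius are assumptions for the example.

```python
import numpy as np
from scipy.spatial import cKDTree

def radec_to_xyz(ra_deg, dec_deg):
    """Convert equatorial coordinates to unit vectors (avoids the RA
    wrap-around problem a naive 2-D kd-tree would have)."""
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    return np.column_stack([np.cos(dec) * np.cos(ra),
                            np.cos(dec) * np.sin(ra),
                            np.sin(dec)])

def crossmatch(ra1, dec1, ra2, dec2, radius_arcsec=1.0):
    """For every source of catalog 1, return the index of the nearest
    catalog-2 source within `radius_arcsec`, or -1 if there is none."""
    xyz1, xyz2 = radec_to_xyz(ra1, dec1), radec_to_xyz(ra2, dec2)
    tree = cKDTree(xyz2)
    # Angular radius converted to a chord length on the unit sphere
    chord = 2.0 * np.sin(np.radians(radius_arcsec / 3600.0) / 2.0)
    dist, idx = tree.query(xyz1, distance_upper_bound=chord)
    idx[np.isinf(dist)] = -1          # no neighbour within the radius
    return idx
```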
NASA Technical Reports Server (NTRS)
Malcipa, Carlos; Decker, William A.; Theodore, Colin R.; Blanken, Christopher L.; Berger, Tom
2010-01-01
A piloted simulation investigation was conducted using the NASA Ames Vertical Motion Simulator to study the impact of pitch, roll and yaw attitude bandwidth and phase delay on handling qualities of large tilt-rotor aircraft. Multiple bandwidth and phase delay pairs were investigated for each axis. The simulation also investigated the effect that the pilot offset from the center of gravity has on handling qualities. While pilot offset does not change the dynamics of the vehicle, it does affect the proprioceptive and visual cues and it can have an impact on handling qualities. The experiment concentrated on two primary evaluation tasks: a precision hover task and a simple hover pedal turn. Six pilots flew over 1400 data runs with evaluation comments and objective performance data recorded. The paper will describe the experiment design and methodology, discuss the results of the experiment and summarize the findings.
2014-09-01
This work examines flow through a bend of ninety degrees and the application toward waste heat recovery devices. CFD models were implemented in ANSYS CFX to handle flow in both laminar and turbulent regimes, applying the principles from the Reynolds…
Development of a high efficiency thin silicon solar cell
NASA Technical Reports Server (NTRS)
Lindmayer, J.; Wrigley, C. Y.
1977-01-01
A key to the success of this program was the breakthrough development of a technology for producing ultra-thin silicon slices which are very flexible, resilient, and tolerant of moderate handling abuse. Experimental topics investigated were thinning technology, gaseous junction diffusion, aluminum back alloying, internal reflectance, tantalum oxide anti-reflective coating optimization, slice flexibility, handling techniques, production rate limiting steps, low temperature behavior, and radiation tolerance.
Anderson, Eric C
2012-11-08
Advances in genotyping that allow tens of thousands of individuals to be genotyped at a moderate number of single nucleotide polymorphisms (SNPs) permit parentage inference to be pursued on a very large scale. The intergenerational tagging this capacity allows is revolutionizing the management of cultured organisms (cows, salmon, etc.) and is poised to do the same for scientific studies of natural populations. Currently, however, there are no likelihood-based methods of parentage inference which are implemented in a manner that allows them to quickly handle a very large number of potential parents or parent pairs. Here we introduce an efficient likelihood-based method applicable to the specialized case of cultured organisms in which both parents can be reliably sampled. We develop a Markov chain representation for the cumulative number of Mendelian incompatibilities between an offspring and its putative parents and we exploit it to develop a fast algorithm for simulation-based estimates of statistical confidence in SNP-based assignments of offspring to pairs of parents. The method is implemented in the freely available software SNPPIT. We describe the method in detail, then assess its performance in a large simulation study using known allele frequencies at 96 SNPs from ten hatchery salmon populations. The simulations verify that the method is fast and accurate and that 96 well-chosen SNPs can provide sufficient power to identify the correct pair of parents from amongst millions of candidate pairs.
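SNPPIT's Markov-chain machinery is not shown here; the following naive Monte Carlo sketch only illustrates the underlying quantity, the number of Mendelian incompatibilities between an offspring and a putative parent pair, and its null distribution for unrelated trios under hypothetical allele frequencies at 96 SNPs. Genotyping error is ignored, and all names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def transmissible(g):
    """Allele counts (0 or 1 copies of the reference allele) a parent
    with genotype g (0, 1 or 2 copies) can transmit to an offspring."""
    return {0: (0,), 1: (0, 1), 2: (1,)}[g]

def n_incompatibilities(off, par1, par2):
    """Number of loci at which the trio violates Mendelian inheritance."""
    bad = 0
    for o, p1, p2 in zip(off, par1, par2):
        if not any(a + b == o for a in transmissible(p1)
                              for b in transmissible(p2)):
            bad += 1
    return bad

def null_distribution(freqs, n_trios=20_000):
    """Monte Carlo null: the 'offspring' and both 'parents' are unrelated,
    with genotypes drawn from Hardy-Weinberg proportions at each SNP."""
    counts = np.empty(n_trios, dtype=int)
    for t in range(n_trios):
        geno = rng.binomial(2, freqs, size=(3, len(freqs)))
        counts[t] = n_incompatibilities(*geno)
    return counts

freqs = rng.uniform(0.2, 0.8, size=96)   # hypothetical frequencies at 96 SNPs
null = null_distribution(freqs)
print("mean incompatibilities for unrelated trios:", null.mean())
```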
HAND TRUCK FOR HANDLING EQUIPMENT
King, D.W.
1959-02-24
A truck is described for the handling of large and relatively heavy pieces of equipment and particularly for the handling of ion source units for use in calutrons. The truck includes a chassis and a frame pivoted to the chassis so as to be operable to swing in the manner of a boom. The frame has spaced members so arranged that the device to be handled can be suspended between or passed between these spaced members and also rotated with respect to the frame when the device is secured to the spaced members.
Extended Importance Sampling for Reliability Analysis under Evidence Theory
NASA Astrophysics Data System (ADS)
Yuan, X. K.; Chen, B.; Zhang, B. Q.
2018-05-01
In early engineering practice, the lack of data and information makes uncertainty difficult to deal with. However, evidence theory has been proposed to handle uncertainty with limited information, as an alternative to traditional probability theory. In this contribution, a simulation-based approach, called 'Extended importance sampling', is proposed based on evidence theory to handle problems with epistemic uncertainty. The proposed approach stems from traditional importance sampling for reliability analysis under probability theory, and is developed to handle problems with epistemic uncertainty. It first introduces a nominal instrumental probability density function (PDF) for every epistemic uncertainty variable, so that an 'equivalent' reliability problem under probability theory is obtained. Then the samples of these variables are generated in the manner of importance sampling. Based on these samples, the plausibility and belief (upper and lower bounds of probability) can be estimated. This is more efficient than direct Monte Carlo simulation. Numerical and engineering examples are given to illustrate the efficiency and feasibility of the proposed approach.
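The evidence-theory bookkeeping (focal elements, and the optimization over them that yields plausibility and belief) is not reproduced here; the sketch below only shows the probabilistic core the approach builds on, an importance-sampling estimate of a small failure probability using a nominal instrumental density shifted toward the limit state. The limit-state function and the instrumental densities are hypothetical.

```python
import numpy as np
from scipy import stats

def limit_state(x):
    """Hypothetical performance function: failure when g(x) <= 0."""
    return 9.0 - x[:, 0] - x[:, 1]

rng = np.random.default_rng(0)
n = 20_000

# Nominal densities of the two uncertain variables (standard normals here)
f = stats.norm(loc=0.0, scale=1.0)
# Instrumental density centred near the limit state to favour failure samples
h = stats.norm(loc=4.5, scale=1.0)

x = h.rvs(size=(n, 2), random_state=rng)
w = np.prod(f.pdf(x) / h.pdf(x), axis=1)   # importance weights
fail = limit_state(x) <= 0.0

p_hat = np.mean(w * fail)                  # failure-probability estimate
print(f"IS estimate: {p_hat:.3e}")
# Crude Monte Carlo would need on the order of 1/p_hat samples to see a failure.
```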
Integrated Power Adapter: Isolated Converter with Integrated Passives and Low Material Stress
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2010-09-01
ADEPT Project: CPES at Virginia Tech is developing an extremely efficient power converter that could be used in power adapters for small, lightweight laptops and other types of mobile electronic devices. Power adapters convert electrical energy into useable power for an electronic device, and they currently waste a lot of energy when they are plugged into an outlet to power up. CPES at Virginia Tech is integrating high-density capacitors, new magnetic materials, high-frequency integrated circuits, and a constant-flux transformer to create its efficient power converter. The high-density capacitors enable the power adapter to store more energy. The new magnetic materials also increase energy storage, and they can be precisely dispensed using a low-cost ink-jet printer which keeps costs down. The high-frequency integrated circuits can handle more power, and they can handle it more efficiently. And, the constant-flux transformer processes a consistent flow of electrical current, which makes the converter more efficient.
Reuse potential of low-calcium bottom ash as aggregate through pelletization.
Geetha, S; Ramamurthy, K
2010-01-01
Coal combustion residues, which include fly ash, bottom ash and boiler slag, are among the major pollutants, as these residues require large land areas for their disposal. Among these residues, utilization of bottom ash in the construction industry is very low. This paper explains the use of bottom ash through pelletization. Raw bottom ash could not be pelletized as such due to its coarseness. Though pulverized bottom ash could be pelletized, the pelletization efficiency was low, and the aggregates were too weak to withstand the handling stresses. To improve the pelletization efficiency, different clay and cementitious binders were used with bottom ash. The influence of different factors and their interaction effects on the duration of the pelletization process and the pelletization efficiency were studied through fractional factorial design. Addition of binders facilitated conversion of low-calcium bottom ash into aggregates. To achieve maximum pelletization efficiency, the binder content and moisture requirements vary with the type of binder. Addition of Ca(OH)2 (i) improved the pelletization efficiency, (ii) reduced the duration of the pelletization process from an average of 14 min to 7 min, and (iii) reduced the binder dosage for a given pelletization efficiency. For aggregates with clay binders and cementitious binder, Ca(OH)2 and binder dosage have a significant effect in reducing the duration of the pelletization process.
Friele, R D; Reitsma, P M; de Jong, J D
2015-10-01
Patients who submit complaints about the healthcare they have received are often dissatisfied with the response to their complaints. This is usually attributed to the failure of physicians to respond adequately to what complainants want, e.g. an apology or an explanation. However, expectations of complaint handling among the public may colour how they evaluate the way their own complaint is handled. This descriptive study assesses expectations of complaint handling in healthcare among the public and physicians. Negative public expectations and the gap between these expectations and those of physicians may explain patients' dissatisfaction with complaints procedures. We held two surveys: one among physicians, using a panel of 3366 physicians (response rate 57%, including all kinds of physicians, such as GPs, medical specialists and physicians working in nursing homes), and one among the public, using the Dutch Healthcare Consumer Panel (n = 1422, response rate 68%). We asked both panels identical questions about their expectations of how complaints are handled in healthcare. Differences in expectation scores between the public and the physicians were tested using non-parametric tests. The public have negative expectations about how complaints are handled. Physicians' expectations are far more positive, demonstrating large expectation gaps between physicians and the public. The large expectation gap between the public and physicians means that when they meet because of a complaint, they are likely to start off with opposite expectations of the situation. This is not a favourable condition for a positive outcome of a complaints procedure. The negative public preconceptions about the way their complaint will be handled will prove hard to change during the process of complaints handling. People tend to see what they thought would happen, almost inevitably leading to a negative judgement about how their complaint was handled.
An Efficient and Versatile Means for Assembling and Manufacturing Systems in Space
NASA Technical Reports Server (NTRS)
Dorsey, John T.; Doggett, William R.; Hafley, Robert A.; Komendera, Erik; Correll, Nikolaus; King, Bruce
2012-01-01
Within NASA Space Science, Exploration and the Office of Chief Technologist, there are Grand Challenges and advanced future exploration, science and commercial mission applications that could benefit significantly from large-span and large-area structural systems. Of particular and persistent interest to the Space Science community is the desire for large (in the 10-50 meter range for main aperture diameter) space telescopes that would revolutionize space astronomy. Achieving these systems will likely require on-orbit assembly, but previous approaches for assembling large-scale telescope truss structures and systems in space have been perceived as very costly because they require high precision and custom components. These components rely on a large number of mechanical connections and supporting infrastructure that are unique to each application. In this paper, a new assembly paradigm that mitigates these concerns is proposed and described. A new assembly approach, developed to implement the paradigm, incorporates: Intelligent Precision Jigging Robots, Electron-Beam welding, robotic handling/manipulation, operations assembly sequence and path planning, and low-precision weldable structural elements. Key advantages of the new assembly paradigm, as well as concept descriptions and ongoing research and technology development efforts for each of the major elements, are summarized.
NASA Astrophysics Data System (ADS)
Tian, Fang-Bao; Dai, Hu; Luo, Haoxiang; Doyle, James F.; Rousseau, Bernard
2014-02-01
Three-dimensional fluid-structure interaction (FSI) involving large deformations of flexible bodies is common in biological systems, but accurate and efficient numerical approaches for modeling such systems are still scarce. In this work, we report a successful case of combining an existing immersed-boundary flow solver with a nonlinear finite-element solid-mechanics solver specifically for three-dimensional FSI simulations. This method represents a significant enhancement over similar methods previously available. Based on the Cartesian grid, the viscous incompressible flow solver can handle boundaries undergoing large displacements with simple mesh generation. The solid-mechanics solver has separate subroutines for analyzing general three-dimensional bodies and thin-walled structures composed of frames, membranes, and plates. Both geometric nonlinearity associated with large displacements and material nonlinearity associated with large strains are incorporated in the solver. The FSI is achieved through a strong-coupling, partitioned approach. We perform several validation cases, and the results may be used to expand the currently limited database of FSI benchmark studies. Finally, we demonstrate the versatility of the present method by applying it to the aerodynamics of elastic insect wings and to flow-induced vocal fold vibration.
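Neither the flow solver nor the finite-element solver can be condensed into a few lines; the sketch below only illustrates the strong-coupling, partitioned time step described above, as a fixed-point iteration with under-relaxation between two placeholder solver callbacks. The callback signatures, the relaxation factor and the tolerance are assumptions for the example.

```python
import numpy as np

def fsi_time_step(fluid_solve, solid_solve, disp, dt,
                  max_iters=30, omega=0.5, tol=1e-8):
    """One strongly coupled, partitioned FSI step.

    fluid_solve(disp, dt) -> traction on the wetted interface (placeholder)
    solid_solve(load, dt) -> new interface displacement      (placeholder)
    disp                  -> interface displacement from the previous step
    """
    d = disp.copy()
    for _ in range(max_iters):
        load = fluid_solve(d, dt)          # Eulerian flow solve, IB forcing
        d_new = solid_solve(load, dt)      # nonlinear FE solve of the structure
        residual = np.linalg.norm(d_new - d) / max(np.linalg.norm(d_new), 1e-30)
        # Under-relaxed fixed-point update to stabilise the coupling
        d = (1.0 - omega) * d + omega * d_new
        if residual < tol:
            break
    return d
```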
Lee, Jae H.; Yao, Yushu; Shrestha, Uttam; Gullberg, Grant T.; Seo, Youngho
2014-01-01
The primary goal of this project is to implement the iterative statistical image reconstruction algorithm, in this case maximum likelihood expectation maximization (MLEM) used for dynamic cardiac single photon emission computed tomography, on Spark/GraphX. This involves porting the algorithm to run on large-scale parallel computing systems. Spark is an easy-to-program software platform that can handle large amounts of data in parallel. GraphX is a graph analytic system running on top of Spark to handle graph and sparse linear algebra operations in parallel. The main advantage of implementing the MLEM algorithm in Spark/GraphX is that it allows users to parallelize such computation without any expertise in parallel computing or prior knowledge in computer science. In this paper we demonstrate a successful implementation of MLEM in Spark/GraphX and present the performance gains, with the goal of eventually making it usable in a clinical setting. PMID:27081299
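The Spark/GraphX distribution is the point of the paper and is not shown here; the following single-node numpy sketch only spells out the MLEM update that would be parallelized, using a small synthetic system matrix and Poisson data as stand-ins for the SPECT model.

```python
import numpy as np

def mlem(A, y, n_iters=50, eps=1e-12):
    """Maximum-likelihood expectation maximization for emission tomography.

    A : (n_detectors, n_voxels) system matrix (detection probabilities)
    y : (n_detectors,) measured counts
    """
    x = np.ones(A.shape[1])             # non-negative initial image
    sens = A.sum(axis=0) + eps          # sensitivity (back-projection of ones)
    for _ in range(n_iters):
        proj = A @ x + eps              # forward projection of current estimate
        x *= (A.T @ (y / proj)) / sens  # multiplicative MLEM update
    return x

# Tiny synthetic example
rng = np.random.default_rng(0)
A = rng.random((200, 64))
x_true = rng.random(64)
y = rng.poisson(A @ x_true * 50) / 50.0
x_rec = mlem(A, y)
```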
Efficient accesses of data structures using processing near memory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jayasena, Nuwan S.; Zhang, Dong Ping; Diez, Paula Aguilera
Systems, apparatuses, and methods for implementing efficient queues and other data structures. A queue may be shared among multiple processors and/or threads without using explicit software atomic instructions to coordinate access to the queue. System software may allocate an atomic queue and corresponding queue metadata in system memory and return, to the requesting thread, a handle referencing the queue metadata. Any number of threads may utilize the handle for accessing the atomic queue. The logic for ensuring the atomicity of accesses to the atomic queue may reside in a management unit in the memory controller coupled to the memory where the atomic queue is allocated.
Data Envelopment Analysis and Its Application to the Measurement of Efficiency in Higher Education
ERIC Educational Resources Information Center
Johnes, Jill
2006-01-01
The purpose of this paper is to examine the possibility of measuring efficiency in the context of higher education. The paper begins by exploring the advantages and drawbacks of the various methods for measuring efficiency in the higher education context. The ease with which data envelopment analysis (DEA) can handle multiple inputs and multiple…
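As a concrete (and entirely illustrative) example of the kind of linear program behind DEA, the sketch below solves the input-oriented CCR model in multiplier form for each decision-making unit with scipy's linprog; the hypothetical departments, inputs and outputs are placeholders, not data from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y):
    """Input-oriented CCR efficiency scores (multiplier form).

    X : (n_units, n_inputs)  e.g. staff numbers, expenditure
    Y : (n_units, n_outputs) e.g. graduates, research income
    """
    n, m = X.shape
    s = Y.shape[1]
    scores = np.empty(n)
    for o in range(n):
        # Variables z = [u_1..u_s, v_1..v_m], all >= 0 (linprog default bounds)
        c = np.concatenate([-Y[o], np.zeros(m)])             # maximize u.y_o
        A_ub = np.hstack([Y, -X])                            # u.y_j - v.x_j <= 0
        b_ub = np.zeros(n)
        A_eq = np.concatenate([np.zeros(s), X[o]])[None, :]  # v.x_o = 1
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      method="highs")
        scores[o] = -res.fun                                 # efficiency in (0, 1]
    return scores

# Hypothetical data: 5 departments, 2 inputs, 2 outputs
X = np.array([[20, 300], [30, 400], [25, 350], [40, 600], [22, 310]], float)
Y = np.array([[60, 5], [70, 8], [80, 6], [100, 10], [50, 7]], float)
print(dea_ccr_efficiency(X, Y).round(3))
```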
Geometric multigrid for an implicit-time immersed boundary method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guy, Robert D.; Philip, Bobby; Griffith, Boyce E.
2014-10-12
The immersed boundary (IB) method is an approach to fluid-structure interaction that uses Lagrangian variables to describe the deformations and resulting forces of the structure and Eulerian variables to describe the motion and forces of the fluid. Explicit time stepping schemes for the IB method require solvers only for Eulerian equations, for which fast Cartesian grid solution methods are available. Such methods are relatively straightforward to develop and are widely used in practice but often require very small time steps to maintain stability. Implicit-time IB methods permit the stable use of large time steps, but efficient implementations of such methods require significantly more complex solvers that effectively treat both Lagrangian and Eulerian variables simultaneously. Moreover, several different approaches to solving the coupled Lagrangian-Eulerian equations have been proposed, but a complete understanding of this problem is still emerging. This paper presents a geometric multigrid method for an implicit-time discretization of the IB equations. This multigrid scheme uses a generalization of box relaxation that is shown to handle problems in which the physical stiffness of the structure is very large. Numerical examples are provided to illustrate the effectiveness and efficiency of the algorithms described herein. Finally, these tests show that using multigrid as a preconditioner for a Krylov method yields improvements in both robustness and efficiency as compared to using multigrid as a solver. They also demonstrate that with a time step 100–1000 times larger than that permitted by an explicit IB method, the multigrid-preconditioned implicit IB method is approximately 50–200 times more efficient than the explicit method.
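The box-relaxation scheme coupling Lagrangian and Eulerian unknowns is specific to the paper and not reproduced; the sketch below is only a bare-bones geometric multigrid V-cycle for a 1-D Poisson problem with weighted-Jacobi smoothing, meant to show the smooth/restrict/coarse-solve/prolong structure that such a preconditioner is assembled from. Grid sizes and cycle counts are arbitrary.

```python
import numpy as np

def residual(u, f, h):
    """Residual of -u'' = f on a uniform grid with Dirichlet boundaries."""
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / h**2
    return r

def jacobi(u, f, h, sweeps=3, omega=2.0 / 3.0):
    """Weighted-Jacobi smoothing sweeps (acts in place, returns u)."""
    for _ in range(sweeps):
        u[1:-1] += omega * 0.5 * (u[:-2] + u[2:] + h**2 * f[1:-1] - 2.0 * u[1:-1])
    return u

def v_cycle(u, f, h, levels):
    if levels == 1 or len(u) <= 3:
        return jacobi(u, f, h, sweeps=50)        # 'solve' the coarsest level
    u = jacobi(u, f, h)                          # pre-smoothing
    r = residual(u, f, h)
    r_c = r[::2].copy()                          # full-weighting restriction
    r_c[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    e_c = v_cycle(np.zeros_like(r_c), r_c, 2.0 * h, levels - 1)
    e = np.zeros_like(u)                         # linear prolongation
    e[::2] = e_c
    e[1::2] = 0.5 * (e_c[:-1] + e_c[1:])
    u += e                                       # coarse-grid correction
    return jacobi(u, f, h)                       # post-smoothing

# Model problem: -u'' = pi^2 sin(pi x), u(0) = u(1) = 0, exact u = sin(pi x)
n = 2**7 + 1
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)
u = np.zeros(n)
for _ in range(8):
    u = v_cycle(u, f, h, levels=5)
print("max error:", np.abs(u - np.sin(np.pi * x)).max())
```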
Fureix, Carole; Pagès, Magali; Bon, Richard; Lassalle, Jean-Michel; Kuntz, Philippe; Gonzalez, Georges
2009-10-01
Handling is a crucial component of the human-horse relationship. Here, we report data from an experiment conducted to assess and compare the effect of two training methods. Two groups of six Welsh mares were trained during four sessions of 50 min, one handled with traditional exercises (halter leading, grooming/brushing, lifting feet, lunging and pseudo-saddling (using only girth and saddle pad)) and the second group with natural horsemanship exercises (desensitization, yielding to body pressure, lunging and free-lunging). Emotional reactivity (ER) and the human-horse relationship (HHR) were assessed both prior to and following handling. A social isolation test, a neophobia test and a bridge test were used to assess ER. HHR was assessed through tests of spontaneous approach to, and forced approach by, an unknown human. Horses' ER decreased after both types of handling, as indicated by decreases in the occurrence of whinnying during stressful situations. Head movement (jerk/shake) was the variable most sensitive to handling type. In the spontaneous approach tests, horses in the traditional handling group showed higher latencies to approach a motionless person after handling than did the natural horsemanship group. Our study suggests that natural horsemanship exercises could be more efficient than traditional exercises for improving horses' HHR.
Tickling, a Technique for Inducing Positive Affect When Handling Rats.
Cloutier, Sylvie; LaFollette, Megan R; Gaskill, Brianna N; Panksepp, Jaak; Newberry, Ruth C
2018-05-08
Handling small animals such as rats can lead to several adverse effects. These include the fear of humans, resistance to handling, increased injury risk for both the animals and the hands of their handlers, decreased animal welfare, and less valid research data. To minimize negative effects on experimental results and human-animal relationships, research animals are often habituated to being handled. However, the methods of habituation are highly variable and often of limited effectiveness. More potently, it is possible for humans to mimic aspects of the animals' playful rough-and-tumble behavior during handling. When applied to laboratory rats in a systematic manner, this playful handling, referred to as tickling, consistently gives rise to positive behavioral responses. This article provides a detailed description of a standardized rat tickling technique. This method can contribute to future investigations into positive affective states in animals, make it easier to handle rats for common husbandry activities such as cage changing or medical/research procedures such as injection, and be implemented as a source of social enrichment. It is concluded that this method can be used to efficiently and practicably reduce rats' fearfulness of humans and improve their welfare, as well as reliably model positive affective states.
Pondiki, S; Stamatakis, A; Fragkouli, A; Philippidis, H; Stylianopoulou, F
2006-10-13
Neonatal handling is an early experience which results in improved function of the hypothalamic-pituitary-adrenal axis, increased adaptability and coping as a response to stress, as well as better cognitive abilities. In the present study, we investigated the effect of neonatal handling on the basal forebrain cholinergic system, since this system is known to play an important role in cognitive processes. We report that neonatal handling results in increased number of choline-acetyl transferase immunopositive cells in the septum/diagonal band, in both sexes, while no such effect was observed in the other cholinergic nuclei, such as the magnocellular preoptic nucleus and the nucleus basalis of Meynert. In addition, neonatal handling resulted in increased M1 and M2 muscarinic receptor binding sites in the cingulate and piriform cortex of both male and female rats. A handling-induced increase in M1 muscarinic receptor binding sites was also observed in the CA3 and CA4 (fields 3 and 4 of Ammon's horn) areas of the hippocampus. Furthermore, a handling-induced increase in acetylcholinesterase staining was found only in the hippocampus of females. Our results thus show that neonatal handling acts in a sexually dimorphic manner on one of the cholinergic parameters, and has a beneficial effect on BFCS function, which could be related to the more efficient and adaptive stress response and the superior cognitive abilities of handled animals.
Solar heating system final design package
NASA Technical Reports Server (NTRS)
1979-01-01
The system is composed of a warm air collector, a logic control unit and a universal switching and transport unit. The collector was originally conceived and designed as an integrated roof/wall system and therefore provides a dual function in the structure. The collector serves both as a solar energy conversion system and as a structural weather resistant skin. The control unit provides totally automatic control over the operation of the system. It receives input data from sensor probes in collectors, storage and living space. The logic was designed so as to make maximum use of solar energy and minimize use of conventional energy. The transport and switching unit is a high-efficiency air-handling system equipped with gear motor valves that respond to outputs from the control system. The fan unit was designed for maximum durability and efficiency in operation, and has permanently lubricated ball bearings and excellent air-handling efficiency.
Minimizing Postsampling Degradation of Peptides by a Thermal Benchtop Tissue Stabilization Method
Segerström, Lova; Gustavsson, Jenny
2016-01-01
Enzymatic degradation is a major concern in peptide analysis. Postmortem metabolism in biological samples entails considerable risk for measurements misrepresentative of true in vivo concentrations. It is therefore vital to find reliable, reproducible, and easy-to-use procedures to inhibit enzymatic activity in fresh tissues before subjecting them to qualitative and quantitative analyses. The aim of this study was to test a benchtop thermal stabilization method to optimize measurement of endogenous opioids in brain tissue. Endogenous opioid peptides are generated from precursor proteins through multiple enzymatic steps that include conversion of one bioactive peptide to another, often with a different function. Ex vivo metabolism may, therefore, lead to erroneous functional interpretations. The efficacy of heat stabilization was systematically evaluated in a number of postmortem handling procedures. Dynorphin B (DYNB), Leu-enkephalin-Arg6 (LARG), and Met-enkephalin-Arg6-Phe7 (MEAP) were measured by radioimmunoassay in rat hypothalamus, striatum (STR), and cingulate cortex (CCX). Also, simplified extraction protocols for stabilized tissue were tested. Stabilization affected all peptide levels to varying degrees compared to those prepared by standard dissection and tissue handling procedures. Stabilization increased DYNB in hypothalamus, but not STR or CCX, whereas LARG generally decreased. MEAP increased in hypothalamus after all stabilization procedures, whereas for STR and CCX, the effect was dependent on the time point for stabilization. The efficacy of stabilization allowed samples to be left for 2 hours in room temperature (20°C) without changes in peptide levels. This study shows that conductive heat transfer is an easy-to-use and efficient procedure for the preservation of the molecular composition in biological samples. Region- and peptide-specific critical steps were identified and stabilization enabled the optimization of tissue handling and opioid peptide analysis. The result is improved diagnostic and research value of the samples with great benefits for basic research and clinical work. PMID:27007059
Investigation into Cloud Computing for More Robust Automated Bulk Image Geoprocessing
NASA Technical Reports Server (NTRS)
Brown, Richard B.; Smoot, James C.; Underwood, Lauren; Armstrong, C. Duane
2012-01-01
Geospatial resource assessments frequently require timely geospatial data processing that involves large multivariate remote sensing data sets. In particular, for disasters, response requires rapid access to large data volumes, substantial storage space and high performance processing capability. The processing and distribution of this data into usable information products requires a processing pipeline that can efficiently manage the required storage, computing utilities, and data handling requirements. In recent years, with the availability of cloud computing technology, cloud processing platforms have made available a powerful new computing infrastructure resource that can meet this need. To assess the utility of this resource, this project investigates cloud computing platforms for bulk, automated geoprocessing capabilities with respect to data handling and application development requirements. This presentation is of work being conducted by the Applied Sciences Program Office at NASA Stennis Space Center. A prototypical set of image manipulation and transformation processes that incorporate sample Unmanned Airborne System data were developed to create value-added products and tested for implementation on the "cloud". This project outlines the steps involved in creating and testing open source software process code on a local prototype platform, and then transitioning this code with associated environment requirements into an analogous, but memory and processor enhanced, cloud platform. A data processing cloud was used to store both standard digital camera panchromatic and multi-band image data, which were subsequently subjected to standard image processing functions such as NDVI (Normalized Difference Vegetation Index), NDMI (Normalized Difference Moisture Index), band stacking, reprojection, and other similar data processes. Cloud infrastructure service providers were evaluated by taking these locally tested processing functions and then applying them to a given cloud-enabled infrastructure to assess and compare environment setup options and enabled technologies. This project reviews findings that were observed when cloud platforms were evaluated for bulk geoprocessing capabilities based on data handling and application development requirements.
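The cloud deployment itself is not something a short listing can capture; the following numpy sketch only shows the band-math products named above (NDVI and NDMI, plus simple band stacking) applied to hypothetical reflectance arrays standing in for the test imagery.

```python
import numpy as np

def normalized_difference(band_a, band_b, eps=1e-9):
    """Generic normalized-difference index: (a - b) / (a + b)."""
    a = band_a.astype(np.float64)
    b = band_b.astype(np.float64)
    return (a - b) / (a + b + eps)

# Hypothetical reflectance bands of a multi-band image (rows x cols)
rng = np.random.default_rng(0)
red, nir, swir = (rng.random((512, 512)) for _ in range(3))

ndvi = normalized_difference(nir, red)          # vegetation vigour
ndmi = normalized_difference(nir, swir)         # canopy/soil moisture
stack = np.stack([red, nir, swir, ndvi, ndmi])  # simple band stacking
```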
NASA Astrophysics Data System (ADS)
Kittelmann, Jörg; Radtke, Carsten P.; Waldbaur, Ansgar; Neumann, Christiane; Hubbuch, Jürgen; Rapp, Bastian E.
2014-03-01
Since its early days, microfluidics as a scientific discipline has been an interdisciplinary research field with a wide scope of potential applications. Besides tailored assays for point-of-care (PoC) diagnostics, microfluidics has been an important tool for large-scale screening of reagents and building blocks in organic chemistry, pharmaceutics and medical engineering. Furthermore, numerous potential marketable products have been described over the years. However, especially in industrial applications, microfluidics is often considered only an alternative technology for fluid handling, a field which is industrially mostly dominated by large-scale numerically controlled fluid and liquid handling stations. Numerous noteworthy products have dominated this field in the last decade and have inhibited the widespread application of microfluidics technology. However, automated liquid handling stations and microfluidics do not have to be considered as mutually exclusive approaches. We have recently introduced a hybrid fluidic platform combining an industrially established liquid handling station and a generic microfluidic interfacing module that allows probing a microfluidic system (such as an assay or a synthesis array) using the instrumentation provided by the liquid handling station. We term this technology "Microfluidic on Liquid Handling Stations (μF-on-LHS)" - a classical "best of both worlds" approach that allows combining the highly evolved, automated and industry-proven LHS systems with any type of microfluidic assay. In this paper we show, to the best of our knowledge, the first droplet microfluidics application on an industrial LHS using the μF-on-LHS concept.
Green Propellant Loading Demonstration at U.S. Range
NASA Technical Reports Server (NTRS)
Mulkey, Henry W.; Miller, Joseph T.; Bacha, Caitlin E.
2016-01-01
The Green Propellant Loading Demonstration (GPLD) was conducted December 2015 at Wallops Flight Facility (WFF), leveraging work performed over recent years to bring lower toxicity hydrazine replacement green propellants to flight missions. The objective of this collaboration between NASA Goddard Space Flight Center (GSFC), WFF, the Swedish National Space Board (SNSB), and Ecological Advanced Propulsion Systems (ECAPS) was to successfully accept LMP-103S propellant at a U.S. Range, store the propellant, and perform a simulated flight vehicle propellant loading. NASA GSFC Propulsion (Code 597) managed all aspects of the operation, handling logistics, preparing the procedures, and implementing the demonstration. In addition to the partnership described above, Moog Inc. developed an LMP-103S propellant-compatible titanium rolling diaphragm flight development tank and loaned it to GSFC to act as the GPLD flight vessel. The flight development tank offered the GPLD an additional level of flight-like propellant handling process and procedures. Moog Inc. also provided a compatible latching isolation valve for remote propellant expulsion. The GPLD operation, in concert with Moog Inc. executed a flight development tank expulsion efficiency performance test using LMP-103S propellant. As part of the demonstration work, GSFC and WFF documented Range safety analyses and practices including all elements of shipping, storage, handling, operations, decontamination, and disposal. LMP-103S has not been previously handled at a U.S. Launch Range. Requisite for this activity was an LMP-103S Risk Analysis Report and Ground Safety Plan. GSFC and WFF safety offices jointly developed safety documentation for application into the GPLD operation. The GPLD along with the GSFC Propulsion historical hydrazine loading experiences offer direct comparison between handling green propellant versus safety intensive, highly toxic hydrazine propellant. These described motives initiated the GPLD operation in order to investigate the handling and process safety variances in project resources between LMP-103S and typical in-space propellants. The GPLD risk reduction operation proved successful for many reasons including handling the green propellant at a U.S. Range, loading and pressurizing a flight-like tank, expelling the propellant, measuring the tank expulsion efficiency, and most significantly, GSFC propulsion personnel's new insight into the LMP-103S propellant handling details.
Photo-z-SQL: Integrated, flexible photometric redshift computation in a database
NASA Astrophysics Data System (ADS)
Beck, R.; Dobos, L.; Budavári, T.; Szalay, A. S.; Csabai, I.
2017-04-01
We present a flexible template-based photometric redshift estimation framework, implemented in C#, that can be seamlessly integrated into a SQL database (or DB) server and executed on-demand in SQL. The DB integration eliminates the need to move large photometric datasets outside a database for redshift estimation, and utilizes the computational capabilities of DB hardware. The code is able to perform both maximum likelihood and Bayesian estimation, and can handle inputs of variable photometric filter sets and corresponding broad-band magnitudes. It is possible to take into account the full covariance matrix between filters, and filter zero points can be empirically calibrated using measurements with given redshifts. The list of spectral templates and the prior can be specified flexibly, and the expensive synthetic magnitude computations are done via lazy evaluation, coupled with a caching of results. Parallel execution is fully supported. For large upcoming photometric surveys such as the LSST, the ability to perform in-place photo-z calculation would be a significant advantage. Also, the efficient handling of variable filter sets is a necessity for heterogeneous databases, for example the Hubble Source Catalog, and for cross-match services such as SkyQuery. We illustrate the performance of our code on two reference photo-z estimation testing datasets, and provide an analysis of execution time and scalability with respect to different configurations. The code is available for download at https://github.com/beckrob/Photo-z-SQL.
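The in-database C# implementation, covariance handling and Bayesian machinery are not reproduced here; the sketch below is a minimal maximum-likelihood photo-z estimate by chi-square grid search over a precomputed template-magnitude grid, with a free magnitude offset playing the role of the flux normalization. The array shapes and inputs are assumptions for the example.

```python
import numpy as np

def photo_z_ml(obs_mag, obs_err, template_grid, z_grid):
    """Maximum-likelihood photometric redshift by chi-square grid search.

    obs_mag, obs_err : (n_filters,) observed magnitudes and errors
    template_grid    : (n_templates, n_z, n_filters) synthetic magnitudes,
                       assumed to be precomputed per template and redshift
    z_grid           : (n_z,) redshift grid
    """
    # Flux normalisation handled in magnitude space as a free additive
    # offset, solved analytically for each template/redshift pair.
    w = 1.0 / obs_err**2
    diff = obs_mag - template_grid                        # broadcast to grid
    offset = (diff * w).sum(axis=-1, keepdims=True) / w.sum()
    chi2 = (w * (diff - offset)**2).sum(axis=-1)          # (n_templates, n_z)
    t_best, z_best = np.unravel_index(np.argmin(chi2), chi2.shape)
    return z_grid[z_best], t_best, chi2
```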
Juvenile Radio-Tag Study: Lower Granite Dam, 1985 Annual Report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stuehrenberg, Lowell C.
The concept of using mass releases of juvenile radio tags represents a new and potentially powerful research tool that could be effectively applied to juvenile salmonid passage problems at dams on the Columbia and Snake Rivers. A system of detector antennas, strategically located, would automatically detect and record individually tagged juvenile salmonids as they pass through the spillway, powerhouse, bypass system, or tailrace areas below the dam. Accurate measurements of spill effectiveness, fish guiding efficiency (FGE), collection efficiency (CE), spillway survival, powerhouse survival, and bypass survival would be possible without handling large numbers of unmarked fish. A prototype juvenile radio-tag system was developed and tested by the National Marine Fisheries Service (NMFS) and Bonneville Power Administration (BPA) at John Day Dam and at Lower Granite Dam. This report summarizes research to: (1) evaluate the effectiveness of the prototype juvenile radio-tag system in a field situation and (2) test the basic assumptions inherent in using the juvenile radio tag as a research tool.
A subgradient approach for constrained binary optimization via quantum adiabatic evolution
NASA Astrophysics Data System (ADS)
Karimi, Sahar; Ronagh, Pooya
2017-08-01
An outer approximation method has been proposed in the literature for solving the Lagrangian dual of a constrained binary quadratic programming problem via quantum adiabatic evolution. This should be an efficient prescription for solving the Lagrangian dual problem in the presence of an ideally noise-free quantum adiabatic system. However, current implementations of quantum annealing systems demand methods that are efficient at handling possible sources of noise. In this paper, we consider a subgradient method for finding an optimal primal-dual pair for the Lagrangian dual of a constrained binary polynomial programming problem. We then study the quadratic stable set (QSS) problem as a case study. We see that this method applied to the QSS problem can be viewed as an instance-dependent penalty-term approach that avoids large penalty coefficients. Finally, we report our experimental results of using the D-Wave 2X quantum annealer and conclude that our approach helps this quantum processor succeed more often in solving these problems compared to the usual penalty-term approaches.
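The quantum annealer is obviously not reproducible in a listing; in the sketch below it is replaced by a brute-force QUBO minimizer, and the part actually illustrated is the projected subgradient update on the Lagrange multipliers for a constrained binary quadratic problem. The toy instance, step-size rule and iteration count are assumptions, not the authors' settings.

```python
import numpy as np
from itertools import product

def solve_qubo(Q):
    """Stand-in for the quantum annealer: brute-force minimization of
    x^T Q x over binary vectors (only viable for tiny problems)."""
    n = Q.shape[0]
    best_x, best_val = None, np.inf
    for bits in product((0, 1), repeat=n):
        x = np.array(bits, dtype=float)
        val = x @ Q @ x
        if val < best_val:
            best_x, best_val = x, val
    return best_x

def lagrangian_dual(Q, A, b, n_iters=50, step0=1.0):
    """Projected subgradient ascent on the Lagrangian dual of
    min x^T Q x  subject to  A x <= b,  x binary."""
    lam = np.zeros(A.shape[0])
    for k in range(1, n_iters + 1):
        # The penalized objective is again a QUBO: x^T Q x + lam^T (A x - b),
        # since x_i^2 = x_i allows folding lam^T A into the diagonal of Q.
        Q_lam = Q + np.diag(lam @ A)
        x = solve_qubo(Q_lam)
        g = A @ x - b                                   # subgradient of the dual
        lam = np.maximum(0.0, lam + (step0 / k) * g)    # diminishing step, project >= 0
    return lam, x

# Hypothetical 6-variable instance with one knapsack-style constraint
rng = np.random.default_rng(3)
Q = rng.normal(size=(6, 6)); Q = (Q + Q.T) / 2
A = np.ones((1, 6)); b = np.array([3.0])
lam, x = lagrangian_dual(Q, A, b)
```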
pyRMSD: a Python package for efficient pairwise RMSD matrix calculation and handling.
Gil, Víctor A; Guallar, Víctor
2013-09-15
We introduce pyRMSD, an open source standalone Python package that aims at offering an integrative and efficient way of performing Root Mean Square Deviation (RMSD)-related calculations on large sets of structures. It is specially tuned to do fast collective RMSD calculations, such as pairwise RMSD matrices, implementing up to three well-known superposition algorithms. pyRMSD provides its own symmetric distance matrix class that, besides the fact that it can be used as a regular matrix, helps to save memory and increases memory access speed. This last feature can dramatically improve the overall performance of any Python algorithm using it. In addition, its extensibility, testing suites and documentation make it a good choice for those in need of a workbench for developing or testing new algorithms. The source code (under MIT license), installer, test suites and benchmarks can be found at https://pele.bsc.es/ under the tools section. Contact: victor.guallar@bsc.es. Supplementary data are available at Bioinformatics online.
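The snippet below is not the pyRMSD API; it is a plain numpy illustration of the two ingredients the package combines, a superposed (Kabsch) RMSD between two conformations and the packed upper-triangle storage used by a symmetric pairwise matrix, with the indexing formula written out explicitly.

```python
import numpy as np

def rmsd_superposed(P, Q):
    """Minimum RMSD between two (n_atoms, 3) conformations after optimal
    superposition (Kabsch algorithm, via SVD of the covariance matrix)."""
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    V, S, Wt = np.linalg.svd(P.T @ Q)
    S[-1] *= np.sign(np.linalg.det(V @ Wt))   # avoid improper rotations
    e0 = (P**2).sum() + (Q**2).sum()
    return np.sqrt(max(e0 - 2.0 * S.sum(), 0.0) / len(P))

def condensed_index(i, j, n):
    """Position of element (i, j), i < j, in the packed upper triangle."""
    return n * i - i * (i + 1) // 2 + (j - i - 1)

def pairwise_rmsd_matrix(trajectory):
    """Packed pairwise RMSD matrix of a (n_frames, n_atoms, 3) trajectory."""
    n = len(trajectory)
    condensed = np.empty(n * (n - 1) // 2)
    for i in range(n - 1):
        for j in range(i + 1, n):
            condensed[condensed_index(i, j, n)] = rmsd_superposed(
                trajectory[i], trajectory[j])
    return condensed
```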
A Technical Survey on Optimization of Processing Geo Distributed Data
NASA Astrophysics Data System (ADS)
Naga Malleswari, T. Y. J.; Ushasukhanya, S.; Nithyakalyani, A.; Girija, S.
2018-04-01
With growing cloud services and technology, there is growth in geographically distributed data centers that store large amounts of data. Analysis of geo-distributed data is required in various services for data processing, storage of essential information, etc.; processing this geo-distributed data and performing analytics on it is a challenging task. Distributed data processing is accompanied by issues in storage, computation and communication. The key issues to be dealt with are time efficiency, cost minimization and utility maximization. This paper describes various optimization methods like end-to-end multiphase, G-MR, etc., using techniques such as MapReduce, CDS (Community Detection based Scheduling), ROUT, Workload-Aware Scheduling, SAGE and AMP (Ant Colony Optimization) to handle these issues. In this paper the various optimization methods and techniques used are analyzed. It has been observed that end-to-end multiphase achieves time efficiency; cost minimization concentrates on achieving Quality of Service and on reducing computation and communication cost. SAGE achieves performance improvement in processing geo-distributed data sets.
MedBlock: Efficient and Secure Medical Data Sharing Via Blockchain.
Fan, Kai; Wang, Shangyang; Ren, Yanhui; Li, Hui; Yang, Yintang
2018-06-21
With the development of electronic information technology, electronic medical records (EMRs) have become a common way to store patients' data in hospitals. They are stored in different hospitals' databases, even for the same patient. Therefore, it is difficult to construct a summarized EMR for one patient from multiple hospital databases due to security and privacy concerns. Meanwhile, current EMR systems lack a standard data management and sharing policy, making it difficult for pharmaceutical scientists to develop precise medicines based on data obtained under different policies. To solve the above problems, we propose a blockchain-based information management system, MedBlock, to handle patients' information. In this scheme, the distributed ledger of MedBlock allows efficient EMR access and retrieval. The improved consensus mechanism achieves consensus on EMRs without large energy consumption and network congestion. In addition, MedBlock also exhibits high information security by combining customized access control protocols and symmetric cryptography. MedBlock can play an important role in sensitive medical information sharing.
NASA Astrophysics Data System (ADS)
Blanco, K.; Aponte, H.; Vera, E.
2017-12-01
Across the industrial sector it is important to extend the useful life of the materials used in process equipment. CaCO3 scale is common in situations where fluids with high ion concentrations are handled, especially at elevated temperatures and dissolved CO2 concentrations; such scale generates large annual losses through reduced process efficiency and under-deposit corrosion damage, among other effects. In order to find new alternatives to this problem, citric acid was evaluated as a calcium carbonate scale inhibitor under critical conditions of temperature and dissolved CO2 concentration. Once the results were obtained, a statistical evaluation was carried out in order to generate an equation describing this behaviour; the results show good inhibition efficiency under the conditions evaluated. The scale products obtained were characterized through scanning electron microscopy.
A self-synchronized high speed computational ghost imaging system: A leap towards dynamic capturing
NASA Astrophysics Data System (ADS)
Suo, Jinli; Bian, Liheng; Xiao, Yudong; Wang, Yongjin; Zhang, Lei; Dai, Qionghai
2015-11-01
High quality computational ghost imaging needs to acquire a large number of correlated measurements between the to-be-imaged scene and different reference patterns, so ultra-high speed data acquisition is of crucial importance in real applications. To raise the acquisition efficiency, this paper reports a high speed computational ghost imaging system using a 20 kHz spatial light modulator together with a 2 MHz photodiode. Technically, the synchronization between such high frequency illumination and the bucket detector needs nanosecond trigger precision, so the development of the synchronization module is quite challenging. To handle this problem, we propose a simple and effective computational self-synchronization scheme by building a general mathematical model and introducing a high precision synchronization technique. The resulting acquisition is around 14 times faster than the state of the art, and takes an important step towards ghost imaging of dynamic scenes. Besides, the proposed scheme is a general approach with high flexibility for readily incorporating other illuminators and detectors.
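The self-synchronization hardware is the contribution of the paper and cannot be shown in code; the sketch below only illustrates the downstream reconstruction step of computational ghost imaging, correlating simulated binary illumination patterns with the corresponding bucket-detector values. All data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
h, w, n_meas = 32, 32, 4000

scene = np.zeros((h, w))
scene[8:24, 8:24] = 1.0                       # synthetic to-be-imaged object

# Random binary illumination patterns (what the modulator would display)
patterns = rng.integers(0, 2, size=(n_meas, h, w)).astype(float)

# Bucket detector: total light collected per pattern
bucket = (patterns * scene).sum(axis=(1, 2))

# Correlation reconstruction: <B * P> - <B><P>
recon = (bucket[:, None, None] * patterns).mean(axis=0) \
        - bucket.mean() * patterns.mean(axis=0)
```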
DOE Office of Scientific and Technical Information (OSTI.GOV)
Inman, Jeffrey; Bonnie, David; Broomfield, Matthew
There is a sea (mar is Spanish for sea) of data out there that needs to be handled efficiently. Object stores are filling the gap of managing large amounts of data efficiently. However, in many cases, and in our HPC case in particular, we need a traditional file (POSIX) interface to this data, as HPC I/O models have not moved to object interfaces such as Amazon S3, CDMI, etc. Eventually object store providers may deliver file interfaces to their object stores, but at this point those interfaces are not ready to do the job that we need done. MarFS will glue together two existing scalable components: a file system's scalable metadata component that provides the file interface; and existing scalable object stores (from one or more providers). There will be utilities to do work that is not critical to be done in real time so that MarFS can manage the space used by objects and allocated to individual users.
Nanoparticle separation with a miniaturized asymmetrical flow field-flow fractionation cartridge
Müller, David; Cattaneo, Stefano; Meier, Florian; Welz, Roland; de Mello, Andrew J.
2015-01-01
Asymmetrical Flow Field-Flow Fractionation (AF4) is a separation technique applicable to particles over a wide size range. Despite the many advantages of AF4, its adoption in routine particle analysis is somewhat limited by the large footprint of currently available separation cartridges, extended analysis times and significant solvent consumption. To address these issues, we describe the fabrication and characterization of miniaturized AF4 cartridges. Key features of the down-scaled platform include simplified cartridge and reagent handling, reduced analysis costs and higher throughput capacities. The separation performance of the miniaturized cartridge is assessed using certified gold and silver nanoparticle standards. Analysis of gold nanoparticle populations indicates shorter analysis times and increased sensitivity compared to conventional AF4 separation schemes. Moreover, nanoparticulate titanium dioxide populations exhibiting broad size distributions are analyzed in a rapid and efficient manner. Finally, the repeatability and reproducibility of the miniaturized platform are investigated with respect to analysis time and separation efficiency. PMID:26258119
A compressible multiphase framework for simulating supersonic atomization
NASA Astrophysics Data System (ADS)
Regele, Jonathan D.; Garrick, Daniel P.; Hosseinzadeh-Nik, Zahra; Aslani, Mohamad; Owkes, Mark
2016-11-01
The study of atomization in supersonic combustors is critical in designing efficient and high performance scramjets. Numerical methods incorporating surface tension effects have largely focused on the incompressible regime as most atomization applications occur at low Mach numbers. Simulating surface tension effects in high speed compressible flow requires robust numerical methods that can handle discontinuities caused by both material interfaces and shocks. A shock capturing/diffused interface method is developed to simulate high-speed compressible gas-liquid flows with surface tension effects using the five-equation model. This includes developments that account for the interfacial pressure jump that occurs in the presence of surface tension. A simple and efficient method for computing local interface curvature is developed and an acoustic non-dimensional scaling for the surface tension force is proposed. The method successfully captures a variety of droplet breakup modes over a range of Weber numbers and demonstrates the impact of surface tension in countering droplet deformation in both subsonic and supersonic cross flows.
Resistive Plate Chambers as thermal neutron detectors
NASA Astrophysics Data System (ADS)
Abbrescia, M.; Mongelli, T.; Paticchio, V.; Ranieri, A.; Trentadue, R.
2003-09-01
We present a construction procedure suitable for making Resistive Plate Chamber detectors sensitive also to thermal neutrons. This procedure, consisting in coating the inner surface of one of the RPC Bakelite electrodes with a mixture of linseed oil and Gd2O3, is very simple, cheap, and suitable to be employed for industrial, medical or de-mining applications. Here the results of extensive tests aimed at assessing the performance of two Gd-RPC prototypes are shown. While the detection efficiency to thermal neutrons for a standard, non-Gd-coated RPC turns out to be about 0.1%, Gd-RPCs reach, in stand-alone mode, absolute efficiencies of about 10%, and, when two of these detectors are coupled together, more than 15%. In addition, RPCs have excellent time resolution and good imaging performance. This new type of position-sensitive gas detector can be operated at atmospheric pressure, is lightweight, has low γ-ray sensitivity, and is easy to build and handle even when large areas are to be covered.
NASA Astrophysics Data System (ADS)
Authier-Martin, Monique
Dustiness of calcined alumina is a major concern, causing undesirable working conditions and serious alumina losses. These losses occur primarily during unloading and handling or pot loading and crust breaking. The handling side of the problem is first addressed. The Perra pulvimeter constitutes a simple and reproducible tool to quantify handling dustiness and yields results in agreement with plant experience. Attempts are made to correlate dustiness with bulk properties (particle size, attrition index, …) for a large number of diverse aluminas. The characterization of the dust generated with the Perra pulvimeter is most revealing. The effect of the addition of E.S.P. dust is also reported.
Biomass bale stack and field outlet locations assessment for efficient infield logistics
USDA-ARS?s Scientific Manuscript database
Harvested hay or biomass is traditionally baled for better handling and transported to an outlet for final utilization. For better management of bale logistics, producers often aggregate bales into stacks so that bale-hauling equipment can haul multiple bales for improved efficiency. Obje...
Pump Propels Liquid And Gas Separately
NASA Technical Reports Server (NTRS)
Harvey, Andrew; Demler, Roger
1993-01-01
Design for pump that handles mixtures of liquid and gas efficiently. Containing only one rotor, pump is combination of centrifuge, pitot pump, and blower. Applications include turbomachinery in powerplants and superchargers in automobile engines. Efficiencies lower than those achieved in separate components. Nevertheless, design is practical and results in low consumption of power.
ERIC Educational Resources Information Center
Research for Better Schools, Inc., Philadelphia, PA.
The process for providing a "thorough and efficient" (T & E) education according to New Jersey statutes and regulations involves six basic steps. This document suggests procedures for handling the fifth step, educational program evaluation. Processes discussed include committee formation, evaluation planning, action plan…
Sideband Algorithm for Automatic Wind Turbine Gearbox Fault Detection and Diagnosis: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zappala, D.; Tavner, P.; Crabtree, C.
2013-01-01
Improving the availability of wind turbines (WT) is critical to minimize the cost of wind energy, especially for offshore installations. As gearbox downtime has a significant impact on WT availability, the development of reliable and cost-effective gearbox condition monitoring systems (CMS) is of great concern to the wind industry. Timely detection and diagnosis of developing gear defects within a gearbox is an essential part of minimizing unplanned downtime of wind turbines. Monitoring signals from WT gearboxes are highly non-stationary as turbine load and speed vary continuously with time. Time-consuming and costly manual handling of large amounts of monitoring data represents one of the main limitations of most current CMSs, so automated algorithms are required. This paper presents a fault detection algorithm for incorporation into a commercial CMS for automatic gear fault detection and diagnosis. The algorithm allowed the assessment of gear fault severity by tracking progressive gear tooth damage during variable speed and load operating conditions of the test rig. Results show that the proposed technique proves efficient and reliable for detecting gear damage. Once implemented into WT CMSs, this algorithm can automate data interpretation, reducing the quantity of information that WT operators must handle.
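The abstract does not spell out the sideband algorithm itself; a common way to quantify developing tooth damage is to compare the spectral energy of sidebands spaced at the shaft rotational frequency around the gear-mesh frequency with the mesh amplitude. The Python sketch below illustrates only that generic idea; the signal, frequencies, and tolerance are hypothetical and this is not the paper's algorithm.

```python
import numpy as np

def sideband_ratio(signal, fs, f_mesh, f_shaft, n_sidebands=5, tol_hz=0.5):
    """Sum spectral amplitude at f_mesh +/- k*f_shaft (k = 1..n_sidebands).

    A growing sideband-to-mesh ratio is a common indicator of gear tooth damage.
    All parameters here are illustrative, not taken from the paper.
    """
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)

    def band_amplitude(f0):
        mask = np.abs(freqs - f0) <= tol_hz
        return spectrum[mask].max() if mask.any() else 0.0

    mesh_amp = band_amplitude(f_mesh)
    sb_amp = sum(band_amplitude(f_mesh + k * f_shaft) + band_amplitude(f_mesh - k * f_shaft)
                 for k in range(1, n_sidebands + 1))
    return sb_amp / mesh_amp if mesh_amp > 0 else float("inf")

# Hypothetical usage: 10 s of vibration sampled at 20 kHz,
# gear-mesh frequency 600 Hz, shaft frequency 20 Hz.
fs = 20_000
t = np.arange(0, 10, 1 / fs)
vibration = np.sin(2 * np.pi * 600 * t) + 0.1 * np.sin(2 * np.pi * 620 * t)
print(sideband_ratio(vibration, fs, f_mesh=600, f_shaft=20))
```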
Shelf life modelling for first-expired-first-out warehouse management
Hertog, Maarten L. A. T. M.; Uysal, Ismail; McCarthy, Ultan; Verlinden, Bert M.; Nicolaï, Bart M.
2014-01-01
In the supply chain of perishable food products, large losses are incurred between farm and fork. Given the limited land resources and an ever-growing population, the food supply chain is faced with the challenge of increasing its handling efficiency and minimizing post-harvest food losses. Huge value can be added by optimizing warehouse management systems, taking into account the estimated remaining shelf life of the product, and matching it to the requirements of the subsequent part of the handling chain. This contribution focuses on how model approaches estimating quality changes and remaining shelf life can be combined in optimizing first-expired-first-out cold chain management strategies for perishable products. To this end, shelf-life-related performance indicators are used to introduce remaining shelf life and product quality in the cost function when optimizing the supply chain. A combinatorial exhaustive-search algorithm is shown to be feasible as the complexity of the optimization problem is sufficiently low for the size and properties of a typical commercial cold chain. The estimated shelf life distances for a particular batch can thus be taken as a guide to optimize logistics. PMID:24797134
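As a toy illustration of the exhaustive-search idea under assumed data, the sketch below assigns batches with estimated remaining shelf life to delivery routes of known duration, heavily penalizing any batch that would expire in transit and otherwise minimizing leftover shelf life at delivery (a first-expired-first-out style indicator). The batches, routes, and cost weights are hypothetical.

```python
from itertools import permutations

# Hypothetical batches: estimated remaining shelf life (days) at dispatch.
batches = {"B1": 4.0, "B2": 7.5, "B3": 10.0}
# Hypothetical delivery routes: days from warehouse to retail shelf.
routes = {"R_near": 1.0, "R_mid": 3.0, "R_far": 6.0}

def assignment_cost(assignment):
    """Penalize expiry in transit heavily; otherwise minimize leftover shelf life
    at delivery (ship the shortest-lived batches on the shortest routes last needed)."""
    cost = 0.0
    for batch, route in assignment:
        remaining = batches[batch] - routes[route]
        cost += 1e6 if remaining < 0 else remaining
    return cost

# Exhaustive search over all batch-to-route assignments (feasible at commercial scale
# according to the abstract; trivially so for this toy example).
best = min(
    (list(zip(batches, perm)) for perm in permutations(routes)),
    key=assignment_cost,
)
print(best, assignment_cost(best))
```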
Constructing linkage maps in the genomics era with MapDisto 2.0.
Heffelfinger, Christopher; Fragoso, Christopher A; Lorieux, Mathias
2017-07-15
Genotyping by sequencing (GBS) generates datasets that are challenging to handle by current genetic mapping software with graphical interface. Geneticists need new user-friendly computer programs that can analyze GBS data on desktop computers. This requires improvements in computation efficiency, both in terms of speed and use of random-access memory (RAM). MapDisto v.2.0 is a user-friendly computer program for construction of genetic linkage maps. It includes several new major features: (i) handling of very large genotyping datasets like the ones generated by GBS; (ii) direct importation and conversion of Variant Call Format (VCF) files; (iii) detection of linkage, i.e. construction of linkage groups in case of segregation distortion; (iv) data imputation on VCF files using a new approach, called LB-Impute. Features i to iv operate through inclusion of new Java modules that are used transparently by MapDisto; (v) QTL detection via a new R/qtl graphical interface. The program is available free of charge at mapdisto.free.fr. mapdisto@gmail.com. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
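MapDisto performs VCF importation and conversion internally; purely as an illustration of the kind of conversion involved, the sketch below turns the GT subfield of a VCF into A/B/H genotype codes for a biparental population. The column layout assumptions and coding conventions are simplified and are not MapDisto's implementation.

```python
GT_CODE = {"0/0": "A", "1/1": "B", "0/1": "H", "1/0": "H",
           "0|0": "A", "1|1": "B", "0|1": "H", "1|0": "H"}

def vcf_to_codes(path):
    """Yield (marker_id, [A/B/H/- codes]) per site of a VCF file.

    Simplified illustration: assumes biallelic sites and that the GT subfield
    comes first in each sample column.
    """
    with open(path) as handle:
        for line in handle:
            if line.startswith("#"):
                continue  # skip meta-information and header lines
            fields = line.rstrip("\n").split("\t")
            chrom, pos, marker = fields[0], fields[1], fields[2]
            samples = fields[9:]
            gts = [s.split(":")[0] for s in samples]
            codes = [GT_CODE.get(gt, "-") for gt in gts]
            yield (marker if marker != "." else f"{chrom}_{pos}", codes)

# Hypothetical usage:
# for marker, codes in vcf_to_codes("population.vcf"):
#     print(marker, "".join(codes))
```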
Handling e-waste in developed and developing countries: initiatives, practices, and consequences.
Sthiannopkao, Suthipong; Wong, Ming Hung
2013-10-01
Discarded electronic goods contain a range of toxic materials requiring special handling. Developed countries have conventions, directives, and laws to regulate their disposal, most based on extended producer responsibility. Manufacturers take back items collected by retailers and local governments for safe destruction or recovery of materials. Compliance, however, is difficult to assure, and frequently runs against economic incentives. The expense of proper disposal leads to the shipment of large amounts of e-waste to China, India, Pakistan, Nigeria, and other developing countries. Shipment is often through middlemen, and under tariff classifications that make quantities difficult to assess. There, despite the intents of national regulations and hazardous waste laws, most e-waste is treated as general refuse, or crudely processed, often by burning or acid baths, with recovery of only a few materials of value. As dioxins, furans, and heavy metals are released, harm to the environment, workers, and area residents is inevitable. The faster growth of e-waste generated in the developing than in the developed world presages continued expansion of a pervasive and inexpensive informal processing sector, efficient in its own way, but inherently hazard-ridden. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Furuichi, Mikito; Nishiura, Daisuke
2017-10-01
We developed dynamic load-balancing algorithms for Particle Simulation Methods (PSM) involving short-range interactions, such as Smoothed Particle Hydrodynamics (SPH), the Moving Particle Semi-implicit method (MPS), and the Discrete Element Method (DEM). These are needed to handle the billions of particles modeled in large distributed-memory computer systems. Our method utilizes flexible orthogonal domain decomposition, allowing the sub-domain boundaries in a column to differ for each row. The imbalances in execution time between parallel logical processes are treated as a nonlinear residual. Load-balancing is achieved by minimizing this residual within the framework of an iterative nonlinear solver, combined with a multigrid technique in the local smoother. Our iterative method is suitable for adjusting the sub-domains frequently by monitoring the performance of each computational process, because it is computationally cheaper in terms of communication and memory costs than non-iterative methods. Numerical tests demonstrated the ability of our approach to handle workload imbalances arising from a non-uniform particle distribution, differences in particle types, or heterogeneous computer architecture, which was difficult with previously proposed methods. We analyzed the parallel efficiency and scalability of our method using the Earth Simulator and K computer supercomputer systems.
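The paper's solver treats the load imbalance as a nonlinear residual minimized with a multigrid-accelerated iterative scheme; the sketch below is a much simpler one-dimensional illustration of the same iterative idea, nudging interior sub-domain boundaries toward equal particle counts (a proxy for execution time). The particle distribution, relaxation factor, and step scaling are hypothetical.

```python
import numpy as np

def rebalance_1d(boundaries, positions, relax=0.5, iters=20):
    """Iteratively shift interior sub-domain boundaries so each sub-domain
    holds roughly the same number of particles (a proxy for per-process work)."""
    b = np.asarray(boundaries, dtype=float).copy()
    target = len(positions) / (len(b) - 1)
    for _ in range(iters):
        counts, _ = np.histogram(positions, bins=b)
        for i in range(1, len(b) - 1):
            # Residual at this boundary: imbalance between its two neighbors.
            imbalance = (counts[i - 1] - counts[i]) / max(target, 1.0)
            width = min(b[i] - b[i - 1], b[i + 1] - b[i])
            # Move toward the heavier neighbor, clipped so boundaries never cross.
            step = relax * imbalance * 0.1 * width
            b[i] -= np.clip(step, -0.4 * width, 0.4 * width)
    return b

# Hypothetical: 4 processes over [0, 1] with a clustered particle distribution.
rng = np.random.default_rng(0)
particles = np.concatenate([rng.uniform(0.0, 0.3, 8000), rng.uniform(0.3, 1.0, 2000)])
print(rebalance_1d(np.linspace(0.0, 1.0, 5), particles))
```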
The National Spallation Neutron Source Target Station.
NASA Astrophysics Data System (ADS)
Gabriel, T. A.
1997-05-01
The technologies that are being utilized to design and build a state-of-the-art high powered (>= 1 MW), short pulsed (<= 1 μsec), and reliable spallation neutron source target station are discussed. The protons which directly and indirectly produce the neutrons will be obtained from a 1 GeV proton accelerator composed of an ion gun, rfq, linac, and storage ring. Many scientific and technical disciplines are required to produce a successful target station. These disciplines include engineering, remote handling, neutronics, materials, thermal hydraulics, shock analysis, etc. In the areas of engineering and remote handling special emphasis is being given to rapid and efficient assembly and disassembly of critical parts of the target station. In the neutronics area, emphasis is being given to neutron yield and pulse optimization from the moderators, and heating and activation rates throughout the station. Development of structural materials to withstand aggressive radiation environments and that are compatible with other materials is also an important area. Thermal hydraulics and shock analysis are being closely studied since large amounts of energy are being deposited in small volumes in relatively short time periods (< 1 μsec). These areas will be expanded upon in the paper.
Ko, K Y; Ahn, D U
2007-02-01
The objective of this study was to develop an economical, simple, and large-scale separation method for IgY from egg yolk. Egg yolk diluted with 9 volumes of cold water was centrifuged after adjusting the pH to 5.0. The supernatant was added with 0.01% charcoal or 0.01% carrageenan and centrifuged at 2,800 x g for 30 min. The supernatant was filtered through a Whatman no. 1 filter paper and then the filtrate was concentrated to 20% original volume using ultrafiltration. The concentrated solution was further purified using either cation exchange chromatography or ammonium sulfate precipitation. For the cation exchange chromatography method, the concentrated sample was loaded onto a column equilibrated with 20 mM citrate-phosphate buffer at pH 4.8 and eluted with 200 mM citrate-phosphate buffer at pH 6.4. For the ammonium sulfate precipitation method, the concentrated sample was twice precipitated with 40% ammonium sulfate solution at pH 9.0. The yield and purity of IgY were determined by ELISA and electrophoresis. The yield of IgY from the cation exchange chromatography method was 30 to 40%, whereas that of the ammonium sulfate precipitation was 70 to 80%. The purity of IgY from the ammonium sulfate method was higher than that of the cation exchange chromatography. The cation exchange chromatography could handle only a small amount of samples, whereas the ammonium sulfate precipitation could handle a large volume of samples. This suggests that ammonium sulfate precipitation was a more efficient and useful purification method than cation exchange chromatography for the large-scale preparation of IgY from egg yolk.
Scalable 96-well Plate Based iPSC Culture and Production Using a Robotic Liquid Handling System.
Conway, Michael K; Gerger, Michael J; Balay, Erin E; O'Connell, Rachel; Hanson, Seth; Daily, Neil J; Wakatsuki, Tetsuro
2015-05-14
Continued advancement in pluripotent stem cell culture is closing the gap between bench and bedside for using these cells in regenerative medicine, drug discovery and safety testing. In order to produce stem cell derived biopharmaceutics and cells for tissue engineering and transplantation, a cost-effective cell-manufacturing technology is essential. Maintenance of pluripotency and stable performance of cells in downstream applications (e.g., cell differentiation) over time is paramount to large scale cell production. Yet this can be difficult to achieve, especially if cells are cultured manually, where the operator can introduce significant variability and scale-up can be prohibitively expensive. To enable high-throughput, large-scale stem cell production and remove operator influence, novel stem cell culture protocols using a bench-top multi-channel liquid handling robot were developed that require minimal technician involvement or experience. With these protocols, human induced pluripotent stem cells (iPSCs) were cultured in feeder-free conditions directly from a frozen stock and maintained in 96-well plates. Depending on cell line and desired scale-up rate, the operator can easily determine when to passage based on a series of images showing the optimal colony densities for splitting. Then the necessary reagents are prepared to perform a colony split to new plates without a centrifugation step. After 20 passages (~3 months), two iPSC lines maintained stable karyotypes, expressed stem cell markers, and differentiated into cardiomyocytes with high efficiency. The system can perform subsequent high-throughput screening of new differentiation protocols or genetic manipulation designed for 96-well plates. This technology will reduce the labor and technical burden to produce large numbers of identical stem cells for a myriad of applications.
Multitarget-multisensor management for decentralized sensor networks
NASA Astrophysics Data System (ADS)
Tharmarasa, R.; Kirubarajan, T.; Sinha, A.; Hernandez, M. L.
2006-05-01
In this paper, we consider the problem of sensor resource management in decentralized tracking systems. Due to the availability of cheap sensors, it is possible to use a large number of sensors and a few fusion centers (FCs) to monitor a large surveillance region. Even though a large number of sensors are available, due to frequency, power and other physical limitations, only a few of them can be active at any one time. The problem is then to select sensor subsets that should be used by each FC at each sampling time in order to optimize the tracking performance subject to their operational constraints. In a recent paper, we proposed an algorithm to handle the above issues for joint detection and tracking, without using simplistic clustering techniques that are standard in the literature. However, in that paper, a hierarchical architecture with feedback at every sampling time was considered, and the sensor management was performed only at a central fusion center (CFC). In general, however, it is not possible to communicate with the CFC at every sampling time, and in many cases there may not even be a CFC. Communication between the CFC and local fusion centers might also fail. Therefore, performing sensor management only at the CFC is not viable in most networks. In this paper, we consider an architecture in which there is no CFC, each FC communicates only with the neighboring FCs, and communications are restricted. In this case, each FC has to decide which sensors it should use at each measurement time step. We propose an efficient algorithm to handle the above problem in real time. Simulation results illustrating the performance of the proposed algorithm are also presented.
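The abstract does not describe the proposed selection algorithm in detail; as a generic stand-in, the sketch below performs greedy sensor subset selection at a single fusion center under a fixed budget, using a toy information proxy (inverse squared sensor-target distance). Positions and the budget are hypothetical, and this is not the authors' algorithm.

```python
def greedy_select(sensors, targets, budget):
    """Pick up to `budget` sensors that maximize a simple information proxy:
    the sum over targets of the best inverse-squared distance among chosen sensors."""
    def gain(chosen):
        total = 0.0
        for tx, ty in targets:
            best = 0.0
            for sx, sy in chosen:
                d2 = (sx - tx) ** 2 + (sy - ty) ** 2
                best = max(best, 1.0 / (1.0 + d2))
            total += best
        return total

    chosen, remaining = [], list(sensors)
    while remaining and len(chosen) < budget:
        # Add the sensor giving the largest marginal gain (greedy step).
        candidate = max(remaining, key=lambda s: gain(chosen + [s]))
        chosen.append(candidate)
        remaining.remove(candidate)
    return chosen

# Hypothetical local fusion center: 6 cheap sensors, 2 tracked targets, 3 active slots.
sensors = [(0, 0), (1, 0), (2, 0), (0, 2), (1, 2), (2, 2)]
targets = [(0.5, 0.5), (1.8, 1.6)]
print(greedy_select(sensors, targets, budget=3))
```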
On the Modeling of Shells in Multibody Dynamics
NASA Technical Reports Server (NTRS)
Bauchau, Olivier A.; Choi, Jou-Young; Bottasso, Carlo L.
2000-01-01
Energy preserving/decaying schemes are presented for the simulation of nonlinear multibody systems involving shell components. The proposed schemes are designed to meet four specific requirements: unconditional nonlinear stability of the scheme, a rigorous treatment of both geometric and material nonlinearities, exact satisfaction of the constraints, and the presence of high-frequency numerical dissipation. The kinematic nonlinearities associated with arbitrarily large displacements and rotations of shells are treated in a rigorous manner, and the material nonlinearities can be handled when the constitutive laws stem from the existence of a strain energy density function. The efficiency and robustness of the proposed approach are illustrated with specific numerical examples that also demonstrate the need for integration schemes possessing high-frequency numerical dissipation.
Testing for entanglement with periodic coarse graining
NASA Astrophysics Data System (ADS)
Tasca, D. S.; Rudnicki, Łukasz; Aspden, R. S.; Padgett, M. J.; Souto Ribeiro, P. H.; Walborn, S. P.
2018-04-01
Continuous-variable systems find valuable applications in quantum information processing. To deal with an infinite-dimensional Hilbert space, one in general has to handle large numbers of discretized measurements in tasks such as entanglement detection. Here we employ the continuous transverse spatial variables of photon pairs to experimentally demonstrate entanglement criteria based on a periodic structure of coarse-grained measurements. The periodization of the measurements allows an efficient evaluation of entanglement using spatial masks acting as mode analyzers over the entire transverse field distribution of the photons and without the need to reconstruct the probability densities of the conjugate continuous variables. Our experimental results demonstrate the utility of the derived criteria with a success rate in entanglement detection of ~60% relative to 7344 studied cases.
Consistent Chemical Mechanism from Collaborative Data Processing
Slavinskaya, Nadezda; Starcke, Jan-Hendrik; Abbasi, Mehdi; ...
2016-04-01
The Process Informatics Model (PrIMe) numerical tool is a mathematically rigorous and numerically efficient approach for the analysis and optimization of chemical systems. It handles heterogeneous data and is scalable to a large number of parameters. The Bound-to-Bound Data Collaboration module of the automated data-centric infrastructure of PrIMe was used for systematic uncertainty and data consistency analyses of the H2/CO reaction model (73/17) and 94 experimental targets (ignition delay times). An empirical rule for the evaluation of shock tube experimental data is proposed. The initial results demonstrate clear benefits of the PrIMe methods for evaluating kinetic data quality and data consistency and for developing predictive kinetic models.
Sahn, James J; Granger, Brett A; Martin, Stephen F
2014-10-21
A strategy for generating diverse collections of small molecules has been developed that features a multicomponent assembly process (MCAP) to efficiently construct a variety of intermediates possessing an aryl aminomethyl subunit. These key compounds are then transformed via selective ring-forming reactions into heterocyclic scaffolds, each of which possesses suitable functional handles for further derivatizations and palladium-catalyzed cross coupling reactions. The modular nature of this approach enables the facile construction of libraries of polycyclic compounds bearing a broad range of substituents and substitution patterns for biological evaluation. Screening of several compound libraries thus produced has revealed a large subset of compounds that exhibit a broad spectrum of medicinally-relevant activities.
Capability 9.3 Assembly and Deployment
NASA Technical Reports Server (NTRS)
Dorsey, John
2005-01-01
Large space systems are required for a range of operational, commercial, and scientific mission objectives; however, current launch vehicle capacities substantially limit the size of space systems (on-orbit or planetary). Assembly and Deployment is the process of constructing a spacecraft or system from modules, which may in turn have been constructed from sub-modules in a hierarchical fashion. In-situ assembly of space exploration vehicles and systems will require a broad range of operational capabilities, including component transfer and storage, fluid handling, construction and assembly, and test and verification. Efficient execution of these functions will require supporting infrastructure that can: receive, store, and protect (materials, components, etc.); hold and secure; position, align, and control; deploy; connect/disconnect; construct; join; assemble/disassemble; dock/undock; and mate/demate.
Recent advances in nanomaterials for water protection and monitoring.
Das, Rasel; Vecitis, Chad D; Schulze, Agnes; Cao, Bin; Ismail, Ahmad Fauzi; Lu, Xianbo; Chen, Jiping; Ramakrishna, Seeram
2017-11-13
The efficient handling of wastewater pollutants is a must, since they are continuously defiling limited fresh water resources, seriously affecting the terrestrial, aquatic, and aerial flora and fauna. Our vision is to undertake an exhaustive examination of current research trends with a focus on nanomaterials (NMs) to considerably improve the performance of classical wastewater treatment technologies, e.g. adsorption, catalysis, separation, and disinfection. Additionally, NM-based sensor technologies are considered, since they have been significantly used for monitoring water contaminants. We also suggest future directions to inform investigators of potentially disruptive NM technologies that have to be investigated in more detail. The fate and environmental transformations of NMs, which need to be addressed before large-scale implementation of NMs for water purification, are also highlighted.
Tight-binding calculation studies of vacancy and adatom defects in graphene
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Wei; Lu, Wen-Cai; Zhang, Hong-Xing
2016-02-19
Computational studies of complex defects in graphene usually need to deal with a larger number of atoms than current first-principles methods can handle. We show that a recently developed three-center tight-binding potential for carbon is very efficient for large-scale atomistic simulations and can accurately describe the structures and energies of various defects in graphene. Using the three-center tight-binding potential, we have systematically studied the stable structures and formation energies of vacancy and embedded-atom defects of various sizes, up to 4 vacancies and 4 embedded atoms, in graphene. In conclusion, our calculations reveal low-energy defect structures and provide a more comprehensive understanding of the structures and stability of defects in graphene.
Final Report for File System Support for Burst Buffers on HPC Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, W.; Mohror, K.
Distributed burst buffers are a promising storage architecture for handling I/O workloads for exascale computing. As they are being deployed on more supercomputers, a file system that efficiently manages these burst buffers for fast I/O operations carries great consequence. Over the past year, the FSU team has undertaken several efforts to design, prototype, and evaluate distributed file systems for burst buffers on HPC systems. These include MetaKV, a key-value store for metadata management of distributed burst buffers; a user-level file system with multiple backends; and a specialized file system for large datasets of deep neural networks. Our progress on these respective efforts is elaborated further in this report.
Schminkey, Donna L; von Oertzen, Timo; Bullock, Linda
2016-08-01
With increasing access to population-based data and electronic health records for secondary analysis, missing data are common. In the social and behavioral sciences, missing data frequently are handled with multiple imputation methods or full information maximum likelihood (FIML) techniques, but healthcare researchers have not embraced these methodologies to the same extent and more often use either traditional imputation techniques or complete case analysis, which can compromise power and introduce unintended bias. This article is a review of options for handling missing data, concluding with a case study demonstrating the utility of multilevel structural equation modeling using full information maximum likelihood (MSEM with FIML) to handle large amounts of missing data. MSEM with FIML is a parsimonious and hypothesis-driven strategy to cope with large amounts of missing data without compromising power or introducing bias. This technique is relevant for nurse researchers faced with ever-increasing amounts of electronic data and decreasing research budgets. © 2016 Wiley Periodicals, Inc.
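The article's recommended strategy is MSEM with FIML; as a contrast to complete-case analysis, the sketch below shows the related multiple-imputation-style route available in scikit-learn through IterativeImputer. The data are synthetic, the missingness is completely at random, and the estimator settings are illustrative only.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (activates the estimator)
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(42)

# Synthetic "health record" matrix: 200 subjects, 4 correlated variables.
latent = rng.normal(size=(200, 1))
X = latent @ np.array([[1.0, 0.8, 0.5, 0.3]]) + rng.normal(scale=0.5, size=(200, 4))

# Knock out 30% of the entries completely at random.
mask = rng.random(X.shape) < 0.3
X_missing = X.copy()
X_missing[mask] = np.nan

# Complete-case analysis would drop most rows; imputation keeps them all.
complete_cases = np.sum(~np.isnan(X_missing).any(axis=1))
X_imputed = IterativeImputer(max_iter=10, random_state=0).fit_transform(X_missing)

print(f"complete cases kept: {complete_cases} / {X.shape[0]}")
print("imputation RMSE:", np.sqrt(np.mean((X_imputed[mask] - X[mask]) ** 2)))
```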
Lee, Corinne; Knight, Suzanne W; Smith, Sharon L; Nagle, Dorothy J; DeVries, Lori
This article addresses the development, implementation, and evaluation of an education program for safe patient handling and mobility at a large academic medical center. The ultimate goal of the program was to increase safety during patient mobility/transfer and reduce nursing staff injury from lifting/pulling. This comprehensive program was designed on the basis of the principles of prework, application, and support at the point of care. A combination of online learning, demonstration, skill evaluation, and coaching at the point of care was used to achieve the goal. Specific roles and responsibilities were developed to facilitate implementation. It took 17 master trainers, 88 certified trainers, 176 unit-based trainers, and 98 coaches to put 3706 nurses and nursing assistants through the program. Evaluations indicated both an increase in knowledge about safe patient handling and an increased ability to safely mobilize patients. The challenge now is sustainability of safe patient-handling practices and the growth and development of trainers and coaches.
Ahene, Ago; Calonder, Claudio; Davis, Scott; Kowalchick, Joseph; Nakamura, Takahiro; Nouri, Parya; Vostiar, Igor; Wang, Yang; Wang, Jin
2014-01-01
In recent years, the use of automated sample handling instrumentation has come to the forefront of bioanalytical analysis in order to ensure greater assay consistency and throughput. Since robotic systems are becoming part of everyday analytical procedures, the need for consistent guidance across the pharmaceutical industry has become increasingly important. Pre-existing regulations do not go into sufficient detail in regard to how to handle the use of robotic systems for use with analytical methods, especially large molecule bioanalysis. As a result, Global Bioanalytical Consortium (GBC) Group L5 has put forth specific recommendations for the validation, qualification, and use of robotic systems as part of large molecule bioanalytical analyses in the present white paper. The guidelines presented can be followed to ensure that there is a consistent, transparent methodology that will ensure that robotic systems can be effectively used and documented in a regulated bioanalytical laboratory setting. This will allow for consistent use of robotic sample handling instrumentation as part of large molecule bioanalysis across the globe.
Fast global image smoothing based on weighted least squares.
Min, Dongbo; Choi, Sunghwan; Lu, Jiangbo; Ham, Bumsub; Sohn, Kwanghoon; Do, Minh N
2014-12-01
This paper presents an efficient technique for performing spatially inhomogeneous edge-preserving image smoothing, called the fast global smoother. Focusing on sparse Laplacian matrices consisting of a data term and a prior term (typically defined using four or eight neighbors for a 2D image), our approach efficiently solves such global objective functions. In particular, we approximate the solution of the memory- and computation-intensive large linear system, defined over a d-dimensional spatial domain, by solving a sequence of 1D subsystems. Our separable implementation enables applying a linear-time tridiagonal matrix algorithm to solve d three-point Laplacian matrices iteratively. Our approach combines the best of two paradigms, i.e., efficient edge-preserving filters and optimization-based smoothing. Our method has a comparable runtime to the fast edge-preserving filters, but its global optimization formulation overcomes many limitations of the local filtering approaches. Our method also achieves results comparable in quality to the state-of-the-art optimization-based techniques, but runs ~10-30 times faster. Moreover, considering the flexibility in defining an objective function, we further propose generalized fast algorithms that perform Lγ norm smoothing (0 < γ < 2) and support an aggregated (robust) data term for handling imprecise data constraints. We demonstrate the effectiveness and efficiency of our techniques in a range of image processing and computer graphics applications.
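A much simplified sketch of the separable idea: solve (I + λL)u = f along each row and then each column with the linear-time Thomas algorithm, where L is the 1D second-difference Laplacian. The spatially varying guidance weights of the actual fast global smoother are omitted here (uniform weights are assumed), so this performs homogeneous smoothing rather than the edge-preserving filter itself.

```python
import numpy as np

def solve_tridiagonal(lower, diag, upper, rhs):
    """Thomas algorithm: solve a tridiagonal system in O(n)."""
    n = len(diag)
    c, d = np.empty(n), np.empty(n)
    c[0], d[0] = upper[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        denom = diag[i] - lower[i] * c[i - 1]
        c[i] = upper[i] / denom if i < n - 1 else 0.0
        d[i] = (rhs[i] - lower[i] * d[i - 1]) / denom
    x = np.empty(n)
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

def smooth_1d(f, lam):
    """Solve (I + lam * L) u = f with L the 1D second-difference Laplacian."""
    n = len(f)
    lower = np.full(n, -lam); lower[0] = 0.0
    upper = np.full(n, -lam); upper[-1] = 0.0
    diag = 1.0 + lam * np.array([1.0] + [2.0] * (n - 2) + [1.0])
    return solve_tridiagonal(lower, diag, upper, f)

def smooth_2d(image, lam, sweeps=2):
    """Alternate row-wise and column-wise 1D solves (separable approximation)."""
    u = image.astype(float).copy()
    for _ in range(sweeps):
        u = np.apply_along_axis(smooth_1d, 1, u, lam)  # rows
        u = np.apply_along_axis(smooth_1d, 0, u, lam)  # columns
    return u

# Hypothetical usage on a noisy gradient image.
img = np.tile(np.linspace(0, 1, 64), (64, 1)) + 0.1 * np.random.default_rng(1).normal(size=(64, 64))
print(smooth_2d(img, lam=5.0).shape)
```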
Program for solution of ordinary differential equations
NASA Technical Reports Server (NTRS)
Sloate, H.
1973-01-01
A program for the solution of linear and nonlinear first order ordinary differential equations is described and user instructions are included. The program contains a new integration algorithm for the solution of initial value problems which is particularly efficient for the solution of differential equations with a wide range of eigenvalues. The program in its present form handles up to ten state variables, but expansion to handle up to fifty state variables is being investigated.
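The report's integration algorithm predates modern libraries; the same class of problem, differential equations with widely separated eigenvalues (i.e. stiff systems), is handled today by implicit integrators such as SciPy's BDF and Radau methods. A minimal illustration on a hypothetical two-state stiff system:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical stiff system: eigenvalues -1 and -1000 differ by three orders of magnitude.
A = np.array([[-1.0, 0.0],
              [0.0, -1000.0]])

def rhs(t, y):
    return A @ y

# An implicit method (BDF) handles the fast and slow modes without tiny explicit steps.
sol = solve_ivp(rhs, t_span=(0.0, 10.0), y0=[1.0, 1.0], method="BDF", rtol=1e-6, atol=1e-9)
print(sol.success, sol.y[:, -1])  # both components decay toward zero
```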
Measurement of Rubidium Number Density Under Optically Thick Conditions
2010-11-15
for efficient, high-power laser systems. While these alkali metals offer great promise, there are several issues which need to be resolved. Two such... circulator. The pressure and composition of the diluent within the heat pipe could also be adjusted using the attached gas handling system. The gas... handling system consisted of a vacuum pump, 10 Torr and 1000 Torr baratrons, various valves, and a line going to a regulated gas cylinder. The second...
CANISTER TRANSFER SYSTEM DESCRIPTION DOCUMENT
DOE Office of Scientific and Technical Information (OSTI.GOV)
B. Gorpani
2000-06-23
The Canister Transfer System receives transportation casks containing large and small disposable canisters, unloads the canisters from the casks, stores the canisters as required, loads them into disposal containers (DCs), and prepares the empty casks for re-shipment. Cask unloading begins with cask inspection, sampling, and lid bolt removal operations. The cask lids are removed and the canisters are unloaded. Small canisters are loaded directly into a DC, or are stored until enough canisters are available to fill a DC. Large canisters are loaded directly into a DC. Transportation casks and related components are decontaminated as required, and empty casks are prepared for re-shipment. One independent, remotely operated canister transfer line is provided in the Waste Handling Building System. The canister transfer line consists of a Cask Transport System, Cask Preparation System, Canister Handling System, Disposal Container Transport System, an off-normal canister handling cell with a transfer tunnel connecting the two cells, and Control and Tracking System. The Canister Transfer System operating sequence begins with moving transportation casks to the cask preparation area with the Cask Transport System. The Cask Preparation System prepares the cask for unloading and consists of cask preparation manipulator, cask inspection and sampling equipment, and decontamination equipment. The Canister Handling System unloads the canister(s) and places them into a DC. Handling equipment consists of a bridge crane hoist, DC loading manipulator, lifting fixtures, and small canister staging racks. Once the cask has been unloaded, the Cask Preparation System decontaminates the cask exterior and returns it to the Carrier/Cask Handling System via the Cask Transport System. After the DC is fully loaded, the Disposal Container Transport System moves the DC to the Disposal Container Handling System for welding. To handle off-normal canisters, a separate off-normal canister handling cell is located adjacent to the canister transfer cell and is interconnected to the transfer cell by means of the off-normal canister transfer tunnel. All canister transfer operations are controlled by the Control and Tracking System. The system interfaces with the Carrier/Cask Handling System for incoming and outgoing transportation casks. The system also interfaces with the Disposal Container Handling System, which prepares the DC for loading and subsequently seals the loaded DC. The system support interfaces are the Waste Handling Building System and other internal Waste Handling Building (WHB) support systems.
Topography Modeling in Atmospheric Flows Using the Immersed Boundary Method
NASA Technical Reports Server (NTRS)
Ackerman, A. S.; Senocak, I.; Mansour, N. N.; Stevens, D. E.
2004-01-01
Numerical simulation of flow over complex geometry needs accurate and efficient computational methods. Different techniques are available to handle complex geometry. The unstructured grid and multi-block body-fitted grid techniques have been widely adopted for complex geometry in engineering applications. In atmospheric applications, terrain-fitted single-grid techniques have found common use. Although these are very effective techniques, their implementation, coupling with the flow algorithm, and efficient parallelization of the complete method are more involved than a Cartesian grid method. The grid generation can be tedious, and one needs to pay special attention in numerics to handle skewed cells for conservation purposes. Researchers have long sought alternative methods to ease the effort involved in simulating flow over complex geometry.
Discriminative Hierarchical K-Means Tree for Large-Scale Image Classification.
Chen, Shizhi; Yang, Xiaodong; Tian, Yingli
2015-09-01
A key challenge in large-scale image classification is how to achieve efficiency in terms of both computation and memory without compromising classification accuracy. The learning-based classifiers achieve the state-of-the-art accuracies, but have been criticized for the computational complexity that grows linearly with the number of classes. The nonparametric nearest neighbor (NN)-based classifiers naturally handle large numbers of categories, but incur prohibitively expensive computation and memory costs. In this brief, we present a novel classification scheme, i.e., the discriminative hierarchical K-means tree (D-HKTree), which combines the advantages of both learning-based and NN-based classifiers. The complexity of the D-HKTree only grows sublinearly with the number of categories, which is much better than the recent hierarchical support vector machines-based methods. The memory requirement is an order of magnitude less than that of the recent Naïve Bayesian NN-based approaches. The proposed D-HKTree classification scheme is evaluated on several challenging benchmark databases and achieves state-of-the-art accuracies with significantly lower computation cost and memory requirements.
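The discriminative training that distinguishes the D-HKTree is not reproduced here; the sketch below only builds the underlying hierarchical k-means tree with scikit-learn (recursive k-means with a small branching factor), with random vectors standing in for image descriptors, and shows the sublinear routing of a query to a leaf.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_hkmeans_tree(X, branching=4, max_leaf=50, depth=0):
    """Recursively partition descriptors into a hierarchical k-means tree.

    Internal nodes keep the fitted k-means model; leaves keep only their size.
    """
    if len(X) <= max_leaf or depth > 10:
        return {"leaf": True, "size": len(X)}
    km = KMeans(n_clusters=branching, n_init=5, random_state=0).fit(X)
    children = [build_hkmeans_tree(X[km.labels_ == c], branching, max_leaf, depth + 1)
                for c in range(branching)]
    return {"leaf": False, "kmeans": km, "children": children}

def descend(tree, x):
    """Route a query descriptor to its leaf (sublinear in the number of leaves)."""
    node = tree
    while not node["leaf"]:
        c = int(node["kmeans"].predict(x.reshape(1, -1))[0])
        node = node["children"][c]
    return node

# Hypothetical descriptors: 2000 points in 32 dimensions.
X = np.random.default_rng(0).normal(size=(2000, 32))
tree = build_hkmeans_tree(X)
print(descend(tree, X[0])["size"])
```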
A Fast Optimization Method for General Binary Code Learning.
Shen, Fumin; Zhou, Xiang; Yang, Yang; Song, Jingkuan; Shen, Heng; Tao, Dacheng
2016-09-22
Hashing or binary code learning has been recognized to accomplish efficient near neighbor search, and has thus attracted broad interest in recent retrieval, vision and learning studies. One main challenge of learning to hash arises from the involvement of discrete variables in binary code optimization. While the widely-used continuous relaxation may achieve high learning efficiency, the pursued codes are typically less effective due to accumulated quantization error. In this work, we propose a novel binary code optimization method, dubbed Discrete Proximal Linearized Minimization (DPLM), which directly handles the discrete constraints during the learning process. Specifically, the discrete (thus nonsmooth nonconvex) problem is reformulated as minimizing the sum of a smooth loss term with a nonsmooth indicator function. The obtained problem is then efficiently solved by an iterative procedure with each iteration admitting an analytical discrete solution, which is thus shown to converge very fast. In addition, the proposed method supports a large family of empirical loss functions, which is particularly instantiated in this work by both supervised and unsupervised hashing losses, together with the bit uncorrelation and balance constraints. In particular, the proposed DPLM with a supervised ℓ2 loss encodes the whole NUS-WIDE database into 64-bit binary codes within 10 seconds on a standard desktop computer. The proposed approach is extensively evaluated on several large-scale datasets and the generated binary codes are shown to achieve very promising results on both retrieval and classification tasks.
SITE GENERATED RADIOLOGICAL WASTE HANDLING SYSTEM DESCRIPTION DOCUMENT
DOE Office of Scientific and Technical Information (OSTI.GOV)
S. C. Khamankar
2000-06-20
The Site Generated Radiological Waste Handling System handles radioactive waste products that are generated at the geologic repository operations area. The waste is collected, treated if required, packaged for shipment, and shipped to a disposal site. Waste streams include low-level waste (LLW) in solid and liquid forms, as well as mixed waste that contains hazardous and radioactive constituents. Liquid LLW is segregated into two streams, non-recyclable and recyclable. The non-recyclable stream may contain detergents or other non-hazardous cleaning agents and is packaged for shipment. The recyclable stream is treated to recycle a large portion of the water while the remaining concentrated waste is packaged for shipment; this greatly reduces the volume of waste requiring disposal. There will be no liquid LLW discharge. Solid LLW consists of wet solids such as ion exchange resins and filter cartridges, as well as dry active waste such as tools, protective clothing, and poly bags. Solids will be sorted, volume reduced, and packaged for shipment. The generation of mixed waste at the Monitored Geologic Repository (MGR) is not planned; however, if it does come into existence, it will be collected and packaged for disposal at its point of occurrence, temporarily staged, then shipped to government-approved off-site facilities for disposal. The Site Generated Radiological Waste Handling System has equipment located in both the Waste Treatment Building (WTB) and in the Waste Handling Building (WHB). All types of liquid and solid LLW are processed in the WTB, while wet solid waste from the Pool Water Treatment and Cooling System is packaged where received in the WHB. There is no installed hardware for mixed waste. The Site Generated Radiological Waste Handling System receives waste from locations where water is used for decontamination functions. In most cases the water is piped back to the WTB for processing. The WTB and WHB provide staging areas for storing and shipping LLW packages as well as any mixed waste packages. The buildings house the system and provide shielding and support for the components. The system is ventilated by and connects to the ventilation systems in the buildings to prevent buildup and confine airborne radioactivity via the high efficiency particulate air filters. The Monitored Geologic Repository Operations Monitoring and Control System will provide monitoring and supervisory control facilities for the system.
Spherical hashing: binary code embedding with hyperspheres.
Heo, Jae-Pil; Lee, Youngwoon; He, Junfeng; Chang, Shih-Fu; Yoon, Sung-Eui
2015-11-01
Many binary code embedding schemes have been actively studied recently, since they can provide efficient similarity search, and compact data representations suitable for handling large scale image databases. Existing binary code embedding techniques encode high-dimensional data by using hyperplane-based hashing functions. In this paper we propose a novel hypersphere-based hashing function, spherical hashing, to map more spatially coherent data points into a binary code compared to hyperplane-based hashing functions. We also propose a new binary code distance function, spherical Hamming distance, tailored for our hypersphere-based binary coding scheme, and design an efficient iterative optimization process to achieve both balanced partitioning for each hash function and independence between hashing functions. Furthermore, we generalize spherical hashing to support various similarity measures defined by kernel functions. Our extensive experiments show that our spherical hashing technique significantly outperforms state-of-the-art techniques based on hyperplanes across various benchmarks with sizes ranging from one to 75 million GIST, BoW, and VLAD descriptors. The performance gains are consistent and large, up to 100 percent improvements over the second best method among tested methods. These results confirm the unique merits of using hyperspheres to encode proximity regions in high-dimensional spaces. Finally, our method is intuitive and easy to implement.
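A minimal sketch of the encoding and the distance: each bit records whether a point falls inside one of k hyperspheres, and the spherical Hamming distance normalizes the XOR count by the number of commonly set bits (the exact form is assumed here). The centers are random pivots, the radii are set so each sphere holds roughly half the points, and the paper's iterative optimization for balance and independence is omitted.

```python
import numpy as np

def spherical_encode(X, centers, radii):
    """Bit k of a point's code is 1 iff the point lies inside hypersphere k."""
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)  # (n_points, n_bits)
    return (dists <= radii[None, :]).astype(np.uint8)

def spherical_hamming(b1, b2):
    """XOR count divided by the common-1 count (assumed form of the paper's SHD)."""
    differ = np.count_nonzero(b1 != b2)
    common = np.count_nonzero(b1 & b2)
    return differ / common if common > 0 else np.inf

# Hypothetical setup: 1000 descriptors, 64-bit codes.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 128))
centers = X[rng.choice(len(X), size=64, replace=False)]  # random pivots as sphere centers
radii = np.median(np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2), axis=0)
codes = spherical_encode(X, centers, radii)              # each sphere holds ~half the points
print(codes.shape, spherical_hamming(codes[0], codes[1]))
```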
Molecular epidemiology biomarkers-Sample collection and processing considerations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holland, Nina T.; Pfleger, Laura; Berger, Eileen
2005-08-07
Biomarker studies require processing and storage of numerous biological samples with the goals of obtaining a large amount of information and minimizing future research costs. An efficient study design includes provisions for processing of the original samples, such as cryopreservation, DNA isolation, and preparation of specimens for exposure assessment. Use of standard, two-dimensional and nanobarcodes and customized electronic databases assure efficient management of large sample collections and tracking results of data analyses. Standard operating procedures and quality control plans help to protect sample quality and to assure validity of the biomarker data. Specific state, federal and international regulations are in place regarding research with human samples, governing areas including custody, safety of handling, and transport of human samples. Appropriate informed consent must be obtained from the study subjects prior to sample collection and confidentiality of results maintained. Finally, examples of three biorepositories of different scale (European Cancer Study, National Cancer Institute and School of Public Health Biorepository, University of California, Berkeley) are used to illustrate challenges faced by investigators and the ways to overcome them. New software and biorepository technologies are being developed by many companies that will help to bring biological banking to a new level required by molecular epidemiology of the 21st century.
Role of Chemical Reactivity and Transition State Modeling for Virtual Screening.
Karthikeyan, Muthukumarasamy; Vyas, Renu; Tambe, Sanjeev S; Radhamohan, Deepthi; Kulkarni, Bhaskar D
2015-01-01
Every drug discovery research program involves the synthesis of novel, potential drug molecules utilizing atom-efficient, economical, and environmentally friendly synthetic strategies. The current work focuses on the role of reactivity-based fingerprints of compounds as filters for virtual screening using the tool ChemScore. A reactant-like score (RLS) and a product-like score (PLS) can be predicted for a given compound using the binary fingerprints derived from numerous known organic reactions, which capture the molecule-molecule interactions in the form of addition, substitution, rearrangement, elimination and isomerization reactions. The reaction fingerprints were applied to large databases in biology and chemistry, namely ChEMBL, KEGG, HMDB, DSSTox, and the Drug Bank database. A large network of 1113 synthetic reactions was constructed to visualize and ascertain the reactant-product mappings in the chemical reaction space. The cumulative reaction fingerprints were computed for 4000 molecules belonging to 29 therapeutic classes of compounds, and these were found capable of discriminating between cognition-disorder-related and anti-allergy compounds with a reasonable accuracy of 75% and an AUC of 0.8. In this study, transition-state-based fingerprints were also developed and used effectively for virtual screening in drug-related databases. The methodology presented here provides an efficient handle for the rapid scoring of molecular libraries for virtual screening.
NOVEL BINDERS AND METHODS FOR AGGLOMERATION OF ORE
DOE Office of Scientific and Technical Information (OSTI.GOV)
S.K. Kawatra; T.C. Eisele; J.A. Gurtler
2005-04-01
Many metal extraction operations, such as leaching of copper, leaching of precious metals, and reduction of metal oxides to metal in high-temperature furnaces, require agglomeration of ore to ensure that reactive liquids or gases are evenly distributed throughout the ore being processed. Agglomeration of ore into coarse, porous masses achieves this even distribution of fluids by preventing fine particles from migrating and clogging the spaces and channels between the larger ore particles. Binders are critically necessary to produce agglomerates that will not break down during processing. However, for many important metal extraction processes there are no binders known that will work satisfactorily. A primary example of this is copper heap leaching, where there are no binders that will work in the acidic environment encountered in this process. As a result, operators of many facilities see a large loss of process efficiency due to their inability to take advantage of agglomeration. The large quantities of ore that must be handled in metal extraction processes also mean that the binder must be inexpensive and useful at low dosages to be economical. The acid-resistant binders and agglomeration procedures developed in this project will also be adapted for use in improving the energy efficiency and performance of a broad range of mineral agglomeration applications, particularly heap leaching.
Novel Binders and Methods for Agglomeration of Ore
DOE Office of Scientific and Technical Information (OSTI.GOV)
S. K. Kawatra; T. C. Eisele; J. A. Gurtler
2004-03-31
Many metal extraction operations, such as leaching of copper, leaching of precious metals, and reduction of metal oxides to metal in high-temperature furnaces, require agglomeration of ore to ensure that reactive liquids or gases are evenly distributed throughout the ore being processed. Agglomeration of ore into coarse, porous masses achieves this even distribution of fluids by preventing fine particles from migrating and clogging the spaces and channels between the larger ore particles. Binders are critically necessary to produce agglomerates that will not break down during processing. However, for many important metal extraction processes there are no binders known that will work satisfactorily. A primary example of this is copper heap leaching, where there are no binders that will work in the acidic environment encountered in this process. As a result, operators of acidic heap-leach facilities see a large loss of process efficiency due to their inability to take advantage of agglomeration. The large quantities of ore that must be handled in metal extraction processes also mean that the binder must be inexpensive and useful at low dosages to be economical. The acid-resistant binders and agglomeration procedures developed in this project will also be adapted for use in improving the energy efficiency and performance of other agglomeration applications, particularly advanced primary ironmaking.
Framework for Parallel Preprocessing of Microarray Data Using Hadoop
2018-01-01
Nowadays, microarray technology has become one of the popular ways to study gene expression and the diagnosis of disease. The National Center for Biotechnology Information (NCBI) hosts public databases containing large volumes of biological data that need to be preprocessed, since they carry high levels of noise and bias. Robust Multiarray Average (RMA) is one of the standard and popular methods utilized to preprocess the data and remove the noise. Most preprocessing algorithms are time-consuming and not able to handle a large number of datasets with thousands of experiments. Parallel processing can be used to address these issues. Hadoop is a well-known and ideal distributed file system framework that provides a parallel environment to run such experiments. In this research, for the first time, the capability of Hadoop and the statistical power of R have been leveraged to parallelize the available preprocessing algorithm, RMA, to efficiently process microarray data. The experiment was run on a cluster containing 5 nodes, each with 16 cores and 16 GB of memory. It compares the efficiency and performance of parallelized RMA using Hadoop with parallelized RMA using the affyPara package, as well as sequential RMA. The results show that the speed-up of the proposed approach outperforms both the sequential approach and the affyPara approach. PMID:29796018
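The authors parallelize RMA inside Hadoop using R; the sketch below only illustrates the embarrassingly parallel dispatch pattern at the dataset level with Python's multiprocessing, delegating each dataset to a hypothetical rma_preprocess.R wrapper script. It is not the paper's Hadoop implementation, and the script name and directory layout are assumptions.

```python
import subprocess
from multiprocessing import Pool
from pathlib import Path

def run_rma(dataset_dir: str) -> str:
    """Hypothetical worker: preprocess one microarray dataset with RMA.

    Shells out to an assumed R script (rma_preprocess.R) that reads the CEL
    files in `dataset_dir` and writes a normalized expression matrix.
    """
    out_file = Path(dataset_dir) / "expression_rma.txt"
    subprocess.run(["Rscript", "rma_preprocess.R", dataset_dir, str(out_file)], check=True)
    return str(out_file)

if __name__ == "__main__":
    # Hypothetical layout: one directory of CEL files per dataset (e.g. per GEO series).
    datasets = [str(p) for p in Path("microarray_data").iterdir() if p.is_dir()]
    with Pool(processes=16) as pool:  # one worker per core on a 16-core node
        results = pool.map(run_rma, datasets)
    print(f"preprocessed {len(results)} datasets")
```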
Concept for power scaling second harmonic generation using a cascade of nonlinear crystals.
Hansen, A K; Tawfieq, M; Jensen, O B; Andersen, P E; Sumpf, B; Erbert, G; Petersen, P M
2015-06-15
Within the field of high-power second harmonic generation (SHG), power scaling is often hindered by adverse crystal effects such as thermal dephasing arising from the second harmonic (SH) light, which imposes limits on the power that can be generated in many crystals. Here we demonstrate a concept for efficient power scaling of single-pass SHG beyond such limits using a cascade of nonlinear crystals, in which the first crystal is chosen for high nonlinear efficiency and the subsequent crystal(s) are chosen for power handling ability. Using this highly efficient single-pass concept, we generate 3.7 W of continuous-wave diffraction-limited (M(2)=1.25) light at 532 nm from 9.5 W of non-diffraction-limited (M(2)=7.7) light from a tapered laser diode, while avoiding significant thermal effects. Besides constituting the highest SH power yet achieved using a laser diode, this demonstrates that the concept successfully combines the high efficiency of the first stage with the good power handling properties of the subsequent stages. The concept is generally applicable and can be expanded with more stages to obtain even higher efficiency, and extends also to other combinations of nonlinear media suitable for other wavelengths.
Comfortable, high-efficiency heat pump with desiccant-coated, water-sorbing heat exchangers
NASA Astrophysics Data System (ADS)
Tu, Y. D.; Wang, R. Z.; Ge, T. S.; Zheng, X.
2017-01-01
Comfortable, efficient, and affordable heating, ventilation, and air conditioning systems in buildings are highly desirable due to the demands of energy efficiency and environmental friendliness. Traditional vapor-compression air conditioners exhibit a lower coefficient of performance (COP) (typically 2.8-3.8) owing to the cooling-based dehumidification methods that handle both sensible and latent loads together. Temperature- and humidity-independent control or desiccant systems have been proposed to overcome these challenges; however, the COP of current desiccant systems is quite small and additional heat sources are usually needed. Here, we report on a desiccant-enhanced, direct expansion heat pump based on a water-sorbing heat exchanger with a desiccant coating that exhibits an ultrahigh COP value of more than 7 without sacrificing any comfort or compactness. The pump’s efficiency is doubled compared to that of pumps currently used in conventional room air conditioners, which is a revolutionary HVAC breakthrough. Our proposed water-sorbing heat exchanger can independently handle sensible and latent loads at the same time. The desiccants adsorb moisture almost isothermally and can be regenerated by condensation heat. This new approach opens up the possibility of achieving ultrahigh efficiency for a broad range of temperature- and humidity-control applications.
Comfortable, high-efficiency heat pump with desiccant-coated, water-sorbing heat exchangers.
Tu, Y D; Wang, R Z; Ge, T S; Zheng, X
2017-01-12
Comfortable, efficient, and affordable heating, ventilation, and air conditioning systems in buildings are highly desirable due to the demands of energy efficiency and environmental friendliness. Traditional vapor-compression air conditioners exhibit a lower coefficient of performance (COP) (typically 2.8-3.8) owing to the cooling-based dehumidification methods that handle both sensible and latent loads together. Temperature- and humidity-independent control or desiccant systems have been proposed to overcome these challenges; however, the COP of current desiccant systems is quite small and additional heat sources are usually needed. Here, we report on a desiccant-enhanced, direct expansion heat pump based on a water-sorbing heat exchanger with a desiccant coating that exhibits an ultrahigh COP value of more than 7 without sacrificing any comfort or compactness. The pump's efficiency is doubled compared to that of pumps currently used in conventional room air conditioners, which is a revolutionary HVAC breakthrough. Our proposed water-sorbing heat exchanger can independently handle sensible and latent loads at the same time. The desiccants adsorb moisture almost isothermally and can be regenerated by condensation heat. This new approach opens up the possibility of achieving ultrahigh efficiency for a broad range of temperature- and humidity-control applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cree, Johnathan Vee; Delgado-Frias, Jose
Large scale wireless sensor networks have been proposed for applications ranging from anomaly detection in an environment to vehicle tracking. Many of these applications require the networks to be distributed across a large geographic area while supporting three- to five-year network lifetimes. In order to support these requirements, large scale wireless sensor networks of duty-cycled devices need a method of efficient and effective autonomous configuration/maintenance. This method should gracefully handle the synchronization tasks of duty-cycled networks. Further, an effective configuration solution needs to recognize that in-network data aggregation and analysis presents significant benefits to wireless sensor networks and should configure the network so that these higher-level functions benefit from the logically imposed structure. NOA, the proposed configuration and maintenance protocol, provides a multi-parent hierarchical logical structure for the network that reduces the synchronization workload. It also provides higher-level functions with significant inherent benefits such as, but not limited to: removing network divisions that are created by single-parent hierarchies, guarantees for when data will be compared in the hierarchy, and redundancies for communication as well as in-network data aggregation/analysis/storage.
Development of Handling Qualities Criteria for Rotorcraft with Externally Slung Loads
NASA Technical Reports Server (NTRS)
Hoh, Roger H.; Heffley, Robert K.; Mitchell, David G.
2006-01-01
Piloted simulations were performed on the NASA-Ames Vertical Motion Simulator (VMS) to explore handling qualities issues for large cargo helicopters, particularly focusing on external slung load operations. This work was motivated by the need to include handling qualities criteria for cargo helicopters in an upgrade to the U.S. Army's rotorcraft handling qualities specification, Aeronautical Design Standard-33 (ADS-33E-PRF). From the VMS results, handling qualities criteria were developed for cargo helicopters carrying external slung loads in the degraded visual environment (DVE). If satisfied, these criteria provide assurance that the handling qualities rating (HQR) will be 4 or better for operations in the DVE and with a load mass ratio of 0.33 or less. For lighter loads, flying qualities were found to be less dependent on the load geometry, and therefore the significance of the criteria is less. For heavier loads, meeting the criteria ensures the best possible handling qualities, albeit Level 2 for load mass ratios greater than 0.33.
Kunig, Verena; Potowski, Marco; Gohla, Anne; Brunschweiger, Andreas
2018-06-27
DNA-encoded compound libraries are a highly attractive technology for the discovery of small molecule protein ligands. These compound collections consist of small molecules covalently connected to individual DNA sequences carrying readable information about the compound structure. DNA-tagging allows for efficient synthesis, handling and interrogation of vast numbers of chemically synthesized, drug-like compounds. They are screened on proteins by an efficient, generic assay based on Darwinian principles of selection. To date, selection of DNA-encoded libraries has allowed for the identification of numerous bioactive compounds. Some of these compounds uncovered hitherto unknown allosteric binding sites on target proteins; several compounds proved their value as chemical biology probes unraveling complex biology; and the first examples of clinical candidates that trace their ancestry to a DNA-encoded library were reported. Thus, DNA-encoded libraries have repeatedly proved their value for the biomedical sciences as a generic technology for the identification of bioactive drug-like molecules. However, large-scale experiments showed that even the selection of billions of compounds failed to deliver bioactive compounds for the majority of proteins in an unbiased panel of target proteins. This raises the question of compound library design.
MIMO: an efficient tool for molecular interaction maps overlap
2013-01-01
Background Molecular pathways represent an ensemble of interactions occurring among molecules within the cell and between cells. The identification of similarities between molecular pathways across organisms and functions has a critical role in understanding complex biological processes. For the inference of such novel information, the comparison of molecular pathways requires accounting for imperfect matches (flexibility) and efficiently handling complex network topologies. To date, these characteristics are only partially available in tools designed to compare molecular interaction maps. Results Our approach MIMO (Molecular Interaction Maps Overlap) addresses the first problem by allowing the introduction of gaps and mismatches between query and template pathways and permits, when necessary, supervised queries incorporating a priori biological information. It then addresses the second issue by relying directly on the rich graph topology described in the Systems Biology Markup Language (SBML) standard, and uses multidigraphs to efficiently handle multiple queries on biological graph databases. The algorithm has been successfully used here to highlight the contact points between various human pathways in the Reactome database. Conclusions MIMO offers a flexible and efficient graph-matching tool for comparing complex biological pathways. PMID:23672344
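To illustrate the multidigraph representation mentioned above, here is a minimal sketch using the networkx library (this is not the MIMO code; the species and interaction labels are hypothetical examples of SBML-style parallel edges):

```python
import networkx as nx

# A multidigraph allows several directed interactions of different kinds
# between the same two molecules, which a plain digraph cannot represent.
pathway = nx.MultiDiGraph()
pathway.add_edge("EGFR", "GRB2", interaction="binding")
pathway.add_edge("EGFR", "GRB2", interaction="phosphorylation")
pathway.add_edge("GRB2", "SOS1", interaction="binding")

# Enumerate all parallel edges between EGFR and GRB2, each with its own attributes.
for key, attrs in pathway["EGFR"]["GRB2"].items():
    print(key, attrs["interaction"])
```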
Feeding methods and efficiencies of selected frugivorous birds
Foster, M.S.
1987-01-01
I report on handling methods and efficiencies of 26 species of Paraguayan birds feeding on fruits of Allophyllus edulis (Sapindaceae). A bird may swallow fruits whole (Type I: pluck and swallow feeders), hold a fruit and cut the pulp from the seed with the edge of the bill, swallowing the pulp but not the seed (Type II: cut or mash feeders), or take bites of pulp from a fruit that hangs from the tree or that is held and manipulated against a branch (Type III: push and bite feeders). In terms of the absolute amount of pulp obtained from a fruit, and the amount obtained per unit time, Type I species are far more efficient than Type II and III species. Bill morphology influences feeding methods but is not the only important factor. Diet breadth does not appear to be significant. Consideration of feeding efficiency relative to the needs of the birds indicates that these species need to spend relatively little time feeding to meet their estimated energetic needs, and that handling time has a relatively trivial effect on the time/energy budgets of the bird species observed.
MrEnt: an editor for publication-quality phylogenetic tree illustrations.
Zuccon, Alessandro; Zuccon, Dario
2014-09-01
We developed MrEnt, a Windows-based, user-friendly software package that allows the production of complex, high-resolution, publication-quality phylogenetic trees in a few steps, directly from the analysis output. The program recognizes the standard Nexus tree format and the annotated tree files produced by BEAST and MrBayes. MrEnt combines, in a single package, a large suite of tree manipulation functions (e.g. handling of multiple trees, tree rotation, character mapping, node collapsing, compression of large clades, handling of time scale and error bars for chronograms) with drawing tools typical of standard graphic editors, including handling of graphic elements and images. The tree illustration can be printed or exported in several standard formats suitable for journal publication, PowerPoint presentation or Web publication. © 2014 John Wiley & Sons Ltd.
Drawert, Brian; Lawson, Michael J; Petzold, Linda; Khammash, Mustafa
2010-02-21
We have developed a computational framework for accurate and efficient simulation of stochastic spatially inhomogeneous biochemical systems. The new computational method employs a fractional step hybrid strategy. A novel formulation of the finite state projection (FSP) method, called the diffusive FSP method, is introduced for the efficient and accurate simulation of diffusive transport. Reactions are handled by the stochastic simulation algorithm.
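As a point of reference for the reaction step mentioned above, the following is a minimal sketch of the classic stochastic simulation algorithm (Gillespie SSA), not the hybrid diffusive-FSP framework itself; the toy reaction and rate constant are illustrative:

```python
import numpy as np

def ssa_step(x, propensities, stoich, rng):
    """One Gillespie SSA step: sample the waiting time and the firing reaction
    from the current propensities, then apply the stoichiometric update."""
    a = propensities(x)
    a0 = a.sum()
    if a0 == 0.0:
        return x, np.inf                    # no reaction can fire
    tau = rng.exponential(1.0 / a0)         # time to next reaction
    j = rng.choice(len(a), p=a / a0)        # which reaction fires
    return x + stoich[j], tau

# Toy example: a single decay/conversion reaction A -> B with rate c*A.
rng = np.random.default_rng(0)
x, t = np.array([100, 0]), 0.0
stoich = np.array([[-1, 1]])
while t < 10.0 and x[0] > 0:
    x, tau = ssa_step(x, lambda s: np.array([0.5 * s[0]]), stoich, rng)
    t += tau
```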
Heavy Analysis and Light Virtualization of Water Use Data with Python
NASA Astrophysics Data System (ADS)
Kim, H.; Bijoor, N.; Famiglietti, J. S.
2014-12-01
Water utilities possess a large amount of water data that could be used to inform urban ecohydrology, management decisions, and conservation policies, but such data are rarely analyzed owing to the difficulty of analysis, visualization, and interpretation. We have developed a high performance computing resource for this purpose. We partnered with 6 water agencies in Orange County who provided 10 years of parcel-level monthly water use billing data for a pilot study. The first challenge that we overcame was to correct human errors and unify the many different data formats used across the agencies. Second, we tested and applied experimental approaches to the data, including complex calculations, with high efficiency. Third, we developed a method to refine the data so it can be browsed along a time-series index and/or through geospatial queries with high efficiency, no matter how large the data. Python scientific libraries were the best match for handling arbitrary data sets in our environment. Further milestones include agency data entry, sets of formulae, and maintaining 15M rows x 70 columns of data with high performance for CPU-bound processes. To deal with billions of rows, we built an analysis virtualization stack by leveraging iPython parallel computing. With this architecture, each agency can be treated as one computing node or virtual machine that maintains its own data sets. For example, a big agency could use a large node, and a small agency could use a micro node. Under the minimum required raw data specs, more agencies could be analyzed. The program developed in this study simplifies data analysis, visualization, and interpretation of large water datasets, and can be used to analyze large data volumes from water agencies nationally or worldwide.
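A hedged sketch of the time-series indexing step described above, using pandas; the file name and column names (bill_date, agency, gallons, parcel_type) are placeholders, not the agencies' actual schema:

```python
import pandas as pd

# Parcel-level monthly billing records become a DatetimeIndex'ed frame that
# supports fast time-series slicing and attribute queries.
use = pd.read_csv("billing.csv", parse_dates=["bill_date"])
use = use.set_index("bill_date").sort_index()

# Monthly totals per agency, and a seasonal/attribute query example.
monthly_by_agency = use.groupby("agency")["gallons"].resample("M").sum()
summer_2012 = use.loc["2012-06":"2012-08"].query("parcel_type == 'residential'")
```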
Development of a novel and highly efficient method of isolating bacteriophages from water.
Liu, Weili; Li, Chao; Qiu, Zhi-Gang; Jin, Min; Wang, Jing-Feng; Yang, Dong; Xiao, Zhong-Hai; Yuan, Zhao-Kang; Li, Jun-Wen; Xu, Qun-Ying; Shen, Zhi-Qiang
2017-08-01
Bacteriophages are widely used in the treatment of drug-resistant bacteria and the improvement of food safety through bacterial lysis. However, the limited investigations of bacteriophages restrict their further application. In this study, a novel and highly efficient method was developed for isolating bacteriophage from water based on the electropositive silica gel particles (ESPs) method. To optimize the ESPs method, we evaluated the eluent type, flow rate, pH, temperature, and inoculation concentration of bacteriophage using bacteriophage f2. The quantitative detection reported that the recovery of the ESPs method reached over 90%. The qualitative detection demonstrated that the ESPs method effectively isolated 70% of extremely low-concentration bacteriophage (10⁰ PFU/100 L). Based on the host bacteria composed of 33 standard strains and 10 isolated strains, the bacteriophages in 18 water samples collected from the three sites in the Tianjin Haihe River Basin were isolated by the ESPs and traditional methods. Results showed that the ESPs method was significantly superior to the traditional method. The ESPs method isolated 32 strains of bacteriophage, whereas the traditional method isolated 15 strains. The sample isolation efficiency and bacteriophage isolation efficiency of the ESPs method were 3.28 and 2.13 times higher than those of the traditional method. The developed ESPs method was characterized by high isolation efficiency, efficient handling of large water sample sizes and low requirements on water quality. Copyright © 2017. Published by Elsevier B.V.
Toward a Framework for Learner Segmentation
ERIC Educational Resources Information Center
Azarnoush, Bahareh; Bekki, Jennifer M.; Runger, George C.; Bernstein, Bianca L.; Atkinson, Robert K.
2013-01-01
Effectively grouping learners in an online environment is a highly useful task. However, datasets used in this task often have large numbers of attributes of disparate types and different scales, which traditional clustering approaches cannot handle effectively. Here, a unique dissimilarity measure based on the random forest, which handles the…
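The record above is truncated, but the random-forest-based dissimilarity it refers to can be illustrated with a hedged sketch of Breiman's proximity idea (not necessarily the paper's exact measure, which also addresses mixed attribute types and scales); the synthetic data and label are placeholders:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Two samples are similar when they often fall into the same terminal leaf
# across the trees of a fitted forest; dissimilarity = 1 - proximity.
rng = np.random.default_rng(0)
X = np.c_[rng.normal(size=(60, 3)), rng.integers(0, 4, size=(60, 1))]  # mixed-scale attributes
y = (X[:, 0] + X[:, 3] > 1).astype(int)                                # stand-in grouping label

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
leaves = rf.apply(X)                                   # (n_samples, n_trees) leaf indices
proximity = (leaves[:, None, :] == leaves[None, :, :]).mean(axis=2)
dissimilarity = 1.0 - proximity                        # feed this matrix to a clustering routine
```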
Information Systems and Performance Measures in Schools.
ERIC Educational Resources Information Center
Coleman, James S.; Karweit, Nancy L.
Large school systems face various administrative problems in handling scheduling and records, and in avoiding making red-tape casualties of students. The authors review a portion of the current use of computers to handle these problems and examine the range of activities for which computer processing could provide aid. Since automation always brings…
Handle grip span for optimising finger-specific force capability as a function of hand size.
Lee, Soo-Jin; Kong, Yong-Ku; Lowe, Brian D; Song, Seongho
2009-05-01
Five grip spans (45 to 65 mm) were tested to evaluate the effects of handle grip span and user's hand size on maximum grip strength, individual finger force and subjective ratings of comfort using a computerised digital dynamometer with independent finger force sensors. Forty-six males participated and were assigned into three hand size groups (small, medium, large) according to their hand length. In general, results showed the 55- and 50-mm grip spans were rated as the most comfortable sizes and showed the largest grip strength (433.6 N and 430.8 N, respectively), whereas the 65-mm grip span handle was rated as the least comfortable size and showed the least grip strength. With regard to the interaction effect of grip span and hand size, small and medium-hand participants rated the best preference for the 50- to 55-mm grip spans and the least for the 65-mm grip span, whereas large-hand participants rated the 55- to 60-mm grip spans as the most preferred and the 45-mm grip span as the least preferred. Normalised grip span (NGS) ratios (29% and 27%), the ratios of handle grip span to the user's hand length, were obtained and applied to suggest handle grip spans that maximise subjective comfort as well as gripping force according to the users' hand sizes. In the analysis of individual finger force, the middle finger force showed the highest contribution (37.5%) to the total finger force, followed by the ring (28.7%), index (20.2%) and little (13.6%) finger. In addition, each finger was observed to have a different optimal grip span for exerting the maximum force, resulting in a bow-contoured handle (the grip span of the handle at the centre is larger than at the ends) for two-handle hand tools. Thus, based on this study, the grip spans for two-handle hand tools may be designed according to the users' hand/finger anthropometrics to maximise subjective ratings and performance. Results obtained in this study will provide guidelines for hand tool designers and manufacturers for designing grip spans of two-handle tools, which can maximise handle comfort and performance.
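As a rough worked example of how the NGS ratio above could be applied (interpreting it as grip span divided by hand length; the 0.28 midpoint and the 190 mm hand length are illustrative, not values from the paper):

```python
def suggested_grip_span_mm(hand_length_mm, ngs_ratio=0.28):
    """Rough guideline: grip span ≈ NGS ratio × hand length, with NGS near 27-29%."""
    return ngs_ratio * hand_length_mm

# A 190 mm hand length suggests a grip span of roughly 53 mm,
# consistent with the 50-55 mm spans preferred in the study.
print(round(suggested_grip_span_mm(190), 1))
```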
NASA Astrophysics Data System (ADS)
Griffin, J. W.; Popov, A. A.
2018-07-01
It is now possible, through electrical, hydraulic or mechanical means, to power the front wheel of a motorcycle. The aim of this is often to improve performance in limit-handling scenarios including off-road low-traction conditions and on-road high-speed cornering. Following on from research into active torque distribution in 4-wheeled vehicles, the possibility exists for efficiency improvements to be realised by reducing the total amount of energy dissipated as slip at the wheel-road contact. This paper presents the results of an investigation into the effect that varying the torque distribution ratio has on the energy consumption of the two-wheeled vehicle. A 13-degree-of-freedom multibody model was created, which includes the effects of suspension, aerodynamics and gyroscopic bodies. SimMechanics, from MathWorks, is used for automatic generation of equations of motion and time-domain simulation, in conjunction with MATLAB and Simulink. A simple driver model is used to control the speed and yaw rate of the motorcycle. The handling characteristics of the motorcycle are quantitatively analysed, and the impact of torque distribution on energy consumption is considered during straight line and cornering situations. The investigation has shown that only a small improvement in efficiency can be made by transferring a portion of the drive torque to the front wheel. Tyre longevity could be improved by reduced slip energy dissipation.
Analyzing and Visualizing Cosmological Simulations with ParaView
NASA Astrophysics Data System (ADS)
Woodring, Jonathan; Heitmann, Katrin; Ahrens, James; Fasel, Patricia; Hsu, Chung-Hsing; Habib, Salman; Pope, Adrian
2011-07-01
The advent of large cosmological sky surveys—ushering in the era of precision cosmology—has been accompanied by ever larger cosmological simulations. The analysis of these simulations, which currently encompass tens of billions of particles and up to a trillion particles in the near future, is often as daunting as carrying out the simulations in the first place. Therefore, the development of very efficient analysis tools combining qualitative and quantitative capabilities is a matter of some urgency. In this paper, we introduce new analysis features implemented within ParaView, a fully parallel, open-source visualization toolkit, to analyze large N-body simulations. A major aspect of ParaView is that it can live and operate on the same machines and utilize the same parallel power as the simulation codes themselves. In addition, data movement is a serious bottleneck now and will become even more of an issue in the future; an interactive visualization and analysis tool that can handle data in situ is fast becoming essential. The new features in ParaView include particle readers and a very efficient halo finder that identifies friends-of-friends halos and determines common halo properties, including spherical overdensity properties. In combination with many other functionalities already existing within ParaView, such as histogram routines or interfaces to programming languages like Python, this enhanced version enables fast, interactive, and convenient analyses of large cosmological simulations. In addition, development paths are available for future extensions.
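The friends-of-friends grouping mentioned above can be sketched compactly (this is an illustration of the algorithmic idea, not ParaView's parallel halo finder; the point set and linking length are synthetic):

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def fof_halos(positions, linking_length):
    """Friends-of-friends: link every pair closer than the linking length,
    then label halos as the connected components of the resulting graph."""
    n = len(positions)
    pairs = np.array(sorted(cKDTree(positions).query_pairs(r=linking_length)))
    if len(pairs) == 0:
        return np.arange(n)                      # every particle is its own group
    adj = coo_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])), shape=(n, n))
    _, labels = connected_components(adj, directed=False)
    return labels

pts = np.random.default_rng(1).random((2000, 3))   # toy particle positions in a unit box
labels = fof_halos(pts, linking_length=0.02)
```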
NASA Astrophysics Data System (ADS)
Klein, Ole; Cirpka, Olaf A.; Bastian, Peter; Ippisch, Olaf
2017-04-01
In the geostatistical inverse problem of subsurface hydrology, continuous hydraulic parameter fields, in most cases hydraulic conductivity, are estimated from measurements of dependent variables, such as hydraulic heads, under the assumption that the parameter fields are autocorrelated random space functions. Upon discretization, the continuous fields become large parameter vectors with O(10⁴-10⁷) elements. While cokriging-like inversion methods have been shown to be efficient for highly resolved parameter fields when the number of measurements is small, they require the calculation of the sensitivity of each measurement with respect to all parameters, which may become prohibitive with large sets of measured data such as those arising from transient groundwater flow. We present a Preconditioned Conjugate Gradient method for the geostatistical inverse problem, in which a single adjoint equation needs to be solved to obtain the gradient of the objective function. Using the autocovariance matrix of the parameters as preconditioning matrix, expensive multiplications with its inverse can be avoided, and the number of iterations is significantly reduced. We use a randomized spectral decomposition of the posterior covariance matrix of the parameters to perform a linearized uncertainty quantification of the parameter estimate. The feasibility of the method is tested by virtual examples of head observations in steady-state and transient groundwater flow. These synthetic tests demonstrate that transient data can reduce both parameter uncertainty and time spent conducting experiments, while the presented methods are able to handle the resulting large number of measurements.
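A minimal sketch of the preconditioned conjugate gradient idea described above, written so that only matrix-vector products are needed (the preconditioner is applied by multiplying with a prior-covariance-like matrix Q rather than inverting it); the toy matrices below are placeholders, not the paper's groundwater model:

```python
import numpy as np

def pcg(apply_A, b, apply_M, tol=1e-8, max_iter=500):
    """Preconditioned conjugate gradients for A x = b using only the actions
    of A and of the preconditioner M (here: multiplication by Q)."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    z = apply_M(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = apply_M(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Toy check: exponential covariance Q as a stand-in autocovariance preconditioner.
rng = np.random.default_rng(0)
Q = np.exp(-np.abs(np.subtract.outer(np.arange(50), np.arange(50))) / 5.0)
A = np.linalg.inv(Q) + 0.1 * np.eye(50)   # mimics Hessian = Q^-1 + data term (inv used only for the toy)
b = rng.normal(size=50)
x = pcg(lambda v: A @ v, b, lambda v: Q @ v)
```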
NOVEL BINDERS AND METHODS FOR AGGLOMERATION OF ORE
DOE Office of Scientific and Technical Information (OSTI.GOV)
S.K. Kawatra; T.C. Eisele; J.A. Gurtler
2004-04-01
Many metal extraction operations, such as leaching of copper, leaching of precious metals, and reduction of metal oxides to metal in high-temperature furnaces, require agglomeration of ore to ensure that reactive liquids or gases are evenly distributed throughout the ore being processed. Agglomeration of ore into coarse, porous masses achieves this even distribution of fluids by preventing fine particles from migrating and clogging the spaces and channels between the larger ore particles. Binders are critically necessary to produce agglomerates that will not break down during processing. However, for many important metal extraction processes there are no binders known that will work satisfactorily. Primary examples of this are copper heap leaching, where there are no binders that will work in the acidic environment encountered in this process, and advanced ironmaking processes, where binders must function satisfactorily over an extraordinarily large range of temperatures (from room temperature up to over 1200 C). As a result, operators of many facilities see a large loss of process efficiency due to their inability to take advantage of agglomeration. The large quantities of ore that must be handled in metal extraction processes also mean that the binder must be inexpensive and useful at low dosages to be economical. The acid-resistant binders and agglomeration procedures developed in this project will also be adapted for use in improving the energy efficiency and performance of a broad range of mineral agglomeration applications, particularly heap leaching and advanced primary ironmaking.
Interactive (statistical) visualisation and exploration of a billion objects with vaex
NASA Astrophysics Data System (ADS)
Breddels, M. A.
2017-06-01
With new catalogues arriving, such as Gaia DR1 containing more than a billion objects, new methods of handling and visualizing these data volumes are needed. We show that by calculating statistics on a regular (N-dimensional) grid, visualizations of a billion objects can be done within a second on a modern desktop computer. This is achieved using memory mapping of hdf5 files together with a simple binning algorithm, which are part of a Python library called vaex. This enables efficient interactive exploration of large datasets, making science exploration of large catalogues feasible. Vaex is a Python library and an application, which allows for interactive exploration and visualization. The motivation for developing vaex is the catalogue of the Gaia satellite; however, vaex can also be used on SPH or N-body simulations, other (future) catalogues such as SDSS, Pan-STARRS, LSST, etc., or other tabular data. The homepage for vaex is http://vaex.astro.rug.nl.
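A hedged sketch of the grid-statistics idea (not vaex's implementation): stream two columns of an HDF5 file in chunks and accumulate a fixed 2-D count grid, so memory use is independent of the number of rows. The file and dataset names below are stand-ins:

```python
import numpy as np
import h5py

rng = np.random.default_rng(0)
with h5py.File("toy_catalogue.hdf5", "w") as f:           # build a small stand-in file
    f["x"] = rng.normal(size=1_000_000)
    f["y"] = rng.normal(size=1_000_000)

bins, extent = 256, [[-5, 5], [-5, 5]]
counts = np.zeros((bins, bins))
with h5py.File("toy_catalogue.hdf5", "r") as f:
    x_ds, y_ds, chunk = f["x"], f["y"], 100_000
    for start in range(0, x_ds.shape[0], chunk):          # only one chunk in RAM at a time
        x = x_ds[start:start + chunk]
        y = y_ds[start:start + chunk]
        h, _, _ = np.histogram2d(x, y, bins=bins, range=extent)
        counts += h                                        # the grid is all that is kept
```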
Variational approach to probabilistic finite elements
NASA Technical Reports Server (NTRS)
Belytschko, T.; Liu, W. K.; Mani, A.; Besterfield, G.
1991-01-01
Probabilistic finite element methods (PFEM), synthesizing the power of finite element methods with second-moment techniques, are formulated for various classes of problems in structural and solid mechanics. Time-invariant random materials, geometric properties and loads are incorporated in terms of their fundamental statistics viz. second-moments. Analogous to the discretization of the displacement field in finite element methods, the random fields are also discretized. Preserving the conceptual simplicity, the response moments are calculated with minimal computations. By incorporating certain computational techniques, these methods are shown to be capable of handling large systems with many sources of uncertainties. By construction, these methods are applicable when the scale of randomness is not very large and when the probabilistic density functions have decaying tails. The accuracy and efficiency of these methods, along with their limitations, are demonstrated by various applications. Results obtained are compared with those of Monte Carlo simulation and it is shown that good accuracy can be obtained for both linear and nonlinear problems. The methods are amenable to implementation in deterministic FEM based computer codes.
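The second-moment idea at the heart of the abstract above can be sketched with a generic first-order second-moment propagation (this is an illustration of the concept, not the PFEM formulation itself; the two-spring response function and 5% covariance are toy assumptions):

```python
import numpy as np

def fosm(u_of_theta, theta_mean, C_theta, eps=1e-6):
    """First-order second-moment propagation: linearize the response about the
    mean parameters, so mean(u) ≈ u(mean) and cov(u) ≈ S C S^T with S the
    finite-difference sensitivity matrix."""
    u0 = u_of_theta(theta_mean)
    S = np.zeros((len(u0), len(theta_mean)))
    for j in range(len(theta_mean)):
        dtheta = np.zeros_like(theta_mean)
        dtheta[j] = eps
        S[:, j] = (u_of_theta(theta_mean + dtheta) - u0) / eps
    return u0, S @ C_theta @ S.T

# Toy example: displacements of a two-spring chain with uncertain stiffnesses.
k_mean = np.array([100.0, 150.0])
C_k = np.diag((0.05 * k_mean) ** 2)          # 5% standard deviation on each stiffness
u_mean, C_u = fosm(lambda k: np.array([1.0 / k[0], 1.0 / k[0] + 1.0 / k[1]]), k_mean, C_k)
```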
Variational approach to probabilistic finite elements
NASA Astrophysics Data System (ADS)
Belytschko, T.; Liu, W. K.; Mani, A.; Besterfield, G.
1991-08-01
Probabilistic finite element methods (PFEM), synthesizing the power of finite element methods with second-moment techniques, are formulated for various classes of problems in structural and solid mechanics. Time-invariant random materials, geometric properties and loads are incorporated in terms of their fundamental statistics viz. second-moments. Analogous to the discretization of the displacement field in finite element methods, the random fields are also discretized. Preserving the conceptual simplicity, the response moments are calculated with minimal computations. By incorporating certain computational techniques, these methods are shown to be capable of handling large systems with many sources of uncertainties. By construction, these methods are applicable when the scale of randomness is not very large and when the probabilistic density functions have decaying tails. The accuracy and efficiency of these methods, along with their limitations, are demonstrated by various applications. Results obtained are compared with those of Monte Carlo simulation and it is shown that good accuracy can be obtained for both linear and nonlinear problems. The methods are amenable to implementation in deterministic FEM based computer codes.
Variational approach to probabilistic finite elements
NASA Technical Reports Server (NTRS)
Belytschko, T.; Liu, W. K.; Mani, A.; Besterfield, G.
1987-01-01
Probabilistic finite element methods (PFEM), synthesizing the power of finite element methods with second-moment techniques, are formulated for various classes of problems in structural and solid mechanics. Time-invariant random materials, geometric properties, and loads are incorporated in terms of their fundamental statistics viz. second-moments. Analogous to the discretization of the displacement field in finite element methods, the random fields are also discretized. Preserving the conceptual simplicity, the response moments are calculated with minimal computations. By incorporating certain computational techniques, these methods are shown to be capable of handling large systems with many sources of uncertainties. By construction, these methods are applicable when the scale of randomness is not very large and when the probabilistic density functions have decaying tails. The accuracy and efficiency of these methods, along with their limitations, are demonstrated by various applications. Results obtained are compared with those of Monte Carlo simulation and it is shown that good accuracy can be obtained for both linear and nonlinear problems. The methods are amenable to implementation in deterministic FEM based computer codes.
Quantitative assessment of anthrax vaccine immunogenicity using the dried blood spot matrix.
Schiffer, Jarad M; Maniatis, Panagiotis; Garza, Ilana; Steward-Clark, Evelene; Korman, Lawrence T; Pittman, Phillip R; Mei, Joanne V; Quinn, Conrad P
2013-03-01
The collection, processing and transportation to a testing laboratory of large numbers of clinical samples during an emergency response situation present significant cost and logistical issues. Blood and serum are common clinical samples for diagnosis of disease. Serum preparation requires significant on-site equipment and facilities for immediate processing and cold storage, and significant costs for cold-chain transport to testing facilities. The dried blood spot (DBS) matrix offers an alternative to serum for rapid and efficient sample collection with fewer on-site equipment requirements and considerably lower storage and transport costs. We have developed and validated assay methods for using DBS in the quantitative anti-protective antigen IgG enzyme-linked immunosorbent assay (ELISA), one of the primary assays for assessing immunogenicity of anthrax vaccine and for confirmatory diagnosis of Bacillus anthracis infection in humans. We have also developed and validated high-throughput data analysis software to facilitate data handling for large clinical trials and emergency response. Published by Elsevier Ltd.
A biomedical information system for retrieval and manipulation of NHANES data.
Mukherjee, Sukrit; Martins, David; Norris, Keith C; Jenders, Robert A
2013-01-01
The retrieval and manipulation of data from large public databases like the U.S. National Health and Nutrition Examination Survey (NHANES) may require sophisticated statistical software and significant expertise that may be unavailable in the university setting. In response, we have developed the Data Retrieval And Manipulation System (DReAMS), an automated information system that handles all processes of data extraction and cleaning and then joins different subsets to produce analysis-ready output. The system is a browser-based data warehouse application in which the input data from flat files or operational systems are aggregated in a structured way so that the desired data can be read, recoded, queried and extracted efficiently. The current pilot implementation of the system provides access to a limited portion of the NHANES database. We plan to increase the amount of data available through the system in the near future and to extend the techniques to other large databases from the CDU archive, which currently holds about 53 databases.
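A hedged sketch of the extract-recode-join pattern described above, using pandas rather than the DReAMS application itself; the file names are placeholders and the variable names (SEQN, RIDAGEYR, BPXSY1) are typical NHANES identifiers used here only for illustration:

```python
import pandas as pd

# NHANES component files share the respondent sequence number (SEQN),
# so demographic and examination subsets can be merged into one table.
demo = pd.read_sas("DEMO_G.XPT")              # demographics component (placeholder file)
bp = pd.read_sas("BPX_G.XPT")                 # blood pressure exam component (placeholder file)
merged = demo.merge(bp, on="SEQN", how="inner")

# Recode and subset to produce an analysis-ready extract.
merged["age_group"] = pd.cut(merged["RIDAGEYR"], bins=[0, 18, 45, 65, 120])
subset = merged.query("RIDAGEYR >= 18")[["SEQN", "RIDAGEYR", "BPXSY1", "age_group"]]
```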
Multitasking the Davidson algorithm for the large, sparse eigenvalue problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Umar, V.M.; Fischer, C.F.
1989-01-01
The authors report how the Davidson algorithm, developed for handling the eigenvalue problem for large and sparse matrices arising in quantum chemistry, was modified for use in atomic structure calculations. To date these calculations have used traditional eigenvalue methods, which limit the range of feasible calculations because of their excessive memory requirements and unsatisfactory performance attributed to time-consuming and costly processing of zero-valued elements. The replacement of a traditional matrix eigenvalue method by the Davidson algorithm reduced these limitations. Significant speedup was found, which varied with the size of the underlying problem and its sparsity. Furthermore, the range of matrix sizes that can be manipulated efficiently was expanded by more than one order of magnitude. On the CRAY X-MP the code was vectorized and the importance of gather/scatter analyzed. A parallelized version of the algorithm obtained an additional 35% reduction in execution time. Speedup due to vectorization and concurrency was also measured on the Alliant FX/8.
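For readers unfamiliar with the method, here is a bare-bones sketch of the Davidson iteration for the lowest eigenpairs of a symmetric, diagonally dominant matrix (an illustration of the idea only, without the restarts, sparse storage, or vectorization refinements discussed in the report; the test matrix is synthetic):

```python
import numpy as np

def davidson_lowest(A, n_eig=2, tol=1e-7, max_iter=60):
    """Davidson iteration: project A onto a growing subspace, take the lowest
    Ritz pairs, and expand the subspace with diagonally preconditioned residuals."""
    n = A.shape[0]
    V = np.eye(n, n_eig)                       # initial guess: unit vectors
    diag_A = np.diag(A)
    for _ in range(max_iter):
        V, _ = np.linalg.qr(V)                 # keep the search subspace orthonormal
        H = V.T @ A @ V                        # small projected matrix
        vals, vecs = np.linalg.eigh(H)
        theta, s = vals[:n_eig], vecs[:, :n_eig]
        u = V @ s                              # Ritz vectors in the full space
        r = A @ u - u * theta                  # residuals, one column per eigenpair
        if np.linalg.norm(r) < tol:
            break
        for j in range(n_eig):                 # diagonal (Jacobi-like) preconditioner
            denom = theta[j] - diag_A
            denom[np.abs(denom) < 1e-10] = 1e-10
            V = np.column_stack([V, r[:, j] / denom])
    return theta, u

# Toy test: dominant diagonal plus weak symmetric coupling.
rng = np.random.default_rng(0)
B = rng.normal(scale=1e-2, size=(400, 400))
A = np.diag(np.arange(1.0, 401.0)) + (B + B.T) / 2
print(davidson_lowest(A)[0], np.linalg.eigvalsh(A)[:2])
```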
Interdisciplinary applications and interpretations of ERTS data within the Susquehanna River basin
NASA Technical Reports Server (NTRS)
Mcmurtry, G. J.; Petersen, G. W. (Principal Investigator)
1975-01-01
The author has identified the following significant results. The full potential of high quality data is achieved only with the application of efficient and effective interpretation techniques. An excellent operating system for handling, processing, and interpreting ERTS-1 and other MSS data was achieved. Programs for processing digital data are implemented on a large nondedicated general purpose computer. Significant results were attained in mapping land use, agricultural croplands, forest resources, and vegetative cover. Categories of land use classified and mapped depend upon the geographic location, the detail required, and the types of lands use of interest. Physiographic and structural provinces are spectacularly displayed on ERTS-1 MSS image mosaics. Geologic bedrock structures show up well and formation contacts can sometimes be traced for hundreds of kilometers. Large circular structures and regional features, previously obscured by the detail of higher resolution data, can be seen. Environmental monitoring was performed in three areas: coal strip mining, coal refuse problems, and damage to vegetation caused by insects and pollution.
Papapetrou, E P; Zoumbos, N C; Athanassiadou, A
2005-10-01
Serious unwanted complications provoked by retroviral gene transfer into hematopoietic stem cells (HSCs) have recently raised the need for the development and assessment of alternative gene transfer vectors. Within this context, nonviral gene transfer systems are attracting increasing interest. Their main advantages include low cost, ease of handling and large-scale production, large packaging capacity and, most importantly, biosafety. While nonviral gene transfer into HSCs has been restricted in the past by poor transfection efficiency and transient maintenance, in recent years, biotechnological developments are converting nonviral transfer into a realistic approach for genetic modification of cells of hematopoietic origin. Herein we provide an overview of past accomplishments in the field of nonviral gene transfer into hematopoietic progenitor/stem cells and we point at future challenges. We argue that episomally maintained self-replicating vectors combined with physical methods of delivery show the greatest promise among nonviral gene transfer strategies for the treatment of disorders of the hematopoietic system.
Team building: electronic management-clinical translational research (eM-CTR) systems.
Cecchetti, Alfred A; Parmanto, Bambang; Vecchio, Marcella L; Ahmad, Sjarif; Buch, Shama; Zgheib, Nathalie K; Groark, Stephen J; Vemuganti, Anupama; Romkes, Marjorie; Sciurba, Frank; Donahoe, Michael P; Branch, Robert A
2009-12-01
Classical drug exposure-response studies in clinical pharmacology represent the quintessential prototype for Bench-to-Bedside Clinical Translational Research. A fundamental premise of this approach is for a multidisciplinary team of researchers to design and execute complex, in-depth mechanistic studies conducted in relatively small groups of subjects. The infrastructure support for this genre of clinical research is not well served by scaling down the infrastructure used for large Phase III clinical trials. We describe a novel, integrated strategy, whose focus is to support and manage a study using an Information Hub, Communication Hub, and Data Hub design. This design is illustrated by an application to a series of varied projects sponsored by Special Clinical Centers of Research in chronic obstructive pulmonary disease at the University of Pittsburgh. In contrast to classical informatics support, it is readily scalable to large studies. Our experience suggests that the cultural consequences of research group self-empowerment are not only economically efficient but also transformative to the research process.
NASA Astrophysics Data System (ADS)
Fakhari, Abbas; Bolster, Diogo
2017-04-01
We introduce a simple and efficient lattice Boltzmann method for immiscible multiphase flows, capable of handling large density and viscosity contrasts. The model is based on a diffuse-interface phase-field approach. Within this context we propose a new algorithm for specifying the three-phase contact angle on curved boundaries within the framework of structured Cartesian grids. The proposed method has superior computational accuracy compared with the common approach of approximating curved boundaries with stair cases. We test the model by applying it to four benchmark problems: (i) wetting and dewetting of a droplet on a flat surface and (ii) on a cylindrical surface, (iii) multiphase flow past a circular cylinder at an intermediate Reynolds number, and (iv) a droplet falling on hydrophilic and superhydrophobic circular cylinders under differing conditions. Where available, our results show good agreement with analytical solutions and/or existing experimental data, highlighting strengths of this new approach.
Drosg, B; Wirthensohn, T; Konrad, G; Hornbachner, D; Resch, C; Wäger, F; Loderer, C; Waltenberger, R; Kirchmayr, R; Braun, R
2008-01-01
A comparison of stillage treatment options for large-scale bioethanol plants was based on the data of an existing plant producing approximately 200,000 t/yr of bioethanol and 1,400,000 t/yr of stillage. Animal feed production--the state-of-the-art technology at the plant--was compared to anaerobic digestion. The latter was simulated in two different scenarios: digestion in small-scale biogas plants in the surrounding area versus digestion in a large-scale biogas plant at the bioethanol production site. Emphasis was placed on a holistic simulation balancing chemical parameters and calculating logistic algorithms to compare the efficiency of the stillage treatment solutions. For central anaerobic digestion different digestate handling solutions were considered because of the large amount of digestate. For land application a minimum of 36,000 ha of available agricultural area would be needed and 600,000 m(3) of storage volume. Secondly membrane purification of the digestate was investigated consisting of decanter, microfiltration, and reverse osmosis. As a third option aerobic wastewater treatment of the digestate was discussed. The final outcome was an economic evaluation of the three mentioned stillage treatment options, as a guide to stillage management for operators of large-scale bioethanol plants. Copyright IWA Publishing 2008.
49 CFR 1544.405 - Qualifications of screening personnel.
Code of Federal Regulations, 2010 CFR
2010-10-01
... must be able to efficiently and thoroughly manipulate and handle such baggage, containers, cargo, and... the screening equipment monitor the appropriate imaging standard specified in the aircraft operator's...
49 CFR 1544.405 - Qualifications of screening personnel.
Code of Federal Regulations, 2011 CFR
2011-10-01
... must be able to efficiently and thoroughly manipulate and handle such baggage, containers, cargo, and... the screening equipment monitor the appropriate imaging standard specified in the aircraft operator's...
49 CFR 1544.405 - Qualifications of screening personnel.
Code of Federal Regulations, 2014 CFR
2014-10-01
... must be able to efficiently and thoroughly manipulate and handle such baggage, containers, cargo, and... the screening equipment monitor the appropriate imaging standard specified in the aircraft operator's...
49 CFR 1544.405 - Qualifications of screening personnel.
Code of Federal Regulations, 2012 CFR
2012-10-01
... must be able to efficiently and thoroughly manipulate and handle such baggage, containers, cargo, and... the screening equipment monitor the appropriate imaging standard specified in the aircraft operator's...
49 CFR 1544.405 - Qualifications of screening personnel.
Code of Federal Regulations, 2013 CFR
2013-10-01
... must be able to efficiently and thoroughly manipulate and handle such baggage, containers, cargo, and... the screening equipment monitor the appropriate imaging standard specified in the aircraft operator's...
High Efficiency Power Combining of Ka-Band TWTs for High Data Rate Communications
NASA Technical Reports Server (NTRS)
Wintucky, E. G.; Simons, R. N.; Vaden, K. R.; Lesny, G. G.; Glass, J. L.
2006-01-01
Future NASA deep space exploration missions are expected in some cases to require telecommunication systems capable of operating at very high data rates (potentially 1 Gbps or more) for the transmission back to Earth of large volumes of scientific data, which means high frequency transmitters with large bandwidth. Among the Ka band frequencies of interest are the present 500 MHz Deep Space Network (DSN) band of 31.8 to 32.3 GHz and a broader band at 37-38 GHz allocated for space science [1]. The large distances and use of practical antenna sizes dictate the need for high transmitter power of up to 1 kW or more. High electrical efficiency is also a requirement. The approach investigated by NASA GRC is a novel wave guide power combiner architecture based on a hybrid magic-T junction for combining the power output from multiple TWTs [1,2]. This architecture was successfully demonstrated and is capable of both high efficiency (90-95%, depending on frequency) and high data rate transmission (up to 622 Mbps) in a two-way power combiner circuit for two different pairs of Ka band TWTs at two different frequency bands. One pair of TWTs, tested over a frequency range of 29.1 to 29.6 GHz, consisted of two 110-115W TWTs previously used in uplink data transmission evaluation terminals in the NASA Advanced Communications Technology Satellite (ACTS) program [1,2]. The second pair was two 100W TWTs (Boeing 999H) designed for high efficiency operation (greater than 55%) over the DSN frequency band of 31.8 to 32.3 GHz [3]. The presentation will provide a qualitative description of the wave guide circuit, results for power combining and data transmission measurements, and results of computer modeling of the magic-T and alternative hybrid junctions for improvements in efficiency and power handling capability. The power combiner results presented here are relevant not only to NASA deep space exploration missions, but also to other U.S. Government agency programs.
Using a Polytope to Estimate Efficient Production Functions of Joint Product Processes.
ERIC Educational Resources Information Center
Simpson, William A.
In the last decade, a modeling technique has been developed to handle complex input/output analyses where outputs involve joint products and there are no known mathematical relationships linking the outputs or inputs. The technique uses the geometrical concept of a six-dimensional shape called a polytope to analyze the efficiency of each…
NASA Technical Reports Server (NTRS)
Doggett, William R.; Dorsey, John T.; Jones, Thomas C.; King, Bruce D.; Mikulas, Martin M.
2011-01-01
Efficient handling of payloads destined for a planetary surface, such as the Moon or Mars, requires robust systems to secure the payloads during transport on the ground, in space and on the planetary surface. In addition, mechanisms to release the payloads need to be reliable to ensure successful transfer from one vehicle to another. An efficient payload handling strategy must also consider the devices available to support payload handling. Cranes used for overhead lifting are common to all phases of payload handling on Earth. Similarly, both recent and past studies have demonstrated that devices with comparable functionality will be needed to support lunar outpost operations. A first generation test-bed of a new high performance device that provides the capabilities of both a crane and a robotic manipulator, the Lunar Surface Manipulation System (LSMS), has been designed, built and field tested and is available for use in evaluating a system to secure payloads to transportation vehicles. A payload handling approach must address all phases of payload management including: ground transportation, launch, planetary transfer and installation in the final system. In addition, storage may be required during any phase of operations. Each of these phases requires the payload to be lifted and secured to a vehicle, transported, released and lifted in preparation for the next transportation or storage phase. A critical component of a successful payload handling approach is a latch and associated carrier system. The latch and carrier system should minimize requirements on the payload, carrier support structure and payload handling devices, as well as accommodate a wide range of payload sizes. In addition, the latch should be small and lightweight, support a method to apply preload, be reusable, integrate into a minimal set of hard-points, and have manual interfaces to actuate the latch should a problem occur. A latching system which meets these requirements has been designed and fabricated and will be described in detail. This latching system works in conjunction with a payload handling device such as the LSMS, and the LSMS has been used to test first generation latch and carrier hardware. All tests have been successful during the first phase of operational evaluations. Plans for future tests of first generation latch and carrier hardware with the LSMS are also described.
NASA Technical Reports Server (NTRS)
Doggett, William R.; Dorsey, John T.; Jones, Thomas C.; King, Bruce D.; Mikulas, Martin M.
2010-01-01
Efficient handling of payloads destined for a planetary surface, such as the Moon or Mars, requires robust systems to secure the payloads during transport on the ground, in space and on the planetary surface. In addition, mechanisms to release the payloads need to be reliable to ensure successful transfer from one vehicle to another. An efficient payload handling strategy must also consider the devices available to support payload handling. Cranes used for overhead lifting are common to all phases of payload handling on Earth. Similarly, both recent and past studies have demonstrated that devices with comparable functionality will be needed to support lunar outpost operations. A first generation test-bed of a new high performance device that provides the capabilities of both a crane and a robotic manipulator, the Lunar Surface Manipulation System (LSMS), has been designed, built and field tested and is available for use in evaluating a system to secure payloads to transportation vehicles. A payload handling approach must address all phases of payload management including: ground transportation, launch, planetary transfer and installation in the final system. In addition, storage may be required during any phase of operations. Each of these phases requires the payload to be lifted and secured to a vehicle, transported, released and lifted in preparation for the next transportation or storage phase. A critical component of a successful payload handling approach is a latch and associated carrier system. The latch and carrier system should minimize requirements on the payload, carrier support structure and payload handling devices, as well as accommodate a wide range of payload sizes. In addition, the latch should be small and lightweight, support a method to apply preload, be reusable, integrate into a minimal set of hard-points, and have manual interfaces to actuate the latch should a problem occur. A latching system which meets these requirements has been designed and fabricated and will be described in detail. This latching system works in conjunction with a payload handling device such as the LSMS, and the LSMS has been used to test first generation latch and carrier hardware. All tests have been successful during the first phase of operational evaluations. Plans for future tests of first generation latch and carrier hardware with the LSMS are also described.
Handling Qualities of a Large Civil Tiltrotor in Hover using Translational Rate Command
NASA Technical Reports Server (NTRS)
Malpica, Carlos A.; Theodore, Colin R.; Lawrence, Ben; Lindsey, James; Blanken, Chris
2012-01-01
A Translational Rate Command (TRC) control law has been developed to enable low speed maneuvering of a large civil tiltrotor with minimal pitch changes by means of automatic nacelle angle deflections for longitudinal velocity control. The nacelle actuator bandwidth required to achieve Level 1 handling qualities in hover and the feasibility of additional longitudinal cyclic control to augment low bandwidth nacelle actuation were investigated. A frequency-domain handling qualities criterion characterizing TRC response in terms of bandwidth and phase delay was proposed and validated against a piloted simulation conducted on the NASA-Ames Vertical Motion Simulator. Seven experimental test pilots completed evaluations in the ADS-33E-PRF Hover Mission Task Element (MTE) for a matrix of nacelle actuator bandwidths, equivalent rise times and control response sensitivities, and longitudinal cyclic control allocations. Evaluated against this task, longitudinal phase delay shows the Level 1 boundary is around 0.4-0.5 s. Accordingly, Level 1 handling qualities were achieved either with a nacelle actuator bandwidth greater than 4 rad/s, or by employing longitudinal cyclic control to augment low bandwidth nacelle actuation.
Optimization of doxorubicin loading for superabsorbent polymer microspheres: in vitro analysis.
Liu, David M; Kos, Sebastian; Buczkowski, Andrzej; Kee, Stephen; Munk, Peter L; Klass, Darren; Wasan, Ellen
2012-04-01
This study was designed to establish the ability of super-absorbent polymer microspheres (SAP) to actively uptake doxorubicin and to establish the proof of principle of SAP's ability to phase transfer doxorubicin onto the polymer matrix and to elute into buffer with a loading method that optimizes physical handling and elution characteristics. Phase I: 50-100 μm SAP subject to various prehydration methods (normal saline 10 cc, hypertonic saline 4 cc, iodinated contrast 10 cc) or left in their dry state, and combined with 50 mg of clinical grade lyophilized doxorubicin reconstituted with various methods (normal saline 10 cc and 25 cc, sterile water 4 cc, iodinated contrast 5 cc) were placed in buffer and assessed based on loading, handling, and elution utilizing high-performance liquid chromatography (HPLC). Phase II: top two performing methods were subject to loading of doxorubicin (50, 75, 100 mg) in a single bolus (group A) or as a serial loading method (group B) followed by measurement of loading vs. time and elution vs. time. Phase I revealed the most effective loading mechanisms and easiest handling to be dry (group A) vs. normal saline prehydrated (group B) SAP with normal saline reconstituted doxorubicin (10 mg/mL) with loading efficiencies of 83.1% and 88.4%. Phase II results revealed unstable behavior of SAP with 100 mg of doxorubicin and similar loading/elution profiles of dry and prehydrated SAP, with superior handling characteristics of group B SAP at 50 and 75 mg. SAP demonstrates the ability to load and bulk phase transfer doxorubicin at 50 and 75 mg with ease of handling and optimal efficiency through dry loading of SAP.
Optimization of Doxorubicin Loading for Superabsorbent Polymer Microspheres: in vitro Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, David M., E-mail: dave.liu@vch.ca; Kos, Sebastian; Buczkowski, Andrzej
2012-04-15
Purpose: This study was designed to establish the ability of super-absorbent polymer microspheres (SAP) to actively uptake doxorubicin and to establish the proof of principle of SAP's ability to phase transfer doxorubicin onto the polymer matrix and to elute into buffer with a loading method that optimizes physical handling and elution characteristics. Methods: Phase I: 50-100 μm SAP subject to various prehydration methods (normal saline 10 cc, hypertonic saline 4 cc, iodinated contrast 10 cc) or left in their dry state, and combined with 50 mg of clinical grade lyophilized doxorubicin reconstituted with various methods (normal saline 10 cc and 25 cc, sterile water 4 cc, iodinated contrast 5 cc) were placed in buffer and assessed based on loading, handling, and elution utilizing high-performance liquid chromatography (HPLC). Phase II: top two performing methods were subject to loading of doxorubicin (50, 75, 100 mg) in a single bolus (group A) or as a serial loading method (group B) followed by measurement of loading vs. time and elution vs. time. Results: Phase I revealed the most effective loading mechanisms and easiest handling to be dry (group A) vs. normal saline prehydrated (group B) SAP with normal saline reconstituted doxorubicin (10 mg/mL) with loading efficiencies of 83.1% and 88.4%. Phase II results revealed unstable behavior of SAP with 100 mg of doxorubicin and similar loading/elution profiles of dry and prehydrated SAP, with superior handling characteristics of group B SAP at 50 and 75 mg. Conclusions: SAP demonstrates the ability to load and bulk phase transfer doxorubicin at 50 and 75 mg with ease of handling and optimal efficiency through dry loading of SAP.
SaaS Platform for Time Series Data Handling
NASA Astrophysics Data System (ADS)
Oplachko, Ekaterina; Rykunov, Stanislav; Ustinin, Mikhail
2018-02-01
The paper is devoted to the description of MathBrain, a cloud-based resource, which works as a "Software as a Service" model. It is designed to maximize the efficiency of the current technology and to provide a tool for time series data handling. The resource provides access to the following analysis methods: direct and inverse Fourier transforms, Principal component analysis and Independent component analysis decompositions, quantitative analysis, magnetoencephalography inverse problem solution in a single dipole model based on multichannel spectral data.
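The analysis methods listed above can be illustrated with standard scientific Python tools (a hedged sketch only; MathBrain's own pipeline is not shown here, and the mixed signals below are synthetic stand-ins for multichannel time series):

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

# Build a toy multichannel recording: two sources mixed into eight channels plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)
sources = np.c_[np.sin(2 * np.pi * 10 * t), np.sign(np.sin(2 * np.pi * 3 * t))]
mixed = sources @ rng.normal(size=(2, 8)) + 0.05 * rng.normal(size=(1000, 8))

spectrum = np.fft.rfft(mixed, axis=0)                  # direct Fourier transform
recovered = np.fft.irfft(spectrum, n=len(t), axis=0)   # inverse Fourier transform
components_pca = PCA(n_components=2).fit_transform(mixed)
components_ica = FastICA(n_components=2, random_state=0).fit_transform(mixed)
```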
Modal control theory and application to aircraft lateral handling qualities design
NASA Technical Reports Server (NTRS)
Srinathkumar, S.
1978-01-01
A multivariable synthesis procedure based on eigenvalue/eigenvector assignment is reviewed and is employed to develop a systematic design procedure to meet the lateral handling qualities design objectives of a fighter aircraft over a wide range of flight conditions. The closed loop modal characterization developed provides significant insight into the design process and plays a pivotal role in the synthesis of robust feedback systems. The simplicity of the synthesis algorithm yields an efficient computer aided interactive design tool for flight control system synthesis.
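As a hedged illustration of eigenvalue assignment by state feedback (a simpler cousin of the full eigenvalue/eigenvector assignment reviewed above), the following uses scipy's pole-placement routine on a toy third-order system; the matrices and target eigenvalues are illustrative, not an aircraft data set:

```python
import numpy as np
from scipy.signal import place_poles

# Toy lateral-like state-space model x' = A x + B u with two control inputs.
A = np.array([[-0.5,  1.0,  0.0],
              [-2.0, -1.0,  0.5],
              [ 0.0,  1.0, -0.2]])
B = np.array([[0.0, 0.1],
              [1.0, 0.0],
              [0.0, 1.0]])

desired = np.array([-2.0, -1.5 + 1.5j, -1.5 - 1.5j])   # target closed-loop eigenvalues
fb = place_poles(A, B, desired)
K = fb.gain_matrix                                      # feedback law u = -K x
print(np.linalg.eigvals(A - B @ K))                     # eigenvalues land at the targets
```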
GSKY: A scalable distributed geospatial data server on the cloud
NASA Astrophysics Data System (ADS)
Rozas Larraondo, Pablo; Pringle, Sean; Antony, Joseph; Evans, Ben
2017-04-01
Earth systems, environmental and geophysical datasets are extremely valuable sources of information about the state and evolution of the Earth. Being able to combine information coming from different geospatial collections is in increasing demand by the scientific community, and requires managing and manipulating data with different formats and performing operations such as map reprojections, resampling and other transformations. Due to the large data volume inherent in these collections, storing multiple copies of them is unfeasible and so such data manipulation must be performed on-the-fly using efficient, high performance techniques. Ideally this should be performed using a trusted data service and common system libraries to ensure wide use and reproducibility. Recent developments in distributed computing based on dynamic access to significant cloud infrastructure open the door for such new ways of processing geospatial data on demand. The National Computational Infrastructure (NCI), hosted at the Australian National University (ANU), has over 10 Petabytes of nationally significant research data collections. Some of these collections, which comprise a variety of observed and modelled geospatial data, are now made available via a highly distributed geospatial data server, called GSKY (pronounced [jee-skee]). GSKY supports on demand processing of large geospatial data products such as satellite earth observation data as well as numerical weather products, allowing interactive exploration and analysis of the data. It dynamically and efficiently distributes the required computations among cloud nodes, providing a scalable analysis framework that can adapt to serve a large number of concurrent users. Typical geospatial workflows handling different file formats and data types, or blending data in different coordinate projections and spatio-temporal resolutions, are handled transparently by GSKY. This is achieved by decoupling the data ingestion and indexing process as an independent service. An indexing service crawls data collections either locally or remotely by extracting, storing and indexing all spatio-temporal metadata associated with each individual record. GSKY provides the user with the ability to specify how ingested data should be aggregated, transformed and presented. It presents an OGC standards-compliant interface, allowing ready accessibility for users of the data via Web Map Services (WMS), Web Processing Services (WPS) or raw data arrays using Web Coverage Services (WCS). The presentation will show some cases where we have used this new capability to provide a significant improvement over previous approaches.
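Because the server exposes OGC-standard interfaces, any WMS client can request imagery with a standard GetMap call. Below is a hedged example of such a request; the endpoint URL and layer name are placeholders, not GSKY's actual service addresses:

```python
import requests

# Standard WMS 1.3.0 GetMap parameters (for EPSG:4326 the bbox axis order is lat/lon).
params = {
    "service": "WMS",
    "version": "1.3.0",
    "request": "GetMap",
    "layers": "landsat_ndvi",                # placeholder layer name
    "crs": "EPSG:4326",
    "bbox": "-44.0,112.0,-10.0,154.0",       # roughly Australia
    "width": 1024,
    "height": 768,
    "format": "image/png",
    "time": "2016-01-01",
}
resp = requests.get("https://example.org/ows", params=params, timeout=60)  # placeholder endpoint
with open("ndvi.png", "wb") as out:
    out.write(resp.content)
```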
Code of Federal Regulations, 2011 CFR
2011-01-01
... efficient manner. High risk personal property will be managed throughout its life cycle so as to protect... transferred or disposed unless it receives a high risk assessment and is handled accordingly. ...
An application of object-oriented knowledge representation to engineering expert systems
NASA Technical Reports Server (NTRS)
Logie, D. S.; Kamil, H.; Umaretiya, J. R.
1990-01-01
The paper describes an object-oriented knowledge representation and its application to engineering expert systems. The object-oriented approach promotes efficient handling of the problem data by allowing knowledge to be encapsulated in objects and organized by defining relationships between the objects. An Object Representation Language (ORL) was implemented as a tool for building and manipulating the object base. Rule-based knowledge representation is then used to simulate engineering design reasoning. Using a common object base, very large expert systems can be developed, composed of small, individually processed rule sets. The integration of these two schemes makes it easier to develop practical engineering expert systems. The general approach to applying this technology to the domain of finite element analysis, design, and optimization of aerospace structures is discussed.
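A hedged sketch of the general idea, objects encapsulating engineering data with relationships between them and a small rule acting on the object base (this is not ORL itself; the classes, stress limit, and force are hypothetical):

```python
# Objects hold data and relationships; a rule inspects and updates the object base.
class Node:
    def __init__(self, name, load=0.0):
        self.name, self.load, self.members = name, load, []

class Member:
    def __init__(self, name, end_a, end_b, area):
        self.name, self.area = name, area
        self.ends = (end_a, end_b)           # relationship: member connects two nodes
        end_a.members.append(self)
        end_b.members.append(self)

def rule_resize(member, stress_limit=250.0, axial_force=1.0e5):
    """IF the axial stress exceeds the limit THEN increase the section area."""
    if axial_force / member.area > stress_limit:
        member.area = axial_force / stress_limit
    return member.area

n1, n2 = Node("N1"), Node("N2", load=1.0e5)
m1 = Member("M1", n1, n2, area=300.0)
print(rule_resize(m1))                       # rule fires and resizes the member
```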
NASA Astrophysics Data System (ADS)
Baba, S.; Sakai, T.; Sawada, K.; Kubota, C.; Wada, Y.; Shinmoto, Y.; Ohta, H.; Asano, H.; Kawanami, O.; Suzuki, K.; Imai, R.; Kawasaki, H.; Fujii, K.; Takayanagi, M.; Yoda, S.
2011-12-01
Boiling is one of the most efficient modes of heat transfer owing to phase change, and is regarded as a promising means for thermal management systems handling a large amount of waste heat under high heat flux. However, gravity effects on two-phase flow phenomena and the corresponding heat transfer characteristics have not been clarified in detail. Experiments on boiling two-phase flow onboard the Japanese Experiment Module "KIBO" on the International Space Station are proposed to clarify both the heat transfer and flow characteristics under microgravity conditions. To verify the feasibility of the ISS experiments on boiling two-phase flow, the Bread Board Model was assembled, and its performance and the functions of the components installed in the test loop were examined.
The GeoClaw software for depth-averaged flows with adaptive refinement
Berger, M.J.; George, D.L.; LeVeque, R.J.; Mandli, Kyle T.
2011-01-01
Many geophysical flow or wave propagation problems can be modeled with two-dimensional depth-averaged equations, of which the shallow water equations are the simplest example. We describe the GeoClaw software that has been designed to solve problems of this nature, consisting of open source Fortran programs together with Python tools for the user interface and flow visualization. This software uses high-resolution shock-capturing finite volume methods on logically rectangular grids, including latitude-longitude grids on the sphere. Dry states are handled automatically to model inundation. The code incorporates adaptive mesh refinement to allow the efficient solution of large-scale geophysical problems. Examples are given illustrating its use for modeling tsunamis and dam-break flooding problems. Documentation and download information is available at www.clawpack.org/geoclaw. © 2011.
Module Embedded Micro-inverter Smart Grid Ready Residential Solar Electric System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agamy, Mohammed
The “Module Embedded Micro-inverter Smart Grid Ready Residential Solar Electric System” program is focused on developing innovative concepts for residential photovoltaic (PV) systems with the following objectives: to create an innovative micro-inverter topology that reduces cost relative to the best-in-class micro-inverter and provides high efficiency (>96% CEC - California Energy Commission), a 25+ year warranty, and reactive power support; to integrate the micro-inverter and PV module to reduce system price by at least $0.25/W through a) accentuating dual use of the module metal frame as a large-area heat spreader that reduces operating temperature, and b) eliminating redundant wiring and connectors; and to create a micro-inverter controller that handles smart grid and safety functions to simplify implementation and reduce cost.
A globally well-posed finite element algorithm for aerodynamics applications
NASA Technical Reports Server (NTRS)
Iannelli, G. S.; Baker, A. J.
1991-01-01
A finite element CFD algorithm is developed for Euler and Navier-Stokes aerodynamic applications. For the linear basis, the resultant approximation is at least second-order-accurate in time and space for synergistic use of three procedures: (1) a Taylor weak statement, which provides for derivation of companion conservation law systems with embedded dispersion-error control mechanisms; (2) a stiffly stable second-order-accurate implicit Rosenbrock-Runge-Kutta temporal algorithm; and (3) a matrix tensor product factorization that permits efficient numerical linear algebra handling of the terminal large-matrix statement. Thorough analyses are presented regarding well-posed boundary conditions for inviscid and viscous flow specifications. Numerical solutions are generated and compared for critical evaluation of quasi-one- and two-dimensional Euler and Navier-Stokes benchmark test problems.
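The abstract does not spell out the matrix tensor product factorization; as a generic illustration of the underlying linear algebra idea (not the authors' specific algorithm), the Kronecker identity (A ⊗ B) vec(X) = vec(B X Aᵀ) lets a large Kronecker-structured system be solved through two small solves instead of one large one. A minimal NumPy sketch with arbitrary, purely illustrative test matrices:

```python
import numpy as np

# Illustrative sizes only; the point is that the (n*m) x (n*m) Kronecker
# matrix never has to be formed or factorized as a whole.
rng = np.random.default_rng(0)
n, m = 6, 5
A = rng.standard_normal((n, n)) + n * np.eye(n)   # arbitrary well-conditioned test matrices
B = rng.standard_normal((m, m)) + m * np.eye(m)
F = rng.standard_normal((m, n))                   # right-hand side, vec(F) in column-major order

# Brute-force solve of (A kron B) x = vec(F)
x_direct = np.linalg.solve(np.kron(A, B), F.flatten(order="F"))

# Factorized solve: (A kron B) vec(X) = vec(F)  <=>  B X A^T = F
X1 = np.linalg.solve(B, F)            # X1 = B^{-1} F
X = np.linalg.solve(A, X1.T).T        # X  = X1 A^{-T}
print(np.allclose(x_direct, X.flatten(order="F")))   # True
```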
Advancements in RNASeqGUI towards a Reproducible Analysis of RNA-Seq Experiments
Russo, Francesco; Righelli, Dario
2016-01-01
We present the advancements and novelties recently introduced in RNASeqGUI, a graphical user interface that helps biologists handle and analyse the large datasets collected in RNA-Seq experiments. This work focuses on the concept of reproducible research and shows how it has been incorporated in RNASeqGUI to provide reproducible (computational) results. The novel version of RNASeqGUI combines graphical interfaces with tools for reproducible research, such as literate statistical programming, human-readable reports, parallel execution, caching, and interactive and web-explorable tables of results. These features allow the user to analyse big datasets in a fast, efficient, and reproducible way. Moreover, this paper represents a proof of concept, showing a simple way to develop computational tools for Life Science in the spirit of reproducible research. PMID:26977414
Colleau, Jean-Jacques; Palhière, Isabelle; Rodríguez-Ramilo, Silvia T; Legarra, Andres
2017-12-01
Pedigree-based management of genetic diversity in populations, e.g., using optimal contributions, involves computation of the [Formula: see text] type yielding elements (relationships) or functions (usually averages) of relationship matrices. For pedigree-based relationships [Formula: see text], a very efficient method exists. When all the individuals of interest are genotyped, genomic management can be addressed using the genomic relationship matrix [Formula: see text]; however, to date, the computational problem of efficiently computing [Formula: see text] has not been well studied. When some individuals of interest are not genotyped, genomic management should consider the relationship matrix [Formula: see text] that combines genotyped and ungenotyped individuals; however, direct computation of [Formula: see text] is computationally very demanding, because construction of a possibly huge matrix is required. Our work presents efficient ways of computing [Formula: see text] and [Formula: see text], with applications on real data from dairy sheep and dairy goat breeding schemes. For genomic relationships, an efficient indirect computation with quadratic instead of cubic cost is [Formula: see text], where Z is a matrix relating animals to genotypes. For the relationship matrix [Formula: see text], we propose an indirect method based on the difference between vectors [Formula: see text], which involves computation of [Formula: see text] and of products such as [Formula: see text] and [Formula: see text], where [Formula: see text] is a working vector derived from [Formula: see text]. The latter computation is the most demanding but can be done using sparse Cholesky decompositions of matrix [Formula: see text], which allows handling very large genomic and pedigree data files. Studies based on simulations reported in the literature show that the trends of average relationships in [Formula: see text] and [Formula: see text] differ as genomic selection proceeds. When selection is based on genomic relationships but management is based on pedigree data, the true genetic diversity is overestimated. However, our tests on real data from sheep and goat obtained before genomic selection started do not show this. We present efficient methods to compute elements and statistics of the genomic relationships [Formula: see text] and of matrix [Formula: see text] that combines ungenotyped and genotyped individuals. These methods should be useful to monitor and handle genomic diversity.
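The formulas in this record are not reproduced, but the quadratic-versus-cubic trick it alludes to can be illustrated with the common VanRaden-style scaling G = ZZ'/k (an assumption made here for illustration, not necessarily the authors' exact definition): the average of all elements of G follows from the column sums of Z without ever forming G.

```python
import numpy as np

# Toy centred genotype matrix Z (n animals x m markers); k is a scaling constant.
rng = np.random.default_rng(1)
n, m = 500, 2000
p = rng.uniform(0.05, 0.95, size=m)                      # allele frequencies
M = rng.binomial(2, p, size=(n, m)).astype(float)        # 0/1/2 genotypes
Z = M - 2 * p                                            # centred genotypes
k = 2 * np.sum(p * (1 - p))                              # VanRaden scaling (assumed)

# Expensive route: form G explicitly, then average its elements.
G = Z @ Z.T / k                                          # O(n^2 m) work, O(n^2) memory
avg_direct = G.mean()

# Cheap route: mean(G) = 1'ZZ'1 / (k n^2) = ||Z'1||^2 / (k n^2),
# which only needs the column sums of Z.
s = Z.sum(axis=0)                                        # O(n m) work
avg_indirect = (s @ s) / (k * n * n)

print(np.isclose(avg_direct, avg_indirect))              # True
```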
Occupational health and safety aspects of animal handling in dairy production.
Lindahl, Cecilia; Lundqvist, Peter; Hagevoort, G Robert; Lunner Kolstrup, Christina; Douphrate, David I; Pinzke, Stefan; Grandin, Temple
2013-01-01
Livestock handling in dairy production is associated with a number of health and safety issues. A large number of fatal and nonfatal injuries still occur when handling livestock. The many animal handling tasks on a dairy farm include moving cattle between different locations, vaccination, administration of medication, hoof care, artificial insemination, ear tagging, milking, and loading onto trucks. There are particular problems with bulls, which continue to cause considerable numbers of injuries and fatalities in dairy production. In order to reduce the number of injuries during animal handling on dairy farms, it is important to understand the key factors in human-animal interactions. These include handler attitudes and behavior, animal behavior, and fear in cows. Care when in close proximity to the animal is the key for safe handling, including knowledge of the flight zone, and use of the right types of tools and suitable restraint equipment. Thus, in order to create safe working conditions during livestock handling, it is important to provide handlers with adequate training and to establish sound safety management procedures on the farm.
45 CFR 1310.17 - Driver and bus monitor training.
Code of Federal Regulations, 2010 CFR
2010-10-01
... safe and efficient manner; (2) safely run a fixed route, including loading and unloading children... first aid in case of injury; (4) handle emergency situations, including vehicle evacuation procedures...
NASA Technical Reports Server (NTRS)
Billingsley, F.
1982-01-01
Concerns are expressed about the data handling aspects of system design and about enabling technology for data handling and data analysis. The status, contributing factors, critical issues, and recommendations for investigations are listed for data handling, rectification and registration, and information extraction. Potential support to individual principal investigators, research tasks, systematic data system design, and system operation is also identified. The need for an airborne spectrometer-class instrument for fundamental research in high spectral and spatial resolution is indicated. Geographic information system formatting and labelling techniques, very large scale integration, and methods for providing multitype data sets must also be developed.
Handling qualities criteria for the space shuttle orbiter during the terminal phase of flight
NASA Technical Reports Server (NTRS)
Stapleford, R. L.; Klein, R. H.; Hob, R. H.
1972-01-01
It was found that large portions of the military handling qualities specification are directly applicable. However a number of additional and substitute criteria are recommended for areas not covered or inadequately covered in the military specification. Supporting pilot/vehicle analyses and simulation experiments were conducted and are described. Results are also presented of analytical and simulator evaluations of three specific interim Orbiter designs which provided a test of the proposed handling qualities criteria. The correlations between the analytical and experimental evaluations were generally excellent.
Comfortable, high-efficiency heat pump with desiccant-coated, water-sorbing heat exchangers
Tu, Y. D.; Wang, R. Z.; Ge, T. S.; Zheng, X.
2017-01-01
Comfortable, efficient, and affordable heating, ventilation, and air conditioning systems in buildings are highly desirable due to the demands of energy efficiency and environmental friendliness. Traditional vapor-compression air conditioners exhibit a lower coefficient of performance (COP) (typically 2.8–3.8) owing to the cooling-based dehumidification methods that handle both sensible and latent loads together. Temperature- and humidity-independent control or desiccant systems have been proposed to overcome these challenges; however, the COP of current desiccant systems is quite small and additional heat sources are usually needed. Here, we report on a desiccant-enhanced, direct expansion heat pump based on a water-sorbing heat exchanger with a desiccant coating that exhibits an ultrahigh COP value of more than 7 without sacrificing any comfort or compactness. The pump’s efficiency is doubled compared to that of pumps currently used in conventional room air conditioners, which is a revolutionary HVAC breakthrough. Our proposed water-sorbing heat exchanger can independently handle sensible and latent loads at the same time. The desiccants adsorb moisture almost isothermally and can be regenerated by condensation heat. This new approach opens up the possibility of achieving ultrahigh efficiency for a broad range of temperature- and humidity-control applications. PMID:28079171
Effect of bit wear on hammer drill handle vibration and productivity.
Antonucci, Andrea; Barr, Alan; Martin, Bernard; Rempel, David
2017-08-01
The use of large electric hammer drills exposes construction workers to high levels of hand vibration that may lead to hand-arm vibration syndrome and other musculoskeletal disorders. The aim of this laboratory study was to investigate the effect of bit wear on drill handle vibration and drilling productivity (e.g., drilling time per hole). A laboratory test bench system was used with an 8.3 kg electric hammer drill and 1.9 cm concrete bit (a typical drill and bit used in commercial construction). The system automatically advanced the active drill into aged concrete block under feed force control to a depth of 7.6 cm while handle vibration was measured according to ISO standards (ISO 5349 and 28927). Bits were worn to 4 levels by consecutive hole drilling to 4 cumulative drilling depths: 0, 1,900, 5,700, and 7,600 cm. Z-axis handle vibration increased significantly (p<0.05) from 4.8 to 5.1 m/s² (ISO weighted) and from 42.7 to 47.6 m/s² (unweighted) when comparing a new bit to a bit worn to 1,900 cm of cumulative drilling depth. Handle vibration did not increase further with bits worn beyond 1,900 cm of cumulative drilling depth. Neither x- nor y-axis handle vibration was affected by bit wear. The time to drill a hole increased by 58% for the bit with 5,700 cm of cumulative drilling depth compared to a new bit. Bit wear led to a small but significant increase in both ISO weighted and unweighted z-axis handle vibration. Perhaps more important, bit wear had a large effect on productivity. The effect on productivity will influence a worker's allowable daily drilling time if exposure to drill handle vibration is near the ACGIH Threshold Limit Value. Construction contractors should implement a bit replacement program based on these findings.
User interface for ground-water modeling: Arcview extension
Tsou, Ming‐shu; Whittemore, Donald O.
2001-01-01
Numerical simulation for ground-water modeling often involves handling large input and output data sets. A geographic information system (GIS) provides an integrated platform to manage, analyze, and display disparate data and can greatly facilitate modeling efforts in data compilation, model calibration, and display of model parameters and results. Furthermore, GIS can be used to generate information for decision making through spatial overlay and processing of model results. ArcView is the most widely used Windows-based GIS software that provides a robust user-friendly interface to facilitate data handling and display. An extension is an add-on program to ArcView that provides additional specialized functions. An ArcView interface for the ground-water flow and transport models MODFLOW and MT3D was built as an extension for facilitating modeling. The extension includes preprocessing of spatially distributed (point, line, and polygon) data for model input and postprocessing of model output. An object database is used for linking user dialogs and model input files. The ArcView interface utilizes the capabilities of the 3D Analyst extension. Models can be automatically calibrated through the ArcView interface by external linking to such programs as PEST. The efficient pre- and postprocessing capabilities and calibration link were demonstrated for ground-water modeling in southwest Kansas.
Building a mechanistic understanding of predation with GPS-based movement data.
Merrill, Evelyn; Sand, Håkan; Zimmermann, Barbara; McPhee, Heather; Webb, Nathan; Hebblewhite, Mark; Wabakken, Petter; Frair, Jacqueline L
2010-07-27
Quantifying kill rates and sources of variation in kill rates remains an important challenge in linking predators to their prey. We address current approaches to using global positioning system (GPS)-based movement data for quantifying key predation components of large carnivores. We review approaches to identify kill sites from GPS movement data as a means to estimate kill rates and address advantages of using GPS-based data over past approaches. Despite considerable progress, modelling the probability that a cluster of GPS points is a kill site is no substitute for field visits, but can guide our field efforts. Once kill sites are identified, time spent at a kill site (handling time) and time between kills (killing time) can be determined. We show how statistical models can be used to investigate the influence of factors such as animal characteristics (e.g. age, sex, group size) and landscape features on either handling time or killing efficiency. If we know the prey densities along paths to a kill, we can quantify the 'attack success' parameter in functional response models directly. Problems remain in incorporating the behavioural complexity derived from GPS movement paths into functional response models, particularly in multi-prey systems, but we believe that exploring the details of GPS movement data has put us on the right path.
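The abstract does not give a specific clustering algorithm; as a deliberately simple, hypothetical sketch of the underlying idea (sequential GPS fixes grouped into spatial clusters, with cluster duration as a proxy for handling time and gaps between clusters as killing time), one might write something like the following, where the distance and duration thresholds are purely illustrative:

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class Fix:
    t: float   # hours since start of monitoring
    x: float   # easting, metres
    y: float   # northing, metres

def cluster_fixes(fixes, radius_m=100.0, min_hours=4.0):
    """Group consecutive GPS fixes into clusters of restricted movement and return
    (start, end) times of clusters long enough to be candidate kill sites."""
    clusters, current = [], [fixes[0]]
    for fix in fixes[1:]:
        cx = sum(f.x for f in current) / len(current)
        cy = sum(f.y for f in current) / len(current)
        if hypot(fix.x - cx, fix.y - cy) <= radius_m:
            current.append(fix)           # still within the cluster radius
        else:
            clusters.append(current)
            current = [fix]
    clusters.append(current)
    return [(c[0].t, c[-1].t) for c in clusters if c[-1].t - c[0].t >= min_hours]

# Toy track: the animal travels, stays put from hour 5 to 12, then travels again.
fixes = [Fix(t, 200.0 * t if t < 5 or t > 12 else 1000.0, 0.0) for t in range(24)]
sites = cluster_fixes(fixes)
handling = [end - start for start, end in sites]                       # time at kill sites
killing = [sites[i + 1][0] - sites[i][1] for i in range(len(sites) - 1)]  # time between kills
print(sites, handling, killing)
```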
Simple and Efficient Numerical Evaluation of Near-Hypersingular Integrals
NASA Technical Reports Server (NTRS)
Fink, Patrick W.; Wilton, Donald R.; Khayat, Michael A.
2007-01-01
Recently, significant progress has been made in the handling of singular and nearly-singular potential integrals that commonly arise in the Boundary Element Method (BEM). To facilitate object-oriented programming and the handling of higher order basis functions, cancellation techniques are favored over techniques involving singularity subtraction. However, gradients of the Newton-type potentials, which produce hypersingular kernels, are also frequently required in BEM formulations. As is the case with the potentials, treatment of the near-hypersingular integrals has proven more challenging than treating the limiting case in which the observation point approaches the surface. Historically, numerical evaluation of these near-hypersingularities has often involved a two-step procedure: a singularity subtraction to reduce the order of the singularity, followed by a boundary contour integral evaluation of the extracted part. Since this evaluation necessarily links the basis function, the Green's function, and the integration domain (element shape), the approach fits poorly with object-oriented programming concepts. Thus, there is a need for cancellation-type techniques for efficient numerical evaluation of the gradient of the potential. Progress in the development of efficient cancellation-type procedures for the gradient potentials was recently presented. To the extent possible, a change of variables is chosen such that the Jacobian of the transformation cancels the singularity. However, since the gradient kernel involves singularities of different orders, we also require that the transformation leaves remaining terms that are analytic. The terms "normal" and "tangential" are used herein with reference to the source element. Also, since computational formulations often involve the numerical evaluation of both potentials and their gradients, it is highly desirable that a single integration procedure efficiently handle both.
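The cancellation idea can be made concrete with the textbook example of a weakly singular kernel; this is the standard polar-coordinate transformation, shown only to illustrate the principle, not the specific gradient-kernel transformation developed by the authors:

```latex
% Weakly singular potential integral over a source element S, with the
% observation point a distance d above its projection: R = \sqrt{\rho^2 + d^2}.
\[
  I \;=\; \int_{S} \frac{f(\mathbf{r}')}{R}\, dA'
  \;=\; \int_{0}^{2\pi}\!\!\int_{0}^{\rho_{\max}(\phi)}
        \frac{f(\rho,\phi)}{\sqrt{\rho^{2}+d^{2}}}\;\rho \, d\rho \, d\phi .
\]
% The Jacobian \rho of the polar transformation cancels the 1/R behaviour as
% \rho \to 0, so the integrand stays bounded and ordinary quadrature applies.
% Gradient (near-hypersingular) kernels behave like 1/R^2 or 1/R^3, so a
% transformation must supply a correspondingly stronger Jacobian while leaving
% the remaining terms analytic, which is the requirement discussed above.
```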
Borazjani, Iman; Ge, Liang; Le, Trung; Sotiropoulos, Fotis
2013-01-01
We develop an overset-curvilinear immersed boundary (overset-CURVIB) method in a general non-inertial frame of reference to simulate a wide range of challenging biological flow problems. The method incorporates overset-curvilinear grids to efficiently handle multi-connected geometries and increase the resolution locally near immersed boundaries. Complex bodies undergoing arbitrarily large deformations may be embedded within the overset-curvilinear background grid and treated as sharp interfaces using the curvilinear immersed boundary (CURVIB) method (Ge and Sotiropoulos, Journal of Computational Physics, 2007). The incompressible flow equations are formulated in a general non-inertial frame of reference to enhance the overall versatility and efficiency of the numerical approach. Efficient search algorithms to identify areas requiring blanking, donor cells, and interpolation coefficients for constructing the boundary conditions at grid interfaces of the overset grid are developed and implemented using efficient parallel computing communication strategies to transfer information among sub-domains. The governing equations are discretized using a second-order accurate finite-volume approach and integrated in time via an efficient fractional-step method. Various strategies for ensuring globally conservative interpolation at grid interfaces suitable for incompressible flow fractional step methods are implemented and evaluated. The method is verified and validated against experimental data, and its capabilities are demonstrated by simulating the flow past multiple aquatic swimmers and the systolic flow in an anatomic left ventricle with a mechanical heart valve implanted in the aortic position. PMID:23833331
Novel Long Stroke Reciprocating Compressor for Energy Efficient Jaggery Making
NASA Astrophysics Data System (ADS)
Rane, M. V.; Uphade, D. B.
2017-08-01
A novel long stroke reciprocating compressor is analysed for jaggery making that avoids burning bagasse to concentrate the juice. The heat of the evaporated water vapour, along with a small compressor work input, is recycled to enable boiling of the juice. The condensate formed during heating of the juice is pure water, as an oil-less compressor is used. Compressor superheat is suppressed by passing the superheated vapours through the condensate. This limits the heating surface temperature, avoids caramelization of sugar, improves the quality of the jaggery, and eliminates the need for chemicals for colour improvement. The stroke to bore ratio is 0.6 to 1.2 in conventional reciprocating drives. A long stroke in reciprocating compressors enhances heat dissipation to the surroundings by providing a large surface area and increases isentropic efficiency by reducing the compressor outlet temperature. A longer stroke increases inlet and exit valve operation timings, which reduces inertial effects substantially and allows the use of sturdier valves; this enables the compressor to handle liquid along with vapour, thereby suppressing the superheat and reducing compressor power input. A longer stroke also increases the stroke to clearance ratio, which increases volumetric efficiency and the ability of the compressor to compress efficiently through higher pressure ratios. Stress-strain simulation of the gear drive is performed in SolidWorks. The long stroke reciprocating compressor was developed at the Heat Pump Laboratory with a stroke/bore of 292 mm/32 mm and was operated and tested successfully at different speeds for operational stability of components. Theoretical volumetric efficiency is 93.9% at a pressure ratio of 2.0. Specific energy consumption is 108.3 kWhe/m³ of separated water, considering free-run power.
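The link between stroke-to-clearance ratio and volumetric efficiency can be seen from the standard clearance-volume relation for reciprocating compressors; this is a textbook formula quoted for context, and the 93.9% figure above comes from the authors' own analysis, not from these symbols:

```latex
% Ideal clearance volumetric efficiency of a reciprocating compressor:
%   c = clearance volume / swept volume,  r = p_out / p_in,  n = polytropic index.
\[
  \eta_{v} \;=\; 1 + c - c\, r^{1/n}
\]
% Increasing the stroke for a fixed clearance gap reduces c, so \eta_v stays high
% even at larger pressure ratios r, which is the effect exploited by the
% long-stroke design described above.
```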
Development and experimental evaluation of an alarm concept for an integrated surgical workstation.
Zeißig, Eva-Maria; Janß, Armin; Dell'Anna-Pudlik, Jasmin; Ziefle, Martina; Radermacher, Klaus
2016-04-01
Alarm conditions of the technical equipment in operating rooms represent a prevalent cause for interruptions of surgeons and scrub nurses, resulting in an increase of workload and potential reduction of patient safety. In this work, an alarm concept for an integrated operating room system based on open communication standards is developed and tested. In a laboratory experiment, the reactions of surgeons were analysed, comparing the displaying of alarms on an integrated workstation and on single devices: disruptive effects of alarm handling on primary task (ratings of perceived distraction, resumption lag, deterioration of speed, accuracy, and prospective memory), efficiency and effectiveness of identification of alarms, as well as perceived workload were included. The identification of the alarm cause is significantly more efficient and effective with the integrated alarm concept. Moreover, a slightly lower deterioration of performance of the primary task due to the interruption of alarm handling was observed. Displaying alarms on an integrated workstation supports alarm handling and consequently reduces disruptive effects on the primary task. The findings show that even small changes can reduce workload in a complex work environment like the operating room, resulting in improved patient safety.
Efficiency Evaluation of Handling of Geologic-Geophysical Information by Means of Computer Systems
NASA Astrophysics Data System (ADS)
Nuriyahmetova, S. M.; Demyanova, O. V.; Zabirova, L. M.; Gataullin, I. I.; Fathutdinova, O. A.; Kaptelinina, E. A.
2018-05-01
Development of oil and gas resources under difficult geological, geographical, and economic conditions requires considerable financial outlays; careful justification of these costs and the application of the most promising directions and modern technologies are therefore necessary from the standpoint of the cost efficiency of planned activities. Ensuring high precision of regional and local forecasts and modeling of hydrocarbon reservoirs requires analysing huge arrays of distributed information that are constantly changing spatially. Solving this task requires modern remote methods for studying prospective oil-and-gas territories, combined use of remote, non-destructive geologic-geophysical and space-based Earth sounding methods, and the most advanced technologies for handling their data. In the article, the authors consider the experience of Russian and foreign companies in handling geologic-geophysical information by means of computer systems. They conclude that multidimensional analysis of the geologic-geophysical information space and effective planning and monitoring of exploration work require broad use of geoinformation technologies, as one of the most promising directions for achieving high profitability in the oil and gas industry.
NASA Technical Reports Server (NTRS)
Lew, Jae Young; Book, Wayne J.
1991-01-01
Remote handling in nuclear waste management requires a robotic system with precise motion as well as a large workspace. The concept of a small arm mounted on the end of a large arm may satisfy such needs. However, such a serial configuration lacks payload capacity, which is a crucial factor for handling a massive object, and it induces more flexibility in the structure. To overcome these problems, the topology of bracing the tip of the small arm (not the large arm) and having an end effector in the middle of the chain is proposed in this paper. Control of these cooperating disparate manipulators is also demonstrated in computer simulations. Thus, this robotic system can have the accuracy of the small arm, and at the same time, it can have the payload capacity and large workspace of the large arm.
Regulation of a lightweight high efficiency capacitor diode voltage multiplier dc-dc converter
NASA Technical Reports Server (NTRS)
Harrigill, W. T., Jr.; Myers, I. T.
1976-01-01
A method for the regulation of a capacitor diode voltage multiplier dc-dc converter has been developed which has only minor penalties in weight and efficiency. An auxiliary inductor, which handles only a fraction of the total power, is used to control the output voltage through a pulse width modulation method in a buck-boost circuit.
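For context, the steady-state relation that makes pulse-width modulation effective in an ideal buck-boost stage is the standard continuous-conduction-mode conversion ratio; this is a textbook relation, not a detail taken from the report:

```latex
% Ideal buck-boost converter in continuous conduction mode, duty cycle D:
\[
  \frac{|V_{\mathrm{out}}|}{V_{\mathrm{in}}} \;=\; \frac{D}{1 - D},
  \qquad 0 < D < 1 .
\]
% Modulating D therefore regulates the output voltage while the auxiliary
% inductor processes only the fraction of power flowing through this stage.
```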
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coffey, D. E.
2002-02-28
High Efficiency Particulate Air filtration is an essential component of the containment and ventilation systems supporting the research and development activities at the Oak Ridge National Laboratory. High Efficiency Particulate Air filters range in size from 7.6 cm (3 inch) by 10.2 cm (4 inch) cylindrical filters to filter array assemblies up to 2.1 m (7 feet) high by 1.5 m (5 feet) wide. Spent filters are grouped by the contaminants trapped in the filter media and become one of the components in the respective waste stream. Waste minimization and pollution prevention efforts are applied for both radiological and non-radiological applications. Radiological applications include laboratory hoods, glove boxes, and hot cells. High Efficiency Particulate Air filters are also generated from intake or pre-filtering applications, decontamination activities, and asbestos abatement applications. The disposal avenues include sanitary/industrial waste; Resource Conservation and Recovery Act- and Toxic Substances Control Act-regulated waste; solid low-level waste; contact-handled transuranic waste; and remote-handled transuranic waste. This paper discusses characterization and operational experiences associated with the disposal of the spent filters across multiple applications.
An Incremental Weighted Least Squares Approach to Surface Lights Fields
NASA Astrophysics Data System (ADS)
Coombe, Greg; Lastra, Anselmo
An Image-Based Rendering (IBR) approach to appearance modelling enables the capture of a wide variety of real physical surfaces with complex reflectance behaviour. The challenges with this approach are handling the large amount of data, rendering the data efficiently, and previewing the model as it is being constructed. In this paper, we introduce the Incremental Weighted Least Squares approach to the representation and rendering of spatially and directionally varying illumination. Each surface patch consists of a set of Weighted Least Squares (WLS) node centers, which are low-degree polynomial representations of the anisotropic exitant radiance. During rendering, the representations are combined in a non-linear fashion to generate a full reconstruction of the exitant radiance. The rendering algorithm is fast, efficient, and implemented entirely on the GPU. The construction algorithm is incremental, which means that images are processed as they arrive instead of in the traditional batch fashion. This human-in-the-loop process enables the user to preview the model as it is being constructed and to adapt to over-sampling and under-sampling of the surface appearance.
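As a rough sketch of the kind of computation a Weighted Least Squares node performs, the following shows a generic one-dimensional moving-least-squares fit with a Gaussian weight and a weighted blend of the local reconstructions; the function names, bandwidth, and node placement are invented for illustration and are not taken from the paper, which works on directional radiance data and runs on the GPU:

```python
import numpy as np

def wls_fit_quadratic(center, xs, values, bandwidth):
    """Fit a quadratic around `center` by weighted least squares with a
    Gaussian weight; returns polynomial coefficients [c0, c1, c2]."""
    d = xs - center
    w = np.exp(-(d / bandwidth) ** 2)          # Gaussian weights
    A = np.column_stack([np.ones_like(d), d, d ** 2])
    AW = A * w[:, None]
    # Solve the weighted normal equations (A^T W A) c = A^T W y.
    coeffs, *_ = np.linalg.lstsq(AW.T @ A, AW.T @ values, rcond=None)
    return coeffs

def wls_eval(x, centers, all_coeffs, bandwidth):
    """Blend the per-node local fits with normalized Gaussian weights,
    mirroring the non-linear combination of node reconstructions."""
    w = np.exp(-((x - centers) / bandwidth) ** 2)
    w /= w.sum()
    local = np.array([c0 + c1 * (x - m) + c2 * (x - m) ** 2
                      for m, (c0, c1, c2) in zip(centers, all_coeffs)])
    return float(w @ local)

# Toy data: samples of a smooth "radiance" curve, three node centers.
xs = np.linspace(0.0, 1.0, 200)
values = np.sin(2 * np.pi * xs)
centers = np.array([0.2, 0.5, 0.8])
coeffs = [wls_fit_quadratic(m, xs, values, bandwidth=0.15) for m in centers]
print(wls_eval(0.45, centers, coeffs, bandwidth=0.15), np.sin(2 * np.pi * 0.45))
```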
Organizing Books and Authors by Multilayer SOM.
Zhang, Haijun; Chow, Tommy W S; Wu, Q M Jonathan
2016-12-01
This paper introduces a new framework for the organization of electronic books (e-books) and their corresponding authors using a multilayer self-organizing map (MLSOM). An author is modeled by a rich tree-structured representation, and an MLSOM-based system is used as an efficient solution to the organizational problem of structured data. The tree-structured representation formulates author features in a hierarchy of author biography, books, pages, and paragraphs. To efficiently tackle the tree-structured representation, we used an MLSOM algorithm that serves as a clustering technique to handle e-books and their corresponding authors. A book and author recommender system is then implemented using the proposed framework. The effectiveness of our approach was examined on a large-scale dataset containing 3,868 authors along with the 10,500 e-books that they wrote. We also provide visualization results of the MLSOM, revealing the relevance patterns hidden in the presented author clusters. The experimental results corroborate that the proposed method outperforms other content-based models (e.g., rate adapting poisson, latent Dirichlet allocation, probabilistic latent semantic indexing, and so on) and offers a promising solution to book recommendation, author recommendation, and visualization.
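The MLSOM equations are not given in this record, but the single-layer update at the heart of any SOM variant can be sketched as follows; this is a plain one-layer SOM with illustrative parameters and synthetic "author feature" vectors, not the paper's multilayer architecture:

```python
import numpy as np

def train_som(data, grid_shape=(8, 8), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal one-layer self-organizing map: returns the trained codebook
    of shape (rows, cols, n_features)."""
    rng = np.random.default_rng(seed)
    rows, cols = grid_shape
    codebook = rng.standard_normal((rows, cols, data.shape[1])) * 0.1
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)

    n_steps, step = epochs * len(data), 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            frac = step / n_steps
            lr = lr0 * (1.0 - frac)                   # decaying learning rate
            sigma = sigma0 * (1.0 - frac) + 0.5       # shrinking neighbourhood
            # Best matching unit (BMU): node whose weight vector is closest to x.
            dists = np.linalg.norm(codebook - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), (rows, cols))
            # Pull every node toward x, weighted by its grid distance to the BMU.
            g = np.exp(-np.sum((grid - np.array(bmu)) ** 2, axis=-1) / (2 * sigma ** 2))
            codebook += lr * g[..., None] * (x - codebook)
            step += 1
    return codebook

# Toy "author feature" vectors: two clusters in a 5-D feature space.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.1, (50, 5)), rng.normal(1, 0.1, (50, 5))])
print(train_som(data).shape)   # (8, 8, 5)
```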
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wegener, Dirk; Kluth, Thomas
2012-07-01
During maintenance of nuclear power plants, and during their decommissioning period, a large quantity of radioactive metallic waste will accrue. On the other hand, the capacity for final disposal of radioactive waste is limited in Germany as well as in the US. That is why all procedures related to this topic should be handled with maximum efficiency. The German model of consistently recycling radioactive metal scrap within the nuclear industry therefore also offers strong capabilities for facilities in the US. The paper gives a compact overview of the impressive results of melting treatment, the current potential, and further developments. Thousands of cubic metres of final disposal capacity have been saved. The highest level of efficiency and safety is achieved by combining general surface decontamination by blasting with nuclide-specific decontamination by melting, together with the typical effects of homogenization. It is an established process, nationally and internationally recognized, and an excellent connection between economy and ecology. (authors)
Parallel adaptive wavelet collocation method for PDEs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nejadmalayeri, Alireza, E-mail: Alireza.Nejadmalayeri@gmail.com; Vezolainen, Alexei, E-mail: Alexei.Vezolainen@Colorado.edu; Brown-Dymkoski, Eric, E-mail: Eric.Browndymkoski@Colorado.edu
2015-10-01
A parallel adaptive wavelet collocation method for solving a large class of Partial Differential Equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using a tree-like structure with tree roots starting at an a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For the dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between the processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048³ using as many as 2048 CPU cores.
Hanson, Jennifer A; Hughes, Susan M; Liu, Pei
2015-12-01
Unsafe food handling behaviors are common among consumers, and, given the venue, individuals attending a tailgating event may be at risk for foodborne illness. The objective of this study was to measure the association between Health Belief Model variables and self-reported usual food handling behaviors in a convenience sample of men and women at a tailgate event. Participants (n = 128) completed validated subscales for self-reported food handling behaviors (i.e., cross-contamination, sanitation), perceived threat of foodborne illness (i.e., perceived severity, perceived susceptibility), and safe food handling cues to action (i.e., media cues, educational cues). Perceived severity of foodborne illness was associated with safer behaviors related to sanitation (r = 0.40; P < 0.001) and cross-contamination (r = 0.33; P = 0.001). Perceived severity of foodborne illness was also associated with exposure to safe food handling media cues (r = 0.20; P = 0.027) but not with safe food handling educational cues. A large proportion of participants reported that they never or seldom (i) read newspaper or magazine articles about foodborne illness (65.6%); (ii) read brochures about safe ways to handle food (61.7%); (iii) see store displays that explain ways to handle food (51.6%); or (iv) read the "safe handling instructions" on packages of raw meat and poultry (46.9%). Perceived severity of foodborne illness was positively related to both dimensions of safe food handling as well as with safe food handling media cues. Except for the weak correlation between media cues and perceived severity, the relationships between safe food handling cues and perceived threat, as well as between safe food handling cues and behaviors, were nonsignificant. This finding may be due, in part, to the participants' overall low exposure to safe food handling cues. The overall results of this study reinforce the postulate that perceived severity of foodborne illness may influence food handling behaviors.
Automatic liquid handling for life science: a critical review of the current state of the art.
Kong, Fanwei; Yuan, Liang; Zheng, Yuan F; Chen, Weidong
2012-06-01
Liquid handling plays a pivotal role in life science laboratories. In experiments such as gene sequencing, protein crystallization, antibody testing, and drug screening, liquid biosamples frequently must be transferred between containers of varying sizes and/or dispensed onto substrates of varying types. The sample volumes are usually small, at the micro- or nanoliter level, and the number of transferred samples can be huge when investigating large-scope combinatorial conditions. Under these conditions, liquid handling by hand is tedious, time-consuming, and impractical. Consequently, there is a strong demand for automated liquid-handling methods such as sensor-integrated robotic systems. In this article, we survey the current state of the art in automatic liquid handling, including technologies developed by both industry and research institutions. We focus on methods for dealing with small volumes at high throughput and point out challenges for future advancements.
Evaluation in industry of a draft code of practice for manual handling.
Ashby, Liz; Tappin, David; Bentley, Tim
2004-05-01
This paper reports findings from a study which evaluated the draft New Zealand Code of Practice for Manual Handling. The evaluation assessed the ease of use, applicability and validity of the Code and in particular the associated manual handling hazard assessment tools, within New Zealand industry. The Code was studied in a sample of eight companies from four sectors of industry. Subjective feedback and objective findings indicated that the Code was useful, applicable and informative. The manual handling hazard assessment tools incorporated in the Code could be adequately applied by most users, with risk assessment outcomes largely consistent with the findings of researchers using more specific ergonomics methodologies. However, some changes were recommended to the risk assessment tools to improve usability and validity. The evaluation concluded that both the Code and the tools within it would benefit from simplification, improved typography and layout, and industry-specific information on manual handling hazards.
A novel stone retrieval basket for more efficient lithotripsy procedures.
Salimi, N; Mahajan, A; Don, J; Schwartz, B
2009-01-01
This paper presents the development of an improved stone retrieval device that uses a newly designed cage of Nitinol wires encompassing a mesh basket made of a material that is laser resistant. Current methods to extract large stones involve imaging, using a laser to fragment the stones and then using existing cage-like baskets to trap the fragments individually and extracting them one at a time. These procedures are tedious, and may result in leaving some fragments behind that can reform causing the need for another procedure. The device presented in this paper will have a mesh-like sack which will consist of a laser resistant material of polytetrafluoroethylene (PTFE) enclosed within a newly designed Nitinol cage. Two alternate designs are provided for the cage in this paper. The handle of the device is revised to allow for a 3 Fr (1 mm) opening such that a laser's fiber optic cable can enter the device. Using this device a laser can be used to fragment the stone, and all the fragments are retained in the basket in both the design options. The basket can then be retracted allowing for the retrieval of all the fragments in one shot. The stone retrieval basket presented in this paper will significantly improve the efficiency and effectiveness of lithotripsy procedures for removal of large kidney and biliary tract stones.
Schäfer, Christian; Schmidt, Alexander H; Sauter, Jürgen
2017-05-30
Knowledge of HLA haplotypes is helpful in many settings, such as disease association studies, population genetics, or hematopoietic stem cell transplantation. Regarding the recruitment of unrelated hematopoietic stem cell donors, HLA haplotype frequencies of specific populations are used to optimize both donor searches for individual patients and strategic donor registry planning. However, the estimation of haplotype frequencies from HLA genotyping data is challenged by the large amount of genotype data, the complex HLA nomenclature, and the heterogeneous and ambiguous nature of typing records. To meet these challenges, we have developed the open-source software Hapl-o-Mat. It estimates haplotype frequencies from population data including an arbitrary number of loci using an expectation-maximization algorithm. Its key features are the processing of different HLA typing resolutions within a given population sample and the handling of ambiguities recorded via multiple allele codes or genotype list strings. Implemented in C++, Hapl-o-Mat facilitates efficient haplotype frequency estimation from large amounts of genotype data. We demonstrate its accuracy and performance on the basis of artificial and real genotype data. Hapl-o-Mat is a versatile and efficient software for HLA haplotype frequency estimation. Its capability of processing various forms of HLA genotype data allows for straightforward haplotype frequency estimation from typing records usually found in stem cell donor registries.
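Hapl-o-Mat itself is a C++ program that handles real HLA nomenclature and ambiguity codes; a stripped-down Python sketch of the underlying expectation-maximization step, assuming two loci and unambiguous genotypes with invented allele names, might look like this:

```python
from collections import defaultdict
from itertools import product

def haplotype_pairs(genotype):
    """All haplotype pairs consistent with an unphased two-locus genotype,
    e.g. (('A', 'a'), ('B', 'b'))."""
    (a1, a2), (b1, b2) = genotype
    pairs = set()
    for x, y in product([a1, a2], [b1, b2]):
        h1 = (x, y)
        h2 = (a1 if x == a2 else a2, b1 if y == b2 else b2)   # complementary haplotype
        pairs.add(tuple(sorted([h1, h2])))
    return list(pairs)

def em_haplotype_freqs(genotypes, n_iter=50):
    """Expectation-maximization estimate of haplotype frequencies."""
    explanations = [haplotype_pairs(g) for g in genotypes]
    haps = sorted({h for ex in explanations for pair in ex for h in pair})
    freqs = {h: 1.0 / len(haps) for h in haps}                # uniform start
    for _ in range(n_iter):
        counts = defaultdict(float)
        for ex in explanations:
            # E-step: posterior weight of each consistent haplotype pair (HWE).
            w = [(2 if h1 != h2 else 1) * freqs[h1] * freqs[h2] for h1, h2 in ex]
            total = sum(w)
            for (h1, h2), wi in zip(ex, w):
                counts[h1] += wi / total
                counts[h2] += wi / total
        # M-step: renormalize expected haplotype counts.
        freqs = {h: counts[h] / (2 * len(genotypes)) for h in haps}
    return freqs

# Toy sample: genotypes at two loci with alleles {A, a} and {B, b}.
sample = [(("A", "A"), ("B", "B")), (("A", "a"), ("B", "b")), (("a", "a"), ("b", "b"))]
print(em_haplotype_freqs(sample))
```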
Tamosiunaite, Minija; Asfour, Tamim; Wörgötter, Florentin
2009-03-01
Reinforcement learning methods can be used in robotics applications, especially for specific target-oriented problems, for example the reward-based recalibration of goal-directed actions. To this end, relatively large and continuous state-action spaces still need to be handled efficiently. The goal of this paper is, thus, to develop a novel, rather simple method which uses reinforcement learning with function approximation in conjunction with different reward strategies for solving such problems. For the testing of our method, we use a four degree-of-freedom reaching problem in 3D space simulated by a two-joint robot arm system with two DOF each. Function approximation is based on 4D, overlapping kernels (receptive fields), and the state-action space contains about 10,000 of these. Different types of reward structures are compared, for example, reward-on-touching-only against reward-on-approach. Furthermore, forbidden joint configurations are punished. A continuous action space is used. In spite of a rather large number of states and the continuous action space, these reward/punishment strategies allow the system to find a good solution usually within about 20 trials. The efficiency of our method demonstrated in this test scenario suggests that it might be possible to use it on a real robot for problems where mixed rewards can be defined in situations where other types of learning might be difficult.
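The paper's set-up (four DOF, roughly 10,000 overlapping 4-D kernels, continuous actions) cannot be reproduced from the abstract; the sketch below combines the same ingredients, linear function approximation over Gaussian receptive fields plus a reward-on-approach signal, on a toy 2-D reaching task with discrete actions. All parameters, the goal location, and the grid of centers are illustrative assumptions, not the authors' values:

```python
import numpy as np

rng = np.random.default_rng(0)
ACTIONS = np.array([[0.05, 0], [-0.05, 0], [0, 0.05], [0, -0.05]])    # 4 discrete moves
centers = np.stack(np.meshgrid(np.linspace(0, 1, 7), np.linspace(0, 1, 7)),
                   axis=-1).reshape(-1, 2)                            # 49 receptive-field centers
SIGMA, ALPHA, GAMMA, EPS = 0.15, 0.1, 0.95, 0.1
GOAL = np.array([0.8, 0.8])

def features(state):
    """Overlapping Gaussian receptive fields, normalized to sum to one."""
    phi = np.exp(-np.sum((centers - state) ** 2, axis=1) / (2 * SIGMA ** 2))
    return phi / phi.sum()

weights = np.zeros((len(ACTIONS), len(centers)))     # Q(s, a) = weights[a] . phi(s)

for episode in range(300):
    state = rng.uniform(0, 1, size=2)
    for _ in range(100):
        phi = features(state)
        a = rng.integers(len(ACTIONS)) if rng.random() < EPS else int(np.argmax(weights @ phi))
        new_state = np.clip(state + ACTIONS[a], 0, 1)
        # Reward-on-approach: positive whenever the step reduces distance to the goal.
        reward = np.linalg.norm(state - GOAL) - np.linalg.norm(new_state - GOAL)
        done = np.linalg.norm(new_state - GOAL) < 0.05
        if done:
            reward += 1.0                             # reward-on-touching bonus
        td_target = reward + (0.0 if done else GAMMA * np.max(weights @ features(new_state)))
        weights[a] += ALPHA * (td_target - weights[a] @ phi) * phi    # TD update
        state = new_state
        if done:
            break

print(int(np.argmax(weights @ features(np.array([0.5, 0.5])))))   # preferred action near the centre
```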
Parareal in time 3D numerical solver for the LWR Benchmark neutron diffusion transient model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baudron, Anne-Marie, E-mail: anne-marie.baudron@cea.fr; CEA-DRN/DMT/SERMA, CEN-Saclay, 91191 Gif sur Yvette Cedex; Lautard, Jean-Jacques, E-mail: jean-jacques.lautard@cea.fr
2014-12-15
In this paper we present a time-parallel algorithm for the 3D neutron calculation of a transient model in a nuclear reactor core. The neutron calculation consists in numerically solving the time-dependent diffusion approximation equation, which is a simplified transport equation. The numerical resolution is done with a finite element method based on a tetrahedral meshing of the computational domain, representing the reactor core, and time discretization is achieved using a θ-scheme. The transient model presents moving control rods during the time of the reaction. Therefore, cross-sections (piecewise constants) are taken into account by interpolations with respect to the velocity of the control rods. Parallelism across time is achieved by an adequate application of the parareal in time algorithm to the problem at hand. This parallel method is a predictor-corrector scheme that iteratively combines the use of two kinds of numerical propagators, one coarse and one fine. Our method is made efficient by means of a coarse solver defined with a large time step and a fixed-position control rod model, while the fine propagator is assumed to be a high-order numerical approximation of the full model. The parallel implementation of our method provides good scalability of the algorithm. Numerical results show the efficiency of the parareal method on a large light water reactor transient model corresponding to the Langenbuch–Maurer–Werner benchmark.
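The parareal predictor-corrector described above follows the standard iteration U^{k+1}_{n+1} = G(U^{k+1}_n) + F(U^k_n) − G(U^k_n). A serial toy sketch on a scalar decay equation, with coarse explicit Euler and a fine propagator made of many small Euler steps, stands in for the reactor solver here purely as an illustration of the scheme:

```python
import numpy as np

LAM = -2.0                      # dy/dt = LAM * y, y(0) = 1  (stand-in for the PDE)
T, N = 2.0, 20                  # time horizon split into N coarse slices
dt = T / N

def coarse(y, dt):
    """Cheap propagator G: one explicit Euler step over the slice."""
    return y + dt * LAM * y

def fine(y, dt, substeps=100):
    """Accurate propagator F: many small Euler steps over the same slice."""
    h = dt / substeps
    for _ in range(substeps):
        y = y + h * LAM * y
    return y

# Initial (k = 0) prediction from the coarse solver alone.
U = np.empty(N + 1)
U[0] = 1.0
for n in range(N):
    U[n + 1] = coarse(U[n], dt)

for k in range(5):              # parareal corrections
    F_prev = np.array([fine(U[n], dt) for n in range(N)])     # parallelizable across time slices
    G_prev = np.array([coarse(U[n], dt) for n in range(N)])
    U_new = np.empty_like(U)
    U_new[0] = U[0]
    for n in range(N):          # cheap sequential correction sweep
        U_new[n + 1] = coarse(U_new[n], dt) + F_prev[n] - G_prev[n]
    U = U_new

print(U[-1], np.exp(LAM * T))   # parareal result vs exact solution
```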
Preliminary study of a large span-distributed-load flying-wing cargo airplane concept
NASA Technical Reports Server (NTRS)
Jernell, L. S.
1978-01-01
An aircraft capable of transporting containerized cargo over intercontinental distances is analyzed. The specifications for payload weight, density, and dimensions in essence configure the wing and establish unusually low values of wing loading and aspect ratio. The structural weight comprises only about 18 percent of the design maximum gross weight. Although the geometric aspect ratio is 4.53, the winglet effect of the wing-tip-mounted vertical tails increases the effective aspect ratio to approximately 7.9. Sufficient control power to handle the large rolling moment of inertia dictates a relatively high minimum approach velocity of 315 km/hr (170 knots). The airplane has acceptable spiral, Dutch roll, and roll-damping modes. A hardened stability augmentation system is required. The most significant noise source is the airframe. However, for both takeoff and approach, the levels are below the FAR-36 limit of 108 dB. The design mission fuel efficiency is approximately 50 percent greater than that of the most advanced, currently operational, large freighter aircraft. The direct operating cost is significantly lower than that of current freighters, the advantage increasing as fuel price increases.
Massively parallel support for a case-based planning system
NASA Technical Reports Server (NTRS)
Kettler, Brian P.; Hendler, James A.; Anderson, William A.
1993-01-01
Case-based planning (CBP), a kind of case-based reasoning, is a technique in which previously generated plans (cases) are stored in memory and can be reused to solve similar planning problems in the future. CBP can save considerable time over generative planning, in which a new plan is produced from scratch. CBP thus offers a potential (heuristic) mechanism for handling intractable problems. One drawback of CBP systems has been the need for a highly structured memory to reduce retrieval times. This approach requires significant domain engineering and complex memory indexing schemes to make these planners efficient. In contrast, our CBP system, CaPER, uses a massively parallel frame-based AI language (PARKA) and can do extremely fast retrieval of complex cases from a large, unindexed memory. The ability to do fast, frequent retrievals has many advantages: indexing is unnecessary; very large case bases can be used; memory can be probed in numerous alternate ways; and queries can be made at several levels, allowing more specific retrieval of stored plans that better fit the target problem with less adaptation. In this paper we describe CaPER's case retrieval techniques and some experimental results showing its good performance, even on large case bases.
The Effect of Yaw Coupling in Turning Maneuvers of Large Transport Aircraft
NASA Technical Reports Server (NTRS)
McNeill, Walter E.; Innis, Robert C.
1965-01-01
A study has been made, using a piloted moving simulator, of the effects of the yaw-coupling parameters N_p and N_δa on the lateral-directional handling qualities of a large transport airplane at landing-approach airspeed. It is shown that the desirable combinations of these parameters tend to be more proverse when compared with values typical of current aircraft. Results of flight tests in a large variable-stability jet transport showed trends which were similar to those of the simulator data. Areas of minor disagreement, which were traced to differences in airplane geometry, indicate that pilot consciousness of side acceleration forces can be an important factor in handling qualities of future long-nosed transport aircraft.
Monroe, J Grey; Allen, Zachariah A; Tanger, Paul; Mullen, Jack L; Lovell, John T; Moyers, Brook T; Whitley, Darrell; McKay, John K
2017-01-01
Recent advances in nucleic acid sequencing technologies have led to a dramatic increase in the number of markers available to generate genetic linkage maps. This increased marker density can be used to improve genome assemblies as well as add much needed resolution for loci controlling variation in ecologically and agriculturally important traits. However, traditional genetic map construction methods from these large marker datasets can be computationally prohibitive and highly error prone. We present TSPmap, a method which implements both approximate and exact Traveling Salesperson Problem solvers to generate linkage maps. We demonstrate that for datasets with large numbers of genomic markers (e.g. 10,000) and in multiple population types generated from inbred parents, TSPmap can rapidly produce high quality linkage maps with low sensitivity to missing and erroneous genotyping data compared to two other benchmark methods, JoinMap and MSTmap. TSPmap is open source and freely available as an R package. With the advancement of low cost sequencing technologies, the number of markers used in the generation of genetic maps is expected to continue to rise. TSPmap will be a useful tool to handle such large datasets into the future, quickly producing high quality maps using a large number of genomic markers.
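TSPmap itself is an R package wrapping exact and approximate TSP solvers; the idea of turning marker ordering into a travelling-salesperson problem can be sketched independently with a simple nearest-neighbour heuristic on a pairwise distance matrix. Everything below (the mismatch-frequency distance, the simulated population, the thresholds) is an illustrative assumption and far weaker than the solvers the package actually uses:

```python
import numpy as np

def pairwise_distance(genotypes):
    """Mismatch-frequency distances between biallelic (0/1) markers scored on a set
    of inbred lines: a crude stand-in for recombination fractions."""
    n_ind, n_mark = genotypes.shape
    d = np.zeros((n_mark, n_mark))
    for i in range(n_mark):
        for j in range(i + 1, n_mark):
            d[i, j] = d[j, i] = np.mean(genotypes[:, i] != genotypes[:, j])
    return d

def nearest_neighbour_order(dist):
    """Greedy open-path TSP heuristic: start at marker 0 and repeatedly visit the
    nearest unvisited marker."""
    n = dist.shape[0]
    order, visited = [0], {0}
    while len(order) < n:
        last = order[-1]
        nxt = min((m for m in range(n) if m not in visited), key=lambda m: dist[last, m])
        order.append(nxt)
        visited.add(nxt)
    return order

# Simulate 200 inbred lines and 30 ordered markers on one chromosome, shuffle the
# marker columns, and try to recover the true order from the data alone.
rng = np.random.default_rng(0)
geno = np.zeros((200, 30), dtype=int)
geno[:, 0] = rng.integers(0, 2, 200)
for m in range(1, 30):
    cross = rng.random(200) < 0.08               # ~8% recombination per interval
    geno[:, m] = np.where(cross, 1 - geno[:, m - 1], geno[:, m - 1])
perm = rng.permutation(30)
order = nearest_neighbour_order(pairwise_distance(geno[:, perm]))
print([int(perm[i]) for i in order])             # neighbouring entries should mostly differ by 1
```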
NASA Technical Reports Server (NTRS)
White, D. R.
1976-01-01
A high-vacuum complex composed of an atmospheric decontamination system, sample-processing chambers, storage chambers, and a transfer system was built to process and examine lunar material while maintaining quarantine status. Problems identified, equipment modifications, and procedure changes made for Apollo 11 and 12 sample processing are presented. The sample processing experiences indicate that only a few operating personnel are required to process the sample efficiently, safely, and rapidly in the high-vacuum complex. The high-vacuum complex was designed to handle the many contingencies, both quarantine and scientific, associated with handling an unknown entity such as the lunar sample. Lunar sample handling necessitated a complex system that could not respond rapidly to changing scientific requirements as the characteristics of the lunar sample were better defined. Although the complex successfully handled the processing of Apollo 11 and 12 lunar samples, the scientific requirement for vacuum samples was deleted after the Apollo 12 mission just as the vacuum system was reaching its full potential.
Tool for cutting insulation from electrical cables
Harless, Charles E.; Taylor, Ward G.
1978-01-01
This invention is an efficient hand tool for precisely slitting the sheath of insulation on an electrical cable--e.g., a cable two inches in diameter--in a manner facilitating subsequent peeling or stripping of the insulation. The tool includes a rigid frame which is slidably fitted on an end section of the cable. The frame carries a rigidly affixed handle and an opposed, elongated blade-and-handle assembly. The blade-and-handle assembly is pivotally supported by a bracket which is slidably mounted on the frame for movement toward and away from the cable, thus providing an adjustment for the depth of cut. The blade-and-handle assembly is mountable to the bracket in two pivotable positions. With the assembly mounted in the first position, the tool is turned about the cable to slit the insulation circumferentially. With the assembly mounted in the second position, the tool is drawn along the cable to slit the insulation axially. When cut both circumferentially and axially, the insulation can easily be peeled from the cable.
Ahn, Jin Hwan; Oh, Irvin
2004-09-01
Arthroscopic resection of irreparable bucket-handle tears of the medial meniscus is a commonly performed procedure. Adequate visualization of the posterior horn of the medial meniscus can be a challenging task with the conventional use of the anterior portal. An attempt to resect the posterior horn in a blind fashion may result in iatrogenic damage of the articular cartilage in the posterior compartment, over-resection of a remnant meniscus, or an insufficient resection of the torn fragment. We describe the use of the posteromedial portal for an accurate visualization and resection of the posterior attachment of a bucket-handle tear for arthroscopic partial meniscectomy, as well as detection of other injuries that may be involved in the posteromedial compartment, while avoiding injury to other intra-articular structures during the arthroscopic procedure. We found that the use of the posteromedial portal is a safe and efficient method in removing a bucket-handle tear of the medial meniscus in one piece.
Handling Emergency Management in [an] Object Oriented Modeling Environment
NASA Technical Reports Server (NTRS)
Tokgoz, Berna Eren; Cakir, Volkan; Gheorghe, Adrian V.
2010-01-01
It has been understood that protection of a nation from extreme disasters is a challenging task. Impacts of extreme disasters on a nation's critical infrastructures, economy, and society could be devastating. A protection plan itself would not be sufficient when a disaster strikes. Hence, there is a need for a holistic approach to establish more resilient infrastructures to withstand extreme disasters. A resilient infrastructure can be defined as a system or facility that is able to withstand damage, but if affected, can be readily and cost-effectively restored. The key to establishing resilient infrastructures is to incorporate existing protection plans with comprehensive preparedness actions to respond, recover, and restore as quickly as possible, and to minimize extreme disaster impacts. Although national organizations will respond to a disaster, extreme disasters need to be handled mostly by local emergency management departments. Since emergency management departments have to deal with complex systems, they have to have a manageable plan and efficient organizational structures to coordinate all these systems. A strong organizational structure is the key to responding quickly before and during disasters and recovering quickly after disasters. In this study, the entire emergency management is viewed as an enterprise and modelled through an enterprise management approach. Managing an enterprise or a large complex system is a very challenging task. It is critical for an enterprise to respond to challenges in a timely manner with quick decision making. This study addresses the problem of handling emergency management at the regional level in an object-oriented modelling environment developed with the TopEase software. The Emergency Operation Plan of the City of Hampton, Virginia, has been incorporated into TopEase for analysis. The methodology used in this study has been supported by a case study on critical infrastructure resiliency in Hampton Roads.
Compact multiwire proportional counters for the detection of fission fragments
NASA Astrophysics Data System (ADS)
Jhingan, Akhil; Sugathan, P.; Golda, K. S.; Singh, R. P.; Varughese, T.; Singh, Hardev; Behera, B. R.; Mandal, S. K.
2009-12-01
Two large area multistep position sensitive (two dimensional) multiwire proportional counters have been developed for experiments involving the study of fission dynamics using the general purpose scattering chamber facility at IUAC. Both detectors have an active area of 20 × 10 cm² and provide position signals in the horizontal (X) and vertical (Y) planes, a timing signal for time of flight measurements, and an energy signal giving the differential energy loss in the active volume. The design features are optimized for the detection of low energy heavy ions at very low gas pressures. Special care was taken in setting up the readout electronics, constant fraction discriminators for position signals in particular, to get optimum position and timing resolutions along with high count rate handling capability for low energy heavy ions. A custom made charge sensitive preamplifier, having lower gain and shorter decay time, has been developed for extracting the differential energy loss signal. The position and time resolutions of the detectors were determined to be 1.1 mm full width at half maximum (FWHM) and 1.7 ns FWHM, respectively. The detector could handle heavy ion count rates exceeding 20 kHz without any breakdown. The time of flight signal in combination with the differential energy loss signal gives a clean separation of fission fragments from projectile- and target-like particles. The timing and position signals of the detectors are used for fission coincidence measurements and subsequent extraction of their mass, angular, and total kinetic energy distributions. This article describes a systematic study of these fission counters in terms of efficiency, time resolution, count rate handling capability, position resolution, and the readout electronics. The detector has been operated with both five-electrode and four-electrode geometries, and a comparison has been made of their performances.
Multigrid treatment of implicit continuum diffusion
NASA Astrophysics Data System (ADS)
Francisquez, Manaure; Zhu, Ben; Rogers, Barrett
2017-10-01
Implicit treatment of diffusive terms of various differential orders common in continuum mechanics modeling, such as computational fluid dynamics, is investigated with spectral and multigrid algorithms in non-periodic 2D domains. In doubly periodic time dependent problems these terms can be efficiently and implicitly handled by spectral methods, but in non-periodic systems solved with distributed memory parallel computing and 2D domain decomposition, this efficiency is lost for large numbers of processors. We built and present here a multigrid algorithm for these types of problems which outperforms a spectral solution that employs the highly optimized FFTW library. This multigrid algorithm is not only suitable for high performance computing but may also be able to efficiently treat implicit diffusion of arbitrary order by introducing auxiliary equations of lower order. We test these solvers for fourth and sixth order diffusion with idealized harmonic test functions as well as a turbulent 2D magnetohydrodynamic simulation. It is also shown that an anisotropic operator without cross-terms can improve model accuracy and speed, and we examine the impact that the various diffusion operators have on the energy, the enstrophy, and the qualitative aspect of a simulation. This work was supported by DOE-SC-0010508. This research used resources of the National Energy Research Scientific Computing Center (NERSC).
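To make the auxiliary-equation idea above concrete, the following is a minimal Python/NumPy sketch, not the authors' code: one backward-Euler step of fourth-order diffusion u_t = -D ∇⁴u on a 1D non-periodic grid is recast via the auxiliary variable w = ∇²u so that only second-order operators appear, and a sparse direct solve stands in for the multigrid cycle described in the abstract. The grid size, D, dt and the homogeneous Dirichlet treatment of both u and w are illustrative assumptions.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n, L, D, dt = 128, 1.0, 1e-4, 1e-3
    h = L / (n + 1)
    x = np.linspace(h, L - h, n)

    # Standard second-order Laplacian on the interior points, homogeneous Dirichlet boundaries.
    lap = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format="csr") / h**2
    I = sp.identity(n, format="csr")
    u_old = np.sin(2 * np.pi * x)                       # illustrative initial condition

    # Backward Euler for u_t = -D*lap(lap u):
    #   u_new + D*dt*lap(w) = u_old,   w - lap(u_new) = 0,
    # i.e. a 2x2 block system in (u_new, w) containing only second-order operators.
    A = sp.bmat([[I, D * dt * lap], [-lap, I]], format="csr")
    rhs = np.concatenate([u_old, np.zeros(n)])

    sol = spla.spsolve(A, rhs)                          # a multigrid cycle would go here at scale
    u_new, w = sol[:n], sol[n:]
    print("max |u| after one implicit step:", np.abs(u_new).max())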
SOME NEW FINITE DIFFERENCE METHODS FOR HELMHOLTZ EQUATIONS ON IRREGULAR DOMAINS OR WITH INTERFACES
Wan, Xiaohai; Li, Zhilin
2012-01-01
Solving a Helmholtz equation Δu + λu = f efficiently is a challenge for many applications. For example, the core part of many efficient solvers for the incompressible Navier-Stokes equations is to solve one or several Helmholtz equations. In this paper, two new finite difference methods are proposed for solving Helmholtz equations on irregular domains, or with interfaces. For Helmholtz equations on irregular domains, the accuracy of the numerical solution obtained using the existing augmented immersed interface method (AIIM) may deteriorate when the magnitude of λ is large. In our new method, we use a level set function to extend the source term and the PDE to a larger domain before we apply the AIIM. For Helmholtz equations with interfaces, a new maximum principle preserving finite difference method is developed. The new method still uses the standard five-point stencil with modifications of the finite difference scheme at irregular grid points. The resulting coefficient matrix of the linear system of finite difference equations satisfies the sign property of the discrete maximum principle and can be solved efficiently using a multigrid solver. The finite difference method is also extended to handle temporal discretized equations where the solution coefficient λ is inversely proportional to the mesh size. PMID:22701346
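For orientation, the following is a minimal Python sketch of the standard five-point discretization that the modified schemes build on, assuming a regular grid on the unit square with homogeneous Dirichlet boundaries; it is not the AIIM or the interface scheme of the paper, and the grid size, λ and source term are illustrative.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n, lam = 64, -20.0                                  # interior points per direction, Helmholtz coefficient
    h = 1.0 / (n + 1)
    grid = np.linspace(h, 1 - h, n)
    X, Y = np.meshgrid(grid, grid, indexing="ij")

    T = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
    I = sp.identity(n)
    laplacian = sp.kron(I, T) + sp.kron(T, I)           # five-point stencil for the 2D Laplacian
    A = (laplacian + lam * sp.identity(n * n)).tocsr()

    f = np.sin(np.pi * X) * np.sin(np.pi * Y)           # illustrative right-hand side
    u = spla.spsolve(A, f.ravel()).reshape(n, n)        # a multigrid solver would be used at scale

    # For this f the continuous solution is f/(lam - 2*pi^2); the printed value is the O(h^2)
    # discretization error of the five-point scheme.
    print("max error:", np.abs(u - f / (lam - 2 * np.pi**2)).max())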
NASA Technical Reports Server (NTRS)
Waszak, M. R.; Schmidt, D. S.
1985-01-01
As aircraft become larger and lighter due to design requirements for increased payload and improved fuel efficiency, they will also become more flexible. For highly flexible vehicles, the handling qualities may not be accurately predicted by conventional methods. This study applies two analysis methods to a family of flexible aircraft in order to investigate how and when structural (especially dynamic aeroelastic) effects affect the dynamic characteristics of aircraft. The first type of analysis is an open-loop modal analysis technique, which considers the effects of modal residue magnitudes in determining vehicle handling qualities. The second is a pilot-in-the-loop analysis procedure that considers several closed-loop system characteristics. Volume 1 consists of the development and application of the two analysis methods described above.
Multi-GPU implementation of a VMAT treatment plan optimization algorithm.
Tian, Zhen; Peng, Fei; Folkerts, Michael; Tan, Jun; Jia, Xun; Jiang, Steve B
2015-06-01
Volumetric modulated arc therapy (VMAT) optimization is a computationally challenging problem due to its large data size, high degrees of freedom, and many hardware constraints. High-performance graphics processing units (GPUs) have been used to speed up the computations. However, GPU's relatively small memory size cannot handle cases with a large dose-deposition coefficient (DDC) matrix in cases of, e.g., those with a large target size, multiple targets, multiple arcs, and/or small beamlet size. The main purpose of this paper is to report an implementation of a column-generation-based VMAT algorithm, previously developed in the authors' group, on a multi-GPU platform to solve the memory limitation problem. While the column-generation-based VMAT algorithm has been previously developed, the GPU implementation details have not been reported. Hence, another purpose is to present detailed techniques employed for GPU implementation. The authors also would like to utilize this particular problem as an example problem to study the feasibility of using a multi-GPU platform to solve large-scale problems in medical physics. The column-generation approach generates VMAT apertures sequentially by solving a pricing problem (PP) and a master problem (MP) iteratively. In the authors' method, the sparse DDC matrix is first stored on a CPU in coordinate list format (COO). On the GPU side, this matrix is split into four submatrices according to beam angles, which are stored on four GPUs in compressed sparse row format. Computation of beamlet price, the first step in PP, is accomplished using multi-GPUs. A fast inter-GPU data transfer scheme is accomplished using peer-to-peer access. The remaining steps of PP and MP problems are implemented on CPU or a single GPU due to their modest problem scale and computational loads. Barzilai and Borwein algorithm with a subspace step scheme is adopted here to solve the MP problem. A head and neck (H&N) cancer case is then used to validate the authors' method. The authors also compare their multi-GPU implementation with three different single GPU implementation strategies, i.e., truncating DDC matrix (S1), repeatedly transferring DDC matrix between CPU and GPU (S2), and porting computations involving DDC matrix to CPU (S3), in terms of both plan quality and computational efficiency. Two more H&N patient cases and three prostate cases are used to demonstrate the advantages of the authors' method. The authors' multi-GPU implementation can finish the optimization process within ∼ 1 min for the H&N patient case. S1 leads to an inferior plan quality although its total time was 10 s shorter than the multi-GPU implementation due to the reduced matrix size. S2 and S3 yield the same plan quality as the multi-GPU implementation but take ∼4 and ∼6 min, respectively. High computational efficiency was consistently achieved for the other five patient cases tested, with VMAT plans of clinically acceptable quality obtained within 23-46 s. Conversely, to obtain clinically comparable or acceptable plans for all six of these VMAT cases that the authors have tested in this paper, the optimization time needed in a commercial TPS system on CPU was found to be in an order of several minutes. The results demonstrate that the multi-GPU implementation of the authors' column-generation-based VMAT optimization can handle the large-scale VMAT optimization problem efficiently without sacrificing plan quality. 
The authors' study may serve as an example to shed some light on other large-scale medical physics problems that require multi-GPU techniques.
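The data layout step described above can be illustrated with a small Python/SciPy sketch; this is not the authors' code, the matrix sizes and random sparsity pattern are stand-ins for a real dose-deposition-coefficient (DDC) matrix, the even four-way grouping by beam angle is assumed, and the actual GPU uploads and peer-to-peer transfers are omitted.

    import numpy as np
    import scipy.sparse as sp

    n_voxels, n_beamlets = 20000, 4000
    rng = np.random.default_rng(0)

    # Sparse DDC matrix kept on the CPU in COO format (random pattern for illustration).
    ddc_coo = sp.random(n_voxels, n_beamlets, density=0.01, format="coo", random_state=0)

    # Assign each beamlet (column) to one of four beam-angle groups, one group per GPU.
    beamlet_group = rng.integers(0, 4, size=n_beamlets)

    per_gpu_csr = []
    for g in range(4):
        cols = np.flatnonzero(beamlet_group == g)       # beamlets handled by "GPU g"
        block = ddc_coo.tocsc()[:, cols].tocsr()        # column block in CSR, the per-device format
        per_gpu_csr.append(block)
        print(f"GPU {g}: {block.shape[1]} beamlets, {block.nnz} nonzeros")

    # A beamlet-price-like step: each block contributes a partial product with a voxel-space
    # vector; in the multi-GPU code the partial results would be exchanged between devices.
    grad = rng.standard_normal(n_voxels)
    prices = [blk.T @ grad for blk in per_gpu_csr]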
Factors influencing aircraft ground handling performance
NASA Technical Reports Server (NTRS)
Yager, T. J.
1983-01-01
Problems associated with aircraft ground handling operations on wet runways are discussed and major factors which influence tire/runway braking and cornering traction capability are identified including runway characteristics, tire hydroplaning, brake system anomalies, and pilot inputs. Research results from tests with instrumented ground vehicles and aircraft, and aircraft wet runway accident investigation are summarized to indicate the effects of different aircraft, tire, and runway parameters. Several promising means are described for improving tire/runway water drainage capability, brake system efficiency, and pilot training to help optimize aircraft traction performance on wet runways.
An O(n^5) algorithm for MFE prediction of kissing hairpins and 4-chains in nucleic acids.
Chen, Ho-Lin; Condon, Anne; Jabbari, Hosna
2009-06-01
Efficient methods for prediction of minimum free energy (MFE) nucleic acid secondary structures are widely used, both to better understand the structure and function of biological RNAs and to design novel nano-structures. Here, we present a new algorithm for MFE secondary structure prediction, which significantly expands the class of structures that can be handled in O(n^5) time. Our algorithm can handle H-type pseudoknotted structures, kissing hairpins, and chains of four overlapping stems, as well as nested substructures of these types.
Handling qualities of large flexible control-configured aircraft
NASA Technical Reports Server (NTRS)
Swaim, R. L.
1979-01-01
The approach of this analytical study of flexible airplane longitudinal handling qualities was to parametrically vary the natural frequencies of two symmetric elastic modes in order to induce mode interactions with the rigid body dynamics. Since the structure of the pilot model was unknown for such dynamic interactions, the optimal control pilot modeling method is applied in conjunction with a pilot rating method.
1997 Technology Applications Report,
1997-01-01
handle high-power loads at microwave radio frequencies, microwave vacuum tubes remain the chosen technology to amplify high power. Aria Microwave...structure called the active RF cavity amplifier (ARFCA). With this design, the amplifier handles high-power loads at radio and microwave frequencies...developed this technology using BMDO-funded modeling methods designed to simulate the dynamics of large space-based structures. Because it increases
Hadoop-GIS: A High Performance Spatial Data Warehousing System over MapReduce.
Aji, Ablimit; Wang, Fusheng; Vo, Hoang; Lee, Rubao; Liu, Qiaoling; Zhang, Xiaodong; Saltz, Joel
2013-08-01
Support for high performance queries on large volumes of spatial data is becoming increasingly important in many application domains, including geospatial problems in numerous fields, location based services, and emerging scientific applications that are increasingly data- and compute-intensive. The emergence of massive scale spatial data is due to the proliferation of cost effective and ubiquitous positioning technologies, the development of high resolution imaging technologies, and contributions from a large number of community users. There are two major challenges in managing and querying massive spatial data: the explosion of spatial data, and the high computational complexity of spatial queries. In this paper, we present Hadoop-GIS - a scalable and high performance spatial data warehousing system for running large scale spatial queries on Hadoop. Hadoop-GIS supports multiple types of spatial queries on MapReduce through spatial partitioning, the customizable spatial query engine RESQUE, implicit parallel spatial query execution on MapReduce, and effective methods for amending query results through the handling of boundary objects. Hadoop-GIS utilizes global partition indexing and customizable on-demand local spatial indexing to achieve efficient query processing. Hadoop-GIS is integrated into Hive to support declarative spatial queries within an integrated architecture. Our experiments demonstrate the high efficiency of Hadoop-GIS in query response and its high scalability on commodity clusters. Our comparative experiments show that the performance of Hadoop-GIS is on par with parallel SDBMSs and that it outperforms SDBMSs for compute-intensive queries. Hadoop-GIS is available as a set of libraries for processing spatial queries and as an integrated software package in Hive.
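The partition-then-amend idea can be illustrated with a small, self-contained Python stand-in (it is not Hadoop-GIS code and runs on a single machine): rectangles are hashed to fixed grid tiles, objects that cross tile borders are replicated into every tile they touch, each tile answers a window query independently as a map task would, and duplicates introduced by the replication are removed when the results are merged. The tile size, object set and query window are illustrative.

    from collections import defaultdict

    TILE = 10.0                                   # edge length of the global grid tiles

    def tiles_for(box):
        """All (i, j) grid tiles that a box (xmin, ymin, xmax, ymax) overlaps."""
        xmin, ymin, xmax, ymax = box
        for i in range(int(xmin // TILE), int(xmax // TILE) + 1):
            for j in range(int(ymin // TILE), int(ymax // TILE) + 1):
                yield (i, j)

    def intersects(a, b):
        return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

    objects = {1: (2, 2, 4, 4), 2: (8, 8, 12, 12), 3: (25, 8, 26, 10)}   # id -> bounding box

    # "Partition" phase: boundary objects (object 2 here) are replicated into several tiles.
    partitions = defaultdict(list)
    for oid, box in objects.items():
        for t in tiles_for(box):
            partitions[t].append((oid, box))

    # "Map" phase: each tile overlapping the query window answers it independently.
    query = (7, 7, 30, 30)
    raw_hits = []
    for tile in tiles_for(query):
        for oid, box in partitions.get(tile, []):
            if intersects(box, query):
                raw_hits.append(oid)

    # "Amend" phase: de-duplicate objects reported by more than one tile.
    print(sorted(set(raw_hits)))                  # -> [2, 3]; object 2 was found in four tiles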
TransAtlasDB: an integrated database connecting expression data, metadata and variants
Adetunji, Modupeore O; Lamont, Susan J; Schmidt, Carl J
2018-01-01
High-throughput transcriptome sequencing (RNAseq) is the universally applied method for target-free transcript identification and gene expression quantification, and it generates huge amounts of data. The difficulty of accessing such data and interpreting the results can be a major impediment to formulating suitable hypotheses, so an innovative storage solution that addresses limitations such as hard disk storage requirements, efficiency and reproducibility is paramount. By offering a uniform data storage and retrieval mechanism, diverse data can be compared and easily investigated. We present TransAtlasDB, a system that incorporates a hybrid architecture of relational and NoSQL databases for fast and efficient storage, processing and querying of large datasets from transcript expression analysis together with the corresponding metadata, as well as gene-associated variants (such as SNPs) and their predicted gene effects. TransAtlasDB provides a data model for accurate storage of the large amounts of data derived from RNAseq analysis, along with methods for interacting with the database either through command-line data management workflows, written in Perl, that simplify data storage and manipulation, or through a web interface. The database application is currently modeled to handle analysis data from agricultural species and will be expanded to include more species groups. Overall, TransAtlasDB aims to serve as an accessible repository for the large, complex results files derived from RNAseq gene expression profiling and variant analysis. Database URL: https://modupeore.github.io/TransAtlasDB/ PMID:29688361
Kiefer, Gundolf; Lehmann, Helko; Weese, Jürgen
2006-04-01
Maximum intensity projections (MIPs) are an important visualization technique for angiographic data sets. Efficient data inspection requires frame rates of at least five frames per second at preserved image quality. Despite the advances in computer technology, this task remains a challenge. On the one hand, the sizes of computed tomography and magnetic resonance images are increasing rapidly. On the other hand, rendering algorithms do not automatically benefit from the advances in processor technology, especially for large data sets. This is due to the faster evolving processing power and the slower evolving memory access speed, which is bridged by hierarchical cache memory architectures. In this paper, we investigate memory access optimization methods and use them for generating MIPs on general-purpose central processing units (CPUs) and graphics processing units (GPUs), respectively. These methods can work on any level of the memory hierarchy, and we show that properly combined methods can optimize memory access on multiple levels of the hierarchy at the same time. We present performance measurements to compare different algorithm variants and illustrate the influence of the respective techniques. On current hardware, the efficient handling of the memory hierarchy for CPUs improves the rendering performance by a factor of 3 to 4. On GPUs, we observed that the effect is even larger, especially for large data sets. The methods can easily be adjusted to different hardware specifics, although their impact can vary considerably. They can also be used for other rendering techniques than MIPs, and their use for more general image processing task could be investigated in the future.
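As a toy illustration of the kind of blocked traversal such memory-hierarchy optimizations aim at, the following NumPy sketch computes a maximum intensity projection tile by tile; the volume, tile size and projection axis are illustrative, and a production renderer would do this in optimized native or GPU code rather than Python.

    import numpy as np

    rng = np.random.default_rng(0)
    volume = rng.random((256, 256, 256), dtype=np.float32)   # (z, y, x) intensity volume
    tile = 64

    mip = np.zeros(volume.shape[1:], dtype=volume.dtype)
    for y0 in range(0, volume.shape[1], tile):
        for x0 in range(0, volume.shape[2], tile):
            # Each output tile is produced from one brick of the volume, so the working set
            # stays small enough to remain resident in the faster levels of the cache hierarchy.
            brick = volume[:, y0:y0 + tile, x0:x0 + tile]
            mip[y0:y0 + tile, x0:x0 + tile] = brick.max(axis=0)

    assert np.array_equal(mip, volume.max(axis=0))            # same result as the unblocked MIP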
Gill, C O; Jones, T
2002-06-01
On eight occasions, five volunteers each handled five pieces of meat with bare hands or while wearing dry or wet knitted gloves or rubber gloves after hands had been inoculated with Escherichia coli or after handling a piece of meat inoculated with E. coli. On each occasion, after all meat was handled, each piece of meat, glove, and hand were sampled to recover E. coli. When hands were inoculated, E. coli was recovered from all meat handled with bare hands, in lesser numbers from some pieces handled with knitted gloves, and from only one piece handled with rubber gloves. When pieces of inoculated meat were handled, the numbers of E. coli transferred to uninoculated meat from bare hands or rubber gloves decreased substantially with each successive piece of uninoculated meat, but decreases were small with knitted gloves. The findings indicate that, compared with bare hands, the use of knitted gloves could reduce the transfer of bacteria from hands to meat but could increase the transfer of bacteria between meat pieces, whereas the use of rubber gloves could largely prevent the first and greatly reduce the second type of bacteria transfer.
NASA Astrophysics Data System (ADS)
Bonano, Manuela; Buonanno, Sabatino; Ojha, Chandrakanta; Berardino, Paolo; Lanari, Riccardo; Zeni, Giovanni; Manunta, Michele
2017-04-01
The advanced DInSAR technique known as the Small BAseline Subset (SBAS) algorithm has already largely demonstrated its effectiveness for multi-scale and multi-platform surface deformation analyses relevant to both natural and man-made hazards. Thanks to its capability to generate displacement maps and long-term deformation time series at both the regional (low resolution) and local (full resolution) spatial scales, it provides insight into the spatial and temporal patterns of localized displacements affecting single buildings and infrastructures over extended urban areas, and it plays a key role in supporting risk mitigation and preservation activities. The extensive application of the multi-scale SBAS-DInSAR approach in many scientific contexts has gone hand in hand with the development of new SAR satellite missions characterized by different frequency bands, spatial resolutions, revisit times and ground coverage. This has led to huge DInSAR data stacks that must be efficiently handled, processed and archived, with a strong impact on both the data storage and the computational requirements needed to generate the full resolution SBAS-DInSAR results. Accordingly, innovative and effective solutions for the automatic processing of massive SAR data archives and for the operational management of the derived SBAS-DInSAR products need to be designed and implemented, exploiting the high efficiency (in terms of portability, scalability and computing performance) of new ICT methodologies. In this work, we present a novel parallel implementation of the full resolution SBAS-DInSAR processing chain, aimed at investigating localized displacements affecting single buildings and infrastructures in very large urban areas and relying on parallelization strategies with different levels of granularity. Image-level granularity is applied in most steps of the SBAS-DInSAR processing chain and exploits multiprocessor systems with distributed memory. Moreover, in some computationally heavy processing steps, Graphical Processing Units (GPUs) are exploited for the processing of blocks that work on a pixel-by-pixel basis, which requires strong modifications of some key parts of the sequential full resolution SBAS-DInSAR processing chain. The GPU processing efficiently exploits parallel architectures (such as CUDA) to increase computing performance, in terms of optimizing the available GPU memory and reducing both the input/output operations on the GPU and the overall processing time of specific blocks with respect to the corresponding sequential implementation; this is particularly critical in the presence of huge DInSAR datasets. Finally, to efficiently handle the massive amount of DInSAR measurements provided by the new generation SAR constellations (CSK and Sentinel-1), we carry out a re-design aimed at the robust assimilation of the full resolution SBAS-DInSAR results into the web-based Geonode platform of the Spatial Data Infrastructure, thus allowing the efficient management, analysis and integration of the interferometric results with different data sources.
Design and evaluation of a new ergonomic handle for instruments in minimally invasive surgery.
Sancibrian, Ramon; Gutierrez-Diez, María C; Torre-Ferrero, Carlos; Benito-Gonzalez, Maria A; Redondo-Figuero, Carlos; Manuel-Palazuelos, Jose C
2014-05-01
Laparoscopic surgery techniques have been demonstrated to provide massive benefits to patients. However, surgeons are subjected to hardworking conditions because of the poor ergonomic design of the instruments. In this article, a new ergonomic handle design is presented. This handle is designed using ergonomic principles, trying to provide both more intuitive manipulation of the instrument and a shape that reduces the high-pressure zones in the contact with the surgeon's hand. The ergonomic characteristics of the new handle were evaluated using objective and subjective studies. The experimental evaluation was performed using 28 volunteers by means of the comparison of the new handle with the ring-handle (RH) concept in an instrument available on the market. The volunteers' muscle activation and motions of the hand, wrist, and arm were studied while they performed different tasks. The data measured in the experiment include electromyography and goniometry values. The results obtained from the subjective analysis reveal that most volunteers (64%) preferred the new prototype to the RH, reporting less pain and less difficulty to complete the tasks. The results from the objective study reveal that the hyperflexion of the wrist required for the manipulation of the instrument is strongly reduced. The new ergonomic handle not only provides important ergonomic advantages but also improves the efficiency when completing the tasks. Compared with RH instruments, the new prototype reduced the high-pressure areas and the extreme motions of the wrist. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Spangenberg, T.; Göttlicher, J.; Steininger, R.
2016-05-01
An efficient referencing and sample positioning system is a basic tool for a micro focus beamline at a synchrotron. The command-line based system introduced seven years ago has been upgraded at the SUL-X beamline at ANKA [1]. A new combination of current server-client techniques offers direct control and makes this frequently used tool easier for inexperienced users to handle.
NASA Technical Reports Server (NTRS)
Boyle, A. R.; Dangermond, J.; Marble, D.; Simonett, D. S.; Tomlinson, R. F.
1983-01-01
Problems and directions for large scale geographic information system development were reviewed and the general problems associated with automated geographic information systems and spatial data handling were addressed.
NASA Technical Reports Server (NTRS)
Mattick, A. T.; Hertzberg, A.
1984-01-01
A heat rejection system for space is described which uses a recirculating free stream of liquid droplets in place of a solid surface to radiate waste heat. By using sufficiently small droplets (less than about 100 micron diameter) of low vapor pressure liquids the radiating droplet sheet can be made many times lighter than the lightest solid surface radiators (heat pipes). The liquid droplet radiator (LDR) is less vulnerable to damage by micrometeoroids than solid surface radiators, and may be transported into space far more efficiently. Analyses are presented of LDR applications in thermal and photovoltaic energy conversion which indicate that fluid handling components (droplet generator, droplet collector, heat exchanger, and pump) may comprise most of the radiator system mass. Even the unoptimized models employed yield LDR system masses less than heat pipe radiator system masses, and significant improvement is expected using design approaches that incorporate fluid handling components more efficiently. Technical problems (e.g., spacecraft contamination and electrostatic deflection of droplets) unique to this method of heat rejection are discussed and solutions are suggested.
NASA Technical Reports Server (NTRS)
Mattick, A. T.; Hertzberg, A.
1981-01-01
A heat rejection system for space is described which uses a recirculating free stream of liquid droplets in place of a solid surface to radiate waste heat. By using sufficiently small droplets (less than about 100 micron diameter) of low vapor pressure liquids (tin, tin-lead-bismuth eutectics, vacuum oils) the radiating droplet sheet can be made many times lighter than the lightest solid surface radiators (heat pipes). The liquid droplet radiator (LDR) is less vulnerable to damage by micrometeoroids than solid surface radiators, and may be transported into space far more efficiently. Analyses are presented of LDR applications in thermal and photovoltaic energy conversion which indicate that fluid handling components (droplet generator, droplet collector, heat exchanger, and pump) may comprise most of the radiator system mass. Even the unoptimized models employed yield LDR system masses less than heat pipe radiator system masses, and significant improvement is expected using design approaches that incorporate fluid handling components more efficiently. Technical problems (e.g., spacecraft contamination and electrostatic deflection of droplets) unique to this method of heat rejection are discussed and solutions are suggested.
Kinnunen, J; Pohjonen, H
2001-07-01
A 3-year PACS project was started in 1997 and completed in 1999 with filmless radiology and surgery. An efficient network for transferring images provides the infrastructure for integration of different distributed imaging systems and enables efficient handling of all patient-related information on one display station. Because of the need for high-speed communications and the massive amount of image data transferred in radiology, ATM (25, 155 Mbit/s) was chosen to be the main technology used. Both hardware and software redundancy of the system have been carefully planned. The size of the Dicom image library utilizing MO discs is currently 1.2 TB with 300 GB RAID capacity. For the increasing amount of teleradiologic consultations, a special Dicom gateway is planned. It allows a centralized and resilient handling and routing of received images around the hospital. Hospital-wide PACS has already improved the speed and quality of patient care by providing instant access to diagnostic information at multiple locations simultaneously. The benefits of PACS are considered from the viewpoint of the entire hospital: PACS offers a method for efficiently transporting patient-related images and reports to the referring physicians.
Grant, Suzanne; Checkland, Katherine; Bowie, Paul; Guthrie, Bruce
2017-04-27
The handling of laboratory, imaging and other test results in UK general practice is a high-volume organisational routine that is both complex and high risk. Previous research in this area has focused on errors and harm, but a complementary approach is to better understand how safety is achieved in everyday practice. This paper ethnographically examines the role of informal dimensions of test results handling routines in the achievement of safety in UK general practice and how these findings can best be developed for wider application by policymakers and practitioners. Non-participant observation was conducted of high-volume organisational routines across eight UK general practices with diverse organisational characteristics. Sixty-two semi-structured interviews were also conducted with the key practice staff alongside the analysis of relevant documents. While formal results handling routines were described similarly across the eight study practices, the everyday structure of how the routine should be enacted in practice was informally understood. Results handling safety took a range of local forms depending on how different aspects of safety were prioritised, with practices varying in terms of how they balanced thoroughness (i.e. ensuring the high-quality management of results by the most appropriate clinician) and efficiency (i.e. timely management of results) depending on a range of factors (e.g. practice history, team composition). Each approach adopted created its own potential risks, with demands for thoroughness reducing productivity and demands for efficiency reducing handling quality. Irrespective of the practice-level approach adopted, staff also regularly varied what they did for individual patients depending on the specific context (e.g. type of result, patient circumstances). General practices variably prioritised a legitimate range of results handling safety processes and outcomes, each with differing strengths and trade-offs. Future safety improvement interventions should focus on how to maximise practice-level knowledge and understanding of the range of context-specific approaches available and the safeties and risks inherent in each within the context of wider complex system conditions and interactions. This in turn has the potential to inform new kinds of proactive, contextually appropriate approaches to intervention development and implementation focusing on the enhanced deliberation of the safety of existing high-volume routines.
Turbomachinery Airfoil Design Optimization Using Differential Evolution
NASA Technical Reports Server (NTRS)
Madavan, Nateri K.; Biegel, Bryan A. (Technical Monitor)
2002-01-01
An aerodynamic design optimization procedure that is based on an evolutionary algorithm known as Differential Evolution is described. Differential Evolution is a simple, fast, and robust evolutionary strategy that has been proven effective in determining the global optimum for several difficult optimization problems, including highly nonlinear systems with discontinuities and multiple local optima. The method is combined with a Navier-Stokes solver that evaluates the various intermediate designs and provides inputs to the optimization procedure. An efficient constraint handling mechanism is also incorporated. Results are presented for the inverse design of a turbine airfoil from a modern jet engine. The capability of the method to search large design spaces and obtain the optimal airfoils in an automatic fashion is demonstrated. Substantial reductions in the overall computing time requirements are achieved by using the algorithm in conjunction with neural networks.
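For readers unfamiliar with the strategy, the following is a minimal Python/NumPy sketch of the classic DE/rand/1/bin scheme; it is not the authors' implementation, the Navier-Stokes evaluation is replaced by a cheap multimodal test function, and constraints are handled simply by clipping to the design-variable bounds, which stands in for the constraint handling mentioned above. The population size, F and CR are conventional illustrative choices.

    import numpy as np

    def differential_evolution(objective, bounds, pop_size=20, F=0.8, CR=0.9,
                               generations=200, seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds).T
        dim = lo.size
        pop = lo + rng.random((pop_size, dim)) * (hi - lo)
        cost = np.array([objective(x) for x in pop])
        for _ in range(generations):
            for i in range(pop_size):
                a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                         size=3, replace=False)]
                mutant = np.clip(a + F * (b - c), lo, hi)          # mutation + bound handling
                cross = rng.random(dim) < CR
                cross[rng.integers(dim)] = True                    # ensure at least one gene crosses
                trial = np.where(cross, mutant, pop[i])
                trial_cost = objective(trial)
                if trial_cost <= cost[i]:                          # greedy selection
                    pop[i], cost[i] = trial, trial_cost
        best = np.argmin(cost)
        return pop[best], cost[best]

    # Illustrative use on a multimodal test function (Rastrigin), not an airfoil design problem.
    rastrigin = lambda x: 10 * x.size + np.sum(x * x - 10 * np.cos(2 * np.pi * x))
    x_best, f_best = differential_evolution(rastrigin, bounds=[(-5.12, 5.12)] * 5)
    print(x_best, f_best)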
Engineering of filamentous bacteriophage for protein sensing
NASA Astrophysics Data System (ADS)
Brasino, Michael
Methods for high-throughput, sensitive and cost-effective quantification of proteins enable personalized medicine by allowing healthcare professionals to better monitor patient condition and response to treatment. My doctoral research has attempted to advance these methods through the use of filamentous bacteriophage (phage). These bacterial viruses are particularly amenable to both genetic and chemical engineering and can be produced efficiently in large amounts. Here, I discuss several strategies for modifying phage for use in protein sensing assays. These include the expression of bio-orthogonal conjugation handles on the phage coat, the incorporation of specific recognition sequences within the phage genome, and the creation of antibody-phage conjugates via a photo-crosslinking non-canonical amino acid. The physical and chemical characterization of these engineered phage and the results of their use in modified protein sensing assays will be presented.
Phaser.MRage: automated molecular replacement
Bunkóczi, Gábor; Echols, Nathaniel; McCoy, Airlie J.; Oeffner, Robert D.; Adams, Paul D.; Read, Randy J.
2013-01-01
Phaser.MRage is a molecular-replacement automation framework that implements a full model-generation workflow and provides several layers of model exploration to the user. It is designed to handle a large number of models and can distribute calculations efficiently onto parallel hardware. In addition, phaser.MRage can identify correct solutions and use this information to accelerate the search. Firstly, it can quickly score all alternative models of a component once a correct solution has been found. Secondly, it can perform extensive analysis of identified solutions to find protein assemblies and can employ assembled models for subsequent searches. Thirdly, it is able to use a priori assembly information (derived from, for example, homologues) to speculatively place and score molecules, thereby customizing the search procedure to a certain class of protein molecule (for example, antibodies) and incorporating additional biological information into molecular replacement. PMID:24189240
A multilevel correction adaptive finite element method for Kohn-Sham equation
NASA Astrophysics Data System (ADS)
Hu, Guanghui; Xie, Hehu; Xu, Fei
2018-02-01
In this paper, an adaptive finite element method based on a multilevel correction technique is proposed for solving the Kohn-Sham equation. In the method, the Kohn-Sham equation is solved on a fixed and appropriately coarse mesh with the finite element method, while the finite element space is successively improved by solving derived boundary value problems on a series of adaptively refined meshes. A main feature of the method is that solving the large scale Kohn-Sham system is effectively avoided, and the derived boundary value problems can be handled efficiently by classical methods such as the multigrid method. Hence, a significant acceleration is obtained in solving the Kohn-Sham equation with the proposed multilevel correction technique. The performance of the method is examined by a variety of numerical experiments.
NASA Technical Reports Server (NTRS)
Baghdadi, A.; Gurtler, R. W.; Legge, R.; Sopori, B.; Rice, M. J.; Ellis, R. J.
1979-01-01
A technique for growing limited-length ribbons continually was demonstrated. This Rigid Edge technique can be used to recrystallize about 95% of the polyribbon feedstock. A major advantage of this method is that only a single, constant length silicon ribbon is handled throughout the entire process sequence; this may be accomplished using cassettes similar to those presently in use for processing Czochralski wafers. Thus a transition from Cz to ribbon technology can be smoothly effected. The maximum size being considered, 3 inches x 24 inches, is half a square foot, and will generate 6 watts for 12% efficiency at 1 sun. Silicon dioxide has been demonstrated as an effective, practical diffusion barrier for use during the polyribbon formation.
Optically enhanced acoustophoresis
NASA Astrophysics Data System (ADS)
McDougall, Craig; O'Mahoney, Paul; McGuinn, Alan; Willoughby, Nicholas A.; Qiu, Yongqiang; Demore, Christine E. M.; MacDonald, Michael P.
2017-08-01
Regenerative medicine has the capability to revolutionise many aspects of medical care, but for it to make the step from small scale autologous treatments to larger scale allogeneic approaches, robust and scalable label-free cell sorting technologies are needed as part of a cell therapy bioprocessing pipeline. In this proceedings paper we describe several strategies for addressing the requirements for high throughput without labeling: dimensional scaling, rare species targeting and sorting from a stable state. These three approaches are demonstrated through a combination of optical and ultrasonic forces. By combining mostly conservative and non-conservative forces from two different modalities, it is possible to reduce the influence of flow velocity on sorting efficiency, hence increasing robustness and scalability. One such approach can be termed "optically enhanced acoustophoresis", which combines the ability of acoustics to handle large volumes of analyte with the high specificity of optical sorting.
NASA Astrophysics Data System (ADS)
Jannasch, Anita; Demirörs, Ahmet F.; van Oostrum, Peter D. J.; van Blaaderen, Alfons; Schäffer, Erik
2012-07-01
Optical tweezers are exquisite position and force transducers and are widely used for high-resolution measurements in fields as varied as physics, biology and materials science. Typically, small dielectric particles are trapped in a tightly focused laser and are often used as handles for sensitive force measurements. Improvement to the technique has largely focused on improving the instrument and shaping the light beam, and there has been little work exploring the benefit of customizing the trapped object. Here, we describe how anti-reflection coated, high-refractive-index core-shell particles composed of titania enable single-beam optical trapping with an optical force greater than a nanonewton. The increased force range broadens the scope of feasible optical trapping experiments and will pave the way towards more efficient light-powered miniature machines, tools and applications.
Scientific workflows as productivity tools for drug discovery.
Shon, John; Ohkawa, Hitomi; Hammer, Juergen
2008-05-01
Large pharmaceutical companies annually invest tens to hundreds of millions of US dollars in research informatics to support their early drug discovery processes. Traditionally, most of these investments are designed to increase the efficiency of drug discovery. The introduction of do-it-yourself scientific workflow platforms has enabled research informatics organizations to shift their efforts toward scientific innovation, ultimately resulting in a possible increase in return on their investments. Unlike the handling of most scientific data and application integration approaches, researchers apply scientific workflows to in silico experimentation and exploration, leading to scientific discoveries that lie beyond automation and integration. This review highlights some key requirements for scientific workflow environments in the pharmaceutical industry that are necessary for increasing research productivity. Examples of the application of scientific workflows in research and a summary of recent platform advances are also provided.
Praxis language reference manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walker, J.H.
1981-01-01
This document is a language reference manual for the programming language Praxis. The document contains the specifications that must be met by any compiler for the language. The Praxis language was designed for systems programming in real-time process applications. Goals for the language and its implementations are: (1) highly efficient code generated by the compiler; (2) program portability; (3) completeness, that is, all programming requirements can be met by the language without needing an assembler; and (4) separate compilation to aid in design and management of large systems. The language does not provide any facilities for input/output, stack and queue handling, string operations, parallel processing, or coroutine processing. These features can be implemented as routines in the language, using machine-dependent code to take advantage of facilities in the control environment on different machines.
Simulation of Particle Size Effect on Dynamic Properties and Fracture of PTFE-W-Al Composites
NASA Astrophysics Data System (ADS)
Herbold, E. B.; Cai, J.; Benson, D. J.; Nesterenko, V. F.
2007-12-01
Recent investigations of the dynamic compressive strength of cold isostatically pressed composites of polytetrafluoroethylene (PTFE), tungsten (W) and aluminum (Al) powders show significant differences depending on the size of metallic particles. The addition of W increases the density and changes the overall strength of the sample depending on the size of W particles. To investigate relatively large deformations, multi-material Eulerian and arbitrary Lagrangian-Eulerian methods, which have the ability to efficiently handle the formation of free surfaces, were used. The calculations indicate that the increased sample strength with fine metallic particles is due to the dynamic formation of force chains. This phenomenon occurs for samples with a higher porosity of the PTFE matrix compared to samples with larger particle size of W and a higher density PTFE matrix.
Robust Machine Learning-Based Correction on Automatic Segmentation of the Cerebellum and Brainstem.
Wang, Jun Yi; Ngo, Michael M; Hessl, David; Hagerman, Randi J; Rivera, Susan M
2016-01-01
Automated segmentation is a useful method for studying large brain structures such as the cerebellum and brainstem. However, automated segmentation may lead to inaccuracy and/or undesirable boundary. The goal of the present study was to investigate whether SegAdapter, a machine learning-based method, is useful for automatically correcting large segmentation errors and disagreement in anatomical definition. We further assessed the robustness of the method in handling size of training set, differences in head coil usage, and amount of brain atrophy. High resolution T1-weighted images were acquired from 30 healthy controls scanned with either an 8-channel or 32-channel head coil. Ten patients, who suffered from brain atrophy because of fragile X-associated tremor/ataxia syndrome, were scanned using the 32-channel head coil. The initial segmentations of the cerebellum and brainstem were generated automatically using Freesurfer. Subsequently, Freesurfer's segmentations were both manually corrected to serve as the gold standard and automatically corrected by SegAdapter. Using only 5 scans in the training set, spatial overlap with manual segmentation in Dice coefficient improved significantly from 0.956 (for Freesurfer segmentation) to 0.978 (for SegAdapter-corrected segmentation) for the cerebellum and from 0.821 to 0.954 for the brainstem. Reducing the training set size to 2 scans only decreased the Dice coefficient ≤0.002 for the cerebellum and ≤ 0.005 for the brainstem compared to the use of training set size of 5 scans in corrective learning. The method was also robust in handling differences between the training set and the test set in head coil usage and the amount of brain atrophy, which reduced spatial overlap only by <0.01. These results suggest that the combination of automated segmentation and corrective learning provides a valuable method for accurate and efficient segmentation of the cerebellum and brainstem, particularly in large-scale neuroimaging studies, and potentially for segmenting other neural regions as well.
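The overlap metric quoted above is straightforward to compute; here is a small NumPy sketch (assumed binary label masks, not code from the SegAdapter study) of the Dice coefficient between an automatically corrected segmentation and the manually corrected gold standard.

    import numpy as np

    def dice_coefficient(mask_a, mask_b):
        """Dice overlap of two boolean/0-1 masks of the same shape."""
        a = np.asarray(mask_a, dtype=bool)
        b = np.asarray(mask_b, dtype=bool)
        denom = a.sum() + b.sum()
        return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

    # Illustrative toy masks standing in for cerebellum labels from two segmentations.
    auto = np.zeros((64, 64, 64), dtype=bool); auto[20:40, 20:40, 20:40] = True
    gold = np.zeros((64, 64, 64), dtype=bool); gold[22:40, 20:40, 20:40] = True
    print(round(dice_coefficient(auto, gold), 3))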
NASA Astrophysics Data System (ADS)
Sharma, Diksha; Badal, Andreu; Badano, Aldo
2012-04-01
The computational modeling of medical imaging systems often requires obtaining a large number of simulated images with low statistical uncertainty which translates into prohibitive computing times. We describe a novel hybrid approach for Monte Carlo simulations that maximizes utilization of CPUs and GPUs in modern workstations. We apply the method to the modeling of indirect x-ray detectors using a new and improved version of the code MANTIS, an open source software tool used for the Monte Carlo simulations of indirect x-ray imagers. We first describe a GPU implementation of the physics and geometry models in fastDETECT2 (the optical transport model) and a serial CPU version of the same code. We discuss its new features like on-the-fly column geometry and columnar crosstalk in relation to the MANTIS code, and point out areas where our model provides more flexibility for the modeling of realistic columnar structures in large area detectors. Second, we modify PENELOPE (the open source software package that handles the x-ray and electron transport in MANTIS) to allow direct output of location and energy deposited during x-ray and electron interactions occurring within the scintillator. This information is then handled by optical transport routines in fastDETECT2. A load balancer dynamically allocates optical transport showers to the GPU and CPU computing cores. Our hybridMANTIS approach achieves a significant speed-up factor of 627 when compared to MANTIS and of 35 when compared to the same code running only in a CPU instead of a GPU. Using hybridMANTIS, we successfully hide hours of optical transport time by running it in parallel with the x-ray and electron transport, thus shifting the computational bottleneck from optical to x-ray transport. The new code requires much less memory than MANTIS and, as a result, allows us to efficiently simulate large area detectors.
Applications of flight control system methods to an advanced combat rotorcraft
NASA Technical Reports Server (NTRS)
Tischler, Mark B.; Fletcher, Jay W.; Morris, Patrick M.; Tucker, George T.
1989-01-01
Advanced flight control system design, analysis, and testing methodologies developed at the Ames Research Center are applied in an analytical and flight test evaluation of the Advanced Digital Optical Control System (ADOCS) demonstrator. The primary objectives are to describe the knowledge gained about the implications of digital flight control system design for rotorcraft, and to illustrate the analysis of the resulting handling-qualities in the context of the proposed new handling-qualities specification for rotorcraft. Topics covered in depth are digital flight control design and analysis methods, flight testing techniques, ADOCS handling-qualities evaluation results, and correlation of flight test results with analytical models and the proposed handling-qualities specification. The evaluation of the ADOCS demonstrator indicates desirable response characteristics based on equivalent damping and frequency, but undesirably large effective time delays (exceeding 240 msec in all axes). Piloted handling-qualities are found to be desirable or adequate for all low, medium, and high pilot gain tasks, but handling-qualities are inadequate for ultra-high gain tasks such as slope and running landings.
Pre-measured Products Boost Efficiency, Cut Costs
ERIC Educational Resources Information Center
Miller, Floyd G.; Pleasant, James
1974-01-01
Reduction of storage requirements, purchasing problems, handling, and costs has become a reality through the use of cleaning products in portion-controlled packages at the Chicago Circle Campus and the Medical Center of the University of Illinois. (Author)
NASA Technical Reports Server (NTRS)
Knoell, A. C.
1972-01-01
Computer program has been specifically developed to handle, in an efficient and cost effective manner, planar wound pressure vessels fabricated of either boron-epoxy or graphite-epoxy advanced composite materials.
DIETARY EXPOSURES OF YOUNG CHILDREN, PART 3: MODELLING
A deterministic model was used to model dietary exposure of young children. Parameters included pesticide residue on food before handling, surface pesticide loading, transfer efficiencies and children's activity patterns. Three components of dietary pesticide exposure were includ...
Realistic simulations of a cyclotron spiral inflector within a particle-in-cell framework
NASA Astrophysics Data System (ADS)
Winklehner, Daniel; Adelmann, Andreas; Gsell, Achim; Kaman, Tulin; Campo, Daniela
2017-12-01
We present an upgrade to the particle-in-cell ion beam simulation code opal that enables us to run highly realistic simulations of the spiral inflector system of a compact cyclotron. This upgrade includes a new geometry class and field solver that can handle the complicated boundary conditions posed by the electrode system in the central region of the cyclotron both in terms of particle termination, and calculation of self-fields. Results are benchmarked against the analytical solution of a coasting beam. As a practical example, the spiral inflector and the first revolution in a 1 MeV /amu test cyclotron, located at Best Cyclotron Systems, Inc., are modeled and compared to the simulation results. We find that opal can now handle arbitrary boundary geometries with relative ease. Simulated injection efficiencies and beam shape compare well with measured efficiencies and a preliminary measurement of the beam distribution after injection.
Papenmeier, Frank; Huff, Markus
2010-02-01
Analyzing gaze behavior with dynamic stimulus material is of growing importance in experimental psychology; however, there is still a lack of efficient analysis tools that are able to handle dynamically changing areas of interest. In this article, we present DynAOI, an open-source tool that allows for the definition of dynamic areas of interest. It works automatically with animations that are based on virtual three-dimensional models. When one is working with videos of real-world scenes, a three-dimensional model of the relevant content needs to be created first. The recorded eye-movement data are matched with the static and dynamic objects in the model underlying the video content, thus creating static and dynamic areas of interest. A validation study asking participants to track particular objects demonstrated that DynAOI is an efficient tool for handling dynamic areas of interest.
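The matching step can be sketched in a few lines of Python; this is an illustration of the general idea rather than DynAOI itself, and the frame rate, the per-frame screen-space bounding boxes and the gaze-sample format are all assumed for the example.

    FRAME_RATE = 60.0   # frames per second of the animation (assumed)

    # aoi_boxes[frame][object_name] = (xmin, ymin, xmax, ymax) in screen pixels
    aoi_boxes = [
        {"ball": (100, 100, 140, 140), "car": (400, 300, 480, 360)},   # frame 0
        {"ball": (110, 100, 150, 140), "car": (398, 300, 478, 360)},   # frame 1
    ]

    gaze_samples = [(0.001, 120, 120), (0.020, 420, 330), (0.020, 5, 5)]   # (t [s], x, y)

    def aoi_hits(samples, boxes, frame_rate=FRAME_RATE):
        hits = []
        for t, x, y in samples:
            frame = min(int(t * frame_rate), len(boxes) - 1)   # frame shown at time t
            hit = None
            for name, (x0, y0, x1, y1) in boxes[frame].items():
                if x0 <= x <= x1 and y0 <= y <= y1:
                    hit = name
                    break
            hits.append((t, hit))          # None means the sample fell outside every AOI
        return hits

    print(aoi_hits(gaze_samples, aoi_boxes))   # [(0.001, 'ball'), (0.02, 'car'), (0.02, None)]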
NASA Technical Reports Server (NTRS)
Ramaswamy, Shankar; Banerjee, Prithviraj
1994-01-01
Appropriate data distribution has been found to be critical for obtaining good performance on Distributed Memory Multicomputers like the CM-5, Intel Paragon and IBM SP-1. It has also been found that some programs need to change their distributions during execution for better performance (redistribution). This work focuses on automatically generating efficient routines for redistribution. We present a new mathematical representation for regular distributions called PITFALLS and then discuss algorithms for redistribution based on this representation. One of the significant contributions of this work is being able to handle arbitrary source and target processor sets while performing redistribution. Another important contribution is the ability to handle an arbitrary number of dimensions for the array involved in the redistribution in a scalable manner. Our implementation of these techniques is based on an MPI-like communication library. The results presented show the low overheads for our redistribution algorithm as compared to naive runtime methods.
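The core interval-intersection idea behind such redistribution can be sketched as follows; this is not the PITFALLS representation or the authors' library, it only covers a 1D BLOCK-to-BLOCK redistribution between arbitrary source and target processor counts, and cyclic or block-cyclic distributions and multidimensional arrays are omitted.

    def block_ranges(n, procs):
        """Half-open index range [lo, hi) owned by each of `procs` processors under BLOCK."""
        size = -(-n // procs)                       # ceiling division
        return [(p * size, min((p + 1) * size, n)) for p in range(procs)]

    def redistribution_schedule(n, src_procs, dst_procs):
        """List of (src, dst, lo, hi): source proc src sends indices [lo, hi) to target proc dst."""
        schedule = []
        for s, (slo, shi) in enumerate(block_ranges(n, src_procs)):
            for d, (dlo, dhi) in enumerate(block_ranges(n, dst_procs)):
                lo, hi = max(slo, dlo), min(shi, dhi)
                if lo < hi:                          # non-empty overlap means a message is needed
                    schedule.append((s, d, lo, hi))
        return schedule

    # Redistribute 100 elements from 4 source processors to 3 target processors.
    for s, d, lo, hi in redistribution_schedule(100, 4, 3):
        print(f"proc {s} -> proc {d}: indices [{lo}, {hi})")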
Wang, Jian-Gang; Sung, Eric; Yau, Wei-Yun
2011-07-01
Facial age classification is an approach to classify face images into one of several predefined age groups. One of the difficulties in applying learning techniques to the age classification problem is the large amount of labeled training data required. Acquiring such training data is very costly in terms of age progress, privacy, human time, and effort. Although unlabeled face images can be obtained easily, it would be expensive to manually label them on a large scale and obtain the ground truth. The frugal selection of unlabeled data for labeling, to quickly reach high classification performance with minimal labeling effort, is a challenging problem. In this paper, we present an active learning approach based on an online incremental bilateral two-dimension linear discriminant analysis (IB2DLDA) which initially learns from a small pool of labeled data and then iteratively selects the most informative samples from the unlabeled set to incrementally improve the classifier. Specifically, we propose a novel data selection criterion called the furthest nearest-neighbor (FNN) that generalizes margin-based uncertainty to the multiclass case and is easy to compute, so that the proposed active learning algorithm can handle a large number of classes and large data sizes efficiently. Empirical experiments on the FG-NET and Morph databases, together with a large unlabeled data set, for age categorization problems show that the proposed approach can achieve results comparable to, or even better than, those of a conventionally trained active classifier that requires much more labeling effort. Our IB2DLDA-FNN algorithm can achieve similar results much faster than random selection and with fewer samples for age categorization. It can also achieve results comparable to active SVM but is much faster to train because kernel methods are not needed. The results on the face recognition database and the palmprint/palm vein database showed that our approach can handle problems with a large number of classes. Our contributions in this paper are twofold. First, we proposed the IB2DLDA-FNN, the FNN being our novel idea, as a generic online or active learning paradigm. Second, we showed that it can be another viable tool for active learning of facial age range classification.
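A minimal sketch of the furthest nearest-neighbor selection idea as stated above: pick the unlabeled samples whose nearest labeled sample is furthest away. This is not the IB2DLDA implementation, and the function signature is an assumption.

    # Sketch of a furthest nearest-neighbor (FNN) style selection rule:
    # choose the unlabeled samples whose closest labeled sample is furthest away.
    import numpy as np
    from scipy.spatial.distance import cdist

    def select_fnn(unlabeled, labeled, batch_size=1):
        d = cdist(unlabeled, labeled)                   # pairwise distances
        nearest = d.min(axis=1)                         # distance to nearest labeled point
        return np.argsort(nearest)[::-1][:batch_size]   # most "uncovered" samples first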
Use of Flowtran Simulation in Education
ERIC Educational Resources Information Center
Clark, J. Peter; Sommerfeld, Jude T.
1976-01-01
Describes the use in chemical engineering education of FLOWTRAN, a large steady-state simulator of chemical processes with extensive facilities for physical and thermodynamic data-handling and a large library of equipment modules, including cost estimation capability. (MLH)
Handling a Large Collection of PDF Documents
You have several options for making a large collection of PDF documents more accessible to your audience: avoid uploading altogether, use multiple document pages, and use document IDs as anchors for direct links within a document page.
DCMS: A data analytics and management system for molecular simulation.
Kumar, Anand; Grupcev, Vladimir; Berrada, Meryem; Fogarty, Joseph C; Tu, Yi-Cheng; Zhu, Xingquan; Pandit, Sagar A; Xia, Yuni
Molecular Simulation (MS) is a powerful tool for studying physical/chemical features of large systems and has seen applications in many scientific and engineering domains. During the simulation process, the experiments generate data on a very large number of atoms, and researchers need to observe their spatial and temporal relationships for scientific analysis. The sheer data volumes and their intensive interactions impose significant challenges for data accessing, managing, and analysis. To date, existing MS software systems fall short on storage and handling of MS data, mainly because they lack a platform to support applications that involve intensive data access and analytical processing. In this paper, we present the database-centric molecular simulation (DCMS) system our team developed in the past few years. The main idea behind DCMS is to store MS data in a relational database management system (DBMS) to take advantage of the declarative query interface (i.e., SQL), data access methods, query processing, and optimization mechanisms of modern DBMSs. A unique challenge is to handle the analytical queries that are often compute-intensive. For that, we developed novel indexing and query processing strategies (including algorithms running on modern co-processors) as integrated components of the DBMS. As a result, researchers can upload and analyze their data using efficient functions implemented inside the DBMS. Index structures are generated to store analysis results that may be interesting to other users, so that the results are readily available without duplicating the analysis. We have developed a prototype of DCMS based on the PostgreSQL system, and experiments using real MS data and workloads show that DCMS significantly outperforms existing MS software systems. We also used it as a platform to test other data management issues such as security and compression.
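To illustrate the database-centric idea, the sketch below stores atom positions in a relational table and answers a spatial range query declaratively. The schema is hypothetical, and SQLite is used only to keep the example self-contained; DCMS itself is built on PostgreSQL with custom indexing and query processing.

    # Illustrative (not the DCMS schema): store atom positions per frame in a
    # relational table and answer a spatial range query with declarative SQL.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE atoms (frame INT, atom_id INT, x REAL, y REAL, z REAL)")
    con.executemany("INSERT INTO atoms VALUES (?, ?, ?, ?, ?)",
                    [(0, 1, 0.1, 0.2, 0.3), (0, 2, 5.0, 5.1, 4.9), (1, 1, 0.2, 0.2, 0.4)])

    # All atoms of frame 0 inside a 1x1x1 box at the origin.
    rows = con.execute("""SELECT atom_id, x, y, z FROM atoms
                          WHERE frame = 0 AND x BETWEEN 0 AND 1
                            AND y BETWEEN 0 AND 1 AND z BETWEEN 0 AND 1""").fetchall()
    print(rows)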
NASA Technical Reports Server (NTRS)
Grantham, W. D.; Smith, P. M.; Deal, P. L.; Neely, W. R., Jr.
1984-01-01
A six-degree-of-freedom, ground-based simulator study is conducted to evaluate the low-speed flight characteristics of four dissimilar cargo transport airplanes. These characteristics are compared with those of a large, present-day (reference) transport configuration similar to the Lockheed C-5A airplane. The four very large transport concepts evaluated consist of single-fuselage, twin-fuselage, triple-fuselage, and span-loader configurations. The primary piloting task is the approach and landing operation. The results of this study indicate that all four concepts evaluated have unsatisfactory longitudinal and lateral-directional low-speed flight characteristics and that considerable stability and control augmentation would be required to improve these characteristics (handling qualities) to a satisfactory level. Through the use of rate command/attitude hold augmentation in the pitch and roll axes, and the use of several turn-coordination features, the handling qualities of all four large transports simulated are improved appreciably.
Verwer, Cynthia M; van der Ark, Arno; van Amerongen, Geert; van den Bos, Ruud; Hendriksen, Coenraad F M
2009-04-01
This paper describes the results of a study of the effects of modified housing conditions and of conditioning and habituation to humans, using a rabbit model for monitoring whole-cell pertussis vaccine (pWCV)-induced adverse effects. The study has been performed with reference to previous vaccine safety studies of pWCV in rabbits in which results were difficult to interpret due to the large variation in experimental outcome, especially in the key parameter deep-body temperature (T(b)). Certain stressful laboratory conditions, as well as procedures involving humans, e.g. blood sampling, inoculation and cage-cleaning, were hypothesized to cause this large variation. The results of this study show that under modified housing conditions rabbits have normal circadian body temperatures. This allowed discrimination of pWCV-induced adverse effects, in which handled rabbits tended to show a dose-related increase in temperature after inoculation with little variance, whereas non-handled rabbits did not. Effects of experimental and routine procedures on body temperature were significantly reduced under modified conditions and were within the normal T(b) range. Handled animals reacted less strongly and with less variance to experimental procedures, such as blood sampling, injection and cage-cleaning, than non-handled rabbits. Overall, handling had a positive effect on the behaviour of the animals. Data show that the housing modifications have provided a more robust model for monitoring pWCV adverse effects. Furthermore, conditioning and habituation of rabbits to humans reduce the variation in experimental outcome, which might allow for a reduction in the number of animals used. In addition, this also reduces distress and thus contributes to refining this animal model.
Novel Binders and Methods for Agglomeration of Ore
DOE Office of Scientific and Technical Information (OSTI.GOV)
S. K. Kawatra; T. C. Eisele; K. A. Lewandowski
2006-12-31
Many metal extraction operations, such as leaching of copper, leaching of precious metals, and reduction of metal oxides to metal in high-temperature furnaces, require agglomeration of ore to ensure that reactive liquids or gases are evenly distributed throughout the ore being processed. Agglomeration of ore into coarse, porous masses achieves this even distribution of fluids by preventing fine particles from migrating and clogging the spaces and channels between the larger ore particles. Binders are critically necessary to produce agglomerates that will not break down during processing. However, for many important metal extraction processes there are no binders known that will work satisfactorily. Primary examples of this are copper heap leaching, where there are no binders that will work in the acidic environment encountered in this process, and advanced ironmaking processes, where binders must function satisfactorily over an extraordinarily large range of temperatures (from room temperature up to over 1200 C). As a result, operators of many facilities see a large loss of process efficiency due to their inability to take advantage of agglomeration. The large quantities of ore that must be handled in metal extraction processes also means that the binder must be inexpensive and useful at low dosages to be economical. The acid-resistant binders and agglomeration procedures developed in this project will also be adapted for use in improving the energy efficiency and performance of a broad range of mineral agglomeration applications, particularly heap leaching and advanced primary ironmaking. This project has identified several acid-resistant binders and agglomeration procedures that can be used for improving the energy efficiency of heap leaching, by preventing the ''ponding'' and ''channeling'' effects that currently cause reduced recovery and extended leaching cycle times. Methods have also been developed for iron ore processing which are intended to improve the performance of pellet binders, and have directly saved energy by increasing filtration rates of the pelletization feed by as much as 23%.
Thermodynamic evaluation of transonic compressor rotors using the finite volume approach
NASA Technical Reports Server (NTRS)
Moore, John; Nicholson, Stephen; Moore, Joan G.
1986-01-01
The development of a computational capability to handle viscous flow with an explicit time-marching method based on the finite volume approach is summarized. Emphasis is placed on the extensions to the computational procedure which allow the handling of shock induced separation and large regions of strong backflow. Appendices contain abstracts of papers and whole reports generated during the contract period.
Procedures to handle inventory cluster plots that straddle two or more conditions
Jerold T. Hahn; Colin D. MacLean; Stanford L. Arner; William A. Bechtold
1995-01-01
We review the relative merits and field procedures for four basic plot designs to handle forest inventory plots that straddle two or more conditions, given that subplots will not be moved. A cluster design is recommended that combines fixed-area subplots and variable-radius plot (VRP) sampling. Each subplot in a cluster consists of a large fixed-area subplot for...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-19
... iron handle, 1 iron handle fragment, 1 iron bowl fragment, 2 iron keys, 1 iron hinge, 1 iron gun hammer, 2 iron gun pieces, 1 fish hook, 12 nails, 1 iron ring, 1 coffee mill, 1 possible iron file, 1 large iron tack, 4 iron rods, 3 unidentified iron fragments, 1 metal tube, 1 scissors fragment, 1 finial...
Almeida, F; Oliveira, F; Neves, R; Siqueira, N; Rodrigues-Silva, R; Daipert-Garcia, D; Machado-Silva, J R
2015-07-01
Polycystic echinococcosis, caused by the larval stage (metacestode) of the small-sized tapeworm, Echinococcus vogeli, is an emerging parasitic zoonosis of great public health concern in the humid tropical rainforests of South and Central America. Because morphological and morphometric characteristics of the metacestode are not well known, hydatid cysts from the liver and the mesentery were examined from patients following surgical procedures. Whole mounts of protoscoleces with rostellar hooks were examined under light and confocal laser scanning microscopy. Measurements were made of both large and small hooks, including the total area, total length, total width, blade area, blade length, blade width, handle area, handle length and handle width. The results confirmed the 1:1 arrangement of hooks in the rostellar pad and indicated, for the first time, that the morphometry of large and small rostellar hooks varies depending upon the site of infection. Light and confocal microscopy images displayed clusters of calcareous corpuscles in the protoscoleces. In conclusion, morphological features of large and small rostellar hooks of E. vogeli are adapted to a varied environment within the vertebrate host and such morphological changes in calcareous corpuscles occur at different stages in the maturation of metacestodes.
Parallel evolutionary computation in bioinformatics applications.
Pinho, Jorge; Sobral, João Luis; Rocha, Miguel
2013-05-01
A large number of optimization problems within the field of Bioinformatics require methods able to handle their inherent complexity (e.g. NP-hard problems) and also demand increased computational effort. In this context, the use of parallel architectures is a necessity. In this work, we propose ParJECoLi, a Java-based library that offers a large set of metaheuristic methods (such as Evolutionary Algorithms) and also addresses the issue of their efficient execution on a wide range of parallel architectures. The proposed approach focuses on ease of use, making the adaptation to distinct parallel environments (multicore, cluster, grid) transparent to the user. Indeed, this work shows how the development of the optimization library can proceed independently of its adaptation for several architectures, making use of Aspect-Oriented Programming. The pluggable nature of the parallelism-related modules allows the user to easily configure the environment, adding parallelism modules to the base source code when needed. The performance of the platform is validated with two case studies within biological model optimization. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
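ParJECoLi itself is a Java library built with Aspect-Oriented Programming; purely as an illustration of the underlying idea of parallel fitness evaluation inside an evolutionary algorithm, a minimal Python sketch might look like the following (the objective function and parameters are placeholders).

    # Generic sketch of parallel fitness evaluation in an evolutionary algorithm
    # (ParJECoLi is Java/AOP-based; this only illustrates the concept).
    import random
    from multiprocessing import Pool

    def fitness(candidate):                      # stand-in objective function
        return sum(x * x for x in candidate)

    def evolve(pop_size=20, dims=5, generations=10):
        pop = [[random.uniform(-1, 1) for _ in range(dims)] for _ in range(pop_size)]
        with Pool() as pool:
            for _ in range(generations):
                scores = pool.map(fitness, pop)              # evaluated in parallel
                ranked = [p for _, p in sorted(zip(scores, pop))]
                parents = ranked[: pop_size // 2]            # keep the best half
                pop = parents + [[x + random.gauss(0, 0.1) for x in random.choice(parents)]
                                 for _ in range(pop_size - len(parents))]
        return ranked[0]

    if __name__ == "__main__":
        print(evolve())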
Ionescu, Crina-Maria; Sehnal, David; Falginella, Francesco L; Pant, Purbaj; Pravda, Lukáš; Bouchal, Tomáš; Svobodová Vařeková, Radka; Geidl, Stanislav; Koča, Jaroslav
2015-01-01
Partial atomic charges are a well-established concept, useful in understanding and modeling the chemical behavior of molecules, from simple compounds, to large biomolecular complexes with many reactive sites. This paper introduces AtomicChargeCalculator (ACC), a web-based application for the calculation and analysis of atomic charges which respond to changes in molecular conformation and chemical environment. ACC relies on an empirical method to rapidly compute atomic charges with accuracy comparable to quantum mechanical approaches. Due to its efficient implementation, ACC can handle any type of molecular system, regardless of size and chemical complexity, from drug-like molecules to biomacromolecular complexes with hundreds of thousands of atoms. ACC writes out atomic charges into common molecular structure files, and offers interactive facilities for statistical analysis and comparison of the results, in both tabular and graphical form. Due to high customizability and speed, easy streamlining and the unified platform for calculation and analysis, ACC caters to all fields of life sciences, from drug design to nanocarriers. ACC is freely available via the Internet at http://ncbr.muni.cz/ACC.
Classification of hyperspectral imagery with neural networks: comparison to conventional tools
NASA Astrophysics Data System (ADS)
Merényi, Erzsébet; Farrand, William H.; Taranik, James V.; Minor, Timothy B.
2014-12-01
Efficient exploitation of hyperspectral imagery is of great importance in remote sensing. Artificial intelligence approaches have been receiving favorable reviews for classification of hyperspectral data because the complexity of such data challenges the limitations of many conventional methods. Artificial neural networks (ANNs) were shown to outperform traditional classifiers in many situations. However, studies that use the full spectral dimensionality of hyperspectral images to classify a large number of surface covers are scarce, if not non-existent. We advocate the need for methods that can handle the full dimensionality and a large number of classes to retain the discovery potential and the ability to discriminate classes with subtle spectral differences. We demonstrate that such a method exists in the family of ANNs. We compare the maximum likelihood, Mahalanobis distance, minimum distance, spectral angle mapper, and a hybrid ANN classifier for real hyperspectral AVIRIS data, using the full spectral resolution to map 23 cover types and using a small training set. Rigorous evaluation of the classification accuracies shows that the ANN outperforms the other methods and achieves ≈90% accuracy on test data.
Constant-pH Molecular Dynamics Simulations for Large Biomolecular Systems
Radak, Brian K.; Chipot, Christophe; Suh, Donghyuk; ...
2017-11-07
We report that an increasingly important endeavor is to develop computational strategies that enable molecular dynamics (MD) simulations of biomolecular systems with spontaneous changes in protonation states under conditions of constant pH. The present work describes our efforts to implement the powerful constant-pH MD simulation method, based on a hybrid nonequilibrium MD/Monte Carlo (neMD/MC) technique within the highly scalable program NAMD. The constant-pH hybrid neMD/MC method has several appealing features; it samples the correct semigrand canonical ensemble rigorously, the computational cost increases linearly with the number of titratable sites, and it is applicable to explicit solvent simulations. The present implementation of the constant-pH hybrid neMD/MC in NAMD is designed to handle a wide range of biomolecular systems with no constraints on the choice of force field. Furthermore, the sampling efficiency can be adaptively improved on-the-fly by adjusting algorithmic parameters during the simulation. Finally, illustrative examples emphasizing medium- and large-scale applications on next-generation supercomputing architectures are provided.
Efficient data management tools for the heterogeneous big data warehouse
NASA Astrophysics Data System (ADS)
Alekseev, A. A.; Osipova, V. V.; Ivanov, M. A.; Klimentov, A.; Grigorieva, N. V.; Nalamwar, H. S.
2016-09-01
Traditional RDBMSs have served normalized data structures well for decades, but the technology is not optimal for data processing and analysis in data-intensive fields such as social networks, the oil and gas industry, or experiments at the Large Hadron Collider. Several challenges have recently been raised concerning the scalability of data-warehouse-like workloads run against transactional schemas, in particular for the analysis of archived data or the aggregation of data for summary and accounting purposes. The paper evaluates new database technologies like HBase, Cassandra, and MongoDB, commonly referred to as NoSQL databases, for handling messy, varied and large amounts of data. The evaluation considers the performance, throughput and scalability of the above technologies for several scientific and industrial use cases. This paper outlines the technologies and architectures needed for processing Big Data, as well as a description of the back-end application that implements data migration from an RDBMS to a NoSQL data warehouse, the NoSQL database organization, and how it could be useful for further data analytics.
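A hedged sketch of the RDBMS-to-NoSQL migration step mentioned above; the table, collection, and field names are hypothetical and do not reflect the paper's actual back-end application.

    # Hedged sketch of RDBMS-to-NoSQL migration (table and field names are
    # hypothetical, not the paper's actual back-end).
    import sqlite3
    from pymongo import MongoClient

    def migrate(sqlite_path, mongo_uri="mongodb://localhost:27017"):
        src = sqlite3.connect(sqlite_path)
        src.row_factory = sqlite3.Row
        rows = src.execute("SELECT * FROM archived_jobs")        # hypothetical table
        docs = [dict(r) for r in rows]                           # each row becomes a document
        if docs:
            MongoClient(mongo_uri)["warehouse"]["archived_jobs"].insert_many(docs)
        return len(docs)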
Exploratory Item Classification Via Spectral Graph Clustering
Chen, Yunxiao; Li, Xiaoou; Liu, Jingchen; Xu, Gongjun; Ying, Zhiliang
2017-01-01
Large-scale assessments are supported by a large item pool. An important task in test development is to assign items into scales that measure different characteristics of individuals, and a popular approach is cluster analysis of items. Classical methods in cluster analysis, such as the hierarchical clustering, K-means method, and latent-class analysis, often induce a high computational overhead and have difficulty handling missing data, especially in the presence of high-dimensional responses. In this article, the authors propose a spectral clustering algorithm for exploratory item cluster analysis. The method is computationally efficient, effective for data with missing or incomplete responses, easy to implement, and often outperforms traditional clustering algorithms in the context of high dimensionality. The spectral clustering algorithm is based on graph theory, a branch of mathematics that studies the properties of graphs. The algorithm first constructs a graph of items, characterizing the similarity structure among items. It then extracts item clusters based on the graphical structure, grouping similar items together. The proposed method is evaluated through simulations and an application to the revised Eysenck Personality Questionnaire. PMID:29033476
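A minimal sketch of the item-graph idea described above: build an item-by-item similarity matrix from the available responses, then cluster its graph spectrally. It uses off-the-shelf scikit-learn spectral clustering rather than the authors' exact algorithm, and the choice of absolute pairwise-complete correlations as the similarity measure is an assumption.

    # Sketch of exploratory item clustering via spectral clustering on an
    # item-by-item similarity graph (not the authors' exact algorithm).
    import numpy as np
    import pandas as pd
    from sklearn.cluster import SpectralClustering

    def cluster_items(responses: pd.DataFrame, n_scales: int):
        """responses: persons x items, may contain NaN for missing answers."""
        sim = responses.corr().abs().fillna(0.0).to_numpy()  # pairwise-complete correlations
        np.fill_diagonal(sim, 1.0)
        labels = SpectralClustering(n_clusters=n_scales,
                                    affinity="precomputed",
                                    assign_labels="kmeans",
                                    random_state=0).fit_predict(sim)
        return dict(zip(responses.columns, labels))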
Quantifying the effect of editor-author relations on manuscript handling times.
Sarigöl, Emre; Garcia, David; Scholtes, Ingo; Schweitzer, Frank
2017-01-01
In this article we study to what extent the academic peer review process is influenced by social relations between the authors of a manuscript and the editor handling the manuscript. Taking the open access journal PlosOne as a case study, our analysis is based on a data set of more than 100,000 articles published between 2007 and 2015. Using available data on handling editor, submission and acceptance time of manuscripts, we study the question whether co-authorship relations between authors and the handling editor affect the manuscript handling time, i.e. the time taken between the submission and acceptance of a manuscript. Our analysis reveals (1) that editors handle papers co-authored by previous collaborators significantly more often than expected at random, and (2) that such prior co-author relations are significantly related to faster manuscript handling. Addressing the question whether these shorter manuscript handling times can be explained by the quality of publications, we study the number of citations and downloads which accepted papers eventually accumulate. Moreover, we consider the influence of additional (social) factors, such as the editor's experience, the topical similarity between authors and editors, as well as reciprocal citation relations between authors and editors. Our findings show that, even when correcting for other factors like time, experience, and performance, prior co-authorship relations have a large and significant influence on manuscript handling times, speeding up the editorial decision on average by 19 days.
Pilot-Induced Oscillation Prediction With Three Levels of Simulation Motion Displacement
NASA Technical Reports Server (NTRS)
Schroeder, Jeffery A.; Chung, William W. Y.; Tran, Duc T.; Laforce, Soren; Bengford, Norman J.
2001-01-01
Simulator motion platform characteristics were examined to determine if the amount of motion affects pilot-induced oscillation (PIO) prediction. Five test pilots evaluated how susceptible 18 different sets of pitch dynamics were to PIOs with three different levels of simulation motion platform displacement: large, small, and none. The pitch dynamics were those of a previous in-flight experiment, some of which elicited PIOs. These in-flight results served as truth data for the simulation. As such, the in-flight experiment was replicated as much as possible. Objective and subjective data were collected and analyzed. With large motion, PIO and handling qualities ratings matched the flight data more closely than did small motion or no motion. Also, regardless of the aircraft dynamics, large motion increased pilot confidence in assigning handling qualities ratings, reduced safety pilot trips, and lowered touchdown velocities. While both large and small motion provided a pitch rate cue of high fidelity, only large motion presented the pilot with a high-fidelity vertical acceleration cue.
Orbital maneuvering end effectors
NASA Technical Reports Server (NTRS)
Myers, W. Neill (Inventor); Forbes, John C. (Inventor); Barnes, Wayne L. (Inventor)
1986-01-01
This invention relates to an end effector device for grasping and maneuvering objects such as the berthing handles of a space telescope. The device includes a V-shaped capture window defined by inclined surfaces in parallel face plates which converge toward a retainer recess in which the handle is retained. A pivotal finger meshes with a pair of pivoted fingers which rotate in counterrotation. The fingers rotate to pull a handle within the capture window into the recess, where latches lock the handle in place. To align the capture window, the plates may be cocked plus or minus five degrees on the base. Drive means is included in the form of a motor coupled with a harmonic drive speed reducer, which provides for slow movement of the fingers at high torque so that large articles may be handled. Novelty of the invention is believed to reside in the combined intermeshing finger structure, drive means, and harmonic drive speed reducer, which features provide the required maneuverability and strength.
An instrument to aid tubal sterilization by laparoscopy.
Siegler, A M
1972-05-01
A single-handled instrument, developed by Siegler for his two-incision technique, has broad biopsy capability. The shaft and handle are insulated to protect the operator from shock; the jaws rotate independently from the handle position; an O-ring seal in the cannula eliminates the need for external sealing devices for carbon dioxide maintenance; and either the cutting or coagulation power may be applied. The biopsy instrument can coagulate and biopsy both tubes without removing the forceps after treating one side since the jaws are large enough to accommodate both segments. The instrument is manufactured by the American Cystoscope Makers, Inc., Pelham Manor, New York.
Evaluation of a New Remote Handling Design for High Throughput Annular Centrifugal Contactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
David H. Meikrantz; Troy G. Garn; Jack D. Law
2009-09-01
Advanced designs of nuclear fuel recycling plants are expected to include more ambitious goals for aqueous based separations including; higher separations efficiency, high-level waste minimization, and a greater focus on continuous processes to minimize cost and footprint. Therefore, Annular Centrifugal Contactors (ACCs) are destined to play a more important role for such future processing schemes. Previous efforts defined and characterized the performance of commercial 5 cm and 12.5 cm single-stage ACCs in a “cold” environment. The next logical step, the design and evaluation of remote capable pilot scale ACCs in a “hot” or radioactive environment was reported earlier. This report includes the development of remote designs for ACCs that can process the large throughput rates needed in future nuclear fuel recycling plants. Novel designs were developed for the remote interconnection of contactor units, clean-in-place and drain connections, and a new solids removal collection chamber. A three stage, 12.5 cm diameter rotor module has been constructed and evaluated for operational function and remote handling in highly radioactive environments. This design is scalable to commercial CINC ACC models from V-05 to V-20 with total throughput rates ranging from 20 to 650 liters per minute. The V-05R three stage prototype was manufactured by the commercial vendor for ACCs in the U.S., CINC mfg. It employs three standard V-05 clean-in-place (CIP) units modified for remote service and replacement via new methods of connection for solution inlets, outlets, drain and CIP. Hydraulic testing and functional checks were successfully conducted and then the prototype was evaluated for remote handling and maintenance suitability. Removal and replacement of the center position V-05R ACC unit in the three stage prototype was demonstrated using an overhead rail mounted PaR manipulator. This evaluation confirmed the efficacy of this innovative design for interconnecting and cleaning individual stages while retaining the benefits of commercially reliable ACC equipment for remote applications in the nuclear industry. Minor modifications and suggestions for improved manual remote servicing by the remote handling specialists were provided but successful removal and replacement was demonstrated in the first prototype.
Power subsystem automation study
NASA Technical Reports Server (NTRS)
Imamura, M. S.; Moser, R. L.; Veatch, M.
1983-01-01
Generic power-system elements and their potential faults are identified. Automation functions and their resulting benefits are defined, and automation functions are partitioned between the power subsystem, the central spacecraft computer, and ground flight-support personnel. All automation activities were categorized as data handling, monitoring, routine control, fault handling, planning and operations, or anomaly handling. Incorporation of all these classes of tasks, except for anomaly handling, in power subsystem hardware and software was concluded to be mandatory to meet the design and operational requirements of the space station. The key drivers are long mission lifetime, modular growth, high-performance flexibility, a need to accommodate different electrical user-load equipment, on-orbit assembly/maintenance/servicing, and a potentially large number of power subsystem components. A significant effort in algorithm development and validation is essential to meeting the 1987 technology readiness date for the space station.
Wolff, Sebastian; Bucher, Christian
2013-01-01
This article presents asynchronous collision integrators and a simple asynchronous method for treating nodal restraints. Asynchronous discretizations allow individual time step sizes for each spatial region, improving the efficiency of explicit time stepping for finite element meshes with heterogeneous element sizes. The article first introduces asynchronous variational integration expressed by drift and kick operators. Linear nodal restraint conditions are solved by a simple projection of the forces that is shown to be equivalent to RATTLE. Unilateral contact is solved by an asynchronous variant of decomposition contact response, in which velocities are modified to avoid penetrations. Although decomposition contact response solves a large system of linear equations (which is critical for the numerical efficiency of explicit time-stepping schemes) and needs special treatment regarding overconstraint and linear dependency of the contact constraints (for example from double-sided node-to-surface contact or self-contact), the asynchronous strategy handles these situations efficiently and robustly. Only a single constraint involving a very small number of degrees of freedom is considered at once, leading to a very efficient solution. The treatment of friction is exemplified for the Coulomb model. The contact of nodes that are subject to restraints needs special care. Together with the aforementioned projection for restraints, a novel efficient solution scheme is presented. The collision integrator does not influence the critical time step; hence, the time step can be chosen independently from the underlying time-stepping scheme and may be fixed or time-adaptive. New demands on global collision detection are discussed, exemplified by position codes and node-to-segment integration. Numerical examples illustrate the convergence and efficiency of the new contact algorithm. Copyright © 2013 The Authors. International Journal for Numerical Methods in Engineering published by John Wiley & Sons, Ltd. PMID:23970806
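The full asynchronous, contact-aware scheme is involved; purely as a building-block illustration, a synchronous kick-drift-kick step for a single degree of freedom looks as follows. This is not the asynchronous variational integrator or the contact algorithm itself.

    # Minimal synchronous kick-drift-kick step (a building block of the variational
    # integrators composed asynchronously in the article; not the full scheme).
    import numpy as np

    def kick_drift_kick(x, v, force, mass, dt):
        v = v + 0.5 * dt * force(x) / mass   # kick: half-step velocity update
        x = x + dt * v                       # drift: full-step position update
        v = v + 0.5 * dt * force(x) / mass   # kick: second half-step
        return x, v

    # Example: harmonic oscillator with k = m = 1
    x, v = np.array([1.0]), np.array([0.0])
    for _ in range(1000):
        x, v = kick_drift_kick(x, v, lambda q: -q, 1.0, 0.01)
    print(x, v)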
Structator: fast index-based search for RNA sequence-structure patterns
2011-01-01
Background The secondary structure of RNA molecules is intimately related to their function and often more conserved than the sequence. Hence, the important task of searching databases for RNAs requires matching sequence-structure patterns. Unfortunately, current tools for this task have, in the best case, a running time that is only linear in the size of sequence databases. Furthermore, established index data structures for fast sequence matching, like suffix trees or arrays, cannot benefit from the complementarity constraints introduced by the secondary structure of RNAs. Results We present a novel method and readily applicable software for time-efficient matching of RNA sequence-structure patterns in sequence databases. Our approach is based on affix arrays, a recently introduced index data structure, preprocessed from the target database. Affix arrays support bidirectional pattern search, which is required for efficiently handling the structural constraints of the pattern. Structural patterns like stem-loops can be matched inside out, such that the loop region is matched first and then the pairing bases on the boundaries are matched consecutively. This allows base-pairing information to be exploited for search-space reduction and leads to an expected running time that is sublinear in the size of the sequence database. The incorporation of a new chaining approach in the search for RNA sequence-structure patterns enables the description of molecules folding into complex secondary structures with multiple ordered patterns. The chaining approach removes spurious matches from the set of intermediate results, in particular for patterns with little specificity. In benchmark experiments on the Rfam database, our method runs up to two orders of magnitude faster than previous methods. Conclusions The presented method's sublinear expected running time makes it well suited for RNA sequence-structure pattern matching in large sequence databases. RNA molecules containing several stem-loop substructures can be described by multiple sequence-structure patterns, and their matches are efficiently handled by a novel chaining method. Beyond our algorithmic contributions, we provide with Structator a complete and robust open-source software solution for index-based search of RNA sequence-structure patterns. The Structator software is available at http://www.zbh.uni-hamburg.de/Structator. PMID:21619640
Pyglidein - A Simple HTCondor Glidein Service
NASA Astrophysics Data System (ADS)
Schultz, D.; Riedel, B.; Merino, G.
2017-10-01
A major challenge for data processing and analysis at the IceCube Neutrino Observatory presents itself in connecting a large set of individual clusters together to form a computing grid. Most of these clusters do not provide a “standard” grid interface. Using a local account on each submit machine, HTCondor glideins can be submitted to virtually any type of scheduler. The glideins then connect back to a main HTCondor pool, where jobs can run normally with no special syntax. To respond to dynamic load, a simple server advertises the number of idle jobs in the queue and the resources they request. The submit script can query this server to optimize glideins to what is needed, or not submit if there is no demand. Configuring HTCondor dynamic slots in the glideins allows us to efficiently handle varying memory requirements as well as whole-node jobs. One step of the IceCube simulation chain, photon propagation in the ice, heavily relies on GPUs for faster execution. Therefore, one important requirement for any workload management system in IceCube is to handle GPU resources properly. Within the pyglidein system, we have successfully configured HTCondor glideins to use any GPU allocated to it, with jobs using the standard HTCondor GPU syntax to request and use a GPU. This mechanism allows us to seamlessly integrate our local GPU cluster with remote non-Grid GPU clusters, including specially allocated resources at XSEDE supercomputers.
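A hedged sketch of the demand-driven idea described above: poll a status endpoint for idle jobs and cap the number of glideins to submit. The URL, JSON fields, and threshold are hypothetical and do not reproduce the actual pyglidein client/server protocol.

    # Demand-driven glidein submission sketch (endpoint and JSON fields are
    # hypothetical, not the actual pyglidein protocol).
    import json
    from urllib.request import urlopen

    def glideins_to_submit(server_url, max_per_cycle=10):
        with urlopen(server_url) as resp:                 # e.g. http://pool.example/status
            demand = json.load(resp)                      # [{"idle": 42, "gpus": 1, ...}, ...]
        wanted = sum(entry.get("idle", 0) for entry in demand)
        return min(wanted, max_per_cycle)                 # submit nothing when demand is zero

    # A real submit script would then write an HTCondor submit file
    # (request_gpus, request_memory, ...) and call condor_submit that many times.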
A CityGML Extension for Handling Very Large Tins
NASA Astrophysics Data System (ADS)
Kumar, K.; Ledoux, H.; Stoter, J.
2016-10-01
In addition to buildings, the terrain forms an important part of a 3D city model. Although in GIS terrains are usually represented with 2D grids, TINs are also increasingly being used in practice. One example is 3DTOP10NL, the 3D city model covering the whole of the Netherlands, which stores the relief with a constrained TIN containing more than 1 billion triangles. Due to the massive size of such datasets, the main problem that arises is: how to efficiently store and maintain them? While CityGML supports the storage of TINs, we argue in this paper that the current solution is not adequate. For instance, the 1 billion+ triangles of 3DTOP10NL require 686 GB of storage space with CityGML. Furthermore, the current solution does not store the topological relationships of the triangles, and also there are no clear mechanisms to handle several LODs. We propose in this paper a CityGML extension for the compact representation of terrains. We describe our abstract and implementation specifications (modelled in UML), and our prototype implementation to convert TINs to our CityGML structure. It increases the topological relationships that are explicitly represented, and allows us to compress up to a factor of ∼ 25 in our experiments with massive real-world terrains (more than 1 billion triangles).
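As an illustration of the kind of compact, topology-aware terrain representation the extension targets, the sketch below stores vertices once and lets triangles and their neighbours reference them by index; this is not the proposed CityGML schema, only the underlying data-structure idea.

    # Indexed TIN with explicit adjacency (illustrative; not the proposed CityGML
    # schema): vertices are stored once, triangles and their neighbours reference
    # them by index instead of repeating coordinates.
    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class IndexedTin:
        vertices: List[Tuple[float, float, float]] = field(default_factory=list)
        triangles: List[Tuple[int, int, int]] = field(default_factory=list)   # vertex indices
        neighbours: List[Tuple[int, int, int]] = field(default_factory=list)  # triangle indices, -1 = boundary

    tin = IndexedTin(
        vertices=[(0, 0, 1.0), (1, 0, 1.2), (1, 1, 1.1), (0, 1, 0.9)],
        triangles=[(0, 1, 2), (0, 2, 3)],
        neighbours=[(-1, -1, 1), (0, -1, -1)],
    )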
Mixed Oxide Fresh Fuel Package Auxiliary Equipment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yapuncich, F.; Ross, A.; Clark, R.H.
2008-07-01
The United States Department of Energy's National Nuclear Security Administration (NNSA) is overseeing the construction the Mixed Oxide (MOX) Fuel Fabrication Facility (MFFF) on the Savannah River Site. The new facility, being constructed by NNSA's contractor Shaw AREVA MOX Services, will fabricate fuel assemblies utilizing surplus plutonium as feedstock. The fuel will be used in designated commercial nuclear reactors. The MOX Fresh Fuel Package (MFFP), which has recently been licensed by the Nuclear Regulatory Commission (NRC) as a type B package (USA/9295/B(U)F-96), will be utilized to transport the fabricated fuel assemblies from the MFFF to the nuclear reactors. It was necessary to develop auxiliary equipment that would be able to efficiently handle the high precision fuel assemblies. Also, the physical constraints of the MFFF and the nuclear power plants require that the equipment be capable of loading and unloading the fuel assemblies both vertically and horizontally. The ability to reconfigure the load/unload evolution builds in a large degree of flexibility for the MFFP for the handling of many types of both fuel and non fuel payloads. The design and analysis met various technical specifications including dynamic and static seismic criteria. The fabrication was completed by three major fabrication facilities within the United States. The testing was conducted by Sandia National Laboratories. The unique design specifications and successful testing sequences will be discussed. (authors)
Countering Insider Threats - Handling Insider Threats Using Dynamic, Run-Time Forensics
2007-10-01
able to handle the security policy requirements of a large organization containing many decentralized and diverse users, while being easily managed... contained in the TIF folder. Searching for any text string and sorting is supported also. The cache index file of Internet Explorer is not changed... containing thousands of malware software signatures. Separate datasets can be created for various classifications of malware such as encryption software
Registration of Large Motion Blurred Images
2016-05-09
in handling the dynamics of the capturing system, for example, a drone. CMOS sensors, used in recent times, when employed in these cameras produce two types ... blur in the captured image when there is camera motion during exposure. However, contemporary CMOS sensors employ an electronic rolling shutter (RS
A Process Research Framework: The International Process Research Consortium
2006-12-01
projects? P-30 How should a process for collaborative development be formulated? The development at different companies ... requires some process for the actual collaboration. How should it be handled? P-31 How do we handle change? Requirements change during development ... source projects employ a single-site development model in which there is no large community of testers but rather a single-site small group
Yurtkuran, Alkın; Emel, Erdal
2014-01-01
The traveling salesman problem with time windows (TSPTW) is a variant of the traveling salesman problem in which each customer should be visited within a given time window. In this paper, we propose an electromagnetism-like algorithm (EMA) that uses a new constraint handling technique to minimize the travel cost in TSPTW problems. The EMA utilizes the attraction-repulsion mechanism between charged particles in a multidimensional space for global optimization. This paper investigates the problem-specific constraint handling capability of the EMA framework using a new variable bounding strategy, in which real-coded particle's boundary constraints associated with the corresponding time windows of customers, is introduced and combined with the penalty approach to eliminate infeasibilities regarding time window violations. The performance of the proposed algorithm and the effectiveness of the constraint handling technique have been studied extensively, comparing it to that of state-of-the-art metaheuristics using several sets of benchmark problems reported in the literature. The results of the numerical experiments show that the EMA generates feasible and near-optimal results within shorter computational times compared to the test algorithms.
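A minimal sketch of the penalty part of the constraint handling described above (travel cost plus a penalty proportional to total time-window violation); the EMA itself and its variable bounding strategy are not reproduced here, and the penalty weight is an arbitrary choice.

    # Penalty-based evaluation of a TSPTW tour (sketch of the constraint-handling
    # idea only; the EMA's variable bounding strategy is not reproduced here).
    def tour_cost(tour, travel, windows, penalty=1e4):
        """tour: customer sequence starting at the depot; travel[i][j]: travel time;
        windows[i] = (earliest, latest) service window for customer i."""
        t, cost, violation = 0.0, 0.0, 0.0
        for a, b in zip(tour, tour[1:]):
            t += travel[a][b]
            cost += travel[a][b]
            earliest, latest = windows[b]
            if t < earliest:
                t = earliest                    # wait until the window opens
            elif t > latest:
                violation += t - latest         # infeasible arrival
        return cost + penalty * violation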
49 CFR 178.915 - General Large Packaging standards.
Code of Federal Regulations, 2010 CFR
2010-10-01
.... Large Packagings intended for solid hazardous materials must be sift-proof and water-resistant. (b) All... materials, the internal pressure of the contents and the stresses of normal handling and transport. A Large... without gross distortion or failure and must be positioned so as to cause no undue stress in any part of...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-21
... INTERNATIONAL TRADE COMMISSION [Investigation No. 731-TA-1189 (Final)] Large Power Transformers... transformers, provided for in subheading 8504.23.00 of the Harmonized Tariff Schedule of the United States.\\1... merchandise as ``large liquid dielectric power transformers (LPTs) having a top power handling capacity...
Shared-Ride Taxi Computer Control System Requirements Study
DOT National Transportation Integrated Search
1977-08-01
The technical problem of scheduling and routing shared-ride taxi service is so great that only computers can handle it efficiently. This study is concerned with defining the requirements of such a computer system. The major objective of this study is...
Code of Federal Regulations, 2013 CFR
2013-10-01
... ENVIRONMENT, ENERGY AND WATER EFFICIENCY, RENEWABLE ENERGY TECHNOLOGIES, OCCUPATIONAL SAFETY, AND DRUG-FREE... affects the safety and/or health of post personnel, including the handling of hazardous materials, shall comply with the applicable requirements of the Department of State Safety/Health and Environmental...
Code of Federal Regulations, 2012 CFR
2012-10-01
... ENVIRONMENT, ENERGY AND WATER EFFICIENCY, RENEWABLE ENERGY TECHNOLOGIES, OCCUPATIONAL SAFETY, AND DRUG-FREE... affects the safety and/or health of post personnel, including the handling of hazardous materials, shall comply with the applicable requirements of the Department of State Safety/Health and Environmental...
Procedure to Respond and Handle Personally Identifiable Information Incidents at EPA
This procedure seeks to assist Environmental Protection Agency officials in conducting their duties in the event of a PII incident by providing practical guidance on responding to incidents effectively and efficiently via the use of an incident taxonomy.
NASA Astrophysics Data System (ADS)
Elgohary, T.; Kim, D.; Turner, J.; Junkins, J.
2014-09-01
Several methods exist for integrating the motion in high-order gravity fields. Some recent methods use an approximate starting orbit, and an efficient method is needed for generating warm starts that account for specific low-order gravity approximations. By introducing two scalar Lagrange-like invariants and employing the Leibniz product rule, the perturbed motion is integrated by a novel recursive formulation. The Lagrange-like invariants allow exact arbitrary-order time derivatives. Restricting attention to the perturbations due to the zonal harmonics J2 through J6, we illustrate the approach. The recursively generated vector-valued time derivatives for the trajectory are used to develop a continuation series-based solution for propagating position and velocity. Numerical comparisons indicate performance improvements of ~70X over existing explicit Runge-Kutta methods while maintaining mm accuracy for the orbit predictions. The Modified Chebyshev Picard Iteration (MCPI) is an iterative path approximation method to solve nonlinear ordinary differential equations. The MCPI utilizes Picard iteration with orthogonal Chebyshev polynomial basis functions to recursively update the states. The key advantages of the MCPI are as follows: 1) Large segments of a trajectory can be approximated by evaluating the forcing function at multiple nodes along the current approximation during each iteration. 2) It can readily handle general gravity perturbations as well as non-conservative forces. 3) Parallel applications are possible. The Picard sequence converges to the solution over large time intervals when the forces are continuous and differentiable. Depending on the accuracy of the starting solution, however, the MCPI may require a significant number of iterations and function evaluations compared to other integrators. In this work, we provide an efficient methodology to establish good starting solutions from the continuation series method; this warm start improves the performance of the MCPI significantly and will likely be useful for other applications where efficiently computed approximate orbit solutions are needed.
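To show why a good warm start matters, here is plain Picard iteration on a fixed time grid; the MCPI additionally uses Chebyshev nodes and orthogonal polynomial bases, which this sketch deliberately omits.

    # Plain Picard iteration x_{k+1}(t) = x0 + integral of f(s, x_k(s)) ds on a fixed
    # grid (the MCPI adds Chebyshev nodes/bases; shown only to illustrate why a good
    # warm start reduces the number of iterations).
    import numpy as np

    def picard(f, x0, t, iters=20, warm_start=None):
        x = np.full_like(t, x0) if warm_start is None else warm_start.copy()
        for _ in range(iters):
            fx = f(t, x)
            integral = np.concatenate(([0.0],
                        np.cumsum(0.5 * (fx[1:] + fx[:-1]) * np.diff(t))))  # trapezoid rule
            x = x0 + integral
        return x

    t = np.linspace(0.0, 1.0, 101)
    print(picard(lambda s, y: y, 1.0, t)[-1])   # approximates e ~ 2.718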
Optimization of an organic memristor as an adaptive memory element
NASA Astrophysics Data System (ADS)
Berzina, Tatiana; Smerieri, Anteo; Bernabò, Marco; Pucci, Andrea; Ruggeri, Giacomo; Erokhin, Victor; Fontana, M. P.
2009-06-01
The combination of memory and signal handling characteristics of a memristor makes it a promising candidate for adaptive bioinspired information processing systems. This poses stringent requirements on the basic device, such as stability and reproducibility over a large number of training/learning cycles, and a large anisotropy in the fundamental control material parameter, in our case the electrical conductivity. In this work we report results on the improved performance of electrochemically controlled polymeric memristors, where optimization of a conducting polymer (polyaniline) in the active channel and better environmental control of fabrication methods led to a large increase both in the absolute values of the conductivity in the partially oxydized state of polyaniline and of the on-off conductivity ratio. These improvements are crucial for the application of the organic memristor to adaptive complex signal handling networks.
A multi target approach to control chemical reactions in their inhomogeneous solvent environment
NASA Astrophysics Data System (ADS)
Keefer, Daniel; Thallmair, Sebastian; Zauleck, Julius P. P.; de Vivie-Riedle, Regina
2015-12-01
Shaped laser pulses offer a powerful tool to manipulate molecular quantum systems. Their application to chemical reactions in solution is a promising concept to redesign chemical synthesis. Along this road, theoretical developments to include the solvent surrounding are necessary. An appropriate theoretical treatment is helpful to understand the underlying mechanisms. In our approach we simulate the solvent by randomly selected snapshots from molecular dynamics trajectories. We use multi target optimal control theory to optimize pulses for the various arrangements of explicit solvent molecules simultaneously. This constitutes a major challenge for the control algorithm, as the solvent configurations introduce a large inhomogeneity to the potential surfaces. We investigate how the algorithm handles the new challenges and how well the controllability of the system is preserved with increasing complexity. Additionally, we introduce a way to statistically estimate the efficiency of the optimized laser pulses in the complete thermodynamical ensemble.
NASA Astrophysics Data System (ADS)
Nawir, Mukrimah; Amir, Amiza; Lynn, Ong Bi; Yaakob, Naimah; Badlishah Ahmad, R.
2018-05-01
The rapid growth of networked technologies exposes them to various network attacks, because data are frequently exchanged over the Internet and large-scale data must be handled. Moreover, network anomaly detection using machine learning is hampered by the scarcity of publicly available labelled network datasets, which has led many researchers to keep using the most common dataset (KDDCup99), even though it is no longer well suited for evaluating machine learning (ML) classification algorithms. Several issues regarding the available labelled network datasets are discussed in this paper. The aim of this paper is to build a network anomaly detection system using machine learning algorithms that is efficient, effective and fast. The findings show that the AODE algorithm performs well in terms of accuracy and processing time for binary classification on the UNSW-NB15 dataset.
FRASS: the web-server for RNA structural comparison
2010-01-01
Background The impressive increase in novel RNA structures during the past few years demands automated methods for structure comparison. While many algorithms handle only small motifs, a few techniques developed in recent years (ARTS, DIAL, SARA, SARSA, and LaJolla) are available for the structural comparison of large, intact RNA molecules. Results The FRASS web-server represents an RNA chain with its Gauss integrals and allows one to compare structures of RNA chains and to find similar entries in a database derived from the Protein Data Bank. We observed that FRASS scores correlate well with the ARTS and LaJolla similarity scores. Moreover, the web-server can also satisfactorily reproduce the DARTS classification of RNA 3D structures and the classification of the SCOR functions that was obtained by the SARA method. Conclusions The FRASS web-server can be easily used to detect relationships among RNA molecules and to scan efficiently the rapidly enlarging structural databases. PMID:20553602
Imaging samples larger than the field of view: the SLS experience
NASA Astrophysics Data System (ADS)
Vogiatzis Oikonomidis, Ioannis; Lovric, Goran; Cremona, Tiziana P.; Arcadu, Filippo; Patera, Alessandra; Schittny, Johannes C.; Stampanoni, Marco
2017-06-01
Volumetric datasets with micrometer spatial and sub-second temporal resolutions are nowadays routinely acquired using synchrotron X-ray tomographic microscopy (SRXTM). Although SRXTM technology allows the examination of multiple samples with short scan times, many specimens are larger than the field-of-view (FOV) provided by the detector. The extension of the FOV in the direction perpendicular to the rotation axis remains non-trivial. We present a method that can efficiently increase the FOV merging volumetric datasets obtained by region-of-interest tomographies in different 3D positions of the sample with a minimal amount of artefacts and with the ability to handle large amounts of data. The method has been successfully applied for the three-dimensional imaging of a small number of mouse lung acini of intact animals, where pixel sizes down to the micrometer range and short exposure times are required.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Starr, D. L.; Wozniak, P. R.; Vestrand, W. T.
2002-01-01
SkyDOT (Sky Database for Objects in Time-Domain) is a Virtual Observatory currently comprised of data from the RAPTOR, ROTSE I, and OGLE II survey projects. This makes it a very large time-domain database. In addition, the RAPTOR project provides SkyDOT with real-time variability data as well as stereoscopic information. With its web interface, we believe SkyDOT will be a very useful tool for both astronomers and the public. Our main task has been to construct an efficient relational database containing all existing data, while handling a real-time inflow of data. We also provide a useful web interface allowing easy access to both astronomers and the public. Initially, this server will allow common searches, specific queries, and access to light curves. In the future we will include machine learning classification tools and access to spectral information.
Blade size and weight effects in shovel design.
Freivalds, A; Kim, Y J
1990-03-01
The shovel is a basic tool that has undergone only nominal systematic design changes. Although previous studies found shovel-weight and blade-size effects of shovelling, the exact trade-off between the two has not been quantified. Energy expenditure, heart rate, ratings of perceived exertion and shovelling performance were measured on five subjects using five shovels with varying blade sizes and weights to move sand. Energy expenditure, normalised to subject weight and load handled, varied quadratically with the blade-size/shovel-weight (B/W) ratio. Minimum energy cost was at B/W = 0.0676 m2/kg, which for an average subject and average load would require an acceptable 5.16 kcal/min of energy expenditure. Subjects, through the ratings of perceived exertion, also strongly preferred the lighter shovels without regard to blade size. Too large a blade or too heavy a shovel increased energy expenditure beyond acceptable levels, while too small a blade reduced efficiency of the shovelling.
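Purely as illustrative arithmetic, the reported optimum B/W ratio implies the following blade areas for a few assumed shovel weights; the weights are examples, not values from the study.

    # Illustrative arithmetic: blade area implied by the reported optimum
    # B/W = 0.0676 m^2/kg for a few assumed shovel weights.
    OPTIMAL_RATIO = 0.0676  # m^2 per kg (value reported in the study)
    for shovel_kg in (1.0, 1.5, 2.0):        # assumed shovel weights
        blade_m2 = OPTIMAL_RATIO * shovel_kg
        print(f"{shovel_kg:.1f} kg shovel -> ~{blade_m2:.3f} m^2 blade")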
An efficient approach to integrated MeV ion imaging.
Nikbakht, T; Kakuee, O; Solé, V A; Vosuoghi, Y; Lamehi-Rachti, M
2018-03-01
An ionoluminescence (IL) spectral imaging system, alongside the common MeV ion imaging facilities such as µ-PIXE and µ-RBS, has been implemented at the Van de Graaff laboratory of Tehran. Versatile processing software is required to handle the large amount of data collected concurrently in µ-IL and conventional MeV ion imaging measurements. The open-source freeware PyMca, with its image processing and multivariate analysis capabilities, is employed to process conventional MeV ion imaging and µ-IL data simultaneously. Here, the program was adapted to support the OM_DAQ listmode data format. The proper performance of the µ-IL data acquisition system is confirmed through a case study. Moreover, the capabilities of the software for simultaneous analysis of µ-PIXE and µ-RBS experimental data are presented. Copyright © 2017 Elsevier B.V. All rights reserved.
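Independently of PyMca's internals, the core of any listmode pipeline is binning events into a per-pixel spectrum cube from which maps are sliced. The sketch below is a generic, hypothetical example; the OM_DAQ decoding step itself is format specific and omitted.

```python
import numpy as np

def listmode_to_cube(events, nx, ny, nchan):
    """Bin listmode events into a (ny, nx, nchan) spectrum cube.

    events : iterable of (x, y, channel) integer triplets, e.g. decoded from a
             listmode file (the decoding is detector specific and not shown).
    """
    cube = np.zeros((ny, nx, nchan), dtype=np.uint32)
    for x, y, ch in events:
        if 0 <= x < nx and 0 <= y < ny and 0 <= ch < nchan:
            cube[y, x, ch] += 1
    return cube

# An elemental or IL map is then an integral over a channel window, e.g.:
# map_img = cube[:, :, lo:hi].sum(axis=2)
```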
FEM Techniques for High Stress Detection in Accelerated Fatigue Simulation
NASA Astrophysics Data System (ADS)
Veltri, M.
2016-09-01
This work presents the theory and a numerical validation study in support of a novel method for a priori identification of fatigue-critical regions, with the aim of accelerating durability design in large FEM problems. The investigation is placed in the context of modern full-body structural durability analysis, where a computationally intensive dynamic solution may be required to identify areas with potential for fatigue damage initiation. Early detection of fatigue-critical areas can drive a simplification of the problem size, leading to a considerable improvement in solution time and model handling while allowing the critical areas to be processed in greater detail. The proposed technique is applied to a real-life industrial case in a comparative assessment with established practices. Synthetic damage prediction quantification and visualization techniques allow for a quick and efficient comparison between methods, outlining potential application benefits and boundaries.
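One simple way to screen for fatigue-critical regions a priori, shown below as a hedged sketch rather than the paper's method, is to compute the elemental von Mises stress from a preliminary static solution and keep only the top percentile of elements for detailed fatigue processing.

```python
import numpy as np

def von_mises(s):
    """Von Mises stress from components [sxx, syy, szz, sxy, syz, szx] per element."""
    sxx, syy, szz, sxy, syz, szx = s.T
    return np.sqrt(0.5 * ((sxx - syy) ** 2 + (syy - szz) ** 2 + (szz - sxx) ** 2)
                   + 3.0 * (sxy ** 2 + syz ** 2 + szx ** 2))

def critical_elements(stresses, keep_percent=5.0):
    """Return indices of the highest-stressed elements (candidate fatigue hot spots)."""
    vm = von_mises(np.asarray(stresses, dtype=float))
    threshold = np.percentile(vm, 100.0 - keep_percent)
    return np.flatnonzero(vm >= threshold)

# Usage with a hypothetical (n_elements, 6) stress array from a static solve:
# hot = critical_elements(elem_stresses, keep_percent=2.0)
```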
A sequential linear optimization approach for controller design
NASA Technical Reports Server (NTRS)
Horta, L. G.; Juang, J.-N.; Junkins, J. L.
1985-01-01
A linear optimization approach with a simple real-arithmetic algorithm is presented for reliable controller design and vibration suppression of flexible structures. Using first-order sensitivity of the system eigenvalues with respect to the design parameters in conjunction with a continuation procedure, the method converts a nonlinear optimization problem into a maximization problem with linear inequality constraints. The method of linear programming is then applied to solve the converted linear optimization problem. The general efficiency of the linear programming approach allows the method to handle structural optimization problems with a large number of inequality constraints on the design vector. The method is demonstrated using a truss-beam finite element model for the optimal sizing and placement of active/passive structural members for damping augmentation. Results using both the sequential linear optimization approach and nonlinear optimization are presented and compared. The insensitivity of the linear optimization approach to initial conditions is also demonstrated.
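A single step of such a sequential linear optimization can be sketched with scipy.optimize.linprog: the nonlinear damping requirements are linearised with first-order eigenvalue sensitivities, and the parameter update is bounded by move limits that play the role of the continuation procedure. The variable names and sensitivities below are illustrative assumptions, not the NASA code.

```python
import numpy as np
from scipy.optimize import linprog

def slp_step(p, mass_grad, dlambda_dp, damping_target, damping_now, move_limit=0.05):
    """One sequential-linear-programming step (illustrative only).

    p              : current design parameters (e.g. member sizes)
    mass_grad      : sensitivity of total mass w.r.t. each parameter (linear objective)
    dlambda_dp     : (n_modes, n_params) first-order sensitivities of modal damping
    damping_target : required damping per mode
    damping_now    : damping achieved at the current design
    """
    n = len(p)
    # Minimise added mass subject to linearised damping constraints:
    #   damping_now + dlambda_dp @ dp >= damping_target
    #   ->  -dlambda_dp @ dp <= damping_now - damping_target
    res = linprog(
        c=mass_grad,
        A_ub=-dlambda_dp,
        b_ub=damping_now - damping_target,
        bounds=[(-move_limit, move_limit)] * n,  # small move limits = continuation
        method="highs",
    )
    return p + res.x if res.success else p
```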
Al-Jedai, Ahmed H.; Algain, Roaa A.; Alghamidi, Said A.; Al-Jazairi, Abdulrazaq S.; Amin, Rashid; Bin Hussain, Ibrahim Z.
2017-01-01
Purpose In the last few decades, changes to formulary management processes have taken place in institutions with closed formulary systems. However, many P&T committees have continued to operate using traditional paper-based systems. Paper-based systems have many limitations related to confidentiality, efficiency, open voting, and paper wastage. These become more challenging when dealing with a multisite P&T committee that handles formulary matters across a whole health care system. In this paper, we discuss the implementation of the first paperless, completely electronic, Web-based formulary management system across a large health care system in the Middle East. Summary We describe the transitioning of a multisite P&T committee in a large tertiary care institution from a paper-based to an all-electronic system. The challenges and limitations of running a multisite P&T committee using a paper system are discussed. The design and development of a Web-based committee floor management application that can be used from notebooks, tablets, and hand-held devices is described. Implementation of a flexible, interactive, easy-to-use, and efficient electronic formulary management system is explained in detail. Conclusion The development of an electronic P&T committee meeting system that encompasses electronic document sharing, voting, and communication could help multisite health care systems unify their formularies across multiple sites. Our experience might not be generalizable to all institutions because this depends heavily on system features, existing processes and workflow, and implementation across different sites. PMID:29018301
Automatic Sea Bird Detection from High Resolution Aerial Imagery
NASA Astrophysics Data System (ADS)
Mader, S.; Grenzdörffer, G. J.
2016-06-01
Great efforts are presently being made in the scientific community to develop computerized and (fully) automated image processing methods allowing for an efficient and automatic monitoring of sea birds and marine mammals in ever-growing amounts of aerial imagery. Currently, however, the major part of the processing is still conducted by specially trained professionals, who visually examine the images and detect and classify the requested subjects. This is a very tedious task, particularly when the fraction of empty images regularly exceeds 90%. In the context of this contribution we present our work aiming to support the processing of aerial images with modern methods from the field of image processing. We especially focus on the combination of local, region-based feature detection and piecewise global image segmentation for the automatic detection of different sea bird species. The large image dimensions resulting from the use of medium- and large-format digital cameras in aerial surveys inhibit the applicability of image processing methods based on global operations. In order to handle these image sizes efficiently and nevertheless take advantage of globally operating segmentation algorithms, we describe the combined use of a simple, performant feature detector based on local operations on the original image with a complex global segmentation algorithm operating on extracted sub-images. The resulting exact segmentation of possible candidates then serves as a basis for the determination of feature vectors for the subsequent elimination of false candidates and for classification tasks.
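The two-stage strategy of cheap local proposals followed by an expensive segmentation restricted to small crops can be sketched as follows; the detector (Laplacian-of-Gaussian blobs) and segmenter (Chan-Vese) are stand-ins chosen for illustration and are not necessarily the authors' choices.

```python
from skimage.feature import blob_log
from skimage.segmentation import chan_vese
from skimage.util import img_as_float

def detect_candidates(image, patch=64):
    """Two-stage detection: fast local proposals, costly segmentation on crops only."""
    image = img_as_float(image)
    segments = []
    # Stage 1: local Laplacian-of-Gaussian blob detection on the full frame.
    blobs = blob_log(image, min_sigma=2, max_sigma=8, threshold=0.05)
    half = patch // 2
    for y, x, _sigma in blobs:
        y, x = int(y), int(x)
        crop = image[max(y - half, 0):y + half, max(x - half, 0):x + half]
        if crop.size == 0:
            continue
        # Stage 2: Chan-Vese segmentation on the small sub-image for an exact outline.
        mask = chan_vese(crop, mu=0.1)
        segments.append(((y, x), mask))
    return segments
```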
Al-Jedai, Ahmed H; Algain, Roaa A; Alghamidi, Said A; Al-Jazairi, Abdulrazaq S; Amin, Rashid; Bin Hussain, Ibrahim Z
2017-10-01
In the last few decades, changes to formulary management processes have taken place in institutions with closed formulary systems. However, many P&T committees have continued to operate using traditional paper-based systems. Paper-based systems have many limitations related to confidentiality, efficiency, open voting, and paper wastage. These become more challenging when dealing with a multisite P&T committee that handles formulary matters across a whole health care system. In this paper, we discuss the implementation of the first paperless, completely electronic, Web-based formulary management system across a large health care system in the Middle East. We describe the transitioning of a multisite P&T committee in a large tertiary care institution from a paper-based to an all-electronic system. The challenges and limitations of running a multisite P&T committee using a paper system are discussed. The design and development of a Web-based committee floor management application that can be used from notebooks, tablets, and hand-held devices is described. Implementation of a flexible, interactive, easy-to-use, and efficient electronic formulary management system is explained in detail. The development of an electronic P&T committee meeting system that encompasses electronic document sharing, voting, and communication could help multisite health care systems unify their formularies across multiple sites. Our experience might not be generalizable to all institutions because this depends heavily on system features, existing processes and workflow, and implementation across different sites.
Novel Binders and Methods for Agglomeration of Ore
DOE Office of Scientific and Technical Information (OSTI.GOV)
S. K. Kawatra; T. C. Eisele; K. A. Lewandowski
2006-03-31
Many metal extraction operations, such as leaching of copper, leaching of precious metals, and reduction of metal oxides to metal in high-temperature furnaces, require agglomeration of ore to ensure that reactive liquids or gases are evenly distributed throughout the ore being processed. Agglomeration of ore into coarse, porous masses achieves this even distribution of fluids by preventing fine particles from migrating and clogging the spaces and channels between the larger ore particles. Binders are critically necessary to produce agglomerates that will not break down during processing. However, for many important metal extraction processes there are no binders known that will work satisfactorily at a reasonable cost. A primary example is copper heap leaching, where no binders are currently available that can withstand the acidic environment encountered in the process. As a result, operators of many facilities see a large loss of process efficiency due to their inability to take advantage of agglomeration. The large quantities of ore that must be handled in metal extraction processes also mean that the binder must be inexpensive and effective at low dosages to be economical. The acid-resistant binders and agglomeration procedures developed in this project will also be adapted for use in improving the energy efficiency and performance of a broad range of mineral agglomeration applications, particularly heap leaching. The active involvement of our industrial partners will help to ensure rapid commercialization of any agglomeration technologies developed by this project.
Novel Binders and Methods for Agglomeration of Ore
DOE Office of Scientific and Technical Information (OSTI.GOV)
S. K. Kawatra; T. C. Eisele; J. A. Gurtler
2005-09-30
Many metal extraction operations, such as leaching of copper, leaching of precious metals, and reduction of metal oxides to metal in high-temperature furnaces, require agglomeration of ore to ensure that reactive liquids or gases are evenly distributed throughout the ore being processed. Agglomeration of ore into coarse, porous masses achieves this even distribution of fluids by preventing fine particles from migrating and clogging the spaces and channels between the larger ore particles. Binders are critically necessary to produce agglomerates that will not break down during processing. However, for many important metal extraction processes there are no binders known that will work satisfactorily at a reasonable cost. A primary example is copper heap leaching, where no binders are currently available that can withstand the acidic environment encountered in the process. As a result, operators of many facilities see a large loss of process efficiency due to their inability to take advantage of agglomeration. The large quantities of ore that must be handled in metal extraction processes also mean that the binder must be inexpensive and effective at low dosages to be economical. The acid-resistant binders and agglomeration procedures developed in this project will also be adapted for use in improving the energy efficiency and performance of a broad range of mineral agglomeration applications, particularly heap leaching. The active involvement of our industrial partners will help to ensure rapid commercialization of any agglomeration technologies developed by this project.
NASA Astrophysics Data System (ADS)
Kim, Jeong-Gyu; Kim, Woong-Tae; Ostriker, Eve C.; Skinner, M. Aaron
2017-12-01
We present an implementation of an adaptive ray-tracing (ART) module in the Athena hydrodynamics code that accurately and efficiently handles the radiative transfer involving multiple point sources on a three-dimensional Cartesian grid. We adopt a recently proposed parallel algorithm that uses nonblocking, asynchronous MPI communications to accelerate transport of rays across the computational domain. We validate our implementation through several standard test problems, including the propagation of radiation in vacuum and the expansions of various types of H II regions. Additionally, scaling tests show that the cost of a full ray trace per source remains comparable to that of the hydrodynamics update on up to ∼10³ processors. To demonstrate application of our ART implementation, we perform a simulation of star cluster formation in a marginally bound, turbulent cloud, finding that its star formation efficiency is 12% when both radiation pressure forces and photoionization by UV radiation are treated. We directly compare the radiation forces computed from the ART scheme with those from the M1 closure relation. Although the ART and M1 schemes yield similar results on large scales, the latter is unable to resolve the radiation field accurately near individual point sources.
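The communication pattern referenced in the abstract, nonblocking sends of rays that leave a subdomain combined with opportunistic draining of incoming rays, can be sketched with mpi4py as below. This is an illustrative skeleton, not the Athena implementation; in particular, the global termination check needed while rays are still in flight between ranks is omitted for brevity, and the local transport step is a random placeholder.

```python
import random
from mpi4py import MPI  # assumed available; illustrates the pattern only

comm = MPI.COMM_WORLD
RAY_TAG = 77

def trace_local(ray):
    """Placeholder for ray transport through the local subdomain.

    A real implementation would integrate absorption along the ray path and
    split rays adaptively; here we only flag whether the ray terminates locally
    or exits toward a (randomly chosen) neighbouring rank.
    """
    if random.random() < 0.7 or comm.size == 1:
        return True, None                          # absorbed or left the full domain
    return False, (comm.rank + 1) % comm.size      # exits toward a neighbour

def transport(initial_rays):
    pending = list(initial_rays)
    sends = []
    while pending:
        finished, neighbour = trace_local(pending.pop())
        if not finished:
            # Non-blocking send: keep tracing other rays while this one is in flight.
            sends.append(comm.isend("ray", dest=neighbour, tag=RAY_TAG))
        # Drain rays that arrived from neighbouring subdomains in the meantime.
        while comm.iprobe(source=MPI.ANY_SOURCE, tag=RAY_TAG):
            pending.append(comm.recv(source=MPI.ANY_SOURCE, tag=RAY_TAG))
    for req in sends:
        req.wait()
```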