NASA Astrophysics Data System (ADS)
Piparo, Danilo; Innocente, Vincenzo; Hauth, Thomas
2014-06-01
During the first years of data taking at the Large Hadron Collider (LHC), the simulation and reconstruction programs of the experiments proved to be extremely resource consuming. In particular, for complex event simulation and reconstruction applications, the impact of evaluating elementary functions on the runtime is sizeable (up to one fourth of the total), with an obvious effect on the power consumption of the hardware dedicated to their execution. This situation clearly needs improvement, especially considering the even more demanding data taking scenarios after the first LHC long shutdown. A possible solution to this issue is the VDT (VectoriseD maTh) mathematical library. VDT provides the most common mathematical functions used in HEP in an open source product. The function implementations are fast, can be inlined, trade a controlled amount of accuracy for speed, and are usable in vectorised loops. Their implementation is portable across platforms: x86 and ARM processors, Xeon Phi coprocessors and GPGPUs. In this contribution, we describe the features of the VDT mathematical library, showing significant speedups with respect to the LibM library and comparable accuracies. Moreover, taking as examples simulation and reconstruction workflows in production by the LHC experiments, we show the benefits of the usage of VDT in terms of runtime reduction and stability of physics output.
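VDT itself is a C++ library; as a language-neutral illustration, the core trick behind fast, inlinable elementary functions of this kind — range reduction followed by a short polynomial — can be sketched as follows. This is a generic approximation, not VDT's actual implementation:

```python
import math

_LOG2 = 0.6931471805599453  # ln(2)

def fast_exp(x: float) -> float:
    """Approximate exp(x): reduce to x = k*ln(2) + r with |r| <= ln(2)/2,
    evaluate a short polynomial for exp(r), then rescale by 2**k.
    Illustrative only -- real fast-math libraries use tuned minimax
    coefficients and vector-friendly branch-free code."""
    k = round(x / _LOG2)
    r = x - k * _LOG2
    # Degree-5 Taylor polynomial for exp(r); with |r| <= 0.347 the
    # truncation error stays around the 1e-6 relative level.
    p = 1.0 + r * (1.0 + r * (0.5 + r * (1/6 + r * (1/24 + r * (1/120)))))
    return math.ldexp(p, k)  # p * 2**k, exact scaling by a power of two
```

Because the body is branch-light and built from fused multiply-adds, a compiler can inline and vectorise the analogous C++ loop, which is the property the abstract emphasises.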
Teseo: A vectoriser of historical seismograms
NASA Astrophysics Data System (ADS)
Pintore, Stefano; Quintiliani, Matteo; Franceschi, Diego
2005-12-01
Historical seismograms contain a rich harvest of information useful for the study of past earthquakes. It is necessary to extract this information by digitising the analogue records if modern analysis is required. Teseo has been developed for quick and accurate digitisation of seismogram traces from raster files, introducing a vectorisation step based on piecewise cubic Bézier curves. The vectoriser can handle greyscale images stored in a suitable file format and it offers three concurrent vectorisation methods: manual, automatic by colour selection, and automatic by neural networks. The software that implements the methods described is distributed under an open source license.
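The piecewise cubic Bézier representation that such a vectoriser emits can be evaluated with de Casteljau's scheme; a minimal sketch (not Teseo's code) for one segment, plus a sampler that turns a fitted segment back into points for visual checking against the raster trace:

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier at parameter t in [0, 1] via de Casteljau's
    repeated linear interpolation (numerically stable)."""
    def lerp(a, b, t):
        return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
    a, b, c = lerp(p0, p1, t), lerp(p1, p2, t), lerp(p2, p3, t)
    d, e = lerp(a, b, t), lerp(b, c, t)
    return lerp(d, e, t)

def flatten(p0, p1, p2, p3, n=16):
    """Sample the segment at n+1 parameter values, e.g. to overlay the
    vectorised trace on the original scan."""
    return [cubic_bezier(p0, p1, p2, p3, i / n) for i in range(n + 1)]
```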
VLSI Implementation Of The Fast Fourier Transform
NASA Astrophysics Data System (ADS)
Chau, Paul M.; Ku, Walter H.
1986-03-01
A VLSI implementation of a Fast Fourier Transform (FFT) processor consisting of a mesh interconnection of complex floating-point butterfly units is presented. The Cooley-Tukey radix-2 Decimation-In-Frequency (DIF) formulation of the FFT was chosen since it offered the best overall compromise between the need for fast and efficient algorithmic computation and the need for a structure amenable to VLSI layout. Thus the VLSI implementation is modular, regular, expandable to various problem sizes and has a simple systolic flow of data and control. To evaluate the FFT architecture, VLSI area-time complexity concepts are used, but are now adapted to a complex floating-point number system rather than the usual integer ring representation. We show by our construction that the Thompson area-time optimum bound for the VLSI computation of an N-point FFT, AT^(2α) = Ω[(N log N)^(1+α)], can be attained by an alternative number representation, and hence the theoretical bound is a tight bound regardless of number system representation.
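The radix-2 DIF butterfly described above — sum of the two halves for even-indexed outputs, twiddle-scaled difference for odd-indexed outputs — can be sketched in software as a reference model (the paper's implementation is, of course, hardware):

```python
import cmath

def fft_dif(x):
    """Radix-2 decimation-in-frequency FFT; len(x) must be a power of two.
    Each recursion level is one column of the butterfly mesh: the sum of
    the two halves feeds the even outputs, the twiddle-scaled difference
    feeds the odd outputs."""
    n = len(x)
    if n == 1:
        return list(x)
    half = n // 2
    s = [x[i] + x[i + half] for i in range(half)]
    d = [(x[i] - x[i + half]) * cmath.exp(-2j * cmath.pi * i / n)
         for i in range(half)]
    even, odd = fft_dif(s), fft_dif(d)
    out = [0] * n
    out[0::2], out[1::2] = even, odd  # interleave even/odd frequency bins
    return out
```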
A Fast Implementation of the ISOCLUS Algorithm
NASA Technical Reports Server (NTRS)
Memarsadeghi, Nargess; Mount, David M.; Netanyahu, Nathan S.; LeMoigne, Jacqueline
2003-01-01
Unsupervised clustering is a fundamental building block in numerous image processing applications. One of the most popular and widely used clustering schemes for remote sensing applications is the ISOCLUS algorithm, which is based on the ISODATA method. The algorithm is given a set of n data points in d-dimensional space, an integer k indicating the initial number of clusters, and a number of additional parameters. The general goal is to compute the coordinates of a set of cluster centers in d-space, such that those centers minimize the mean squared distance from each data point to its nearest center. This clustering algorithm is similar to another well-known clustering method, called k-means. One significant feature of ISOCLUS over k-means is that the actual number of clusters reported might be fewer or more than the number supplied as part of the input. The algorithm uses different heuristics to determine whether to merge or split clusters. As ISOCLUS can run very slowly, particularly on large data sets, there has been a growing interest in the remote sensing community in computing it efficiently. We have developed a faster implementation of the ISOCLUS algorithm. Our improvement is based on a recent acceleration to the k-means algorithm of Kanungo et al. They showed that, by using a kd-tree data structure for storing the data, it is possible to reduce the running time of k-means. We have adapted this method for the ISOCLUS algorithm, and we show that it is possible to achieve essentially the same results as ISOCLUS on large data sets, but with significantly lower running times. This adaptation involves computing a number of cluster statistics that are needed for ISOCLUS but not for k-means. Both the k-means and ISOCLUS algorithms are based on iterative schemes, in which nearest neighbors are calculated until some convergence criterion is satisfied. Each iteration requires that the nearest center for each data point be computed. Naively, this requires O(kn) distance computations per iteration.
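The O(kn) baseline iteration that the kd-tree filtering method accelerates looks like this (a generic Lloyd step, not the authors' code; ISOCLUS adds its merge/split heuristics and extra cluster statistics on top of the same assignment loop):

```python
def kmeans_step(points, centers):
    """One Lloyd iteration: assign every point to its nearest centre,
    then move each centre to the mean of its members. This is the O(kn)
    baseline that kd-tree filtering accelerates."""
    k = len(centers)
    members = [[] for _ in range(k)]
    for p in points:
        # nearest centre by squared Euclidean distance
        j = min(range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
        members[j].append(p)
    new_centers = []
    for j in range(k):
        if members[j]:
            dims = list(zip(*members[j]))  # transpose to per-dimension lists
            new_centers.append(tuple(sum(d) / len(d) for d in dims))
        else:
            new_centers.append(centers[j])  # keep empty clusters in place
    return new_centers
```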
Implementation and analysis of a fast backprojection algorithm
NASA Astrophysics Data System (ADS)
Gorham, LeRoy A.; Majumder, Uttam K.; Buxa, Peter; Backues, Mark J.; Lindgren, Andrew C.
2006-05-01
The convolution backprojection algorithm is an accurate synthetic aperture radar imaging technique, but it has seen limited use in the radar community due to its high computational costs. Therefore, significant research has been conducted for a fast backprojection algorithm, which surrenders some image quality for increased computational efficiency. This paper describes an implementation of both a standard convolution backprojection algorithm and a fast backprojection algorithm optimized for use on a Linux cluster and a field-programmable gate array (FPGA) based processing system. The performance of the different implementations is compared using synthetic ideal point targets and the SPIE XPatch Backhoe dataset.
NASA Astrophysics Data System (ADS)
Hauth, T.; Innocente, V.; Piparo, D.
2012-12-01
The processing of data acquired by the CMS detector at LHC is carried out with an object-oriented C++ software framework: CMSSW. With the increasing luminosity delivered by the LHC, the treatment of recorded data requires extraordinarily large computing resources, also in terms of CPU usage. A possible solution to cope with this task is the exploitation of the features offered by the latest microprocessor architectures. Modern CPUs present several vector units, the capacity of which is growing steadily with the introduction of new processor generations. Moreover, an increasing number of cores per die is offered by the main vendors, even on consumer hardware. Most recent C++ compilers provide facilities to take advantage of such innovations, either through explicit statements in the program sources or by automatically adapting the generated machine instructions to the available hardware, without the need to modify the existing code base. Programming techniques to implement reconstruction algorithms and optimised data structures are presented, aiming at scalable vectorisation and parallelisation of the calculations. One of their features is the usage of new language features of the C++11 standard. Portions of the CMSSW framework are illustrated which have been found to be especially profitable for the application of vectorisation and multi-threading techniques. Specific utility components have been developed to help vectorisation and parallelisation. They can easily become part of a larger common library. To conclude, careful measurements are described, which show the execution speedups achieved via vectorised and multi-threaded code in the context of CMSSW.
Vectorised simulation of the response of a time projection chamber
NASA Astrophysics Data System (ADS)
Georgiopoulos, C. H.; Mermikides, M. E.
1989-12-01
A Monte Carlo code used for the detailed simulation of the response of the ALEPH time projection chamber has been successfully restructured to exploit the vector architectures of the CDC CYBER-205, ETA10 and CRAY X-MP supercomputers. Some aspects of the vector implementation are discussed and the performance on the various processors is compared.
GPU implementations for fast factorizations of STAP covariance matrices
NASA Astrophysics Data System (ADS)
Roeder, Michael; Davis, Nolan; Furtek, Jeremy; Braunreiter, Dennis; Healy, Dennis
2008-08-01
One of the main goals of the STAP-BOY program has been the implementation of a space-time adaptive processing (STAP) algorithm on graphics processing units (GPUs) with the goal of reducing the processing time. Within the context of GPU implementation, we have further developed algorithms that exploit data redundancy inherent in particular STAP applications. Integration of these algorithms with GPU architecture is of primary importance for fast algorithmic processing times. STAP algorithms involve solving a linear system in which the transformation matrix is a covariance matrix. A standard method involves estimating a covariance matrix from a data matrix, computing its Cholesky factors by one of several methods, and then solving the system by substitution. Some STAP applications have redundancy in successive data matrices from which the covariance matrices are formed. For STAP applications in which a data matrix is updated with the addition of a new data row at the bottom and the elimination of the oldest data at the top of the matrix, successive data matrices have multiple rows in common. Two methods have been developed for exploiting this type of data redundancy when computing Cholesky factors: 1) fast QR factorization of successive data matrices, and 2) fast Cholesky factorization of successive covariance matrices. We have developed GPU implementations of these two methods. We show that these two algorithms exhibit reduced computational complexity when compared to benchmark algorithms that do not exploit data redundancy. More importantly, we show that when these algorithmic improvements are optimized for the GPU architecture, the processing times of a GPU implementation of these matrix factorization algorithms may be greatly improved.
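The standard pipeline the paper starts from — estimate a covariance, Cholesky-factor it, solve by substitution — can be sketched for a small dense real system (a plain textbook factorization, without the redundancy-exploiting updates or the GPU mapping):

```python
def cholesky(a):
    """Lower-triangular L with L L^T = A, for symmetric positive-definite A
    given as a list of row lists."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = (a[i][i] - s) ** 0.5
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]
    return L

def solve_spd(a, b):
    """Solve A x = b via Cholesky and two triangular substitutions --
    the 'factor then substitute' step of the STAP pipeline."""
    n = len(a)
    L = cholesky(a)
    y = [0.0] * n                    # forward substitution: L y = b
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * n                    # backward substitution: L^T x = y
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x
```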
Implementation and parallelization of fast matrix multiplication for a fast Legendre transform
Chen, Wentao
1993-09-01
An algorithm was presented by Alpert and Rokhlin for the rapid evaluation of Legendre transforms. The fast algorithm can be expressed as a matrix-vector product followed by a fast cosine transform. Using the Chebyshev expansion to approximate the entries of the matrix and exchanging the order of summations reduces the time complexity of computation from O(n²) to O(n log n), where n is the size of the input vector. Our work has been focused on the implementation and the parallelization of the fast algorithm of matrix-vector product. Results have shown the expected performance of the algorithm. Precision problems which arise as n becomes large can be resolved by doubling the precision of the calculation.
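The Chebyshev machinery underlying the fast transform can be illustrated with its two standard ingredients: computing a truncated Chebyshev expansion of a function from the Chebyshev nodes, and evaluating the series by Clenshaw's recurrence. This is purely illustrative — the paper applies such expansions to the transform-matrix entries themselves:

```python
import math

def cheb_coeffs(f, n):
    """First n Chebyshev coefficients of f on [-1, 1], sampled at the
    Chebyshev nodes x_j = cos(pi(j+1/2)/n). c[0] is pre-halved so that
    clenshaw(c, x) approximates f(x) directly."""
    xs = [math.cos(math.pi * (j + 0.5) / n) for j in range(n)]
    c = [2 / n * sum(f(x) * math.cos(math.pi * k * (j + 0.5) / n)
                     for j, x in enumerate(xs))
         for k in range(n)]
    c[0] /= 2
    return c

def clenshaw(coeffs, x):
    """Evaluate sum_k c_k T_k(x) by Clenshaw's backward recurrence,
    the standard stable way to use a truncated Chebyshev expansion."""
    b1 = b2 = 0.0
    for c in reversed(coeffs[1:]):
        b1, b2 = 2 * x * b1 - b2 + c, b1
    return x * b1 - b2 + coeffs[0]
```

For smooth functions the coefficients decay rapidly, which is exactly what lets the transform matrix be compressed from O(n²) entries to short expansions.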
Maintenance implementation plan for the Fast Flux Test Facility
Boyd, J.A.
1997-01-30
This plan implements U.S. Department of Energy (DOE) Order 4330.4B, Maintenance Management Program (1994), at the Fast Flux Test Facility (FFTF). The FFTF is a research and test reactor located near Richland, Washington, and is operated under contract for the DOE by the B&W Hanford Company (BWHC). The intent of this Maintenance Implementation Plan (MIP) is to describe the manner in which the activities of the maintenance function are executed and controlled at the FFTF and how this compares to the requirements of DOE 4330.4B. The MIP is a living document that is updated through a Facility Maintenance Self-Assessment Program. During the continuing self-assessment program, any discrepancies found between existing practices and DOE 4330.4B requirements are resolved. The philosophy of maintenance management at the FFTF is also described within this MIP. This MIP has been developed based on information obtained from various sources including the following: * A continuing self-assessment against the requirements of the Conduct of Maintenance Order * In-depth reviews conducted by the members of the task team that assembled this MIP * Inputs from routine audits and appraisals conducted at the facility The information from these sources is used to identify those areas in which improvements could be made in the manner in which the facility conducts maintenance activities. The action items identified in Rev. 1 of the MIP have been completed. The MIP is arranged in six sections. Section 1 is this Executive Summary. Section 2 describes the facility and its history. Section 3 describes the philosophy of the graded approach and how it is applied at FFTF. Section 3 also discusses the strategy and the basis for prioritizing resources. Section 4 contains the detailed discussion of the elements of DOE 4330.4B and their state of implementation. Section 5 is for waivers and requested deviations from the requirements of the order. Section 6 contains a copy of the Maintenance
Outline of a fast hardware implementation of Winograd's DFT algorithm
NASA Technical Reports Server (NTRS)
Zohar, S.
1980-01-01
The main characteristic of the discrete Fourier transform (DFT) algorithm considered by Winograd (1976) is a significant reduction in the number of multiplications. Its primary disadvantage is a higher structural complexity. It is, therefore, difficult to translate the reduced number of multiplications into faster execution of the DFT by means of a software implementation of the algorithm. For this reason, a hardware implementation is considered in the current study, taking into account a design based on the algorithm prescription discussed by Zohar (1979). The hardware implementation of a FORTRAN subroutine is proposed, giving attention to a pipelining scheme in which five consecutive data batches are operated on simultaneously, each batch undergoing one of five processing phases.
FPGA Implementation of Highly Modular Fast Universal Discrete Transforms
NASA Astrophysics Data System (ADS)
Potipantong, Panan; Sirisuk, Phaophak; Oraintara, Soontorn; Worapishet, Apisak
This paper presents an FPGA implementation of highly modular universal discrete transforms. The implementation relies upon the unified discrete Fourier Hartley transform (UDFHT), based on which essential sinusoidal transforms including the discrete Fourier transform (DFT), discrete Hartley transform (DHT), discrete cosine transform (DCT) and discrete sine transform (DST) can be realized. It employs a reconfigurable, scalable and modular architecture that consists of a memory-based FFT processor equipped with pre- and post-processing units. In addition, a pipelining technique is exploited to seamlessly harmonize the operation between the sub-modules. Experimental results based on Xilinx Virtex-II Pro are given to examine the performance of the proposed UDFHT implementation. Two practical applications are also shown to demonstrate the flexibility and modularity of the proposed work.
[Fast Implementation Method of Protein Spots Detection Based on CUDA].
Xiong, Bangshu; Ye, Yijia; Ou, Qiaofeng; Zhang, Haodong
2016-02-01
In order to improve the efficiency of protein spots detection, a fast detection method based on CUDA was proposed. Firstly, parallel algorithms for the three most time-consuming parts of the protein spots detection pipeline were studied: image preprocessing, coarse protein spot detection and overlapping spot segmentation. Then, following the single-instruction multiple-thread execution model of CUDA, a data-space strategy of partitioning the two-dimensional (2D) images into blocks was adopted, together with optimization measures such as shared memory and 2D texture memory. The results show that the efficiency of this method is markedly improved compared with the CPU implementation, and the speedup grows with image size: for an image of 2,048 × 2,048 pixels, the CPU implementation needs 52,641 ms whereas the GPU needs only 4,384 ms. PMID:27382745
NASA Astrophysics Data System (ADS)
Asztalos, Stephen J.; Hennig, Wolfgang; Warburton, William K.
2016-01-01
Pulse shape discrimination applied to certain fast scintillators is usually performed offline. In sufficiently high event-rate environments, data transfer and storage become problematic, which suggests a different analysis approach. In response, we have implemented a general purpose pulse shape analysis algorithm in the XIA Pixie-500 and Pixie-500 Express digital spectrometers. In this implementation waveforms are processed in real time, reducing the pulse characteristics to a few pulse shape analysis parameters and eliminating time-consuming waveform transfer and storage. We discuss implementation of these features, their advantages, necessary trade-offs and performance. Measurements from bench-top and experimental setups using fast scintillators and XIA processors are presented.
Maintenance Implementation Plan for the Fast Flux Test Facility
Crawford, C.N.; Duffield, M.F.
1992-06-01
The maintenance program for the 400 Area, Fast Flux Test Facility (FFTF) Plant and plant support facilities includes the reactor plant, reactor support systems and equipment, Maintenance and Storage Facility, plant buildings, and building support systems. These are the areas of the facility that are covered by this plan. The personnel support facilities and buildings are maintained and supported by another department within Westinghouse Hanford, and are not included here. The FFTF maintenance program conducts the corrective and preventive maintenance necessary to ensure the operational reliability and safety of the reactor plant and support equipment. This comprehensive maintenance program also provides for maximizing the useful life of plant equipment and systems to realize the most efficient possible use of resources. The long-term future of the FFTF is uncertain; in the near term, the facility is being placed in standby. As the plant transitions from operating status to standby, the scope of the maintenance program will change from one of reactor operational reliability and life extension to preservation.
Fast, parallel implementation of particle filtering on the GPU architecture
NASA Astrophysics Data System (ADS)
Gelencsér-Horváth, Anna; Tornai, Gábor János; Horváth, András; Cserey, György
2013-12-01
In this paper, we introduce a modified cellular particle filter (CPF) which we mapped on a graphics processing unit (GPU) architecture. We developed this filter adaptation using a state-of-the-art CPF technique. Mapping this filter realization on a highly parallel architecture entailed a shift in the logical representation of the particles. In this process, the original two-dimensional organization is reordered as a one-dimensional ring topology. We performed proof-of-concept measurements on two models with an NVIDIA Fermi architecture GPU. This design achieved a 411-μs kernel time per state and a 77-ms global running time over all states for 16,384 particles with a 256 neighbourhood size on a sequence of 24 states for a bearing-only tracking model. For a commonly used benchmark model at the same configuration, we achieved a 266-μs kernel time per state and a 124-ms global running time for all 100 states. Kernel time includes random number generation on the GPU with curand. These results attest to the effective and fast use of the particle filter in high-dimensional, real-time applications.
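A generic bootstrap particle-filter step — propagate, reweight by the observation likelihood, resample — can be sketched as follows. This is the textbook sequential form with systematic resampling, not the paper's cellular/ring-topology GPU variant; the model callbacks (`motion`, `likelihood`) are placeholders the caller supplies:

```python
import math
import random

def pf_step(particles, weights, motion, likelihood, obs, rng=random):
    """One bootstrap particle-filter update.
    motion(p, rng) propagates a particle; likelihood(obs, p) scores it
    against the observation. Returns resampled particles with uniform
    weights (systematic resampling)."""
    n = len(particles)
    particles = [motion(p, rng) for p in particles]
    w = [weights[i] * likelihood(obs, particles[i]) for i in range(n)]
    total = sum(w)
    w = [x / total for x in w]
    # Systematic resampling: n evenly spaced probes with one random offset.
    u0 = rng.random() / n
    cum, j, out = w[0], 0, []
    for i in range(n):
        u = u0 + i / n
        while u > cum:
            j += 1
            cum += w[j]
        out.append(particles[j])
    return out, [1.0 / n] * n
```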
A fast portable implementation of the Secure Hash Algorithm, III.
McCurley, Kevin S.
1992-10-01
In 1992, NIST announced a proposed standard for a collision-free hash function. The algorithm for producing the hash value is known as the Secure Hash Algorithm (SHA), and the standard using the algorithm is known as the Secure Hash Standard (SHS). Later, an announcement was made that a scientist at NSA had discovered a weakness in the original algorithm. A revision to this standard was then announced as FIPS 180-1, and includes a slight change to the algorithm that eliminates the weakness. This new algorithm is called SHA-1. In this report we describe a portable and efficient implementation of SHA-1 in the C language. Performance information is given, as well as tips for porting the code to other architectures. We conclude with some observations on the efficiency of the algorithm, and a discussion of how the efficiency of SHA might be improved.
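SHA-1 has long since become a standard-library staple; the chunked update/digest driver that a portable C implementation like the one described exposes can be exercised, for instance, through Python's hashlib:

```python
import hashlib

def sha1_hex(data: bytes) -> str:
    """SHA-1 digest of a byte string (the FIPS 180-1 algorithm)."""
    return hashlib.sha1(data).hexdigest()

def sha1_file_hex(path, chunk_size=1 << 16):
    """Stream a file through SHA-1 in fixed-size chunks -- the usual way
    an incremental update()/digest() interface is driven, so that large
    inputs never need to fit in memory."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()
```

The digest of the message "abc" is the standard test vector a9993e364706816aba3e25717850c26c9cd0d89d, a quick sanity check for any port.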
Fast neuromimetic object recognition using FPGA outperforms GPU implementations.
Orchard, Garrick; Martin, Jacob G; Vogelstein, R Jacob; Etienne-Cummings, Ralph
2013-08-01
Recognition of objects in still images has traditionally been regarded as a difficult computational problem. Although modern automated methods for visual object recognition have achieved steadily increasing recognition accuracy, even the most advanced computational vision approaches are unable to obtain performance equal to that of humans. This has led to the creation of many biologically inspired models of visual object recognition, among them the hierarchical model and X (HMAX) model. HMAX is traditionally known to achieve high accuracy in visual object recognition tasks at the expense of significant computational complexity. Increasing complexity, in turn, increases computation time, reducing the number of images that can be processed per unit time. In this paper we describe how the computationally intensive and biologically inspired HMAX model for visual object recognition can be modified for implementation on a commercial field-programmable gate array (FPGA), specifically the Xilinx Virtex 6 ML605 evaluation board with XC6VLX240T FPGA. We show that with minor modifications to the traditional HMAX model we can perform recognition on images of size 128 × 128 pixels at a rate of 190 images per second with a less than 1% loss in recognition accuracy in both binary and multiclass visual object recognition tasks. PMID:24808564
NASA Astrophysics Data System (ADS)
Scheins, J. J.; Vahedipour, K.; Pietrzyk, U.; Shah, N. J.
2015-12-01
For high-resolution, iterative 3D PET image reconstruction the efficient implementation of forward-backward projectors is essential to minimise the calculation time. Mathematically, the projectors are summarised as a system response matrix (SRM) whose elements define the contribution of image voxels to lines-of-response (LORs). In fact, the SRM easily comprises billions of non-zero matrix elements to evaluate the tremendous number of LORs as provided by state-of-the-art PET scanners. Hence, the performance of iterative algorithms, e.g. maximum-likelihood-expectation-maximisation (MLEM), suffers from severe computational problems due to the intensive memory access and huge number of floating point operations. Here, symmetries occupy a key role in terms of efficient implementation. They reduce the amount of independent SRM elements, thus allowing for a significant matrix compression according to the number of exploitable symmetries. In our previous work on the PET REconstruction Software TOolkit (PRESTO), very high compression factors (>300) were demonstrated by using specific non-Cartesian voxel patterns involving discrete polar symmetries. In this way, a pre-calculated memory-resident SRM using complex volume-of-intersection calculations can be achieved. However, our original ray-driven implementation suffers from addressing voxels, projection data and SRM elements in disfavoured memory access patterns. As a consequence, a rather limited numerical throughput is observed, due to the massive waste of memory bandwidth and the inefficient usage of cache, respectively. In this work, an advantageous symmetry-driven evaluation of the forward-backward projectors is proposed to overcome these inefficiencies. The polar symmetries applied in PRESTO suggest a novel organisation of image data and LOR projection data in memory to enable an efficient single instruction multiple data vectorisation, i.e. simultaneous use of any SRM element for symmetric LORs. In addition, the calculation
Morończyk, Daniel Antoni F; Krasnodębski, Ireneusz Wojciech
2011-09-01
Perioperative care in colorectal surgery has changed considerably in recent years. Fast-track surgery decreases the complication rate, shortens the length of stay, improves quality of life and reduces costs. This is achieved by omitting mechanical bowel preparation before, and nasogastric tube insertion after, the operation; optimal pain and intravenous fluid management; early rehabilitation; enteral nutrition; and early removal of the vesical catheter and abdominal drain, if used. The aim of the study was to compare the results of implementing the fast-track surgery protocol with those achieved under the conventional care regimen. Material and methods. Two groups of patients undergoing colonic resection were compared. The study group comprised patients treated according to the fast-track concept; the control group, patients managed under the hitherto existing regimen. Procedures requiring stoma formation, rectal surgery and laparoscopic surgery were excluded. The perioperative period was investigated by telephone calls to patients or their families. Results. Statistically significant reductions in favour of the fast-track group were reached in the following parameters: length of hospital stay (2.5 days shorter), duration of abdominal cavity and vesical drainage (3 and 2 days shorter, respectively), and the postoperative day on which the oral diet was introduced (2.5 days earlier) and extended (1.5 days earlier). There were no statistically significant differences in mortality, morbidity or reoperation rates between the two groups. Conclusion. Fast-track surgery is a safe strategy and may improve perioperative care. PMID:22166736
Fast implementation of length-adaptive privacy amplification in quantum key distribution
NASA Astrophysics Data System (ADS)
Zhang, Chun-Mei; Li, Mo; Huang, Jing-Zheng; Patcharapong, Treeviriyanupab; Li, Hong-Wei; Li, Fang-Yi; Wang, Chuan; Yin, Zhen-Qiang; Chen, Wei; Keattisak, Sripimanwat; Han, Zhen-Fu
2014-09-01
Post-processing is indispensable in quantum key distribution (QKD), which is aimed at sharing secret keys between two distant parties. It mainly consists of key reconciliation and privacy amplification, which are used for sharing identical keys and for distilling unconditionally secure keys, respectively. In this paper, we focus on speeding up the privacy amplification process by choosing a simple multiplicative universal class of hash functions. By constructing an optimal multiplication algorithm based on four basic multiplication algorithms, we give a fast software implementation of length-adaptive privacy amplification. “Length-adaptive” indicates that the implementation of privacy amplification automatically adapts to different lengths of input blocks. When the lengths of the input blocks are 1 Mbit and 10 Mbit, the speed of privacy amplification can be as fast as 14.86 Mbps and 10.88 Mbps, respectively. Thus, it is practical for GHz or even higher repetition frequency QKD systems.
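One simple member of a multiplicative universal class is the multiply-shift family, h_a(x) = top m bits of (a·x mod 2^n) for a random odd multiplier a. A sketch follows; this is illustrative only — the paper composes its large-number multiplication from four basic algorithms for speed, which is not reproduced here:

```python
import secrets

def privacy_amplify(key_bits, out_len, a=None):
    """Compress an n-bit reconciled key to out_len bits with the
    multiply-shift universal family: h_a(x) = top out_len bits of
    (a * x mod 2**n), a odd. `a` must be drawn fresh and at random
    per session for the universality guarantee to hold."""
    n = len(key_bits)
    x = int("".join(map(str, key_bits)), 2)
    if a is None:
        a = secrets.randbits(n) | 1   # random odd multiplier
    h = (a * x) % (1 << n)
    out = h >> (n - out_len)          # keep the top out_len bits
    return [(out >> (out_len - 1 - i)) & 1 for i in range(out_len)]
```

Since Python integers are arbitrary precision, the same code handles Mbit-scale blocks unchanged; the software-engineering problem the paper addresses is making that single big multiplication fast.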
Using single buffers and data reorganization to implement a multi-megasample fast Fourier transform
NASA Technical Reports Server (NTRS)
Brown, R. D.
1992-01-01
Data ordering in large fast Fourier transforms (FFT's) is both conceptually and implementationally difficult. Described here is a method of visualizing data orderings as vectors of address bits, which enables the engineer to use more efficient data orderings and reduce the need for double-buffer memory designs. Also detailed are the difficulties and algorithmic solutions involved in FFT lengths up to 4 megasamples (Msamples) and sample rates up to 80 MHz.
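The address-bit view makes orderings such as bit reversal almost one-liners: the output address is simply the input address with its bit vector reversed. A small sketch:

```python
def bit_reverse(i, bits):
    """Reverse the low `bits` address bits of index i."""
    r = 0
    for _ in range(bits):
        r = (r << 1) | (i & 1)
        i >>= 1
    return r

def reorder(x):
    """Permute x (power-of-two length) into bit-reversed order: element
    at address i moves to the address whose bit vector is i reversed.
    This is the permutation a radix-2 FFT needs at its input or output."""
    bits = len(x).bit_length() - 1
    return [x[bit_reverse(i, bits)] for i in range(len(x))]
```

Other FFT orderings (digit reversal, stride permutations) are likewise just fixed rearrangements of the address-bit vector, which is what lets a designer schedule them into a single buffer instead of ping-ponging between two.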
NASA Astrophysics Data System (ADS)
Ma, Jun; Parhi, Keshab K.; Hekstra, Gerben J.; Deprettere, Ed F. A.
1998-10-01
CORDIC based IIR digital filters are orthogonal filters whose internal computations consist of orthogonal transformations. These filters possess desirable properties for VLSI implementations such as regularity, local connection, low sensitivity to finite word-length implementation, and elimination of limit cycles. Recently, fine-grain pipelined CORDIC based IIR digital filter architectures have been developed which can perform the filtering operations at arbitrarily high sample rates, at the cost of a linear increase in hardware complexity. These pipelined architectures consist of only Givens rotations and a few additions which can be mapped onto CORDIC arithmetic based processors. However, in practical applications, implementations of Givens rotations using traditional CORDIC arithmetic are quite expensive. For example, for 16 bit accuracy, using a floating point data format with a 16 bit mantissa and a 5 bit exponent, approximately 20 pairs of shift-add operations are required for one Givens rotation. In this paper, we propose an efficient implementation of pipelined CORDIC based IIR digital filters based on fast orthonormal μ-rotations. Using this method, the Givens rotations are approximated by angles corresponding to orthonormal μ-rotations, which are based on the idea of CORDIC and can perform rotation with a minimal number of shift-add operations. We present various methods of construction for such orthonormal μ-rotations. A significant reduction of the number of required shift-add operations is achieved. All types of fast rotations can be implemented as a cascade of only four basic types of shift-add stages. These stages can be executed on a modified floating-point CORDIC architecture, making the pipelined filter highly suitable for VLSI implementations.
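The underlying CORDIC idea — composing a rotation from micro-rotations with tan(a_i) = 2^-i, so that each step costs only shifts and adds — can be sketched as follows. This is plain circular CORDIC in floating point for clarity, not the paper's optimised μ-rotation stages:

```python
import math

def cordic_rotate(x, y, theta, iters=24):
    """Rotate (x, y) by angle theta (|theta| < ~1.74 rad) using CORDIC
    micro-rotations: at step i the angle atan(2**-i) is added or
    subtracted, driving the residual angle z to zero. Multiplications by
    2**-i are shifts in hardware; the scale factor K is applied once."""
    angles = [math.atan(2.0 ** -i) for i in range(iters)]
    k = 1.0
    for i in range(iters):
        k *= 1.0 / math.sqrt(1.0 + 4.0 ** -i)  # accumulated CORDIC gain
    z = theta
    for i in range(iters):
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x * k, y * k
```

The ~20 shift-add pairs the abstract quotes for 16-bit accuracy correspond to roughly this many iterations; the μ-rotation approach cuts that count by accepting approximate rotation angles.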
Airborne Demonstration of FPGA Implementation of Fast Lossless Hyperspectral Data Compression System
NASA Technical Reports Server (NTRS)
Keymeulen, D.; Aranki, N.; Bakhshi, A.; Luong, H.; Sartures, C.; Dolman, D.
2014-01-01
Efficient on-board lossless hyperspectral data compression reduces data volume in order to meet NASA and DoD limited downlink capabilities. The technique also improves signature extraction, object recognition and feature classification capabilities by providing exact reconstructed data on constrained downlink resources. At JPL a novel, adaptive and predictive technique for lossless compression of hyperspectral data was recently developed. This technique uses an adaptive filtering method and achieves a combination of low complexity and compression effectiveness that far exceeds state-of-the-art techniques currently in use. The JPL-developed 'Fast Lossless' algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. It is of low computational complexity and thus well-suited for implementation in hardware.
Fast focal zooming scheme for direct drive fusion implemented by inserting KD2PO4 crystal
NASA Astrophysics Data System (ADS)
Zhong, Zheqiang; Hu, Xiaochuan; Zhang, Bin
2016-06-01
The high irradiation uniformity required on target in direct-drive fusion is difficult to achieve and maintain during the entire laser fusion implosion. To mitigate the increasing nonuniformity, a fast focal zooming scheme implemented by inserting an electro-optic (EO) crystal in the front end of the beamline has been proposed. Functioning as a phase plate, the specifically designed EO crystal may add an induced spherical wavefront to the laser beam and alter its focusing characteristics. However, in order to shrink the focal spot by half, the required voltage for a KD2PO4 (DKDP) crystal with a single pair of electrodes is relatively high. In order to decrease the voltage while maintaining the zooming performance, a DKDP cylinder with multiple pairs of electrodes has been presented. The continuous phase plate (CPP) is designed according to both the injected beam and the output field. However, the conventional CPP is designed on the assumption of an injected beam without wavefront distortion, which would enlarge the focal spot variation in the focal zooming scheme. In order to zoom out the focal spot, a redesigned CPP has been proposed by adding a spherical wavefront to the phase variation of the conventional CPP and further optimizing. On this basis, the focusing characteristics of the laser beam during the fast focal zooming process have been analyzed. Results indicate that the focal spot size decreases with increasing voltage on the DKDP crystal, while the uniformity remains high during the focal zooming process.
Fast implementation of Oliker's ellipses technology to build free form reflector
NASA Astrophysics Data System (ADS)
Magarill, S.
2013-09-01
The field of illumination optics has a number of applications where using free-form reflective surfaces to create a required light distribution can be beneficial. Oliker's concept of combining elliptical surfaces is the foundation for forming a reflector that produces an arbitrary illuminance distribution. An algorithm for fast implementation of this concept is discussed in detail. It is based on an analytical computation of a 3D cloud of points in order to map the reflector shape to the required flux distribution. The flux delivered to chosen zones across the target can be calculated from the number of associated cloud points and their locations. This allows optimizing the ellipse parameters to achieve the required flux distribution without raytracing through the reflector geometry. Such a strictly analytical optimization is much faster than building the reflector geometry and raytracing at each step of the optimization. The generated 3D cloud of points can be used with a standard SolidWorks feature to build the loft surface. This surface consists of adjacent elliptical facets and should be smooth to maintain continuous irradiance across the target. A secondary operation to smooth the surface profile between elliptical facets is discussed. Examples of implementations of the proposed algorithm are presented.
Implementation of moire-schlieren deflectometry on a small scale fast capillary plasma discharge
Valenzuela, J. C.; Wyndham, E. S.; Chuaqui, H.; Cortes, D. S.; Favre, M.; Bhuyan, H.
2012-05-15
We present the results of an implementation of a refractive diagnostic to study fast dynamics in capillary discharges. It consists of a moire-schlieren deflectometry technique that provides a quantitative analysis of the refractive index gradients. The technique is composed of an angular deflection mapping system (moire deflectometry) and a spatial Fourier filter (schlieren). A temporal resolution of 12 ps, a spatial resolution of 50 μm, and a minimum detectable gradient of (∇n_e)_min = 6×10^18 cm^-4 were obtained. With these parameters, a large-aspect-ratio capillary discharge with a 15 ns half-period current was studied; the diagnostic was implemented axially along the alumina tube of 1.6 mm inner diameter and 21 mm length. The detectable electron density for these conditions was 1×10^17 cm^-3. From the interpretation of the fringe displacement, we are able to measure the velocity of the radial compression wave and the compression ratio due to the Lorentz force. On axis, electron densities of the order of 5×10^17 cm^-3 were obtained at the time of maximum soft x-ray emission.
Pre-Hardware Optimization and Implementation Of Fast Optics Closed Control Loop Algorithms
NASA Technical Reports Server (NTRS)
Kizhner, Semion; Lyon, Richard G.; Herman, Jay R.; Abuhassan, Nader
2004-01-01
One of the main heritage tools used in scientific and engineering data spectrum analysis is the Fourier Integral Transform and its high-performance digital equivalent - the Fast Fourier Transform (FFT). The FFT is particularly useful in two-dimensional (2-D) image processing (FFT2) within optical systems control. However, the timing constraints of a fast optics closed control loop would require a supercomputer to run the software implementation of the FFT2 and its inverse, as well as other representative image-processing algorithms, such as numerical image folding and fringe feature extraction. A laboratory supercomputer is not always available even for ground operations and is not feasible for a flight project. However, the computationally intensive algorithms still warrant alternative implementation using reconfigurable computing (RC) technologies such as Digital Signal Processors (DSP) and Field Programmable Gate Arrays (FPGA), which provide low-cost, compact supercomputing capabilities. We present a new RC hardware implementation and utilization architecture that significantly reduces the computational complexity of a few basic image-processing algorithms, such as FFT2, image folding and phase diversity, for the NASA Solar Viewing Interferometer Prototype (SVIP), using a cluster of DSPs and FPGAs. The DSP cluster utilization architecture also avoids a single point of failure while using commercially available hardware. This, combined with pre-hardware optimization of the control algorithms, for the first time allows construction of image-based 800 Hertz (Hz) optics closed control loops on board a spacecraft, based on the SVIP ground instrument. That spacecraft is the proposed Earth Atmosphere Solar Occultation Imager (EASI), to study the greenhouse gases CO2, CH4, H2O, O3, O2 and N2O from the Lagrange-2 point in space. This paper provides an advanced insight into a new type of science capability for future space exploration missions based on on-board image processing.
A fast implementation of the incremental backprojection algorithms for parallel beam geometries
Chen, C.M.; Wang, C.Y.; Cho, Z.H.
1996-12-01
Filtered-backprojection algorithms are the most widely used approaches for reconstruction of computed tomographic (CT) images, such as X-ray CT and positron emission tomographic (PET) images. The Incremental backprojection algorithm is a fast backprojection approach based on restructuring the Shepp and Logan algorithm. By exploiting the interdependency (position and values) of adjacent pixels, the Incremental algorithm requires only O(N) and O(N^2) multiplications per view, in contrast to O(N^2) and O(N^3) multiplications for the Shepp and Logan algorithm, in two-dimensional (2-D) and three-dimensional (3-D) backprojections, respectively, where N is the size of the image in each dimension. In addition, it may reduce the number of additions for each pixel computation. The improvement achieved by the Incremental algorithm in practice was not, however, as significant as expected. One of the main reasons is the inevitable visiting of pixels outside the beam in the searching flow scheme originally developed for the Incremental algorithm. To optimize implementation of the Incremental algorithm, an efficient scheme, namely the coded searching flow scheme, is proposed in this paper to minimize the overhead caused by searching for all pixels in a beam. The key idea of this scheme is to encode the searching flow for all pixels inside each beam. While backprojecting, all pixels may be visited without any overhead by using the coded searching flow as a priori information. The proposed coded searching flow scheme has been implemented on Sun Sparc 10 and Sun Sparc 20 workstations. The implementation results show that the proposed scheme is 1.45-2.0 times faster than the original searching flow scheme for most cases tested.
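The pixel-interdependency idea can be illustrated with a small sketch (hypothetical code, not the authors' implementation): for a fixed view angle θ, a pixel's detector coordinate is t = x·cosθ + y·sinθ, so adjacent pixels along a row differ by the constant cosθ, and a per-pixel multiplication becomes a single addition.

```python
import math

def detector_positions_direct(n, theta, y=0.0):
    """Detector coordinate t = x*cos(theta) + y*sin(theta) for one image row."""
    c, s = math.cos(theta), math.sin(theta)
    return [x * c + y * s for x in range(n)]

def detector_positions_incremental(n, theta, y=0.0):
    """Same values computed with one addition per pixel instead of a multiplication."""
    c, s = math.cos(theta), math.sin(theta)
    t, out = y * s, []
    for _ in range(n):
        out.append(t)
        t += c  # adjacent pixels along a row differ by a constant increment
    return out
```

The searching-flow optimization in the paper concerns which pixels are visited; the sketch only shows why the per-pixel cost drops from a multiplication to an addition.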
Sun, Yan V.; Jacobsen, Douglas M.; Turner, Stephen T.; Boerwinkle, Eric; Kardia, Sharon L.R.
2009-01-01
In order to take into account the complex genomic distribution of SNP variations when identifying chromosomal regions with significant SNP effects, a single nucleotide polymorphism (SNP) association scan statistic was developed. To address the computational needs of genome-wide association (GWA) studies, a fast Java application, which combines single-locus SNP tests and a scan statistic for identifying chromosomal regions with significant clusters of significant SNP effects, was developed and implemented. To illustrate this application, SNP associations were analyzed in a pharmacogenomic study of the blood pressure lowering effect of thiazide-diuretics (N=195) using the Affymetrix Human Mapping 100K Set. 55,335 tagSNPs (pair-wise linkage disequilibrium R2<0.5) were selected to reduce the frequency correlation between SNPs. A typical workstation can complete the whole genome scan including 10,000 permutation tests within 3 hours. The most significant regions are located on chromosomes 3, 6, 13 and 16, two of which contain candidate genes that may be involved in the underlying drug response mechanism. The computational performance of ChromoScan-GWA and its scalability were tested with up to 1,000,000 SNPs and up to 4,000 subjects. Using 10,000 permutations, the computation time grew linearly in these datasets. This scan statistic application provides a robust statistical and computational foundation for identifying genomic regions associated with disease and provides a method to compare GWA results even across different platforms. PMID:20161066
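The combination of single-locus tests and a windowed scan with permutation testing can be sketched as follows (a toy illustration only; the function names, the fixed-width window, and the 0/1 significance flags are assumptions, not the actual ChromoScan-GWA code).

```python
import random

def window_counts(sig, w):
    """Number of significant SNPs in each sliding window of w consecutive loci."""
    return [sum(sig[i:i + w]) for i in range(len(sig) - w + 1)]

def scan_statistic(sig, w, n_perm=1000, seed=0):
    """Max window count and a permutation p-value for it.

    sig: list of 0/1 flags (1 = SNP significant at the single-locus level).
    Permuting the flags breaks any positional clustering, giving a null
    distribution for the maximum window count.
    """
    rng = random.Random(seed)
    observed = max(window_counts(sig, w))
    hits = 0
    for _ in range(n_perm):
        perm = sig[:]
        rng.shuffle(perm)
        if max(window_counts(perm, w)) >= observed:
            hits += 1
    return observed, hits / n_perm
```

A cluster of significant SNPs then stands out because random rearrangements rarely reproduce an equally dense window.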
Detector-device-independent quantum key distribution: Security analysis and fast implementation
NASA Astrophysics Data System (ADS)
Boaron, Alberto; Korzh, Boris; Houlmann, Raphael; Boso, Gianluca; Lim, Charles Ci Wen; Martin, Anthony; Zbinden, Hugo
2016-08-01
One of the most pressing issues in quantum key distribution (QKD) is the problem of detector side-channel attacks. To overcome this problem, researchers proposed an elegant "time-reversal" QKD protocol called measurement-device-independent QKD (MDI-QKD), which is based on time-reversed entanglement swapping. However, MDI-QKD is more challenging to implement than standard point-to-point QKD. Recently, an intermediary QKD protocol called detector-device-independent QKD (DDI-QKD) has been proposed to overcome the drawbacks of MDI-QKD, with the hope that it would eventually lead to a more efficient detector side-channel-free QKD system. Here, we analyze the security of DDI-QKD and elucidate its security assumptions. We find that DDI-QKD is not equivalent to MDI-QKD, but its security can be demonstrated with reasonable assumptions. On the more practical side, we consider the feasibility of DDI-QKD and present a fast experimental demonstration (clocked at 625 MHz), capable of secret key exchange up to more than 90 km.
Webb, Thomas L; Sheeran, Paschal; Pepper, John
2012-03-01
The present research investigated whether forming implementation intentions could promote fast responses to attitude-incongruent associations (e.g., woman-manager) and thereby modify scores on popular implicit measures of attitude. Expt 1 used the Implicit Association Test (IAT) to measure associations between gender and science versus liberal arts. Planning to associate women with science engendered fast responses to this category-attribute pairing and rendered summary scores more neutral compared to standard IAT instructions. Expt 2 demonstrated that forming egalitarian goal intentions is not sufficient to produce these effects. Expt 3 extended these findings to a different measure of implicit attitude (the Go/No-Go Association Task) and a different stereotypical association (Muslims-terrorism). In Expt 4, managers who planned to associate women with superordinate positions showed more neutral IAT scores relative to non-planners and effects were maintained 3 weeks later. In sum, implementation intentions enable people to gain control over implicit attitude responses. PMID:22435844
ERIC Educational Resources Information Center
Edgecombe, Nikki; Jaggars, Shanna Smith; Baker, Elaine DeLott; Bailey, Thomas
2013-01-01
Originally designed for students who test into at least two levels of developmental education in a particular subject area, FastStart is a compressed course program model launched in 2005 at the Community College of Denver (CCD). The program combines multiple semester-length courses into a single intensive semester, while providing case…
ERIC Educational Resources Information Center
Holland, Rochelle
2006-01-01
The purpose of this case study was to examine the efficacy of a developing technique that has been coined by this author as a Mental Wellness Fast. Therefore, this paper has been written for therapists who incorporate spirituality and/or religion into their practice. The technique is geared to assist clients with healing poor inner dialogues and it…
Clisby, Nathan
2010-02-01
We introduce a fast implementation of the pivot algorithm for self-avoiding walks, which we use to obtain large samples of walks on the cubic lattice of up to 33×10^6 steps. Consequently the critical exponent ν for three-dimensional self-avoiding walks is determined to great accuracy; the final estimate is ν = 0.587597(7). The method can be adapted to other models of polymers with short-range interactions, on the lattice or in the continuum. PMID:20366773
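The pivot move itself is simple; a minimal 2D square-lattice version (without the data structures that make Clisby's implementation fast, and not the paper's 3D cubic-lattice code) might look like:

```python
import random

# Nontrivial lattice symmetries of the square lattice, as 2x2 matrices (a, b, c, d).
SYMMETRIES = [
    (0, -1, 1, 0), (-1, 0, 0, -1), (0, 1, -1, 0),                 # rotations
    (1, 0, 0, -1), (-1, 0, 0, 1), (0, 1, 1, 0), (0, -1, -1, 0),   # reflections
]

def pivot_step(walk, rng):
    """One pivot attempt: rotate/reflect the tail of the walk about a random site.

    Returns the new walk if it is still self-avoiding, else the old walk.
    """
    i = rng.randrange(1, len(walk) - 1)          # pivot site
    a, b, c, d = rng.choice(SYMMETRIES)
    px, py = walk[i]
    new_tail = [(px + a * (x - px) + b * (y - py),
                 py + c * (x - px) + d * (y - py)) for (x, y) in walk[i + 1:]]
    candidate = walk[:i + 1] + new_tail
    return candidate if len(set(candidate)) == len(candidate) else walk

def sample_saw(n, steps=2000, seed=1):
    """Start from a straight rod and apply pivot moves to decorrelate it."""
    rng = random.Random(seed)
    walk = [(k, 0) for k in range(n)]
    for _ in range(steps):
        walk = pivot_step(walk, rng)
    return walk
```

The paper's speedup comes from rejecting bad pivots quickly with hierarchical data structures; the move set and acceptance rule are the same.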
2009-01-01
In 1990, the Fast Track Project was initiated to evaluate the feasibility and effectiveness of a comprehensive, multicomponent prevention program targeting children at risk for conduct disorders in four demographically diverse American communities (Conduct Problems Prevention Research Group [CPPRG], 1992). Representing a prevention science approach toward community-based preventive intervention, the Fast Track intervention design was based upon the available data base elucidating the epidemiology of risk for conduct disorder and suggesting key causal developmental influences (R. P. Weissberg & M. T. Greenberg, 1998). Critical questions about this approach to prevention center around the extent to which such a science-based program can be effective at (1) engaging community members and stakeholders, (2) maintaining intervention fidelity while responding appropriately to the local norms and needs of communities that vary widely in their demographic and cultural/ethnic composition, and (3) maintaining community engagement in the long-term to support effective and sustainable intervention dissemination. This paper discusses these issues, providing examples from the Fast Track project to illustrate the process of program implementation and the evidence available regarding the success of this science-based program at engaging communities in sustainable and effective ways as partners in prevention programming. PMID:11930968
NASA Astrophysics Data System (ADS)
Sotiropoulou, C. L.; Gkaitatzis, S.; Annovi, A.; Beretta, M.; Kordas, K.; Nikolaidis, S.; Petridou, C.; Volpi, G.
2014-10-01
The parallel 2D pixel clustering FPGA implementation used for the input system of the ATLAS Fast TracKer (FTK) processor is presented. The input system for the FTK processor will receive data from the Pixel and micro-strip detectors from inner ATLAS read out drivers (RODs) at full rate, for a total of 760 Gb/s, as sent by the RODs after level-1 triggers. Clustering serves two purposes: the first is to reduce the high rate of the received data before further processing, the second is to determine the cluster centroid to obtain the best spatial measurement. For the pixel detectors the clustering is implemented by using a 2D-clustering algorithm that takes advantage of a moving window technique to minimize the logic required for cluster identification. The cluster detection window size can be adjusted to optimize the cluster identification process. Additionally, the implementation can be parallelized by instantiating multiple cores to identify different clusters independently, thus exploiting more FPGA resources. This flexibility makes the implementation suitable for a variety of demanding image processing applications. The implementation is robust against bit errors in the input data stream and drops all data that cannot be identified. In the unlikely event of missing control words, the implementation will ensure stable data processing by inserting the missing control words in the data stream. The 2D pixel clustering implementation is developed and tested in both single flow and parallel versions. The first parallel version with 16 parallel cluster identification engines is presented. The input data from the RODs are received through S-Links, and the processing units that follow the clustering implementation also require a single data stream; therefore data parallelizing (demultiplexing) and serializing (multiplexing) modules are introduced in order to accommodate the parallelized version and restore the data stream afterwards. The results of the first hardware tests of
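A software analogue of the clustering step (hypothetical Python, not the FPGA moving-window implementation) groups adjacent fired pixels and computes each cluster's centroid:

```python
from collections import deque

def cluster_centroids(hits):
    """Group 8-connected pixel hits and return each cluster's centroid.

    hits: set of (row, col) fired pixels.
    """
    remaining = set(hits)
    centroids = []
    while remaining:
        seed = remaining.pop()
        queue, cluster = deque([seed]), [seed]
        while queue:  # breadth-first flood fill over 8-connected neighbours
            r, c = queue.popleft()
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    p = (r + dr, c + dc)
                    if p in remaining:
                        remaining.remove(p)
                        queue.append(p)
                        cluster.append(p)
        n = len(cluster)
        centroids.append((sum(r for r, _ in cluster) / n,
                          sum(c for _, c in cluster) / n))
    return sorted(centroids)
```

The FPGA design achieves the same grouping with a sliding detection window so that only a bounded neighbourhood is held in logic at any time.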
NASA Astrophysics Data System (ADS)
Rein, Hanno; Tamayo, Daniel
2015-09-01
We present WHFAST, a fast and accurate implementation of a Wisdom-Holman symplectic integrator for long-term orbit integrations of planetary systems. WHFAST is significantly faster and conserves energy better than all other Wisdom-Holman integrators tested. We achieve this by significantly improving the Kepler solver and ensuring numerical stability of coordinate transformations to and from Jacobi coordinates. These refinements allow us to remove the linear secular trend in the energy error that is present in other implementations. For small enough timesteps, we achieve Brouwer's law, i.e. the energy error is dominated by an unbiased random walk due to floating-point round-off errors. We implement symplectic correctors up to order 11 that significantly reduce the energy error. We also implement a symplectic tangent map for the variational equations. This allows us to efficiently calculate two widely used chaos indicators: the Lyapunov characteristic number and the Mean Exponential Growth factor of Nearby Orbits. WHFAST is freely available as a flexible C package, as a shared library, and as an easy-to-use PYTHON module.
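The bounded-energy behavior that distinguishes symplectic integrators can be demonstrated with a minimal kick-drift-kick leapfrog for a test particle around a central mass (a sketch only; WHFAST itself uses a Kepler solver and Jacobi coordinates):

```python
import math

def leapfrog_kepler(x, y, vx, vy, dt, n, gm=1.0):
    """Kick-drift-kick leapfrog for a test particle around a central mass.

    Being symplectic, the energy error oscillates but shows no secular drift,
    the property WHFAST pushes down to the floating-point round-off level.
    """
    def accel(x, y):
        r3 = (x * x + y * y) ** 1.5
        return -gm * x / r3, -gm * y / r3

    for _ in range(n):
        ax, ay = accel(x, y)
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick
        x += dt * vx; y += dt * vy                 # drift
        ax, ay = accel(x, y)
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick
    return x, y, vx, vy

def energy(x, y, vx, vy, gm=1.0):
    """Specific orbital energy: kinetic plus potential."""
    return 0.5 * (vx * vx + vy * vy) - gm / math.hypot(x, y)
```

A Wisdom-Holman scheme replaces the drift with an exact Kepler propagation, which is where the improved Kepler solver described above matters.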
NASA Astrophysics Data System (ADS)
Rangelov, A. A.
2009-01-01
Truncated Fourier, Gauss, Kummer and exponential sums can be used to factorize numbers: for a factor these sums equal unity in absolute value, whereas they nearly vanish for any other number. We show how this factorization algorithm can emerge from superpositions of classical light waves and we present a number of simple implementations in optics.
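The truncated-sum test is easy to state concretely; a small sketch using the Gauss-sum variant (function names are illustrative) is:

```python
import cmath

def gauss_sum(N, l, M):
    """|(1/M) * sum_{m=0}^{M-1} exp(2*pi*i * m^2 * N / l)|.

    Equals 1 when l divides N (every phase is a multiple of 2*pi);
    for non-factors the phases interfere destructively and the sum is small.
    """
    s = sum(cmath.exp(2j * cmath.pi * m * m * N / l) for m in range(M))
    return abs(s) / M

def trial_factors(N, M=20):
    """Candidate factors of N: trial values whose truncated Gauss sum is near 1."""
    return [l for l in range(2, N) if gauss_sum(N, l, M) > 0.99]
```

In the optical implementations discussed in the paper, the superposed light waves realize exactly this interference, with the intensity playing the role of the absolute value of the sum.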
Tuomas, V.; Jaakko, L.
2013-07-01
This article discusses the optimization of the target motion sampling (TMS) temperature treatment method, previously implemented in the Monte Carlo reactor physics code Serpent 2. The TMS method was introduced in [1] and first practical results were presented at the PHYSOR 2012 conference [2]. The method is a stochastic method for taking the effect of thermal motion into account on-the-fly in a Monte Carlo neutron transport calculation. It is based on sampling the target velocities at collision sites and then utilizing the 0 K cross sections in the target-at-rest frame for reaction sampling. The fact that the total cross section becomes a distributed quantity is handled using rejection sampling techniques. The original implementation of the TMS requires 2.0 times more CPU time in a PWR pin-cell case than a conventional Monte Carlo calculation relying on pre-broadened effective cross sections. In a HTGR case examined in this paper the overhead factor is as high as 3.6. By first changing from a multi-group to a continuous-energy implementation and then fine-tuning a parameter affecting the conservativity of the majorant cross section, it is possible to decrease the overhead factors to 1.4 and 2.3, respectively. Preliminary calculations are also made using a new and yet incomplete optimization method in which the temperature of the basis cross section is increased above 0 K. It seems that with the new approach it may be possible to decrease the factors even as low as 1.06 and 1.33, respectively, but its functionality has not yet been proven. Therefore, these performance measures should be considered preliminary. (authors)
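The rejection-sampling trade-off that the tuning targets can be illustrated with a generic toy sampler (not Serpent 2 code): a looser majorant still yields correct samples but wastes more candidate draws, which is the overhead being minimized.

```python
import random

def rejection_sample(f, majorant, lo, hi, n, seed=0):
    """Sample x in [lo, hi] with density proportional to f using a constant majorant.

    Requires majorant >= f(x) on [lo, hi]. Returns the samples and the
    acceptance ratio; a looser (more conservative) majorant wastes more
    candidate draws, analogous to wasted cross-section evaluations in TMS.
    """
    rng = random.Random(seed)
    out, tried = [], 0
    while len(out) < n:
        x = rng.uniform(lo, hi)
        tried += 1
        if rng.uniform(0.0, majorant) < f(x):
            out.append(x)
    return out, n / tried
```

Both majorants produce the same target distribution; only the efficiency differs, which is why the conservativity parameter can be tuned without biasing the physics.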
Bhanot, Gyan V.; Chen, Dong; Gara, Alan G.; Giampapa, Mark E.; Heidelberger, Philip; Steinmacher-Burow, Burkhard D.; Vranas, Pavlos M.
2008-01-01
The present invention is directed to a method, system and program storage device for efficiently implementing a multidimensional Fast Fourier Transform (FFT) of a multidimensional array comprising a plurality of elements initially distributed in a multi-node computer system comprising a plurality of nodes in communication over a network, comprising: distributing the plurality of elements of the array in a first dimension across the plurality of nodes of the computer system over the network to facilitate a first one-dimensional FFT; performing the first one-dimensional FFT on the elements of the array distributed at each node in the first dimension; re-distributing the one-dimensional FFT-transformed elements at each node in a second dimension via "all-to-all" distribution in random order across other nodes of the computer system over the network; and performing a second one-dimensional FFT on elements of the array re-distributed at each node in the second dimension, wherein the random order facilitates efficient utilization of the network thereby efficiently implementing the multidimensional FFT. The "all-to-all" re-distribution of array elements is further efficiently implemented in applications other than the multidimensional FFT on the distributed-memory parallel supercomputer.
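The decomposition behind the claim, 1-D transforms along one dimension, a global redistribution, then 1-D transforms along the other, can be sketched in serial form, with a matrix transpose standing in for the all-to-all exchange (naive DFTs for brevity; a real implementation would use FFTs and a network exchange):

```python
import cmath

def dft(row):
    """Naive 1-D DFT (stands in for the per-node FFT)."""
    n = len(row)
    return [sum(row[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def transpose(grid):
    """Stands in for the all-to-all redistribution between the two passes."""
    return [list(col) for col in zip(*grid)]

def fft2_by_rows(grid):
    """2-D transform = 1-D transforms along rows, redistribute, 1-D along rows again."""
    step1 = [dft(row) for row in grid]            # first dimension
    step2 = [dft(row) for row in transpose(step1)]  # second dimension
    return transpose(step2)                       # restore the original orientation
```

The patented contribution is not this factorization, which is standard, but performing the all-to-all exchange in random order to avoid network hot spots.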
Bhanot, Gyan V.; Chen, Dong; Gara, Alan G.; Giampapa, Mark E.; Heidelberger, Philip; Steinmacher-Burow, Burkhard D.; Vranas, Pavlos M.
2012-01-10
The present invention is directed to a method, system and program storage device for efficiently implementing a multidimensional Fast Fourier Transform (FFT) of a multidimensional array comprising a plurality of elements initially distributed in a multi-node computer system comprising a plurality of nodes in communication over a network, comprising: distributing the plurality of elements of the array in a first dimension across the plurality of nodes of the computer system over the network to facilitate a first one-dimensional FFT; performing the first one-dimensional FFT on the elements of the array distributed at each node in the first dimension; re-distributing the one-dimensional FFT-transformed elements at each node in a second dimension via "all-to-all" distribution in random order across other nodes of the computer system over the network; and performing a second one-dimensional FFT on elements of the array re-distributed at each node in the second dimension, wherein the random order facilitates efficient utilization of the network thereby efficiently implementing the multidimensional FFT. The "all-to-all" re-distribution of array elements is further efficiently implemented in applications other than the multidimensional FFT on the distributed-memory parallel supercomputer.
NASA Astrophysics Data System (ADS)
Jayarajan, Jayesh; Kumar, Nishant; Verma, Amarnath; Thaker, Ramkrishna
2016-05-01
Drive electronics for generating fast, bipolar clocks that can drive capacitive loads of the order of 5-10 nF are indispensable for present day Charge Coupled Devices (CCDs). Design of these high speed bipolar clocks is challenging because of the capacitive loads that have to be driven and a strict constraint on the rise and fall times. Designing drive electronics circuits for space applications becomes even more challenging due to the limited number of available discrete devices that can survive in the harsh, radiation-prone space environment. This paper presents the design, simulations and test results of a set of such high speed, bipolar clock drivers. The design has been tested under a thermal cycle of -15 °C to +55 °C under vacuum conditions and has been designed using radiation hardened components. The test results show that the design meets the stringent rise/fall time requirements of 50 ± 10 ns for multiple Vertical CCD (VCCD) clocks and 20 ± 5 ns for Horizontal CCD (HCCD) clocks with sufficient design margins across the full temperature range, at a pixel readout rate of 6.6 MHz. The full design has been realized on a flexi-rigid PCB with a package volume of 140×160×50 mm³.
Implementation of a Synchronized Oscillator Circuit for Fast Sensing and Labeling of Image Objects
Kowalski, Jacek; Strzelecki, Michal; Kim, Hyongsuk
2011-01-01
We present an application-specific integrated circuit (ASIC) CMOS chip that implements a synchronized oscillator cellular neural network with a matrix size of 32 × 32 for object sensing and labeling in binary images. Networks of synchronized oscillators are a recently developed tool for image segmentation and analysis. Its parallel network operation is based on a “temporary correlation” theory that attempts to describe scene recognition as if performed by the human brain. The synchronized oscillations of neuron groups attract a person’s attention if he or she is focused on a coherent stimulus (image object). For more than one perceived stimulus, these synchronized patterns switch in time between different neuron groups, thus forming temporal maps that code several features of the analyzed scene. In this paper, a new oscillator circuit based on a mathematical model is proposed, and the network architecture and chip functional blocks are presented and discussed. The proposed chip is implemented in AMIS 0.35 μm C035M-D 5M/1P technology. An application of the proposed network chip for the segmentation of insulin-producing pancreatic islets in magnetic resonance liver images is presented. PMID:22163803
Design and Implementation of the Control System for a 2 kHz Rotary Fast Tool Servo
Montesanti, R C; Trumper, D L
2004-03-29
This paper presents a summary of the performance of our 2 kHz rotary fast tool servo and an overview of its control systems. We also discuss the loop shaping techniques used to design the power amplifier current control loop and the implementation of that controller in an op-amp circuit. The design and development of the control system involved a long list of items including: current compensation; tool position compensation; notch filter design and phase stabilizing with an additional pole for a plant with an undamped resonance; adding viscous damping to the fast tool servo; voltage budget for driving real and reactive loads; dealing with unwanted oscillators; ground loops; digital-to-analog converter glitches; electrical noise from the spindle motor switching power supply; and filtering the spindle encoder signal to generate smooth tool tip trajectories. Eventually, all of these topics will be discussed in detail in a Ph.D. thesis that will include this work. For the purposes of this paper, rather than present a diluted discussion that attempts to touch on all of these topics, we will focus on the first item with sufficient detail for providing insight into the design process.
NASA Astrophysics Data System (ADS)
Hu, Chialun John
2014-04-01
The LPED method, or Local Polar Edge Detection method, is a novel method the author developed and implemented in many image processing schemes over the last 3 years, with 3 papers published in this and other SPIE national conferences. It uses a special real-time boundary extraction method applied to binary images taken by an uncooled IR camera of high temperature objects embedded in a cold background in the far field. The unique boundary shape of each high temperature object can then be used to construct a 36D analog vector (a 36-"digit" number U, with each "digit" being a positive analog number of any magnitude). This 36D analog vector U then serves as the ID code identifying the object possessing this particular boundary shape. Therefore, U may be used for tracking and targeting this particular object when it is moving very fast in a 2D space and criss-crossing with other fast moving objects in the same field of view. The current paper reports a preliminary optical bench design of the optical system that will use the software developed above to construct a real-time, instant-detect, instant-track, automatic-targeting high power laser gun system for shooting down any spontaneously launched enemy surface-to-air missiles from the nearby battle ground. It uses the total reflection phenomenon in the Wollaston beam combiner and a real-time monitor-screen auto-targeting and firing system to implement this "instant-detect, instant-kill, SAM killer system".
Banić, Nikola; Lončarić, Sven
2015-11-01
Removing the influence of illumination on image colors and adjusting the brightness across the scene are important image enhancement problems. This is achieved by applying adequate color constancy and brightness adjustment methods. One of the earliest models to deal with both of these problems was the Retinex theory. Some of the Retinex implementations tend to give high-quality results by performing local operations, but they are computationally relatively slow. One of the recent Retinex implementations is light random sprays Retinex (LRSR). In this paper, a new method is proposed for brightness adjustment and color correction that overcomes the main disadvantages of LRSR. There are three main contributions of this paper. First, a concept of memory sprays is proposed to reduce the number of LRSR's per-pixel operations to a constant regardless of the parameter values, thereby enabling a fast Retinex-based local image enhancement. Second, an effective remapping of image intensities is proposed that results in significantly higher quality. Third, the problem of LRSR's halo effect is significantly reduced by using an alternative illumination processing method. The proposed method enables a fast Retinex-based image enhancement by processing Retinex paths in a constant number of steps regardless of the path size. Due to the halo effect removal and remapping of the resulting intensities, the method outperforms many of the well-known image enhancement methods in terms of resulting image quality. The results are presented and discussed. It is shown that the proposed method outperforms most of the tested methods in terms of image brightness adjustment, color correction, and computational speed. PMID:26560928
Demosaicking by alternating projections: theory and fast one-step implementation.
Lu, Yue M; Karzand, Mina; Vetterli, Martin
2010-08-01
Color image demosaicking is a key process in the digital imaging pipeline. In this paper, we study a well-known and influential demosaicking algorithm based upon alternating projections (AP), proposed by Gunturk, Altunbasak and Mersereau in 2002. Since its publication, the AP algorithm has been widely cited and compared against in a series of more recent papers in the demosaicking literature. Despite good performances, a limitation of the AP algorithm is its high computational complexity. We provide three main contributions in this paper. First, we present a rigorous analysis of the convergence property of the AP demosaicking algorithm, showing that it is a contraction mapping, with a unique fixed point. Second, we show that this fixed point is in fact the solution to a constrained quadratic minimization problem, thus, establishing the optimality of the AP algorithm. Finally, using the tool of polyphase representation, we show how to obtain the results of the AP algorithm in a single step, implemented as linear filtering in the polyphase domain. Replacing the original iterative procedure by the proposed one-step solution leads to substantial computational savings, by about an order of magnitude in our experiments. PMID:20236886
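The structural point, that the fixed point of a contraction can be reached either by iteration or in one step by solving a linear system, can be shown with a toy 2×2 example (illustrative only; the paper's actual one-step solution is a linear filter in the polyphase domain, not this generic solve):

```python
def iterate_fixed_point(A, b, steps=100):
    """Repeatedly apply x <- A x + b (a contraction when ||A|| < 1)."""
    x = [0.0, 0.0]
    for _ in range(steps):
        x = [A[0][0] * x[0] + A[0][1] * x[1] + b[0],
             A[1][0] * x[0] + A[1][1] * x[1] + b[1]]
    return x

def solve_one_step(A, b):
    """The same fixed point in closed form: solve (I - A) x = b (2x2 Cramer's rule)."""
    m = [[1 - A[0][0], -A[0][1]], [-A[1][0], 1 - A[1][1]]]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [(b[0] * m[1][1] - b[1] * m[0][1]) / det,
            (b[1] * m[0][0] - b[0] * m[1][0]) / det]
```

Replacing the iteration by the closed-form solve is what yields the order-of-magnitude savings the paper reports for the AP algorithm.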
Movie approximation technique for the implementation of fast bandwidth-smoothing algorithms
NASA Astrophysics Data System (ADS)
Feng, Wu-chi; Lam, Chi C.; Liu, Ming
1997-12-01
Bandwidth smoothing algorithms can effectively reduce the network resource requirements for the delivery of compressed video streams. For stored video, a large number of bandwidth smoothing algorithms have been introduced that are optimal under certain constraints but require access to all the frame size data in order to achieve their optimal properties. This requirement, however, can be both resource and computationally expensive, especially for moderately priced set-top boxes. In this paper, we introduce a movie approximation technique for the representation of the frame sizes of a video, reducing the complexity of the bandwidth smoothing algorithms and the amount of frame data that must be transmitted prior to the start of playback. Our results show that the proposed approximation technique can accurately approximate the frame data with a small number of piece-wise linear segments without affecting the performance measures that the bandwidth smoothing algorithms are attempting to achieve by more than 1%. In addition, we show that implementations of this technique can speed up execution times by 100 to 400 times, allowing the bandwidth plan calculation times to be reduced to tens of milliseconds. An evaluation using a compressed full-length motion-JPEG video is provided.
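A greedy piecewise-linear approximation of a frame-size trace can be written as follows (a sketch under assumed rules; the paper's exact segmentation criterion may differ):

```python
def piecewise_linear(frames, tol):
    """Greedy piecewise-linear approximation of a frame-size sequence.

    Each segment is the straight line between its endpoints; a segment is
    grown until some interior frame deviates from that line by more than tol.
    Returns the breakpoint indices.
    """
    breaks = [0]
    start = 0
    for end in range(2, len(frames)):
        x0, y0 = start, frames[start]
        slope = (frames[end] - y0) / (end - x0)
        if any(abs(y0 + slope * (i - x0) - frames[i]) > tol
               for i in range(start + 1, end)):
            breaks.append(end - 1)   # close the segment one frame earlier
            start = end - 1
    breaks.append(len(frames) - 1)
    return breaks
```

Only the breakpoints and their frame sizes need to be sent before playback, which is the data reduction the smoothing algorithms then operate on.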
NASA Astrophysics Data System (ADS)
Reale, F.; Barbera, M.; Sciortino, S.
1992-11-01
We illustrate a general and straightforward approach to develop FORTRAN parallel two-dimensional data-domain applications on distributed-memory systems, such as those based on transputers. We have aimed at achieving flexibility for different processor topologies and processor numbers, non-homogeneous processor configurations and coarse load-balancing. We have assumed a master-slave architecture as basic programming model in the framework of a domain decomposition approach. After developing a library of high-level general network and communication routines, based on low-level system-dependent libraries, we have used it to parallelize some specific applications: an elementary 2-D code, useful as a pattern and guide for other more complex applications, and a 2-D hydrodynamic code for astrophysical studies. Code parallelization is achieved by splitting the original code into two independent codes, one for the master and the other for the slaves, and then by adding coordinated calls to network setting and message-passing routines into the programs. The parallel applications have been implemented on a Meiko Computing Surface hosted by a SUN 4 workstation and running CSTools software package. After the basic network and communication routines were developed, the task of parallelizing the 2-D hydrodynamic code took approximately 12 man hours. The parallel efficiency of the code ranges between 98% and 58% on arrays between 2 and 20 T800 transputers, on a relatively small computational mesh (≈3000 cells). Arrays consisting of a limited number of faster Intel i860 processors achieve a high parallel efficiency on large computational grids (> 10000 grid points) with performances in the class of minisupercomputers.
Procacci, Piero
2016-06-27
We present a new release (6.0β) of the ORAC program [Marsili et al. J. Comput. Chem. 2010, 31, 1106-1116] with hybrid OpenMP/MPI (Open Multi-Processing/Message Passing Interface) multilevel parallelism tailored for generalized ensemble (GE) and fast switching double annihilation (FS-DAM) nonequilibrium technology, aimed at evaluating the binding free energy in drug-receptor systems on high performance computing platforms. The production of the GE or FS-DAM trajectories is handled using a weak scaling parallel approach on the MPI level only, while a strong scaling force decomposition scheme is implemented for intranode computations with shared memory access at the OpenMP level. The efficiency, simplicity, and inherent parallel nature of the ORAC implementation of the FS-DAM algorithm make the code a potentially effective tool for second-generation high-throughput virtual screening in drug discovery and design. The code, along with documentation, testing, and ancillary tools, is distributed under the provisions of the General Public License and can be freely downloaded at www.chim.unifi.it/orac . PMID:27231982
NASA Astrophysics Data System (ADS)
Valencia, David; Plaza, Antonio; Vega-Rodríguez, Miguel A.; Pérez, Rosa M.
2005-11-01
Hyperspectral imagery is a class of image data which is used in many scientific areas, most notably medical imaging and remote sensing. It is characterized by a wealth of spatial and spectral information. In recent years, many algorithms have been developed with the purpose of finding "spectral endmembers," which are assumed to be pure signatures in remotely sensed hyperspectral data sets. Such pure signatures can then be used to estimate the abundance or concentration of materials in mixed pixels, thus allowing sub-pixel analysis, which is crucial in many remote sensing applications due to current sensor optics and configuration. One of the most popular endmember extraction algorithms has been the pixel purity index (PPI), available from Kodak's Research Systems ENVI software package. This algorithm is very time consuming, a fact that has generally prevented its exploitation within acceptable response times in a wide range of applications, including environmental monitoring, military applications, and hazard and threat assessment/tracking (including wildland fire detection, oil spill mapping, and chemical and biological standoff detection). Field programmable gate arrays (FPGAs) are hardware components with millions of gates. Their reprogrammability and high computational power make them particularly attractive in remote sensing applications which require a response in near real-time. In this paper, we present an FPGA design for implementation of the PPI algorithm which takes advantage of a recently developed fast PPI (FPPI) algorithm that relies on software-based optimization. The proposed FPGA design represents our first step toward the development of a new reconfigurable system for fast, onboard analysis of remotely sensed hyperspectral imagery.
Souris, K; Lee, J; Sterpin, E
2014-06-15
Purpose: Recent studies have demonstrated the capability of graphics processing units (GPUs) to compute dose distributions using Monte Carlo (MC) methods within clinical time constraints. However, GPUs have a rigid vectorial architecture that favors the implementation of simplified particle transport algorithms, adapted to specific tasks. Our new, fast, and multipurpose MC code, named MCsquare, runs on Intel Xeon Phi coprocessors. This technology offers 60 independent cores, and therefore more flexibility to implement fast and yet generic MC functionalities, such as prompt gamma simulations. Methods: MCsquare implements several models and hence allows users to make their own tradeoff between speed and accuracy. A 200 MeV proton beam is simulated in a heterogeneous phantom using Geant4 and two configurations of MCsquare. The first one is the most conservative and accurate: the method of fictitious interactions handles the interfaces, and secondary charged particles emitted in nuclear interactions are fully simulated. The second, faster configuration simplifies interface crossings and simulates only secondary protons after nuclear interaction events. Integral depth-dose and transversal profiles are compared to those of Geant4. Moreover, the production profile of prompt gammas is compared to PENH results. Results: Integral depth-dose and transversal profiles computed by MCsquare and Geant4 agree within 3%. The production of secondaries from nuclear interactions is slightly inaccurate at interfaces for the fastest configuration of MCsquare, but this is unlikely to have any clinical impact. The computation time ranges from 90 seconds with the most conservative settings down to 59 seconds in the fastest configuration. Finally, prompt gamma profiles are also in very good agreement with PENH results. Conclusion: Our new, fast, and multipurpose Monte Carlo code simulates prompt gammas and calculates dose distributions in less than a minute, which complies with clinical time constraints.
NASA Technical Reports Server (NTRS)
Farhat, Nabil H.
1987-01-01
Self-organization and learning are distinctive features of neural nets and processors that set them apart from conventional approaches to signal processing. They lead to self-programmability, which alleviates the problem of programming complexity in artificial neural nets. In this paper, architectures for partitioning an optoelectronic analog of a neural net into distinct layers with a prescribed interconnectivity pattern, to enable stochastic learning by simulated annealing in the context of a Boltzmann machine, are presented. Stochastic learning is of interest because of its relevance to the role of noise in biological neural nets. Practical considerations and methodologies for appreciably accelerating stochastic learning in such a multilayered net are described. These include the use of parallel optical computing of the global energy of the net, the use of fast nonvolatile programmable spatial light modulators to realize fast plasticity, optical generation of random number arrays, and an adaptive noisy thresholding scheme that also makes stochastic learning more biologically plausible. The findings reported predict optoelectronic chips that can be used in the realization of optical learning machines.
Huang, Christina Y.; Bassett, Mary T.; Silver, Lynn D.
2010-01-01
Objectives. We assessed consumer awareness of menu calorie information at fast-food chains after the introduction of New York City's health code regulation requiring these chains to display food-item calories on menus and menu boards. Methods. At 45 restaurants representing the 15 largest fast-food chains in the city, we conducted cross-sectional surveys 3 months before and 3 months after enforcement began. At both time points, customers were asked if they had seen calorie information and, if so, whether it had affected their purchase. Data were weighted to the number of city locations for each chain. Results. We collected 1188 surveys pre-enforcement and 1229 surveys postenforcement. Before enforcement, 25% of customers reported seeing calorie information; postenforcement, this figure rose to 64% (P < .001; 38% and 72%, weighted). Among customers who saw calorie information postenforcement, 27% said they used the information, which represents a 2-fold increase in the percentage of customers making calorie-informed choices (10% vs 20%, weighted; P < .001). Conclusions. Posting calorie information on menu boards increases the number of people who see and use this information. Since enforcement of New York's calorie labeling regulation began, approximately 1 million New York adults have seen calorie information each day. PMID:20966367
Namba, Alexa; Leonberg, Beth L.; Wootan, Margo G.
2013-01-01
Introduction Since 2008, several states and municipalities have implemented regulations requiring provision of nutrition information at chain restaurants to address obesity. Although early research into the effect of such labels on consumer decisions has shown mixed results, little information exists on the restaurant industry’s response to labeling. The objective of this exploratory study was to evaluate the effect of menu labeling on fast-food menu offerings over 7 years, from 2005 through 2011. Methods Menus from 5 fast-food chains that had outlets in jurisdictions subject to menu-labeling laws (cases) were compared with menus from 4 fast-food chains operating in jurisdictions not requiring labeling (controls). A trend analysis assessed whether case restaurants improved the healthfulness of their menus relative to the control restaurants. Results Although the overall prevalence of “healthier” food options remained low, a noteworthy increase was seen after 2008 in locations with menu-labeling laws relative to those without such laws. Healthier food options increased from 13% to 20% at case locations while remaining static at 8% at control locations (test for difference in the trend, P = .02). Since 2005, the average calories for an à la carte entrée remained moderately high (approximately 450 kilocalories), with less than 25% of all entrées and sides qualifying as healthier and no clear systematic differences in the trend between chain restaurants in case versus control areas (P ≥ .50). Conclusion These findings suggest that menu labeling has thus far not affected the average nutritional content of fast-food menu items, but it may motivate restaurants to increase the availability of healthier options. PMID:23786908
Orio, Patricio; Soudry, Daniel
2012-01-01
Background The phenomena that emerge from the interaction of the stochastic opening and closing of ion channels (channel noise) with the non-linear neural dynamics are essential to our understanding of the operation of the nervous system. The effects that channel noise can have on neural dynamics are generally studied using numerical simulations of stochastic models. Algorithms based on discrete Markov Chains (MC) seem to be the most reliable and trustworthy, but even optimized algorithms come with a non-negligible computational cost. Diffusion Approximation (DA) methods use Stochastic Differential Equations (SDE) to approximate the behavior of a number of MCs, considerably speeding up simulation times. However, model comparisons have suggested that DA methods did not lead to the same results as MC modeling in terms of channel noise statistics and effects on excitability. Recently, it was shown that the difference arose because MCs were modeled with coupled gating particles, while the DA was modeled using uncoupled gating particles. Implementations of DA with coupled particles, in the context of a specific kinetic scheme, yielded similar results to MC. However, it remained unclear how to generalize these implementations to different kinetic schemes, or whether they were faster than MC algorithms. Additionally, a steady state approximation was used for the stochastic terms, which, as we show here, can introduce significant inaccuracies. Main Contributions We derived the SDE explicitly for any given ion channel kinetic scheme. The resulting generic equations were surprisingly simple and interpretable – allowing an easy, transparent and efficient DA implementation, avoiding unnecessary approximations. The algorithm was tested in a voltage clamp simulation and in two different current clamp simulations, yielding the same results as MC modeling. Also, the simulation efficiency of this DA method demonstrated considerable superiority over MC methods, except when
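To make the diffusion-approximation idea concrete, here is a hedged Euler-Maruyama sketch for the simplest possible case: a two-state channel population of size N, where x is the open fraction and alpha/beta are opening/closing rates. The rates, N, and time step are illustrative assumptions, not values from the paper, and the paper's generic SDE covers arbitrary kinetic schemes rather than this one.

```python
import numpy as np

# Langevin / diffusion approximation for N two-state channels:
#   dx = (alpha*(1-x) - beta*x) dt + sqrt((alpha*(1-x) + beta*x)/N) dW
# integrated with the Euler-Maruyama method.
rng = np.random.default_rng(0)
alpha, beta = 2.0, 1.0        # opening / closing rates (1/ms), illustrative
N, dt = 1000, 1e-3            # channel count and time step, illustrative

x = alpha / (alpha + beta)    # start at the deterministic steady state
xs = []
for _ in range(20000):
    drift = alpha * (1 - x) - beta * x
    diffusion = np.sqrt(max(alpha * (1 - x) + beta * x, 0.0) / N)
    x += drift * dt + diffusion * np.sqrt(dt) * rng.standard_normal()
    x = min(max(x, 0.0), 1.0)  # keep the open fraction in [0, 1]
    xs.append(x)

print(f"mean open fraction ~ {np.mean(xs):.3f}, theory {alpha/(alpha+beta):.3f}")
```

The single SDE stands in for N coupled Markov chains, which is where the speedup over MC simulation comes from; the fluctuation size shrinks as 1/sqrt(N).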
2010-01-01
Background An important problem in genomics is the automatic inference of groups of homologous proteins from pairwise sequence similarities. Several approaches have been proposed for this task which are "local" in the sense that they assign a protein to a cluster based only on the distances between that protein and the other proteins in the set. It was shown recently that global methods such as spectral clustering have better performance on a wide variety of datasets. However, currently available implementations of spectral clustering methods mostly consist of a few loosely coupled Matlab scripts that assume a fair amount of familiarity with Matlab programming and hence they are inaccessible for large parts of the research community. Results SCPS (Spectral Clustering of Protein Sequences) is an efficient and user-friendly implementation of a spectral method for inferring protein families. The method uses only pairwise sequence similarities, and is therefore practical when only sequence information is available. SCPS was tested on difficult sets of proteins whose relationships were extracted from the SCOP database, and its results were extensively compared with those obtained using other popular protein clustering algorithms such as TribeMCL, hierarchical clustering and connected component analysis. We show that SCPS is able to identify many of the family/superfamily relationships correctly and that the quality of the obtained clusters as indicated by their F-scores is consistently better than all the other methods we compared it with. We also demonstrate the scalability of SCPS by clustering the entire SCOP database (14,183 sequences) and the complete genome of the yeast Saccharomyces cerevisiae (6,690 sequences). Conclusions Besides the spectral method, SCPS also implements connected component analysis and hierarchical clustering, it integrates TribeMCL, it provides different cluster quality tools, it can extract human-readable protein descriptions using GI
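A minimal numpy sketch of the spectral idea behind a tool like SCPS (not its actual normalization, eigenvector count, or cluster-number selection): build a pairwise similarity matrix, form the graph Laplacian, and split the set on the sign of the Fiedler vector. The block-structured toy similarities below are invented for illustration.

```python
import numpy as np

# Toy similarity matrix for 10 "proteins": two well-separated families
# with high within-family similarity and low between-family similarity.
S = np.full((10, 10), 0.05)
S[:5, :5] = 0.9
S[5:, 5:] = 0.9
np.fill_diagonal(S, 1.0)

# Unnormalized graph Laplacian L = D - S; its eigenvector for the
# second-smallest eigenvalue (the Fiedler vector) encodes the split.
d = S.sum(axis=1)
L = np.diag(d) - S
eigvals, eigvecs = np.linalg.eigh(L)   # ascending eigenvalues
fiedler = eigvecs[:, 1]
labels = (fiedler > 0).astype(int)     # 2-way clustering by sign
```

For more clusters, a global method uses several of the smallest eigenvectors and runs k-means on the rows, but the two-cluster sign test already shows why the approach is "global": every pairwise similarity influences the eigenvector.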
NASA Technical Reports Server (NTRS)
Saracino, G.; Greenberg, N. L.; Shiota, T.; Corsi, C.; Lamberti, C.; Thomas, J. D.
2002-01-01
Real-time three-dimensional echocardiography (RT3DE) is an innovative cardiac imaging modality. However, partly due to lack of user-friendly software, RT3DE has not been widely accepted as a clinical tool. The object of this study was to develop and implement a fast and interactive volume renderer of RT3DE datasets designed for a clinical environment where speed and simplicity are not secondary to accuracy. Thirty-six patients (20 regurgitation, 8 normal, 8 cardiomyopathy) were imaged using RT3DE. Using our newly developed software, all 3D data sets were rendered in real-time throughout the cardiac cycle and assessment of cardiac function and pathology was performed for each case. The real-time interactive volume visualization system is user friendly and instantly provides consistent and reliable 3D images without expensive workstations or dedicated hardware. We believe that this novel tool can be used clinically for dynamic visualization of cardiac anatomy.
Aurigemma, Christine; Farrell, William
2010-09-24
Medicinal chemists often depend on analytical instrumentation for reaction monitoring and product confirmation at all stages of pharmaceutical discovery and development. To obtain pure compounds for biological assays, the removal of side products and final compounds through purification is often necessary. Prior to purification, chemists often utilize open-access analytical LC/MS instruments because mass confirmation is fast and reliable, and the chromatographic separation of most sample constituents is sufficient. Supercritical fluid chromatography (SFC) is often used as an orthogonal technique to HPLC or when isolation of the free base of a compound is desired. In laboratories where SFC is the predominant technique for analysis and purification of compounds, a reasonable approach for quickly determining suitable purification conditions is to screen the sample against different columns. This can be a bottleneck to the purification process. To commission SFC for open-access use, a walk-up analytical SFC/MS screening system was implemented in the medicinal chemistry laboratory. Each sample is automatically screened through six column/method conditions, and on-demand data processing occurs for the chromatographers after each screening method is complete. This paper highlights the "FastTrack" approach to expediting samples through purification. PMID:20728893
Grey Ballard, Austin Benson
2014-11-26
This software provides implementations of fast matrix multiplication algorithms. These algorithms perform fewer floating point operations than the classical cubic algorithm. The software uses code generation to automatically implement the fast algorithms based on high-level descriptions. The code serves two general purposes. The first is to demonstrate that these fast algorithms can out-perform vendor matrix multiplication algorithms for modest problem sizes on a single machine. The second is to rapidly prototype many variations of fast matrix multiplication algorithms to encourage future research in this area. The implementations target sequential and shared memory parallel execution.
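As one concrete instance of the class of algorithms described, here is a hedged sketch of Strassen's method, which replaces 8 recursive block multiplications with 7. The software itself generates many such algorithms from high-level descriptions; this is only the classic textbook example, for square matrices whose size is a power of two.

```python
import numpy as np

def strassen(A, B, cutoff=32):
    """Strassen matrix multiply for n x n matrices, n a power of two.

    Falls back to the classical product below the cutoff, as practical
    fast-matmul implementations do.
    """
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # 7 recursive products instead of 8
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

A = np.random.default_rng(0).standard_normal((128, 128))
B = np.random.default_rng(1).standard_normal((128, 128))
C = strassen(A, B)
```

The recursion does O(n^log2(7)) ≈ O(n^2.81) multiplications, which is the "fewer floating point operations than the classical cubic algorithm" property the abstract refers to.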
O'Brien, Travis A.; Kashinath, Karthik
2015-05-22
This software implements the fast, self-consistent probability density estimation described by O'Brien et al. (2014). It uses a non-uniform fast Fourier transform technique to reduce the computational cost of an objective and self-consistent kernel density estimation method.
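A hedged sketch of the transform trick on a uniform grid: bin the samples into a histogram, then convolve with a Gaussian kernel via ordinary FFTs. The actual software uses a non-uniform FFT and a self-consistent bandwidth; the fixed bandwidth and uniform grid below are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, 10000)

edges = np.linspace(-6.0, 6.0, 513)       # 512 equal-width bins
grid = 0.5 * (edges[:-1] + edges[1:])     # bin centers
dx = edges[1] - edges[0]

hist, _ = np.histogram(samples, bins=edges, density=True)

# Gaussian kernel sampled on the grid, centered at the mid-grid point,
# normalized to unit integral; bandwidth is an illustrative assumption
bandwidth = 0.3
k = np.exp(-0.5 * ((grid - grid[len(grid) // 2]) / bandwidth) ** 2)
k /= k.sum() * dx

# circular convolution via FFT; fftshift undoes the mid-grid centering
pdf = np.fft.fftshift(np.real(np.fft.ifft(np.fft.fft(hist) * np.fft.fft(k)))) * dx
```

The FFT turns the O(n_grid * n_kernel) smoothing into O(n_grid log n_grid), which is the computational saving the abstract describes (the non-uniform FFT additionally avoids the binning step).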
TNAMD: Implementation of TIGER2 in NAMD
NASA Astrophysics Data System (ADS)
Menz, William J.; Penna, Matthew J.; Biggs, Mark J.
2010-12-01
Replica-exchange molecular dynamics (REMD) must be used to enhance sampling when there are significant (relative to kT) barriers between different parts of the phase space. TIGER2 is a next-generation REMD method that offers more efficient sampling compared to the original REMD method by reducing the number of replicas required to span a given temperature range. In this paper, we present an implementation of the TIGER2 algorithm in the NAMD software package. This implementation exploits the capacity of NAMD to interpret Tcl scripts. The Tcl script implementing TIGER2 is provided and explained in detail, and demonstrated for alanine dipeptide in water. Program summary: Program title: TNAMD. Catalogue identifier: AEHH_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHH_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 24 837. No. of bytes in distributed program, including test data, etc.: 610 107. Distribution format: tar.gz. Programming language: Tcl 8.5. Computer: Clusters, workstations; tested on Intel Clovertown, Atom. Operating system: Linux, Windows (with Linux shell). Has the code been vectorised or parallelised?: Yes, parallelised with MPI. Classification: 3. External routines: NAMD 2.6 (http://www.ks.uiuc.edu/Research/namd/). Nature of problem: Replica-exchange molecular dynamics. Solution method: Replicas are assigned to unique processes and pass through a series of cycles of heating, equilibration, quenching and sampling, after which temperatures are swapped between replicas. Running time: The test run can take up to twenty minutes.
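For reference, the temperature-swap step at the heart of any REMD scheme can be sketched as a Metropolis test. TIGER2's quench-based acceptance criterion differs in detail, so the textbook form below is an illustration of the general mechanism, not the TNAMD code; units and constants are assumptions.

```python
import math
import random

KB = 0.0019872041  # Boltzmann constant in kcal/(mol K)

def swap_accepted(E_i, E_j, T_i, T_j, rng=random.random):
    """Metropolis criterion for swapping temperatures T_i and T_j
    between replicas with potential energies E_i and E_j (kcal/mol).

    Accept with probability min(1, exp((beta_i - beta_j)*(E_i - E_j))).
    """
    beta_i = 1.0 / (KB * T_i)
    beta_j = 1.0 / (KB * T_j)
    delta = (beta_i - beta_j) * (E_i - E_j)
    return delta >= 0 or rng() < math.exp(delta)
```

Note the limiting behavior: if the colder replica currently holds the higher energy, the swap is always accepted; otherwise it is accepted with a Boltzmann-weighted probability, preserving detailed balance across the temperature ladder.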
NASA Astrophysics Data System (ADS)
Vilardy, Juan M.; Giacometto, F.; Torres, C. O.; Mattos, L.
2011-01-01
The two-dimensional Fast Fourier Transform (2D FFT) is an essential tool in the analysis and processing of two-dimensional discrete signals, enabling a large number of applications. This article presents the description and synthesis in VHDL code of the 2D FFT with fixed-point binary representation using the Simulink HDL Coder programming tool from Matlab, showing a quick and easy way to handle overflow and underflow, the creation of registers, adders and multipliers for complex data in VHDL, as well as the generation of test benches for verification of the generated code in the ModelSim tool. The main objective of developing the hardware architecture of the 2D FFT is the subsequent implementation of the following operations applied to images: frequency filtering, convolution and correlation. The description and synthesis of the hardware architecture use the XC3S1200E Spartan 3E FPGA from Xilinx.
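The structure behind such a hardware 2D FFT is the row-column decomposition: a 1D FFT applied to every row, then to every column. A fixed-point VHDL design adds scaling and overflow handling at each stage; the floating-point sketch below only shows the decomposition and checks it against a library 2D FFT.

```python
import numpy as np

def fft2d_rowcol(x):
    """2D FFT by row-column decomposition of 1D FFTs."""
    rows_done = np.fft.fft(x, axis=1)   # 1D FFT along each row
    return np.fft.fft(rows_done, axis=0)  # then along each column

img = np.random.default_rng(0).standard_normal((16, 16))
out = fft2d_rowcol(img)
```

The decomposition is what makes a hardware pipeline natural: one 1D FFT core can be time-multiplexed over rows and then columns, with a transpose (or address remapping) between the two passes.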
Dutta, Abhijit; Schaidle, Joshua A.; Humbird, David; Baddour, Frederick G.; Sahir, Asad
2015-10-06
Ex situ catalytic fast pyrolysis of biomass is a promising route for the production of fungible liquid biofuels. There is significant ongoing research on the design and development of catalysts for this process. However, there are a limited number of studies investigating process configurations and their effects on biorefinery economics. Herein we present a conceptual process design with techno-economic assessment; it includes the production of upgraded bio-oil via fixed bed ex situ catalytic fast pyrolysis followed by final hydroprocessing to hydrocarbon fuel blendstocks. This study builds upon previous work using fluidized bed systems, as detailed in a recent design report led by the National Renewable Energy Laboratory (NREL/TP-5100-62455); overall yields are assumed to be similar, and are based on enabling future feasibility. Assuming similar yields provides a basis for easy comparison and for studying the impacts of areas of focus in this study, namely, fixed bed reactor configurations and their catalyst development requirements, and the impacts of an inline hot gas filter. A comparison with the fluidized bed system shows that there is potential for higher capital costs and lower catalyst costs in the fixed bed system, leading to comparable overall costs. The key catalyst requirement is to enable the effective transformation of highly oxygenated biomass into hydrocarbon products with properties suitable for blending into current fuels. Potential catalyst materials are discussed, along with their suitability for deoxygenation, hydrogenation and C–C coupling chemistry. This chemistry is necessary during pyrolysis vapor upgrading for improved bio-oil quality, which enables efficient downstream hydroprocessing; C–C coupling helps increase the proportion of diesel/jet fuel range product. One potential benefit of fixed bed upgrading over fluidized bed upgrading is catalyst flexibility, providing greater control over chemistry and product composition. Since this
Fast Reactor Technology Preservation
Wootan, David W.; Omberg, Ronald P.
2008-01-11
There is renewed worldwide interest in developing and implementing a new generation of advanced fast reactors. International cooperative efforts, such as the Global Nuclear Energy Partnership (GNEP), are underway. Advanced computer modeling and simulation efforts are a key part of these programs. A recognized and validated set of Benchmark Cases is an essential component of such modeling efforts. Testing documentation developed during the operation of the Fast Flux Test Facility (FFTF) provides the information necessary to develop a very useful set of Benchmark Cases.
Li, Shengtai; Li, Hui
2012-06-14
We develop a 3D simulation code for interaction between the proto-planetary disk and embedded proto-planets. The protoplanetary disk is treated as a three-dimensional (3D), self-gravitating gas whose motion is described by the locally isothermal Navier-Stokes equations in spherical coordinates centered on the star. The differential equations for the disk are similar to those given in Kley et al. (2009) with a different gravitational potential that is defined in Nelson et al. (2000). The equations are solved by a directional split Godunov method for the inviscid Euler equations plus an operator-split method for the viscous source terms. We use a sub-cycling technique for the azimuthal sweep to alleviate the time step restriction. We also extend the FARGO scheme of Masset (2000), as modified in Li et al. (2001), to our 3D code to accelerate the transport in the azimuthal direction. Furthermore, we have implemented a reduced 2D (r, {theta}) and a fully 3D self-gravity solver on our uniform disk grid, which extends our 2D method (Li, Buoni, & Li 2008) to 3D. This solver uses a mode cut-off strategy and combines FFT in the azimuthal direction and direct summation in the radial and meridional directions. An initial axis-symmetric equilibrium disk is generated via iteration between the disk density profile and the 2D disk self-gravity. We do not need any softening in the disk self-gravity calculation as we have used a shifted grid method (Li et al. 2008) to calculate the potential. The motion of the planet is confined to the mid-plane and the equations are the same as given in D'Angelo et al. (2005), which we adapted to polar coordinates with a fourth-order Runge-Kutta solver. The disk gravitational force on the planet is assumed to evolve linearly with time between two hydrodynamics time steps. The planetary potential acting on the disk is calculated accurately with a small softening given by a cubic-spline form (Kley et al. 2009). Since the torque is extremely sensitive to
Van Dyke, W.J.
1992-04-07
A fast valve is disclosed that can close on the order of 7 milliseconds. It is closed by the force of a compressed air spring, with the moving parts of the valve designed to be very light weight and the valve gate wedge shaped with O-ring-sealed faces to provide sealing contact without metal-to-metal contact. The combination of the O-ring seal and an air cushion creates a soft final movement of the valve closure to prevent the fast air-acting valve from having a harsh closing. 4 figs.
ERIC Educational Resources Information Center
Essexville-Hampton Public Schools, MI.
Described are components of Project FAST (Functional Analysis Systems Training), a nationally validated project to provide more effective educational and support services to learning disordered children and their regular elementary classroom teachers. The program is seen to be based on a series of modules of delivery systems ranging from mainstream…
LIDIA, S.M.; LUND, S.M.; SEIDL, P.A.
2010-01-04
This milestone has been accomplished. The Heavy Ion Fusion Science Virtual National Laboratory has completed simulations of a fast correction scheme to compensate for chromatic and time-dependent defocusing effects in the transport of ion beams to the target plane in the NDCX-1 facility. Physics specifications for implementation in NDCX-1 and NDCX-2 have been established. Focal spot differences at the target plane between the compressed and uncompressed regions of the beam pulse have been modeled and measured on NDCX-1. Time-dependent focusing and energy sweep from the induction bunching module are seen to increase the compressed pulse spot size at the target plane by factors of two or more, with corresponding scaled reduction in the peak intensity and fluence on target. A time-varying beam envelope correction lens has been suggested to remove the time-varying aberration. An Einzel (axisymmetric electric) lens system has been analyzed and optimized for general transport lines, and as a candidate correction element for NDCX-1. Attainable high-voltage holdoff and temporal variations of the lens driving waveform are seen to effect significant changes in the beam envelope angle over the duration of interest, thus confirming the utility of such an element on NDCX-1. Modeling of the beam dynamics in NDCX-1 was performed using a time-dependent (slice) envelope code and with the 3-D, self-consistent, particle-in-cell code WARP. Proof of concept was established with the slice envelope model such that the spread in beam waist positions relative to the target plane can be minimized with a carefully designed
NASA Astrophysics Data System (ADS)
Ghosh, Sanjay; Chaudhury, Kunal N.
2016-03-01
We propose a simple and fast algorithm called PatchLift for computing distances between patches (contiguous block of samples) extracted from a given one-dimensional signal. PatchLift is based on the observation that the patch distances can be efficiently computed from a matrix that is derived from the one-dimensional signal using lifting; importantly, the number of operations required to compute the patch distances using this approach does not scale with the patch length. We next demonstrate how PatchLift can be used for patch-based denoising of images corrupted with Gaussian noise. In particular, we propose a separable formulation of the classical nonlocal means (NLM) algorithm that can be implemented using PatchLift. We demonstrate that the PatchLift-based implementation of separable NLM is a few orders faster than standard NLM and is competitive with existing fast implementations of NLM. Moreover, its denoising performance is shown to be consistently superior to that of NLM and some of its variants, both in terms of peak signal-to-noise ratio/structural similarity index and visual quality.
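The lifting idea can be sketched in a few lines: for a 1-D signal, the distance between patches starting at i and j depends only on the shift t = j - i, and is a windowed sum of the squared-difference sequence at that shift, so one cumulative sum per shift yields every patch distance without re-summing over the patch. This is an illustrative reconstruction, not the paper's implementation; the function name and interface are invented for the example.

```python
import numpy as np

def patch_distances(x, p):
    """All pairwise squared distances between the length-p patches of x.

    For a fixed shift t = j - i, d(i, i + t) = sum_k (x[i+k] - x[i+t+k])**2
    is a windowed sum of the squared-difference sequence at shift t, so a
    single cumulative sum per shift yields every patch distance; the total
    work is O(n^2), independent of the patch length p.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    m = n - p + 1                              # number of patches
    dist = np.zeros((m, m))
    for t in range(1, m):                      # shift between patch starts
        e = (x[:n - t] - x[t:]) ** 2           # squared differences at shift t
        c = np.concatenate(([0.0], np.cumsum(e)))
        w = c[p:] - c[:-p]                     # windowed sums over p samples
        idx = np.arange(len(w))
        dist[idx, idx + t] = w                 # fill both symmetric halves
        dist[idx + t, idx] = w
    return dist
```

As in PatchLift, the per-pair cost does not scale with the patch length, which is what makes patch-based filters such as separable NLM cheap to evaluate.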
NASA Technical Reports Server (NTRS)
Bishop, Matt
1988-01-01
The organization of some tools to help improve password security at a UNIX-based site is described, along with how to install and use them. These tools and their associated library enable a site to force users to pick reasonably safe passwords (with "safe" being site-configurable) and enable site management to try to crack existing passwords. The library contains various versions of a very fast implementation of the Data Encryption Standard and of the one-way encryption functions used to encrypt the password.
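The dictionary-attack half of such a toolkit can be sketched as follows. This is illustrative only: the function name and interface are invented, and SHA-256 stands in for the library's fast DES-based crypt().

```python
import hashlib

def crack(hashed_entries, wordlist):
    """Return {user: guessed_password} for every entry whose hash matches a
    dictionary word.

    Illustrative sketch: real crypt() output is salted, so the candidate
    table would have to be rebuilt per salt rather than precomputed once
    and shared across all users as it is here.
    """
    # Hash every candidate word once, then test each entry by lookup.
    table = {hashlib.sha256(w.encode()).hexdigest(): w for w in wordlist}
    found = {}
    for user, digest in hashed_entries.items():
        if digest in table:
            found[user] = table[digest]
    return found
```

The same precomputed table is what a proactive checker consults at password-change time: if the proposed password (or a simple variant) appears in it, the choice is rejected before it ever reaches the password file.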
Fast-Track Teacher Recruitment.
ERIC Educational Resources Information Center
Grant, Franklin Dean
2001-01-01
Schools need a Renaissance human-resources director to implement strategic staffing and fast-track teacher-recruitment plans. The HR director must attend to customer satisfaction, candidate supply, web-based recruitment possibilities, stabilization of newly hired staff, retention of veteran staff, utilization of retired employees, and latest…
A fast neighbor joining method.
Li, J F
2015-01-01
With the rapid development of sequencing technologies, an increasing number of sequences are available for evolutionary tree reconstruction. Although neighbor joining is regarded as the most popular and fastest evolutionary tree reconstruction method [its time complexity is O(n^3), where n is the number of sequences], it is not sufficiently fast to infer evolutionary trees containing more than a few hundred sequences. To increase the speed of neighbor joining, we herein propose FastNJ, a fast implementation of neighbor joining, which was motivated by RNJ and FastJoin, two improved versions of conventional neighbor joining. The main difference between FastNJ and conventional neighbor joining is that, in the former, many pairs of nodes selected by the rule used in RNJ are joined in each iteration. In theory, the time complexity of FastNJ can reach O(n^2) in the best cases. Experimental results show that FastNJ yields a significant increase in speed compared to RNJ and conventional neighbor joining with a minimal loss of accuracy. PMID:26345805
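The selection rule that FastNJ accelerates is the classical neighbor-joining Q-criterion; a minimal single-step sketch of that rule (function name invented for the example):

```python
import numpy as np

def nj_pair(d):
    """Return the pair (i, j) minimising the neighbor-joining criterion
    Q(i, j) = (n - 2) * d(i, j) - r(i) - r(j), where r is the row sum of
    the distance matrix d.

    FastNJ's speedup comes from joining many such pairs per iteration
    (using the RNJ rule); this sketch shows only the classical selection
    of a single pair.
    """
    d = np.asarray(d, dtype=float)
    n = d.shape[0]
    r = d.sum(axis=1)
    q = (n - 2) * d - r[:, None] - r[None, :]
    np.fill_diagonal(q, np.inf)                # never pair a node with itself
    i, j = np.unravel_index(np.argmin(q), q.shape)
    return (int(min(i, j)), int(max(i, j)))
```

Conventional NJ applies this rule, merges one pair, recomputes distances, and repeats n - 3 times, which is where the O(n^3) total cost comes from.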
FAST: FAST Analysis of Sequences Toolbox.
Lawrence, Travis J; Kauffman, Kyle T; Amrine, Katherine C H; Carper, Dana L; Lee, Raymond S; Becich, Peter J; Canales, Claudia J; Ardell, David H
2015-01-01
FAST (FAST Analysis of Sequences Toolbox) provides simple, powerful open source command-line tools to filter, transform, annotate and analyze biological sequence data. Modeled after the GNU (GNU's Not Unix) Textutils such as grep, cut, and tr, FAST tools such as fasgrep, fascut, and fastr make it easy to rapidly prototype expressive bioinformatic workflows in a compact and generic command vocabulary. Compact combinatorial encoding of data workflows with FAST commands can simplify the documentation and reproducibility of bioinformatic protocols, supporting better transparency in biological data science. Interface self-consistency and conformity with conventions of GNU, Matlab, Perl, BioPerl, R, and GenBank help make FAST easy and rewarding to learn. FAST automates numerical, taxonomic, and text-based sorting, selection and transformation of sequence records and alignment sites based on content, index ranges, descriptive tags, annotated features, and in-line calculated analytics, including composition and codon usage. Automated content- and feature-based extraction of sites and support for molecular population genetic statistics make FAST useful for molecular evolutionary analysis. FAST is portable, easy to install and secure thanks to the relative maturity of its Perl and BioPerl foundations, with stable releases posted to CPAN. Development as well as a publicly accessible Cookbook and Wiki are available on the FAST GitHub repository at https://github.com/tlawrence3/FAST. The default data exchange format in FAST is Multi-FastA (specifically, a restriction of BioPerl FastA format). Sanger and Illumina 1.8+ FastQ formatted files are also supported. FAST makes it easier for non-programmer biologists to interactively investigate and control biological data at the speed of thought. PMID:26042145
NASA Astrophysics Data System (ADS)
Steward, Richard M.
The design and implementation of the object-oriented fast simulation program Atlfast is described for the ATLAS experiment at the CERN particle physics laboratory in Switzerland. Fast simulations use parametrised energy and momentum smearing in order to recreate the detection efficiency and particle identification of a real experimental detector, without the time-consuming computation required for full detector simulation. Additionally, an object-oriented program for performing user-defined physics analyses is described. This program is released for general use by the ATLAS collaboration and is designed for use with, but not restricted to, physics output from the Atlfast fast simulation program. These programs are demonstrated in a physics study of the feasibility of discovering the Higgs boson at the ATLAS experiment, using the discovery channel h0 -> ZZ* -> bb l+l- via weak vector boson fusion in the mass range 150 GeV - 200 GeV. It is found that this channel does not significantly increase the discovery potential over this mass range, achieving the observation threshold of 3 sigma only after 3 years of high-luminosity running at the upper end of the mass range.
Fernández-Carrión, E; Ivorra, B; Martínez-López, B; Ramos, A M; Sánchez-Vizcaíno, J M
2016-04-01
Be-FAST is a computer program based on a spatio-temporal stochastic mathematical model of disease spread for studying the transmission of infectious livestock diseases within and between farms. The present work describes a new module integrated into Be-FAST to model the economic consequences of the spread of classical swine fever (CSF) and other infectious livestock diseases within and between farms. CSF is financially one of the most damaging diseases in the swine industry worldwide. Specifically in Spain, the economic costs of the two last CSF epidemics (1997 and 2001) jointly reached more than 108 million euros. The present analysis suggests that severe CSF epidemics are associated with significant economic costs, approximately 80% of which are related to animal culling. Direct costs associated with control measures are strongly associated with the number of infected farms, while indirect costs are more strongly associated with epidemic duration. The economic model has been validated with economic information from the last outbreaks in Spain. These results suggest that our economic module may be useful for analysing and predicting the economic consequences of livestock disease epidemics. PMID:26875754
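The cost decomposition the abstract describes can be illustrated with a toy linear model. Everything here is an assumption for illustration: the parameter names, the linear form, and the split into culling, direct and indirect components are not the Be-FAST economic module itself.

```python
def epidemic_cost(n_culled, cull_cost, n_infected_farms,
                  control_cost_per_farm, duration_days, indirect_cost_per_day):
    """Toy decomposition of epidemic cost into the three components the
    abstract discusses: culling dominates total cost, direct control costs
    scale with the number of infected farms, and indirect costs scale with
    epidemic duration. All names and the linear form are illustrative."""
    culling = n_culled * cull_cost
    direct = n_infected_farms * control_cost_per_farm   # scales with farms
    indirect = duration_days * indirect_cost_per_day    # scales with duration
    return {"culling": culling, "direct": direct, "indirect": indirect,
            "total": culling + direct + indirect}
```

Even this toy form reproduces the qualitative finding: shortening an epidemic mainly reduces the indirect component, while limiting farm-to-farm spread reduces both culling and direct control costs.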
Fast and practical parallel polynomial interpolation
Egecioglu, O.; Gallopoulos, E.; Koc, C.K.
1987-01-01
We present fast and practical parallel algorithms for the computation and evaluation of interpolating polynomials. The algorithms make use of fast parallel prefix techniques for the calculation of divided differences in the Newton representation of the interpolating polynomial. For n + 1 given input pairs, the proposed interpolation algorithm requires 2⌈log(n + 1)⌉ + 2 parallel arithmetic steps and circuit size O(n^2). The algorithms are numerically stable, and their floating-point implementation results in error accumulation similar to that of the widely used serial algorithms. This is in contrast to other fast serial and parallel interpolation algorithms, which are subject to much larger roundoff. We demonstrate that in a distributed-memory environment, a cube-connected system is very suitable for the algorithms' implementation, exhibiting very small communication cost. As further advantages we note that our techniques do not require equidistant points, preconditioning, or use of the Fast Fourier Transform. 21 refs., 4 figs.
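The serial divided-difference recurrence that the parallel prefix formulation builds on can be sketched as follows (illustrative serial version, not the paper's parallel algorithm; function names are invented):

```python
def newton_interp(xs, ys):
    """Coefficients of the Newton-form interpolating polynomial via the
    divided-difference table, computed in place: after pass k, c[i] holds
    the order-k divided difference f[x_{i-k}, ..., x_i]."""
    n = len(xs)
    c = list(ys)
    for k in range(1, n):
        for i in range(n - 1, k - 1, -1):      # backwards so c[i-1] is order k-1
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - k])
    return c

def newton_eval(c, xs, t):
    """Evaluate the Newton form at t with a Horner-like nested recurrence."""
    acc = c[-1]
    for i in range(len(c) - 2, -1, -1):
        acc = acc * (t - xs[i]) + c[i]
    return acc
```

Each entry of the table depends only on two neighbours from the previous order, which is the structure the paper exploits with parallel prefix operations to collapse the n serial passes into O(log n) parallel steps.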
NASA Technical Reports Server (NTRS)
Steele, G. L., Jr.
1977-01-01
MacLISP provides a compiler which produces numerical code competitive in speed with some FORTRAN implementations and yet compatible with the rest of the MacLISP system. All numerical programs can be run under the MacLISP interpreter. Additional declarations to the compiler specify type information which allows the generation of optimized numerical code which generally does not require the garbage collection of temporary numerical results. Array accesses are almost as fast as in FORTRAN, and permit the use of dynamically allocated arrays of varying dimensions. The implementation decisions regarding user interface, data representations, and interfacing conventions are discussed which allow the generation of fast numerical LISP code.
Enhanced Model for Fast Ignition
Mason, Rodney J.
2010-10-12
Laser Fusion is a prime candidate for alternate energy production, capable of serving a major portion of the nation's energy needs, once fusion fuel can be readily ignited. Fast Ignition may well speed achievement of this goal, by reducing net demands on laser pulse energy and timing precision. However, Fast Ignition has presented a major challenge to modeling. This project has enhanced the computer code ePLAS for the simulation of the many specialized phenomena, which arise with Fast Ignition. The improved code has helped researchers to understand better the consequences of laser absorption, energy transport, and laser target hydrodynamics. ePLAS uses efficient implicit methods to acquire solutions for the electromagnetic fields that govern the accelerations of electrons and ions in targets. In many cases, the code implements fluid modeling for these components. These combined features, "implicitness and fluid modeling," can greatly facilitate calculations, permitting the rapid scoping and evaluation of experiments. ePLAS can be used on PCs, Macs and Linux machines, providing researchers and students with rapid results. This project has improved the treatment of electromagnetics, hydrodynamics, and atomic physics in the code. It has simplified output graphics, and provided new input that avoids the need for source code access by users. The improved code can now aid university, business and national laboratory users in pursuit of an early path to success with Fast Ignition.
Fast foods are quick, reasonably priced, and readily available alternatives to home cooking. While convenient and economical for a busy lifestyle, fast foods are typically high in calories, fat, saturated fat, ...
... challenge to eat healthy when going to a fast food place. In general, avoiding items that are deep ...
Acid-fast stain. The acid-fast stain is a laboratory test that determines ...
Fast multi-core based multimodal registration of 2D cross-sections and 3D datasets
2010-01-01
Background Solving bioinformatics tasks often requires extensive computational power. Recent trends in processor architecture combine multiple cores into a single chip to improve overall performance. The Cell Broadband Engine (CBE), a heterogeneous multi-core processor, provides power-efficient and cost-effective high-performance computing. One application area is image analysis and visualisation, in particular registration of 2D cross-sections into 3D image datasets. Such techniques can be used to put different image modalities into spatial correspondence, for example, 2D images of histological cuts into morphological 3D frameworks. Results We evaluate the CBE-driven PlayStation 3 as a high performance, cost-effective computing platform by adapting a multimodal alignment procedure to several characteristic hardware properties. The optimisations are based on partitioning, vectorisation, branch reducing and loop unrolling techniques with special attention to 32-bit multiplies and limited local storage on the computing units. We show how a typical image analysis and visualisation problem, the multimodal registration of 2D cross-sections and 3D datasets, benefits from the multi-core based implementation of the alignment algorithm. We discuss several CBE-based optimisation methods and compare our results to standard solutions. More information and the source code are available from http://cbe.ipk-gatersleben.de. Conclusions The results demonstrate that the CBE processor in a PlayStation 3 accelerates computational intensive multimodal registration, which is of great importance in biological/medical image processing. The PlayStation 3 as a low cost CBE-based platform offers an efficient option to conventional hardware to solve computational problems in image processing and bioinformatics. PMID:20064262
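The registration kernel can be caricatured with a toy stand-in: sum-of-squared-differences in place of a true multimodal similarity measure, and a search over slice index only rather than full rotation and translation. Names and interface are invented for the example.

```python
import numpy as np

def best_slice(volume, section):
    """Return (index, score) of the volume slice most similar to the 2-D
    section under sum-of-squared-differences (SSD).

    Illustrative only: real multimodal registration needs a statistical
    similarity measure (e.g. mutual information) because intensities of
    different modalities are not directly comparable, and it must also
    optimise in-plane rotation and translation. The vectorised SSD here
    stands in for the hand-optimised SIMD kernels described for the CBE.
    """
    scores = [float(np.sum((sl - section) ** 2)) for sl in volume]
    k = int(np.argmin(scores))
    return k, scores[k]
```

The inner subtraction-square-sum is exactly the kind of regular, data-parallel loop that benefits from the partitioning, vectorisation and loop-unrolling optimisations the paper applies to the Cell's SPU cores.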
Garber, Andrea K; Lustig, Robert H
2011-09-01
Studies of food addiction have focused on highly palatable foods. While fast food falls squarely into that category, it has several other attributes that may increase its salience. This review examines whether the nutrients present in fast food, the characteristics of fast food consumers or the presentation and packaging of fast food may encourage substance dependence, as defined by the American Psychiatric Association. The majority of fast food meals are accompanied by a soda, which increases the sugar content 10-fold. Sugar addiction, including tolerance and withdrawal, has been demonstrated in rodents but not humans. Caffeine is a "model" substance of dependence; coffee drinks are driving the recent increase in fast food sales. Limited evidence suggests that the high fat and salt content of fast food may increase addictive potential. Fast food restaurants cluster in poorer neighborhoods and obese adults eat more fast food than those who are normal weight. Obesity is characterized by resistance to insulin, leptin and other hormonal signals that would normally control appetite and limit reward. Neuroimaging studies in obese subjects provide evidence of altered reward and tolerance. Once obese, many individuals meet criteria for psychological dependence. Stress and dieting may sensitize an individual to reward. Finally, fast food advertisements, restaurants and menus all provide environmental cues that may trigger addictive overeating. While the concept of fast food addiction remains to be proven, these findings support the role of fast food as a potentially addictive substance that is most likely to create dependence in vulnerable populations. PMID:21999689
Sampath, Rahul S; Sundar, Hari; Veerapaneni, Shravan
2010-01-01
We present fast adaptive parallel algorithms to compute the sum of N Gaussians at N points. Direct sequential computation of this sum would take O(N^2) time. The parallel time complexity estimates for our algorithms are O(N/n_p) for uniform point distributions and O((N/n_p) log(N/n_p) + n_p log n_p) for non-uniform distributions using n_p CPUs. We incorporate a plane-wave representation of the Gaussian kernel which permits 'diagonal translation'. We use parallel octrees and a new scheme for translating the plane-waves to efficiently handle non-uniform distributions. Computing the transform to six-digit accuracy at 120 billion points took approximately 140 seconds using 4096 cores on the Jaguar supercomputer. Our implementation is 'kernel-independent' and can handle other 'Gaussian-type' kernels even when an explicit analytic expression for the kernel is not known. These algorithms form a new class of core computational machinery for solving parabolic PDEs on massively parallel architectures.
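The baseline being accelerated is the direct discrete Gauss transform; a 1-D sketch (illustrative, with an invented interface) makes the O(N^2) cost explicit:

```python
import numpy as np

def gauss_transform(sources, targets, weights, h):
    """Direct evaluation of G(y_j) = sum_i w_i * exp(-(y_j - x_i)**2 / h**2)
    for 1-D points: every target interacts with every source, so both the
    work and the intermediate matrix are O(N^2). This is the quadratic
    baseline that plane-wave expansions plus octrees reduce to near-linear
    parallel time in the paper."""
    d2 = (targets[:, None] - sources[None, :]) ** 2   # pairwise squared distances
    return (np.exp(-d2 / h ** 2) * weights[None, :]).sum(axis=1)
```

A fast transform replaces the explicit pairwise matrix with truncated expansions whose translation cost is independent of the number of points in each interacting pair of octree boxes.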
NASA Technical Reports Server (NTRS)
Walatka, Pamela P.; Clucas, Jean; McCabe, R. Kevin; Plessel, Todd; Potter, R.; Cooper, D. M. (Technical Monitor)
1994-01-01
The Flow Analysis Software Toolkit, FAST, is a software environment for visualizing data. FAST is a collection of separate programs (modules) that run simultaneously and allow the user to examine the results of numerical and experimental simulations. The user can load data files, perform calculations on the data, visualize the results of these calculations, construct scenes of 3D graphical objects, and plot, animate and record the scenes. Computational Fluid Dynamics (CFD) visualization is the primary intended use of FAST, but FAST can also assist in the analysis of other types of data. FAST combines the capabilities of such programs as PLOT3D, RIP, SURF, and GAS into one environment with modules that share data. Sharing data between modules eliminates the drudgery of transferring data between programs. All the modules in the FAST environment have a consistent, highly interactive graphical user interface. Most commands are entered by pointing and clicking. The modular construction of FAST makes it flexible and extensible. The environment can be custom configured and new modules can be developed and added as needed. The following modules have been developed for FAST: VIEWER, FILE IO, CALCULATOR, SURFER, TOPOLOGY, PLOTTER, TITLER, TRACER, ARCGRAPH, GQ, SURFERU, SHOTET, and ISOLEVU. A utility is also included to make the inclusion of user-defined modules in the FAST environment easy. The VIEWER module is the central control for the FAST environment. From VIEWER, the user can change object attributes, interactively position objects in three-dimensional space, define and save scenes, create animations, spawn new FAST modules, add additional view windows, and save and execute command scripts. The FAST User Guide uses text and FAST MAPS (graphical representations of the entire user interface) to guide the user through the use of FAST. Chapters include: Maps, Overview, Tips, Getting Started Tutorial, a separate chapter for each module, file formats, and system
Fast Food Occupations. Coordinator's Guide. First Edition.
ERIC Educational Resources Information Center
Hohhertz, Durwin
This coordinator's guide consists of materials for use in implementing four individualized units that have been developed for students enrolled in cooperative part-time training and are employed in fast food restaurants. Addressed in the individual units are the following occupations: cashier (DOT No. 211.462-010), counter attendant (DOT No.…
Demiris, George
2014-01-01
This article provides a general introduction to implementation science—the discipline that studies the implementation process of research evidence—in the context of hospice and palliative care. By discussing how implementation science principles and frameworks can inform the design and implementation of intervention research, we aim to highlight how this approach can maximize the likelihood for translation and long-term adoption in clinical practice settings. We present 2 ongoing clinical trials in hospice that incorporate considerations for translation in their design and implementation as case studies for the implications of implementation science. This domain helps us better understand why established programs may lose their effectiveness over time or when transferred to other settings, why well-tested programs may exhibit unintended effects when introduced in new settings, or how an intervention can maximize cost-effectiveness with strategies for effective adoption. All these challenges are of significance to hospice and palliative care, where we seek to provide effective and efficient tools to improve care services. The emergence of this discipline calls for researchers and practitioners to carefully examine how to refine current and design new and innovative strategies to improve quality of care. PMID:23558847
Gavrankapetanović, F
1997-01-01
Fasting (Arabic: sawm) was proclaimed through Islam as an obligation to the Holy Prophet Muhammad (peace be upon him) in the second year after the Hijra (624 CE). There is a month of fasting, Ramadan, in each lunar (Hijri) year; this year marked the 1415th fast. Earlier Prophets brought obligatory messages on fasting to their peoples, so certain forms of fasting also exist in other religions, i.e. among Catholics, Jews, and the Orthodox. These forms of fasting differ from Muslim fasting, but they are likewise obligatory. All revelations have presented fasting as obligatory. From a medical point of view, fasting has two basic components: psychical and physical. The psychical sphere correlates closely with its fundamental ideological message. Allah says in the Quran: "... Fasting is obligatory for you, as it was obligatory for your predecessors, so that you may avoid sins; during very few days (II, 183 & 184)." Strength of will, control of the passions, effort and self-discipline make a pure, faithful person, who purifies mind and body through fasting. Thinking about The Creator becomes more intensive, character more solid, and spirit and will grow stronger. We will mention the hadith saying: "Essaihune humus saimun!", that is: "Travellers on the Earth are the fasters (of my ummah)." The commentary on this hadith in the Collection of 1001 hadiths (Bin bir hadis), number 485, says: "There are no travelling dervishes or monks in Islam; thus there is no such kind of religiosity in Islam. Instead, it is replaced by fasting and constant attendance at the mosque. That was proclaimed as an obligation, although there were a few cases of travelling in the name of religiosity, like travelling dervishes and sheikhs." In this paper, the author discusses medical aspects of fasting and its positive characteristics with respect to a healthy lifestyle and the prevention of many illnesses. The author mentions the positive influence of fasting on certain systems and organs of the human
Integrative Physiology of Fasting.
Secor, Stephen M; Carey, Hannah V
2016-04-01
Extended bouts of fasting are ingrained in the ecology of many organisms, characterizing aspects of reproduction, development, hibernation, estivation, migration, and infrequent feeding habits. The challenge of long fasting episodes is the need to maintain physiological homeostasis while relying solely on endogenous resources. To meet that challenge, animals utilize an integrated repertoire of behavioral, physiological, and biochemical responses that reduce metabolic rates, maintain tissue structure and function, and thus enhance survival. We have synthesized in this review the integrative physiological, morphological, and biochemical responses, and their stages, that characterize natural fasting bouts. Underlying the capacity to survive extended fasts are behaviors and mechanisms that reduce metabolic expenditure and shift the dependency to lipid utilization. Hormonal regulation and immune capacity are altered by fasting; hormones that trigger digestion, elevate metabolism, and support immune performance become depressed, whereas hormones that enhance the utilization of endogenous substrates are elevated. The negative energy budget that accompanies fasting leads to the loss of body mass as fat stores are depleted and tissues undergo atrophy (i.e., loss of mass). Absolute rates of body mass loss scale allometrically among vertebrates. Tissues and organs vary in the degree of atrophy and downregulation of function, depending on the degree to which they are used during the fast. Fasting affects the population dynamics and activities of the gut microbiota, an interplay that impacts the host's fasting biology. Fasting-induced gene expression programs underlie the broad spectrum of integrated physiological mechanisms responsible for an animal's ability to survive long episodes of natural fasting. PMID:27065168
Nanavati, Aditya J; Nagral, Sanjay; Prabhakar, Subramaniam
2014-01-01
Fast-track surgery or 'enhanced recovery after surgery' or 'multimodal rehabilitation after surgery' is a form of protocol-based perioperative care programme. It is an amalgamation of evidence-based practices that have been proven to improve patient outcome independently and exert a synergistic effect when applied together. The philosophy is to treat the patient's pathology with minimal disturbance to the physiology. Several surgical subspecialties have now adopted such protocols with good results. The role of fast-track surgery in colorectal procedures has been well demonstrated. Its application to other major abdominal surgical procedures is not as well defined but there are encouraging results in the few studies conducted. There has been resistance to several aspects of this programme among gastrointestinal and general surgeons. There is little data from India in the available literature on the application of fast-tracking in gastrointestinal surgery. In a country such as India the existing healthcare structure stands to gain the most by widespread adoption of fast-track methods. Early discharge, early ambulation, earlier return to work and increased hospital efficiency are some of the benefits. The cost gains derived from this programme stand to benefit the patient, doctor and government as well. The practice and implementation of fast-track surgery involves a multidisciplinary team approach. It requires policy formation at an institutional level and interdepartmental coordination. More research is required in areas like implementation of such protocols across India to derive the maximum benefit from them. PMID:25471759
Gelman, Hannah; Gruebele, Martin
2014-01-01
Fast folding proteins have been a major focus of computational and experimental study because they are accessible to both techniques: they are small and fast enough to be reasonably simulated with current computational power, but have dynamics slow enough to be observed with specially developed experimental techniques. This coupled study of fast folding proteins has provided insight into the mechanisms which allow some proteins to find their native conformation well less than 1 ms and has uncovered examples of theoretically predicted phenomena such as downhill folding. The study of fast folders also informs our understanding of even “slow” folding processes: fast folders are small, relatively simple protein domains and the principles that govern their folding also govern the folding of more complex systems. This review summarizes the major theoretical and experimental techniques used to study fast folding proteins and provides an overview of the major findings of fast folding research. Finally, we examine the themes that have emerged from studying fast folders and briefly summarize their application to protein folding in general as well as some work that is left to do. PMID:24641816
Trueland, Jennifer
2013-12-18
The 5:2 diet involves two days of fasting each week. It is being promoted as the key to sustained weight loss, as well as wider health benefits, despite the lack of evidence on the long-term effects. Nurses need to support patients who wish to try intermittent fasting. PMID:24345130
NASA Astrophysics Data System (ADS)
Zhang, Haiyan; Nan, Rendong; Gan, Hengqian; Yue, Youling; Wu, Mingchang; Zhang, Zhiwei; Jin, Chengjin; Peng, Bo
2015-08-01
Five-hundred-meter Aperture Spherical radio Telescope (FAST) is a Chinese mega-science project to build the largest single dish radio telescope in the world. The construction was officially commenced in March 2011. The first light of FAST is expected in 2016. Due to the high sensitivity of FAST, Radio Frequency Interference (RFI) mitigation for the telescope is required to assure the realization of the scientific goals. In order to protect the radio environment of FAST site, the local government has established a radio quiet zone with 30 km radius. Moreover, Electromagnetic Compatibility (EMC) designs and measurements for FAST have also been carried out, and some examples, such as EMC designs for actuator and focus cabin, have been introduced briefly.
The Clean Air Act requires EPA to set National Ambient Air Quality Standards (NAAQS) for six criteria pollutants (lead, carbon monoxide, sulfur dioxide, nitrogen oxides, ozone, and particulate matter). After setting NAAQS, there are several activities required to implement the st...
NASA Technical Reports Server (NTRS)
Murphy, Gerald
2003-01-01
The point of the implementation review is to prevent problems from occurring later by trying to get our arms around the planning from the start. Implementation reviews set the tone for management of the project. They establish a teaming relationship (if they are run properly), and they level the playing field instead of setting up turf wars.
Implementation of conditional simulation by successive residuals
NASA Astrophysics Data System (ADS)
Jewbali, Arja; Dimitrakopoulos, Roussos
2011-02-01
Conditional simulation of ergodic and stationary Gaussian random fields using successive residuals is a new approach used to overcome the size limitations of the LU decomposition algorithm as well as provide fast updating of existing simulated realizations with new data. This paper discusses two different implementations of this approach. The implementations differ in the use of the new information available; in the first implementation new information is partially used to generate updated realizations; however, in the second implementation, the realizations are updated using all the new information available. The implementations are validated using the Walker Lake data set, and compared through a case study at a stockwork gold deposit.
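The size limitation mentioned above comes from factorising the full covariance matrix. A minimal sketch of LU/Cholesky-type simulation, assuming an exponential covariance and a small 2-D grid (both illustrative choices, not the paper's setup):

```python
import numpy as np

def simulate_gaussian_field(coords, variogram_range=10.0, n_real=1, seed=0):
    """Unconditional simulation of a stationary Gaussian random field by
    Cholesky (LU-type) factorisation of the covariance matrix. The O(n^3)
    factorisation cost is the size limitation that successive residuals
    are designed to overcome."""
    rng = np.random.default_rng(seed)
    # Exponential covariance between all pairs of points.
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    C = np.exp(-3.0 * d / variogram_range)
    L = np.linalg.cholesky(C + 1e-10 * np.eye(len(coords)))
    # Each realization is L @ z with z ~ N(0, I), so cov(L z) = L L^T = C.
    return L @ rng.standard_normal((len(coords), n_real))

grid = np.array([[x, y] for x in range(8) for y in range(8)], dtype=float)
real = simulate_gaussian_field(grid, n_real=2)
print(real.shape)  # (64, 2)
```

Conditioning on data and updating realizations with new information add further steps on top of this kernel; the successive-residuals approach restructures them to avoid refactorising.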
Van Devender, J.P.; Emin, D.
1983-12-21
A reusable fast opening switch for transferring energy, in the form of a high power pulse, from an electromagnetic storage device such as an inductor into a load. The switch is efficient, compact, fast, and reusable. It comprises a ferromagnetic semiconductor which undergoes a fast transition between conducting and insulating states at a critical temperature, without a phase change in its crystal structure. A semiconductor such as europium-rich europium oxide, which undergoes a conductor-to-insulator transition when it is joule heated from its conducting state, can be used to form the switch.
Till, C.E.; Chang, Y.I.; Kittel, J.H.; Fauske, H.K.; Lineberry, M.J.; Stevenson, M.G.; Amundson, P.I.; Dance, K.D.
1980-07-01
This report is a compilation of Fast Breeder Reactor (FBR) resource documents prepared to provide the technical basis for the US contribution to the International Nuclear Fuel Cycle Evaluation. The eight separate parts deal with the alternative fast breeder reactor fuel cycles in terms of energy demand, resource base, technical potential and current status, safety, proliferation resistance, deployment, and nuclear safeguards. An Annex compares the cost of decommissioning light-water and fast breeder reactors. Separate abstracts are included for each of the parts.
Fasting and cognitive function.
Pollitt, E; Lewis, N L; Garza, C; Shulman, R J
The effects of short-term fasting (skipping breakfast) on the problem-solving performance of 9 to 11 yr old children were studied under the controlled conditions of a metabolic ward. The behavioral test battery included an assessment of IQ, the Matching Familiar Figure Test and Hagen Central Incidental Test. Glucose and insulin levels were measured in blood. All assessments were made under fasting and non-fasting conditions. Skipping breakfast was found to have adverse effects on the children's late morning problem-solving performance. These findings support observations that the timing and nutrient composition of meals have acute and demonstrable effects on behavior. PMID:6764933
J.A. Schmidt
2002-02-20
If a fusion DEMO reactor can be brought into operation during the first half of this century, fusion power production can have a significant impact on carbon dioxide production during the latter half of the century. An assessment of fusion implementation scenarios shows that the resource demands and waste production associated with these scenarios are manageable factors. If fusion is implemented during the latter half of this century it will be one element of a portfolio of (hopefully) carbon dioxide limiting sources of electrical power. It is time to assess the regional implications of fusion power implementation. An important attribute of fusion power is the wide range of possible regions of the country, or countries in the world, where power plants can be located. Unlike most renewable energy options, fusion energy will function within a local distribution system and not require costly, and difficult, long distance transmission systems. For example, the East Coast of the United States is a prime candidate for fusion power deployment by virtue of its distance from renewable energy sources. As fossil fuels become less and less available as an energy option, the transmission of energy across bodies of water will become very expensive. On a global scale, fusion power will be particularly attractive for regions separated from sources of renewable energy by oceans.
Education For All (EFA) - Fast Track Initiative Progress Report 30046
ERIC Educational Resources Information Center
World Bank Education Advisory Service, 2004
2004-01-01
Launched in June 2002, the Education For All-Fast Track Initiative (FTI) is a performance-based program focusing on the implementation of sustainable policies in support of universal primary completion (UPC) and the required resource mobilization. During its twenty months of implementation, FTI has delivered on results, which give reason for…
The acid-fast stain is a laboratory test that determines if a sample of tissue, blood, or other body ... dye. The slide is then washed with an acid solution and a different stain is applied. Bacteria ...
FAST (Faceted Application of Subject Terminology) Users: Summary and Case Studies
ERIC Educational Resources Information Center
Mixter, Jeffrey; Childress, Eric R.
2013-01-01
Over the past ten years, various organizations, both public and private, have expressed interest in implementing the Faceted Application of Subject Terminology (FAST) in their cataloging workflows. As interest in FAST has grown, so too has interest in knowing how FAST is being used and by whom. Since 2002 eighteen institutions in six countries…
NASA Astrophysics Data System (ADS)
Wilkinson, P.
2016-02-01
FAST offers "transformational" performance well-suited to finding new phenomena - one of which might be polarised spectral transients. But discoveries will only be made if "the system" provides its users with the necessary opportunities. In addition to designing in as much observational flexibility as possible, FAST should be operated with a philosophy which maximises its "human bandwidth". This band includes the astronomers of tomorrow - many of whom have not yet started school or even been born.
A Fast Implementation of the ISODATA Clustering Algorithm
NASA Technical Reports Server (NTRS)
Memarsadeghi, Nargess; Mount, David M.; Netanyahu, Nathan S.; LeMoigne, Jacqueline
2005-01-01
Clustering is central to many image processing and remote sensing applications. ISODATA is one of the most popular and widely used clustering methods in geoscience applications, but it can run slowly, particularly with large data sets. We present a more efficient approach to ISODATA clustering, which achieves better running times by storing the points in a kd-tree and through a modification of the way in which the algorithm estimates the dispersion of each cluster. We also present an approximate version of the algorithm which allows the user to further improve the running time, at the expense of lower fidelity in computing the nearest cluster center to each point. We provide both theoretical and empirical justification that our modified approach produces clusterings that are very similar to those produced by the standard ISODATA approach. We also provide empirical studies on both synthetic data and remotely sensed Landsat and MODIS images that show that our approach has significantly lower running times.
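The two per-iteration quantities the modified algorithm computes, nearest cluster centers and cluster dispersions, can be sketched in brute-force form; the paper's speedup comes from answering the nearest-center queries with a kd-tree instead, and the points and centers below are invented for illustration:

```python
import numpy as np

def assign_and_disperse(points, centers):
    """One ISODATA-style step (sketch): assign each point to its nearest
    cluster center, then estimate each cluster's dispersion as the mean
    distance of its members to the center. Brute-force distances here;
    a kd-tree accelerates the nearest-center queries in practice."""
    # Pairwise distances: points (n, d) against centers (k, d).
    dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=-1)
    labels = dists.argmin(axis=1)
    dispersion = np.array([
        dists[labels == j, j].mean() if np.any(labels == j) else 0.0
        for j in range(len(centers))
    ])
    return labels, dispersion

pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.2]])
ctr = np.array([[0.0, 0.0], [5.0, 5.0]])
labels, disp = assign_and_disperse(pts, ctr)
print(labels)  # [0 0 1 1]
```

ISODATA's split and merge decisions are then driven by these dispersion estimates against user thresholds.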
Fast Implementation of Matched Filter Based Automatic Alignment Image Processing
Awwal, A S; Rice, K; Taha, T
2008-04-02
Video images of laser beams imprinted with distinguishable features are used for alignment of 192 laser beams at the National Ignition Facility (NIF). Algorithms designed to determine the position of these beams enable the control system to perform the task of alignment. Centroiding is a common approach used for determining the position of beams. However, real world beam images suffer from intensity fluctuation or other distortions which make such an approach susceptible to higher position measurement variability. Matched filtering used for identifying the beam position results in greater stability of position measurement compared to that obtained using the centroiding technique. However, this gain is achieved at the expense of extra processing time required for each beam image. In this work we explore the possibility of using a field programmable gate array (FPGA) to speed up these computations. The results indicate a performance improvement of a factor of 20 using the FPGA relative to a 3 GHz Pentium 4 processor.
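A generic FFT-based matched filter for position estimation, in the spirit described above; the template, image size, and noise level are invented for illustration and this is not NIF's actual processing chain:

```python
import numpy as np

def matched_filter_position(image, template):
    """Locate a template in a noisy image via FFT-based cross-correlation
    (matched filtering). Returns the (row, col) of the template's
    top-left corner at the best match."""
    F_img = np.fft.fft2(image)
    F_tpl = np.fft.fft2(template, s=image.shape)  # zero-padded to image size
    # Cross-correlation in the frequency domain: F(img) * conj(F(tpl)).
    corr = np.fft.ifft2(F_img * np.conj(F_tpl)).real
    return np.unravel_index(np.argmax(corr), corr.shape)

rng = np.random.default_rng(1)
tpl = np.zeros((5, 5))
tpl[1:4, 1:4] = 0.5
tpl[2, 2] = 1.5
img = 0.05 * rng.standard_normal((64, 64))
img[20:25, 33:38] += tpl            # embed the feature at (20, 33)
r, c = matched_filter_position(img, tpl)
print(int(r), int(c))  # 20 33
```

Unlike centroiding, the correlation peak integrates over the whole template, which is why the estimate is more stable under intensity fluctuations.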
Towards a fast implementation of spectral nested dissection
NASA Technical Reports Server (NTRS)
Pothen, Alex; Simon, Horst D.; Wang, Lie; Barnard, Stephen T.
1992-01-01
We describe the spectral nested dissection (SND) algorithm, a new algorithm for computing orderings appropriate for parallel factorization of sparse, symmetric matrices. The algorithm makes use of spectral properties of the Laplacian matrix associated with the given matrix to compute separators. We evaluate the quality of the spectral orderings with respect to several measures: fill, elimination tree height, height and weight balances of elimination trees, and clique tree heights. We use some very large structural analysis problems as test cases and demonstrate on these real applications (such as the Space Shuttle Solid Rocket Booster) that spectral orderings compare quite favorably with commonly used orderings, outperforming them by a wide margin for some of these measures. The only disadvantage of SND is its relatively long execution time. We will present some recent efforts to improve the execution time using both a multilevel and a hybrid approach. We use SND in computing a multifrontal numerical factorization with the different orderings on an eight processor Cray Y-MP and show its effectiveness. We believe that spectral nested dissection is a major breakthrough in terms of generating efficient sparse orderings for parallel machines.
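The separator-finding kernel of SND, computing the Fiedler vector of the graph Laplacian, can be sketched with a dense eigensolver; SND itself uses iterative solvers on large sparse matrices, and the six-node graph below is a toy example:

```python
import numpy as np

def spectral_bisect(adjacency):
    """Split a graph in two using the Fiedler vector (eigenvector of the
    second-smallest Laplacian eigenvalue). A median split gives a
    balanced two-way partition, the building block of spectral nested
    dissection."""
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A          # graph Laplacian
    _, vecs = np.linalg.eigh(L)             # eigenvalues in ascending order
    fiedler = vecs[:, 1]                    # second-smallest eigenvalue
    return fiedler >= np.median(fiedler)

# Two triangles joined by a single edge: the split should separate them.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
part = spectral_bisect(A)
print(part[:3], part[3:])
```

Nested dissection applies this recursively: the vertices straddling the cut form the separator, and the two halves are ordered recursively before it.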
Fast data parallel polygon rendering
Ortega, F.A.; Hansen, C.D.
1993-09-01
This paper describes a parallel method for polygonal rendering on a massively parallel SIMD machine. This method, based on a simple shading model, is targeted for applications which require very fast polygon rendering for extremely large sets of polygons, such as is found in many scientific visualization applications. The algorithms described in this paper are incorporated into a library of 3D graphics routines written for the Connection Machine. The routines are implemented on both the CM-200 and the CM-5. This library enables scientists to display 3D shaded polygons directly from a parallel machine without the need to transmit huge amounts of data to a post-processing rendering system.
Fast track management and control
Cameron, M.D.O.
1996-12-31
This paper, one of a group of papers describing the development of BP's West of Shetland Foinaven field, sets out the challenges experienced in managing a fast-track project from system design through to offshore installation. ABB Seatec Limited (formerly GEC Marconi Oil and Gas) were commissioned to provide a Multiplexed Electro-Hydraulic Subsea Control System designed for deepwater and for installation/retrieval in a hostile environment. The paper addresses the project's critical phases, the project controls implemented, and the practical working methods used within a Subsea Alliance and those involved in Client Interaction, Concurrent Engineering, Team Coaching, Internal Procedures and Interface Management in order to meet the exacting schedule for First Oil deliveries. The project is currently proceeding on routine production deliveries to complete the field development requirements.
On fast reactor kinetics studies
Seleznev, E. F.; Belov, A. A.; Matveenko, I. P.; Zhukov, A. M.; Raskach, K. F.
2012-07-01
The results and the program of fast reactor core time and space kinetics experiments performed and planned at the IPPE critical facility are presented. The TIMER code was taken as computation support for the experimental work; it solves the transient equations in 3-D geometry with a multi-group diffusion approximation. The number of delayed neutron groups varies from 6 to 8. The code solves both transient neutron transfer problems: the direct one, where the neutron flux density and derived quantities such as reactor power are determined at each time step, and the inverse one, in point kinetics form, where a parameter such as reactivity is determined from a known reactor power time variation. (authors)
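The direct problem reduces, in its simplest form, to the point kinetics equations. A one-delayed-group forward-Euler sketch; the cited codes use 6 to 8 groups and 3-D multi-group diffusion, and the parameter values here are generic, not the IPPE facility's:

```python
def point_kinetics(rho, beta=0.0065, lam=0.08, Lambda=1e-4,
                   t_end=0.1, dt=1e-5):
    """Direct point-kinetics problem with a single delayed-neutron group:
        dn/dt = (rho - beta)/Lambda * n + lam * C
        dC/dt = beta/Lambda * n - lam * C
    integrated by forward Euler from an initially critical state.
    Returns the relative power n(t_end)."""
    n = 1.0
    C = beta / (lam * Lambda)      # precursor level at equilibrium
    for _ in range(int(t_end / dt)):
        dn = ((rho - beta) / Lambda * n + lam * C) * dt
        dC = (beta / Lambda * n - lam * C) * dt
        n, C = n + dn, C + dC
    return n

# A positive reactivity step below prompt critical gives a prompt jump
# followed by a slow power rise.
print(point_kinetics(rho=0.001) > 1.0)  # True
```

The inverse problem runs the same equations the other way: given a measured power history n(t), it extracts the reactivity rho(t) that produced it.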
FAST MOLECULAR SOLVATION ENERGETICS AND FORCE COMPUTATION.
Bajaj, Chandrajit; Zhao, Wenqi
2010-01-20
The total free energy of a molecule includes the classical molecular mechanical energy (which is understood as the free energy in vacuum) and the solvation energy which is caused by the change of the environment of the molecule (solute) from vacuum to solvent. The solvation energy is important to the study of the inter-molecular interactions. In this paper we develop a fast surface-based generalized Born method to compute the electrostatic solvation energy along with the energy derivatives for the solvation forces. The most time-consuming computation is the evaluation of the surface integrals over an algebraic spline molecular surface (ASMS) and the fast computation is achieved by the use of the nonequispaced fast Fourier transform (NFFT) algorithm. The main results of this paper involve (a) an efficient sampling of quadrature points over the molecular surface by using nonlinear patches, (b) fast linear time estimation of energy and inter-molecular forces, (c) error analysis, and (d) efficient implementation combining fast pairwise summation and the continuum integration using nonlinear patches. PMID:20200598
Wilson, R.E.; Freeman, L.N.; Walker, S.N.
1995-09-01
The FAST2 Code which is capable of determining structural loads of a flexible, teetering, horizontal axis wind turbine is described and comparisons of calculated loads with test data at two wind speeds for the ESI-80 are given. The FAST2 Code models a two-bladed HAWT with degrees of freedom for blade flap, teeter, drive train flexibility, yaw, and windwise and crosswind tower motion. The code allows blade dimensions, stiffness, and weights to differ and models tower shadow, wind shear, and turbulence. Additionally, dynamic stall is included as are delta-3 and an underslung rotor. Load comparisons are made with ESI-80 test data in the form of power spectral density, rainflow counting, occurrence histograms and azimuth averaged bin plots. It is concluded that agreement between the FAST2 Code and test results is good.
fastSIM: a practical implementation of fast structured illumination microscopy
NASA Astrophysics Data System (ADS)
Lu-Walther, Hui-Wen; Kielhorn, Martin; Förster, Ronny; Jost, Aurélie; Wicker, Kai; Heintzmann, Rainer
2015-03-01
A significant improvement in acquisition speed of structured illumination microscopy (SIM) opens a new field of applications to this already well-established super-resolution method towards 3D scanning real-time imaging of living cells. We demonstrate a method of increased acquisition speed on a two-beam SIM fluorescence microscope with a lateral resolution of ~100 nm at a maximum raw data acquisition rate of 162 frames per second (fps) with a region of interest of 16.5 × 16.5 µm2, free of mechanically moving components. We use a programmable spatial light modulator (ferroelectric LCOS) which promises precise and rapid control of the excitation pattern in the sample plane. A passive Fourier filter and a segmented azimuthally patterned polarizer are used to perform structured illumination with maximum contrast. Furthermore, the free running mode in a modern sCMOS camera helps to achieve faster data acquisition.
Factored-matrix representation of distributed fast transforms. Master's thesis
Bainbridge, R.L.
1987-03-01
Parallel implementations of Fast Fourier Transforms (FFTs) and other fast transforms are represented using factored, partitioned matrices. The factored matrix description of a distributed FFT is introduced using a decimation in time (DIT) FFT algorithm suitable for implementation on a distributed-signal processor. The heart of the matrix representation of distributed fast transforms is the use of permutations of an NxN identity matrix to describe the required interprocessor data transfers on the Butterfly Network. The properties of these transfer matrices and the resulting output ordering are discussed in detail. The factored matrix representation is then used to show that the Fast Hartley Transform (FHT) and the Walsh Hadamard Transform (WHT) are supported by the Butterfly Network.
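The permutation-matrix viewpoint can be illustrated for N = 4, where the radix-2 DIT factorization writes the DFT matrix as two butterfly stages times a bit-reversal permutation of the identity. This is a minimal sketch of the representation, not the thesis's Butterfly Network transfer matrices:

```python
import numpy as np

def bit_reversal_permutation(n):
    """Permutation of an n x n identity matrix (n a power of two) that
    reorders indices by reversed bit patterns, the kind of permutation
    matrix used to model data reordering in distributed FFTs."""
    bits = n.bit_length() - 1
    perm = [int(format(i, f'0{bits}b')[::-1], 2) for i in range(n)]
    return np.eye(n)[perm]

# Radix-2 DIT factorization for N = 4: F4 = B1 @ B2 @ P, where the B's
# are butterfly stages and P is the bit-reversal permutation.
N = 4
P = bit_reversal_permutation(N)
w = np.exp(-2j * np.pi / N)
I2 = np.eye(2)
F2 = np.array([[1, 1], [1, -1]], dtype=complex)
D = np.diag([1, w])                          # twiddle factors
B2 = np.kron(np.eye(2), F2)                  # stage 1: two F2 blocks
B1 = np.block([[I2, D], [I2, -D]])           # stage 2: combine halves
F4 = np.exp(-2j * np.pi * np.outer(range(N), range(N)) / N)
print(np.allclose(B1 @ B2 @ P, F4))  # True
```

In a distributed setting, each butterfly stage maps to local computation and each permutation factor to an interprocessor transfer, which is what makes the factored form a useful description.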
Till, C.E.; Chang, Y.I.
1986-01-01
The Integral Fast Reactor (IFR) is an innovative LMR concept, being developed at Argonne National Laboratory, that fully exploits the inherent properties of liquid metal cooling and metallic fuel to achieve breakthroughs in economics and inherent safety. This paper describes key features and potential advantages of the IFR concept, technology development status, fuel cycle economics potential, and future development path.
Till, C.E.; Chang, Y.I.; Lineberry, M.J.
1990-01-01
Argonne National Laboratory, since 1984, has been developing the Integral Fast Reactor (IFR). This paper will describe the way in which this new reactor concept came about; the technical, public acceptance, and environmental issues that are addressed by the IFR; the technical progress that has been made; and our expectations for this program in the near term. 5 refs., 3 figs.
FAST - FREEDOM ASSEMBLY SEQUENCING TOOL PROTOTYPE
NASA Technical Reports Server (NTRS)
Borden, C. S.
1994-01-01
-Language and has been implemented on DEC VAX series computers running VMS. The program is distributed in executable form. The source code is also provided, but it cannot be compiled without the Tree Manipulation Based Routines (TMBR) package from the Jet Propulsion Laboratory, which is not currently available from COSMIC. The main memory requirement is based on the data used to drive the FAST program. All applications should easily run on an installation with 10Mb of main memory. FAST was developed in 1990 and is a copyrighted work with all copyright vested in NASA. DEC, VAX and VMS are trademarks of Digital Equipment Corporation.
Wu, Kesheng
2007-08-02
An index in a database system is a data structure that utilizes redundant information about the base data to speed up common searching and retrieval operations. Most commonly used indexes are variants of B-trees, such as B+-tree and B*-tree. FastBit implements a set of alternative indexes called compressed bitmap indexes. Compared with B-tree variants, these indexes provide very efficient searching and retrieval operations by sacrificing the efficiency of updating the indexes after the modification of an individual record. In addition to the well-known strengths of bitmap indexes, FastBit has a special strength stemming from the bitmap compression scheme used. The compression method is called the Word-Aligned Hybrid (WAH) code. It reduces the bitmap indexes to reasonable sizes and at the same time allows very efficient bitwise logical operations directly on the compressed bitmaps. Compared with the well-known compression methods such as LZ77 and Byte-aligned Bitmap code (BBC), WAH sacrifices some space efficiency for a significant improvement in operational efficiency. Since the bitwise logical operations are the most important operations needed to answer queries, using WAH compression has been shown to answer queries significantly faster than using other compression schemes. Theoretical analyses showed that WAH compressed bitmap indexes are optimal for one-dimensional range queries. Only the most efficient indexing schemes such as B+-tree and B*-tree have this optimality property. However, bitmap indexes are superior because they can efficiently answer multi-dimensional range queries by combining the answers to one-dimensional queries.
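The way bitmap indexes answer multi-dimensional queries with bitwise logic can be sketched without compression; Python integers stand in for the bit vectors, and the columns are invented examples (FastBit's WAH scheme performs the same AND directly on compressed words):

```python
def build_bitmap_index(values):
    """Uncompressed bitmap index over one column (sketch): one bitmap per
    distinct value, with bit i set when row i holds that value. Python
    arbitrary-size ints stand in for the bit vectors."""
    index = {}
    for row, v in enumerate(values):
        index[v] = index.get(v, 0) | (1 << row)
    return index

city = build_bitmap_index(["NY", "SF", "NY", "LA", "SF", "NY"])
year = build_bitmap_index([2020, 2021, 2021, 2020, 2020, 2021])

# The multi-dimensional query "city == NY AND year == 2021" is a single
# bitwise AND of the two per-column bitmaps.
hits = city["NY"] & year[2021]
rows = [i for i in range(6) if hits >> i & 1]
print(rows)  # [2, 5]
```

WAH's contribution is keeping these bitmaps small while still allowing the AND/OR to run word by word on the compressed representation.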
Fast controller for a unity-power-factor PWM rectifier
Eissa, M.O.; Leeb, S.B.; Verghese, G.C.; Stankovic, A.M.
1996-01-01
This paper presents an analog implementation of a fast controller for a unity-power-factor (UPF) PWM rectifier. The best settling times of many popular controllers for this type of converter are on the order of a few line cycles, corresponding to bandwidths under 20 Hz. The fast controller demonstrated in this paper can exercise control action at a rate comparable to the switching frequency rather than the line frequency. In order to accomplish this while maintaining unity power factor during steady-state operation, the fast controller employs a ripple-feedback cancellation scheme.
Efficient Kriging via Fast Matrix-Vector Products
NASA Technical Reports Server (NTRS)
Memarsadeghi, Nargess; Raykar, Vikas C.; Duraiswami, Ramani; Mount, David M.
2008-01-01
Interpolating scattered data points is a problem of wide ranging interest. Ordinary kriging is an optimal scattered data estimator, widely used in geosciences and remote sensing. A generalized version of this technique, called cokriging, can be used for image fusion of remotely sensed data. However, it is computationally very expensive for large data sets. We demonstrate the time efficiency and accuracy of approximating ordinary kriging through the use of fast matrix-vector products combined with iterative methods. We used methods based on the fast multipole method and nearest neighbor searching techniques for implementations of the fast matrix-vector products.
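The structure being exploited, an iterative solver that touches the covariance matrix only through matrix-vector products, can be sketched as follows. The Gaussian covariance, zero mean, and simple-kriging form are illustrative assumptions rather than the paper's exact formulation:

```python
import numpy as np

def conjugate_gradient(matvec, b, tol=1e-10, max_iter=500):
    """Solve A x = b for SPD A, touching A only through matvec(v) = A @ v.
    This call is the hook where a fast matrix-vector product (e.g. fast
    multipole-style summation) replaces the dense O(n^2) product."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def kriging_predict(X, y, Xq, length=1.0):
    """Simple-kriging sketch: solve K w = y iteratively, then predict
    with the cross-covariance. Zero mean and Gaussian covariance are
    illustrative choices."""
    def cov(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * length ** 2))
    K = cov(X, X) + 1e-8 * np.eye(len(X))   # tiny nugget for stability
    w = conjugate_gradient(lambda v: K @ v, y)
    return cov(Xq, X) @ w

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.sin(X[:, 0])
print(float(kriging_predict(X, y, np.array([[1.5]]))[0]))
```

With n data points, each iteration costs one matrix-vector product, so replacing the dense product with a fast summation drops the per-iteration cost from O(n^2) toward O(n log n).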
StringFast: Fast Code to Compute CMB Power Spectra induced by Cosmic Strings
NASA Astrophysics Data System (ADS)
Foreman, Simon; Moss, Adam; Scott, Douglas
2011-06-01
StringFast implements a method for efficient computation of the C_l spectra induced by a network of strings, which is fast enough to be used in Markov Chain Monte Carlo analyses of future data. This code allows the user to calculate TT, EE, and BB power spectra (scalar [for TT and EE], vector, and tensor modes) for "wiggly" cosmic strings. StringFast uses the output of the public code CMBACT. The properties of the strings are described by four parameters: Gμ, the dimensionless string tension; v, the rms transverse velocity (as a fraction of c); α, the "wiggliness"; and ξ, the comoving correlation length of the string network. It is written as a Fortran 90 module.
NASA Technical Reports Server (NTRS)
Barnard, Stephen T.; Simon, Horst; Lasinski, T. A. (Technical Monitor)
1994-01-01
The design of a parallel implementation of multilevel recursive spectral bisection is described. The goal is to implement a code that is fast enough to enable dynamic repartitioning of adaptive meshes.
"Fast" Capitalism and "Fast" Schools: New Realities and New Truths.
ERIC Educational Resources Information Center
Robertson, Susan L.
This paper locates the phenomenon of self-managing schools within the framework of "fast capitalism" and identifies themes of organization central to fast capitalism, which are argued to also underpin the self-managing schools. "Fast capitalism" refers to the rapidly intensified integration of regionalized productive activities into the global…
NASA Astrophysics Data System (ADS)
Uvarov, I. V.; Postnikov, A. V.; Svetovoy, V. B.
2016-03-01
Lack of fast and strong microactuators is a well-recognized problem in the MEMS community. Electrochemical actuators can develop high pressure but they are notoriously slow. Water electrolysis produced by short voltage pulses of alternating polarity can overcome the problem of slow gas termination. Here we demonstrate an actuation regime in which the gas pressure relaxes in just 10 μs or so. The actuator consists of a microchamber filled with the electrolyte and covered with a flexible membrane. The membrane bends outward when the pressure in the chamber increases. Fast termination of gas and the high pressure developed in the chamber are related to a high density of nanobubbles in the chamber. The physical processes happening in the chamber are discussed, as are the problems that have to be resolved for practical applications of this actuation regime. The actuator can be used as a driving engine for microfluidics.
NASA Astrophysics Data System (ADS)
In these lectures we have described two different phenomena occurring in dissipative heavy ion collisions: neutron-proton asymmetry and fast fission. Neutron-proton asymmetry has provided us with an example of a fast collective motion. As a consequence quantum fluctuations can be observed. The observation of quantum or statistical fluctuations is directly connected to the comparison between the phonon energy and the temperature of the intrinsic system. This means that this mode might also provide a good example for the investigation of the transition between quantum and statistical fluctuations which might occur when the bombarding energy is raised above 10 MeV/A. However, it is by no means sure that in this energy domain enough excitation energy can be put into the system in order to reach such high temperatures over the whole system. The other interest in investigating neutron-proton asymmetry above 10 MeV/A is that the interaction time between the two incident nuclei will decrease. Consequently, if some collective motion should still be observed, it will be one of the last which can be seen. Fast fission, on the contrary, corresponds to long interaction times. The experimental indications are still rather weak and mainly consist of experimental data which cannot be understood in the framework of standard dissipative models. We have seen that a model which can describe both the entrance and the exit configuration gives this mechanism in a natural way and that the experimental data can, to a good extent, be explained. The nicest thing is probably that our old understanding of dissipative heavy ion collisions is not changed at all except for the problems that can now be understood in terms of fast fission. Nevertheless this area deserves further study, especially on the experimental side, to be sure that the consistent picture which we have of dissipative heavy ion collisions remains coherent in the future.
Fast tracking hospital construction.
Quirk, Andrew
2013-03-01
Hospital leaders should consider four factors in determining whether to fast track a hospital construction project: expectations of project length, quality, and cost; whether decisions can be made quickly as issues arise; their own time commitment to the project, as well as that of architects, engineers, construction managers, and others; and the extent to which they are willing to share with the design and construction teams how and why decisions are being made. PMID:23513759
Chang, Y.I.
1988-01-01
The Integral Fast Reactor (IFR) is an innovative liquid metal reactor concept being developed at Argonne National Laboratory. It seeks to specifically exploit the inherent properties of liquid metal cooling and metallic fuel in a way that leads to substantial improvements in the characteristics of the complete reactor system. This paper describes the key features and potential advantages of the IFR concept, with emphasis on its safety characteristics. 3 refs., 4 figs., 1 tab.
Soha, Aria; Chiu, Mickey; Mannel, Eric; Stoll, Sean; Lynch, Don; Boose, Steve; Northacker, Dave; Alfred, Marcus; Lindesay, James; Chujo, Tatsuya; Inaba, Motoi; Nonaka, Toshihiro; Sato, Wataru; Sakatani, Ikumi; Hirano, Masahiro; Choi, Ihnjea
2014-01-15
This is a technical scope of work (TSW) between the Fermi National Accelerator Laboratory (Fermilab) and the experimenters of PHENIX Fast TOF group who have committed to participate in beam tests to be carried out during the FY2014 Fermilab Test Beam Facility program. The goals for this test beam experiment are to verify the timing performance of the two types of time-of-flight detector prototypes.
NASA Technical Reports Server (NTRS)
1996-01-01
The NASA Fast Track Study supports the efforts of a Special Study Group (SSG) made up of members of the Advanced Project Management Class number 23 (APM-23) that met at the Wallops Island Management Education Center from April 28 - May 8, 1996. Members of the Class expressed interest to Mr. Vern Weyers in having an input to the NASA Policy Document (NPD) 7120.4, that will replace NASA Management Institute (NMI) 7120.4, and the NASA Program/Project Management Guide. The APM-23 SSG was tasked with assisting in development of NASA policy on managing Fast Track Projects, defined as small projects under $150 million and completed within three years. The approach of the APM-23 SSG was to gather data on successful projects working in a 'Better, Faster, Cheaper' environment, within and outside of NASA, and develop the Fast Track Project section of the NASA Program/Project Management Guide. Fourteen interviews and four other data gathering efforts were conducted by the SSG, and 16 were conducted by Strategic Resources, Inc. (SRI), including five interviews at the Jet Propulsion Laboratory (JPL) and one at the Applied Physics Laboratory (APL). The interviews were compiled and analyzed for techniques and approaches commonly used to meet severe cost and schedule constraints.
NASA Astrophysics Data System (ADS)
Mar, Mark H.
1990-11-01
The purpose of this paper is to report the results of testing the fast Hartley transform (FHT) and comparing it with the fast Fourier transform (FFT). All definitions and equations in this paper are quoted from the cited references. The author developed a FORTRAN program which computes the Hartley transform, tested it with a generalized electromagnetic pulse waveform, and verified the results against known values. Fourier analysis is an essential tool to obtain frequency domain information from transient time domain signals. The FFT is a popular tool to process many of today's audio and electromagnetic signals. System frequency response, digital filtering of signals, and signal power spectra are the most practical applications of the FFT. However, the Fourier integral transform of the FFT requires computer resources appropriate for complex arithmetic operations. The FHT, on the other hand, can accomplish the same results faster and requires fewer computer resources. The FHT is twice as fast as the FFT, uses only half the computer resources, and so could be more useful than the FFT in typical applications such as spectral analysis, signal processing, and convolution. This paper presents a FORTRAN computer program for the FHT algorithm along with a brief description and compares the results and performance of the FHT and the FFT algorithms.
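The FFT/FHT relationship that the comparison rests on is simple: for a real sequence, the discrete Hartley transform is Re(X) minus Im(X) of the DFT X. A small sketch; the paper's FORTRAN program implements a direct FHT, whereas this derives the DHT from NumPy's FFT to show the equivalence:

```python
import numpy as np

def fht(x):
    """Discrete Hartley transform computed from the FFT via
    H[k] = Re(X[k]) - Im(X[k]), where X is the DFT of the real input.
    The cas kernel (cos + sin) is real, which is why a direct FHT needs
    only real arithmetic and roughly half the work of a complex FFT."""
    X = np.fft.fft(x)
    return X.real - X.imag

# Round trip: the DHT is, up to a factor of N, its own inverse.
x = np.array([1.0, 2.0, 0.5, -1.0, 3.0, 0.0, -2.0, 1.5])
print(np.allclose(fht(fht(x)) / len(x), x))  # True
```

The self-inverse property (no separate inverse routine, no complex storage) is part of what makes the FHT attractive for spectral analysis and convolution of real signals.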
Mason, R.J.; Tabak, M.
1997-10-01
The Fast Ignitor is an alternate approach to ICF in which short pulse lasers are used to initiate burn at the surface of the compressed DT fuel. The aim is to avoid the need for careful central focusing of final shocks, and possibly to lower substantially the energy requirements for ignition. Ultimately, both goals may prove crucial to Science Based Stockpile Stewardship (SBSS). This will be the case should either emerging energy needs or funding difficulties render the presently planned radiative fusion approach to ignition with the NIF impractical. Ignition is a first step towards the achievement of substantial energy and neutron outputs for such Stewardship. For success with the Fast Ignitor, the laser energy must be efficiently deposited into megavolt (suprathermal) electrons, which must, in turn, couple to the background ions within an alpha particle range. To understand the electron-fuel coupling, we have used the ANTHEM plasma simulation code to model the transport of hot electrons generated by an intense short pulse laser into plasma targets over a broad range of densities. Our study will spell out the acceleration and transport mechanisms active in the Fast Ignitor environment.
Johnstone, A M
2007-05-01
Adult humans often undertake acute fasts for cosmetic, religious or medical reasons. For example, an estimated 14% of US adults have reported using fasting as a means to control body weight and this approach has long been advocated as an intermittent treatment for gross refractory obesity. There are unique historical data sets on extreme forms of food restriction that give insight into the consequences of starvation or semi-starvation in previously healthy, but usually non-obese subjects. These include documented medical reports on victims of hunger strike, famine and prisoners of war. Such data provide a detailed account on how the body adapts to prolonged starvation. It has previously been shown that fasting for the biblical period of 40 days and 40 nights is well within the overall physiological capabilities of a healthy adult. However, the specific effects on the human body and mind are less clearly documented, either in the short term (hours) or in the longer term (days). This review asks the following three questions, pertinent to any weight-loss therapy, (i) how effective is the regime in achieving weight loss, (ii) what impact does it have on psychology? and finally, (iii) does it work long-term? PMID:17444963
Neighborhood fast food availability and fast food consumption.
Oexle, Nathalie; Barnes, Timothy L; Blake, Christine E; Bell, Bethany A; Liese, Angela D
2015-09-01
Recent nutritional and public health research has focused on how the availability of various types of food in a person's immediate area or neighborhood influences his or her food choices and eating habits. It has been theorized that people living in areas with a wealth of unhealthy fast-food options may show higher levels of fast-food consumption, a factor that often coincides with being overweight or obese. However, measuring food availability in a particular area is difficult to achieve consistently: there may be differences in the strict physical locations of food options as compared to how individuals perceive their personal food availability, and various studies may use either one or both of these measures. The aim of this study was to evaluate the association between weekly fast-food consumption and both a person's perceived availability of fast food and an objective measure of fast-food presence, derived from Geographic Information Systems (GIS), within that person's neighborhood. A randomly selected population-based sample drawn from eight counties in South Carolina was used to conduct a cross-sectional telephone survey assessing self-reported fast-food consumption and perceived availability of fast food. GIS was used to determine the actual number of fast-food outlets within each participant's neighborhood. Using multinomial logistic regression analyses, we found that neither perceived availability nor GIS-based presence of fast food was significantly associated with weekly fast-food consumption. Our findings indicate that availability might not be the dominant factor influencing fast-food consumption. We recommend using subjective availability measures and considering individual characteristics that could influence both perceived availability of fast food and its impact on fast-food consumption. If replicated, our findings suggest that interventions aimed at reducing fast-food consumption by limiting neighborhood fast-food availability might not be completely effective. PMID
Fast combinatorial optimization using generalized deterministic annealing
NASA Astrophysics Data System (ADS)
Acton, Scott T.; Ghosh, Joydeep; Bovik, Alan C.
1993-08-01
Generalized Deterministic Annealing (GDA) is a useful new tool for computing fast multi-state combinatorial optimization of difficult non-convex problems. By estimating the stationary distribution of simulated annealing (SA), GDA yields equivalent solutions to practical SA algorithms while providing a significant speed improvement. Using the standard GDA, the computational time of SA may be reduced by an order of magnitude, and, with a new implementation improvement, Windowed GDA, the time improvements reach two orders of magnitude with a trivial compromise in solution quality. The fast optimization of GDA has enabled expeditious computation of complex nonlinear image enhancement paradigms, such as the Piecewise Constant (PICO) regression examples used in this paper. To validate our analytical results, we apply GDA to the PICO regression problem and compare the results to other optimization methods. Several full image examples are provided that show successful PICO image enhancement using GDA in the presence of both Laplacian and Gaussian additive noise.
Fast interferometric second harmonic generation microscopy
Bancelin, Stéphane; Couture, Charles-André; Légaré, Katherine; Pinsard, Maxime; Rivard, Maxime; Brown, Cameron; Légaré, François
2016-01-01
We report the implementation of fast Interferometric Second Harmonic Generation (I-SHG) microscopy to study the polarity of non-centrosymmetric structures in biological tissues. Using a quartz plate sample, we calibrate the spatially varying phase shift introduced by the laser scanning system. Compensating for this phase shift allows us to retrieve the correct phase distribution in periodically poled lithium niobate, used as a model sample. Finally, we used fast interferometric second harmonic generation microscopy to acquire phase images in tendon. Our results show that the method presented here, using a laser scanning system, allows recovery of the polarity of collagen fibrils, similarly to standard I-SHG (which uses a sample scanning system), but with an imaging time about 40 times shorter. PMID:26977349
Fast blur removal via optical computing
NASA Astrophysics Data System (ADS)
Suo, Jinli; Yue, Tao; Dai, Qionghai
2014-11-01
Non-uniform image blur caused by camera shake or lens aberration is a common degradation in practical capture. Unlike uniform blur, non-uniform blur is hard to deal with because of its extremely high computational complexity: the blur model computation cannot be accelerated by the Fast Fourier Transform (FFT). We propose to perform the most computationally expensive operation, i.e. the blur model calculation, with an optical computing system to realize fast and accurate non-uniform image deblurring. A prototype system composed of a projector-camera pair together with either a high dimensional motion platform (for motion blur) or the original camera lens (for optical aberrations) is implemented. Our method is applied in a series of experiments, on both synthetic and real captured images, to verify its effectiveness and efficiency.
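For contrast with the non-uniform case, the uniform blur mentioned above really can be inverted cheaply in the Fourier domain, because a shift-invariant blur diagonalizes under the FFT. A minimal Wiener-deconvolution sketch, as a generic illustration (not the authors' optical system; the kernel and noise level are made-up demo values):

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, noise_power=1e-3):
    """Invert a uniform (shift-invariant) blur in the Fourier domain.

    Uniform blur is a circular convolution, so it diagonalizes under the
    FFT; non-uniform blur admits no such factorization, which is why the
    abstract offloads it to an optical computing system instead.
    """
    H = np.fft.fft2(kernel, s=blurred.shape)   # kernel transfer function
    G = np.fft.fft2(blurred)
    # Wiener filter: a regularized inverse, stable where |H| is small.
    F = G * np.conj(H) / (np.abs(H) ** 2 + noise_power)
    return np.real(np.fft.ifft2(F))

# Demo: blur a test image with a known kernel, then recover it.
rng = np.random.default_rng(0)
image = rng.random((64, 64))
kernel = np.zeros((64, 64))
kernel[:5, :5] = 1.0 / 25.0                    # 5x5 box blur
blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kernel)))
restored = wiener_deconvolve(blurred, kernel, noise_power=1e-9)
```

With a clean (noise-free) blurred input and a tiny regularizer, the restoration is close to exact; with real noisy captures the noise_power term trades sharpness for stability.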
Fabrication techniques for very fast diffractive lenses
NASA Technical Reports Server (NTRS)
Tai, Anthony M.; Marron, Joseph C.
1993-01-01
Aspheric lenses with arbitrary phase functions can be fabricated on thin lightweight substrates via the binary optics fabrication technique. However, it is difficult and costly to fabricate a fast lens (f/number less than 1) for use at the shorter wavelengths. The pitch of the masks and the alignment accuracy must be very fine. For a large lens, the space-bandwidth product of the element can also become impractically large. In this paper, two alternate approaches for the fabrication of fast aspheric diffractive lenses are described. The first approach fabricates the diffractive lens interferometrically, utilizing a spherical wavefront to provide the optical power of the lens and a computer generated hologram to create the aspheric components. The second approach fabricates the aspheric diffractive lens in the form of a higher order kinoform which trades groove profile fidelity for coarser feature size. The design and implementation issues for these two fabrication techniques are discussed.
Fast multipole methods for particle dynamics
Kurzak, J.; Pettitt, B. M.
2008-01-01
The growth of simulations of particle systems has been aided by advances in computer speed and algorithms. The adoption of O(N) algorithms to solve N-body simulation problems has been less rapid due to the fact that such scaling was only competitive for relatively large N. Our work seeks to find algorithmic modifications and practical implementations for intermediate values of N in typical use for molecular simulations. This article reviews fast multipole techniques for calculation of electrostatic interactions in molecular systems. The basic mathematics behind fast summations applied to long ranged forces is presented along with advanced techniques for accelerating the solution, including our most recent developments. The computational efficiency of the new methods facilitates both simulations of large systems as well as longer and therefore more realistic simulations of smaller systems. PMID:19194526
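The fast summation idea above rests on replacing the N pairwise terms contributed by a distant cluster of charges with a few multipole moments. A toy sketch of that core approximation (monopole plus dipole terms only; a real fast multipole method uses higher-order expansions, translation operators, and a spatial tree, none of which are shown here):

```python
import numpy as np

def direct_potential(charges, positions, target):
    """O(N) direct sum of Coulomb-like potentials at one target point."""
    r = np.linalg.norm(target - positions, axis=1)
    return np.sum(charges / r)

def multipole_potential(charges, positions, target):
    """Far-field estimate from the cluster's monopole and dipole moments.

    This is the heart of fast multipole methods: interactions with a
    distant cluster are summarized by a handful of moments instead of
    N pairwise terms, with error falling as (cluster size / distance)^2.
    """
    center = positions.mean(axis=0)
    monopole = np.sum(charges)
    dipole = np.sum(charges[:, None] * (positions - center), axis=0)
    d = target - center
    r = np.linalg.norm(d)
    return monopole / r + np.dot(dipole, d) / r**3

# Demo: 100 charges clustered in a unit box, evaluated at a far point.
rng = np.random.default_rng(1)
pos = rng.random((100, 3))
q = rng.random(100)
far_point = np.array([20.0, 0.0, 0.0])
exact = direct_potential(q, pos, far_point)
approx = multipole_potential(q, pos, far_point)
rel_err = abs(exact - approx) / abs(exact)
```

At 20 box-lengths away the two-term expansion already agrees with the direct sum to a small fraction of a percent, which is why truncated expansions suffice in practice.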
ERIC Educational Resources Information Center
Charner, Ivan; Fraser, Bryna Shore
A study examined the employment of Hispanics in the fast-food industry. Data were obtained from a national survey of employees at 279 fast-food restaurants from seven companies; 194 (4.2 percent) of the 4,660 respondents reported being Hispanic. Compared with the total sample, Hispanic fast-food employees were slightly less likely to be…
Fast-earth: A global image caching architecture for fast access to remote-sensing data
NASA Astrophysics Data System (ADS)
Talbot, B. G.; Talbot, L. M.
We introduce Fast-Earth, a novel server architecture that enables rapid access to remote sensing data. Fast-Earth subdivides a WGS-84 model of the earth into small 400 × 400 meter regions with fixed locations, called plats. The resulting 3,187,932,913 indexed plats are accessed with a rapid look-up algorithm. Whereas many traditional databases store large original images as a series by collection time, requiring long searches and slow access times for user queries, the Fast-Earth architecture enables rapid access. We have prototyped a system in conjunction with a Fast-Responder mobile app to demonstrate and evaluate the concepts. We found that new data could be indexed rapidly in about 10 minutes/terabyte, high-resolution images could be chipped in less than a second, and 250 kB image chips could be delivered over a 3G network in about 3 seconds. The prototype server implemented on a very small computer could handle 100 users, but the concept is scalable. Fast-Earth enables dramatic advances in rapid dissemination of remote sensing data for mobile platforms as well as desktop enterprises.
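The plat count quoted above can be sanity-checked from the WGS-84 defining parameters: dividing the ellipsoid's surface area by the area of one 400 × 400 m plat reproduces the figure to within a fraction of a percent. A minimal sketch (the closed-form oblate-ellipsoid area formula, not Fast-Earth's actual indexing scheme, which is not detailed in the abstract):

```python
import math

# WGS-84 defining parameters
a = 6378137.0                      # equatorial radius, metres
f = 1.0 / 298.257223563            # flattening
b = a * (1.0 - f)                  # polar radius
e = math.sqrt(1.0 - (b / a) ** 2)  # first eccentricity

# Closed-form surface area of an oblate ellipsoid.
surface_area = 2.0 * math.pi * a**2 * (1.0 + (1.0 - e**2) / e * math.atanh(e))

plat_area = 400.0 * 400.0          # one plat: 400 m x 400 m
approx_plats = surface_area / plat_area
```

The result, about 3.188 billion, matches the 3,187,932,913 indexed plats to better than 0.1%, confirming that the plats tile essentially the whole WGS-84 surface.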
Measuring Fast Ion Losses in a Reversed Field Pinch Plasma
NASA Astrophysics Data System (ADS)
Bonofiglo, P. J.; Anderson, J. K.; Almagri, A. F.; Kim, J.; Clark, J.; Capecchi, W.; Sears, S. H.
2015-11-01
The reversed field pinch (RFP) provides a unique environment to study fast ion confinement and transport. The RFP's weak toroidal field, strong magnetic shear, and ability to enter a 3D state provide a wide range of dynamics to study fast ions. Core-localized, 25 keV fast ions are sourced into MST by a tangentially injected hydrogen/deuterium neutral beam. Neutral particle analysis and measured fusion neutron flux indicate enhanced fast ion transport in the plasma core. Past experiments point to a dynamic loss of fast ions associated with the RFP's transition to a 3D state and with beam-driven, bursting magnetic modes. Consequently, fast ion transport and losses in the RFP have garnered recent attention. Valuable information on fast-ion loss, such as energy and pitch distributions, is sought to provide a better understanding of the transport mechanisms at hand. We have constructed and implemented two fast ion loss detectors (FILDs) for use on MST. The FILDs follow two independent design concepts: collecting particles as a function of v⊥, or collecting particles with pitch greater than 0.8. In this work, we present our preliminary findings and results from our FILDs on MST. This research is supported by US DOE.
Fast evaluation of polarizable forces.
Wang, Wei; Skeel, Robert D
2005-10-22
Polarizability is considered to be the single most significant development in the next generation of force fields for biomolecular simulations. However, the self-consistent computation of induced atomic dipoles in a polarizable force field is expensive due to the cost of solving a large dense linear system at each step of a simulation. This article introduces methods that reduce the cost of computing the electrostatic energy and force of a polarizable model from about 7.5 times the cost of computing those of a nonpolarizable model to less than twice the cost. This is probably sufficient for the routine use of polarizable forces in biomolecular simulations. The reduction in computing time is achieved by an efficient implementation of the particle-mesh Ewald method, an accurate and robust predictor based on least-squares fitting, and non-stationary iterative methods whose fast convergence is accelerated by a simple preconditioner. Furthermore, with these methods, the self-consistent approach with a larger timestep is shown to be faster than the extended Lagrangian approach. The use of dipole moments from previous timesteps to calculate an accurate initial guess for iterative methods leads to an energy drift, which can be made acceptably small. The use of a zero initial guess does not lead to perceptible energy drift if a reasonably strict convergence criterion for the iteration is imposed. PMID:16268681
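The self-consistent dipole problem described above amounts to solving mu = alpha (E0 + T mu), a dense linear system. A toy sketch of the fixed-point iteration the article accelerates (scalar polarizabilities and a small random coupling matrix stand in for the real dipole-dipole interaction tensor; no particle-mesh Ewald, predictor, or preconditioner is shown):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30                                  # number of polarizable sites
alpha = 0.1                             # scalar polarizability (toy units)
T = rng.standard_normal((n, n)) * 0.05  # toy dipole-dipole coupling
np.fill_diagonal(T, 0.0)                # no self-interaction
E0 = rng.standard_normal(n)             # permanent field at each site

# Fixed-point (self-consistent field) iteration mu = alpha * (E0 + T mu);
# it converges when the spectral radius of alpha*T is below 1, and the
# article's preconditioner and predictor exist to cut the iteration count.
mu = np.zeros(n)
for _ in range(200):
    mu = alpha * (E0 + T @ mu)

# Reference: solve the dense linear system (I - alpha*T) mu = alpha*E0,
# which is the expensive step the iterative methods avoid at scale.
mu_exact = np.linalg.solve(np.eye(n) - alpha * T, alpha * E0)
```

For a well-conditioned toy system the plain iteration already converges to the direct solution; the article's contribution is making the per-iteration cost and iteration count small enough for production molecular dynamics.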
Simplified fast neutron dosimeter
Sohrabi, Mehdi
1979-01-01
Fast-neutron-induced recoil and alpha particle tracks in polycarbonate films may be enlarged for direct visual observation and automated counting procedures employing electrochemical etching techniques. Electrochemical etching is, for example, carried out in a 28% KOH solution at room temperature by applying a 2000 V peak-to-peak voltage at 1 kHz frequency. Such recoil particle amplification can be used for the detection of a wide range of neutron doses, from 1 mrad to 1000 rad or higher, if desired.
Snell, A.H.
1957-12-01
This patent relates to a reactor and process for carrying out a controlled fast neutron chain reaction. A cubical reactive mass, weighing at least 920 metric tons, of uranium metal containing predominantly U-238 and having a U-235 content of at least 7.63% is assembled and the maximum neutron reproduction ratio is limited to not substantially over 1.01 by insertion and removal of a varying amount of boron, the reactive mass being substantially freed of moderator.
Mason, R.J.; Tabak, M.
1997-10-01
The Fast Ignitor is an alternate approach to ICF in which short pulse lasers are used to initiate burn at the surface of the compressed DT fuel. The aim is to avoid the need for careful central focusing of final shocks, and possibly to lower substantially the energy requirements for ignition. Ultimately, both goals may prove crucial to Science Based Stockpile Stewardship (SBSS). This will be the case should either emerging energetic needs or funding difficulties render the presently planned radiative fusion approach to ignition with the NIF impractical. Ignition is a first step towards the achievement of substantial energy and neutron outputs for such Stewardship.
Detering, Brent A.; Donaldson, Alan D.; Fincke, James R.; Kong, Peter C.; Berry, Ray A.
1999-01-01
A fast quench reactor includes a reactor chamber having a high temperature heating means such as a plasma torch at its inlet and a means of rapidly expanding a reactant stream, such as a restrictive convergent-divergent nozzle, at its outlet end. Metal halide reactants are injected into the reactor chamber. Reducing gas is added at different stages in the process to form a desired end product and prevent back reactions. The resulting heated gaseous stream is then rapidly cooled by expansion of the gaseous stream.
Detering, B.A.; Donaldson, A.D.; Fincke, J.R.; Kong, P.C.; Berry, R.A.
1999-08-10
A fast quench reactor includes a reactor chamber having a high temperature heating means such as a plasma torch at its inlet and a means of rapidly expanding a reactant stream, such as a restrictive convergent-divergent nozzle, at its outlet end. Metal halide reactants are injected into the reactor chamber. Reducing gas is added at different stages in the process to form a desired end product and prevent back reactions. The resulting heated gaseous stream is then rapidly cooled by expansion of the gaseous stream. 8 figs.
DeLuca, P.M. Jr.; Pearson, D.W.
1992-01-01
This progress report concentrates on two major areas of dosimetry research: measurement of fast neutron kerma factors for several elements for monochromatic and white spectrum neutron fields, and determination of the response of thermoluminescent phosphors to various ultra-soft X-ray energies and beta-rays. Dr. Zhixin Zhou from the Shanghai Institute of Radiation Medicine, People's Republic of China, brought with him special expertise in the fabrication and use of ultra-thin TLD materials. Such materials are not available in the USA. The rather unique properties of these materials were investigated during this grant period.
NASA Technical Reports Server (NTRS)
Birman, Kenneth; Schiper, Andre; Stephenson, Pat
1990-01-01
A new protocol is presented that efficiently implements a reliable, causally ordered multicast primitive and is easily extended into a totally ordered one. Intended for use in the ISIS toolkit, it offers a way to bypass the most costly aspects of ISIS while benefiting from virtual synchrony. The facility scales with bounded overhead. Measured speedups of more than an order of magnitude were obtained when the protocol was implemented within ISIS. One conclusion is that systems such as ISIS can achieve performance competitive with the best existing multicast facilities--a finding contradicting the widespread concern that fault-tolerance may be unacceptably costly.
1995 Fast Track: cost reduction and improvement.
Panzer, R J; Tuttle, D N; Kolker, R M
1997-01-01
To respond to a cost reduction crisis, Strong Memorial Hospital implemented an aggressively managed program of accelerated improvement teams. "Fast-track" teams combined the application of many management tools (total quality management, breakthrough thinking, reengineering, etc.) into one problem-solving process. Teams and managers were charged to work on specific cost reduction strategies. Teams were given additional instruction on interpersonal skills such as communication, teamwork, and leadership. Paradoxically, quality improvement in our hospital was advanced more through this effort at cost reduction than had previously been done in the name of quality itself. PMID:10176411
NASA Technical Reports Server (NTRS)
Wojciechowski, Bogdan V. (Inventor); Pegg, Robert J. (Inventor)
2003-01-01
A fast-acting valve includes an annular valve seat that defines an annular valve orifice between the edges of the annular valve seat, an annular valve plug sized to cover the valve orifice when the valve is closed, and a valve-plug holder for moving the annular valve plug on and off the annular valve seat. The use of an annular orifice reduces the characteristic distance between the edges of the valve seat. Rather than this distance being equal to the diameter of the orifice, as it is for a conventional circular orifice, the characteristic distance equals the distance between the inner and outer radii (for a circular annulus). The reduced characteristic distance greatly reduces the gap required between the annular valve plug and the annular valve seat for the valve to be fully open, thereby greatly reducing the required stroke and corresponding speed and acceleration of the annular valve plug. The use of a valve-plug holder that is under independent control to move the annular valve plug between its open and closed positions is important for achieving controllable fast operation of the valve.
Mason, R.J.; Tabak, M.
1997-10-01
The Fast Ignitor is an alternate approach to ICF in which short pulse lasers are used to initiate burn at the surface of the compressed DT fuel. The aim is to avoid the need for careful central focusing of final shocks, and possibly to lower substantially the energy requirements for ignition. Ultimately, both goals may prove crucial to Stockpile Stewardship. For success with the Fast Ignitor, the laser energy must be efficiently deposited into megavolt electrons, which must, in turn, couple to the background ions within an alpha particle range. To understand this coupling, we have used the ANTHEM plasma simulation code to model the transport of hot electrons generated by an intense (≥ 3 × 10^18 W/cm^2) short pulse 1.06 µm laser into plasma targets over a broad range of densities (0.35 to 10^4 × n_crit). Ponderomotive effects are included as a force on the cold background and hot emission electrons of the form F_{h,c} = -(ω_{p;h,c}^2 / 2ω^2) ∇I, in which I is the laser intensity and ω_p^2 = 4πe^2 n / (m_0 γ), with m_0 the electron rest mass.
Nguyen, M.N.; /SLAC
2007-06-18
As part of an improvement project on the linear accelerator at SLAC, it was necessary to replace the original thyratron trigger generator, which consisted of two chassis, two vacuum tubes, and a small thyratron. An all-solid-state, fast-rise, high-voltage thyratron driver has therefore been developed and built for the 244 klystron modulators. The rack-mounted, single-chassis driver employs a unique way to control and generate pulses through the use of an asymmetric SCR, a PFN, a fast pulse transformer, and a saturable reactor. The resulting output pulse is 2 kV peak into a 50 Ω load with a pulse duration of 1.5 µs FWHM at 180 Hz. The pulse risetime is less than 40 ns with less than 1 ns jitter. Various techniques are used to protect the SCR from being damaged by high voltage and current transients due to thyratron breakdowns. The end-of-line clipper (EOLC) detection circuit is also integrated into this chassis to interrupt the modulator triggering in the event that a high percentage of line reflections occurs.
Bradley, Robert K; Roberts, Adam; Smoot, Michael; Juvekar, Sudeep; Do, Jaeyoung; Dewey, Colin; Holmes, Ian; Pachter, Lior
2009-05-01
We describe a new program for the alignment of multiple biological sequences that is both statistically motivated and fast enough for problem sizes that arise in practice. Our Fast Statistical Alignment program is based on pair hidden Markov models which approximate an insertion/deletion process on a tree and uses a sequence annealing algorithm to combine the posterior probabilities estimated from these models into a multiple alignment. FSA uses its explicit statistical model to produce multiple alignments which are accompanied by estimates of the alignment accuracy and uncertainty for every column and character of the alignment--previously available only with alignment programs which use computationally-expensive Markov Chain Monte Carlo approaches--yet can align thousands of long sequences. Moreover, FSA utilizes an unsupervised query-specific learning procedure for parameter estimation which leads to improved accuracy on benchmark reference alignments in comparison to existing programs. The centroid alignment approach taken by FSA, in combination with its learning procedure, drastically reduces the amount of false-positive alignment on biological data in comparison to that given by other methods. The FSA program and a companion visualization tool for exploring uncertainty in alignments can be used via a web interface at http://orangutan.math.berkeley.edu/fsa/, and the source code is available at http://fsa.sourceforge.net/. PMID:19478997
Slow and fast light in semiconductors
NASA Astrophysics Data System (ADS)
Sedgwick, Forrest Grant
Slow and fast light are the propagation of optical signals at group velocities below and above the speed of light in a given medium. There has been great interest in the use of nonlinear optics to engineer slow and fast light dispersion for applications in optical communications and radio-frequency or microwave photonics. Early results in this field were primarily confined to dilute atomic systems. While these results were impressive, they had two major barriers to practical application. First, the wavelengths were not compatible with fiber optic telecommunications. More importantly, the bandwidth obtainable in these experiments was inherently low; 100 kHz or less. Within the last five years slow and fast light effects have been observed and engineered in a much wider variety of systems. In this work, we detail our efforts to realize slow and fast light in semiconductor systems. There are three primary advantages of semiconductor systems: fiber-compatible wavelengths, larger bandwidth, and simplification of integration with other optical components. In this work we will explore three different types of physical mechanisms for implementing slow and fast light. The first is electromagnetically induced transparency (EIT). In transporting this process to semiconductors, we initially turn our attention to quantum dots or "artificial atoms". We present simulations of a quantum dot EIT-based device within the context of an optical communications link and we derive results which are generally applicable to a broad class of slow light devices. We then present experimental results realizing EIT in quantum wells by using long-lived electron spin coherence. The second mechanism we will explore is coherent population oscillations (CPO), also known as carrier density pulsations (CDP). We examine for the first time how both slow and fast light may be achieved in a quantum well semiconductor optical amplifier (SOA) while operating in the gain regime. Again, we simulate the device
A fast track to IAIMS: the Vanderbilt University strategy.
Stead, W W; Baker, W; Harris, T R; Hodges, T M; Sittig, D F
1992-01-01
In July 1991, Vanderbilt University Medical Center (VUMC) initiated a fast track approach to the implementation of an Integrated Academic Information Management System (IAIMS). The fast track approach has four elements: 1) an integrated organizational structure combining various operational information management units and the academic informatics program into a single entity to enhance efficiency; 2) technology transfer and network access to remote resources in preference to de novo development; 3) parallel IAIMS planning and infrastructure construction; 4) restriction of the scope of the initial IAIMS to permit a manageable implementation project. The fast track approach is intended to provide a truly functional IAIMS within a time period (7 years) associated with other major construction projects such as the building of a replacement hospital. PMID:1336415
Methods for performing fast discrete curvelet transforms of data
Candes, Emmanuel; Donoho, David; Demanet, Laurent
2010-11-23
Fast digital implementations of the second generation curvelet transform for use in data processing are disclosed. One such digital transformation is based on unequally-spaced fast Fourier transforms (USFFT) while another is based on the wrapping of specially selected Fourier samples. Both digital transformations return a table of digital curvelet coefficients indexed by a scale parameter, an orientation parameter, and a spatial location parameter. Both implementations are fast in the sense that they run in about O(n² log n) flops for n by n Cartesian arrays or about O(N log N) flops for Cartesian arrays of size N = n³; in addition, they are also invertible, with rapid inversion algorithms of about the same complexity.
A fast ring imaging detector for the CLEO upgrade
NASA Astrophysics Data System (ADS)
Artuso, M.; Mukhin, Y.
1994-04-01
Different implementations of fast RICH schemes for a symmetric or asymmetric e⁺e⁻ collider at the ϒ(4S) have been studied. The proposed solutions provide a 4σ π-K separation at the highest particle momenta from B decays and allow a sufficient number of photoelectrons for good pattern recognition capability.
Fast Food Jobs. National Study of Fast Food Employment.
ERIC Educational Resources Information Center
Charner, Ivan; Fraser, Bryna Shore
A study examined employment in the fast-food industry. The national survey collected data from employees at 279 fast-food restaurants from seven companies. Female employees outnumbered males by two to one. The ages of those fast-food employees in the survey sample ranged from 14 to 71, with fully 70 percent being in the 16- to 20-year-old age…
Discrete implementations of scale transform
NASA Astrophysics Data System (ADS)
Djurdjanovic, Dragan; Williams, William J.; Koh, Christopher K.
1999-11-01
Scale as a physical quantity is a recently developed concept. The scale transform can be viewed as a special case of the more general Mellin transform and its mathematical properties are very applicable in the analysis and interpretation of signals subject to scale changes. A number of single-dimensional applications of the scale concept have been made in speech analysis, processing of biological signals, machine vibration analysis and other areas. Recently, the scale transform was also applied in multi-dimensional signal processing and used for image filtering and denoising. Discrete implementation of the scale transform can be carried out using logarithmic sampling and the well-known fast Fourier transform. Nevertheless, in the case of uniformly sampled signals, this implementation involves resampling. An algorithm not involving resampling of the uniformly sampled signals has been derived too. In this paper, a modification of the latter algorithm for discrete implementation of the direct scale transform is presented. In addition, a similar concept was used to improve a recently introduced discrete implementation of the inverse scale transform. Estimation of the absolute discretization errors showed that the modified algorithms have a desirable property of yielding a smaller region of possible error magnitudes. Experimental results are obtained using artificial signals as well as signals evoked from the temporomandibular joint. In addition, discrete implementations for the separable two-dimensional direct and inverse scale transforms are derived. Experiments with image restoration and scaling through the two-dimensional scale domain using the novel implementation of the separable two-dimensional scale transform pair are presented.
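The logarithmic-sampling-plus-FFT implementation mentioned above can be sketched directly: substituting t = exp(tau) turns the scale (Mellin) transform into an ordinary Fourier transform, and the transform magnitude becomes invariant to time scaling. A minimal sketch (sampling an analytic test signal directly on the exponential grid, so no resampling step is needed; the grid limits and test function are arbitrary demo choices, not the paper's algorithm):

```python
import numpy as np

def scale_transform(f, tau_min=-8.0, tau_max=8.0, n=4096):
    """Discrete scale (Mellin) transform via logarithmic sampling + FFT.

    Substituting t = exp(tau) turns the scale transform into the Fourier
    transform of f(exp(tau)) * exp(tau/2), which is the log-sampling
    implementation the abstract refers to.
    """
    tau = np.linspace(tau_min, tau_max, n, endpoint=False)
    g = f(np.exp(tau)) * np.exp(tau / 2.0)
    d_tau = tau[1] - tau[0]
    D = np.fft.fft(g) * d_tau / np.sqrt(2.0 * np.pi)
    c = 2.0 * np.pi * np.fft.fftfreq(n, d=d_tau)   # scale variable
    return c, D

# Key property: the magnitude is invariant to time scaling, for the
# energy-normalized scaling sqrt(a) * f(a t).
f = lambda t: np.exp(-np.log(t) ** 2)
a = 2.0
f_scaled = lambda t: np.sqrt(a) * f(a * t)
c, D1 = scale_transform(f)
_, D2 = scale_transform(f_scaled)
```

Scaling in time becomes a pure shift on the logarithmic axis, so the two magnitude spectra coincide; that invariance is what makes the transform useful for scale-changed signals.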
FastPM: a new scheme for fast simulations of dark matter and halos
NASA Astrophysics Data System (ADS)
Feng, Yu; Chu, Man-Yat; Seljak, Uroš; McDonald, Patrick
2016-08-01
We introduce FastPM, a highly-scalable approximate particle mesh N-body solver, which implements the particle mesh (PM) scheme enforcing correct linear displacement (1LPT) evolution via modified kick and drift factors. Employing a 2-dimensional domain decomposition scheme, FastPM scales extremely well with a very large number of CPUs. In contrast to the COmoving-LAgrangian (COLA) approach, we do not need to split the force or separately track the 2LPT solution, reducing the code complexity and memory requirements. We compare FastPM with different numbers of steps (Ns) and force resolution factors (B) against 3 benchmarks: halo mass function from a Friends of Friends halo finder, halo and dark matter power spectrum, and cross correlation coefficient (or stochasticity), relative to a high resolution TreePM simulation. We show that the modified time stepping scheme reduces the halo stochasticity when compared to COLA with the same number of steps and force resolution. While increasing Ns and B improves the transfer function and cross correlation coefficient, for many applications FastPM achieves sufficient accuracy at low Ns and B. For example, the Ns = 10, B = 2 simulation provides a substantial saving (a factor of 10) of computing time relative to the Ns = 40, B = 3 simulation, yet the halo benchmarks are very similar at z = 0. We find that for abundance matched halos the stochasticity remains low even for Ns = 5. FastPM compares well against less expensive schemes, being only 7 (4) times more expensive than a 2LPT initial condition generator for Ns = 10 (Ns = 5). Some of the applications where FastPM can be useful are generating a large number of mocks, producing non-linear statistics where one varies a large number of nuisance or cosmological parameters, or serving as part of an initial conditions solver.
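FastPM's modification acts on the kick and drift factors of the standard leapfrog PM integrator, choosing them so that even few-step runs reproduce linear (1LPT) growth. The generic, unmodified kick-drift-kick scheme it builds on can be sketched on a toy force (a harmonic oscillator stands in for PM gravity here; the cosmological kick/drift factors themselves are not reproduced):

```python
def leapfrog_kdk(x, v, accel, dt, n_steps):
    """Kick-drift-kick leapfrog, the symplectic integrator FastPM builds on.

    FastPM replaces the standard kick and drift factors with modified
    ones tuned to reproduce 1LPT growth at large time steps; only the
    generic scheme is shown here.
    """
    for _ in range(n_steps):
        v = v + 0.5 * dt * accel(x)   # half kick
        x = x + dt * v                # full drift
        v = v + 0.5 * dt * accel(x)   # half kick
    return x, v

# Demo: harmonic oscillator. Leapfrog keeps the energy error bounded
# over long runs (a symplectic property), unlike forward Euler.
accel = lambda x: -x
x0, v0 = 1.0, 0.0
x, v = leapfrog_kdk(x0, v0, accel, dt=0.05, n_steps=2000)
energy0 = 0.5 * (x0**2 + v0**2)
energy = 0.5 * (x**2 + v**2)
```

The bounded energy error of the plain scheme is what makes large-step variants like FastPM's viable: accuracy at few steps then hinges on how the kick/drift factors are defined, not on the integrator's stability.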
Calorie Labeling, Fast Food Purchasing and Restaurant Visits
Elbel, Brian; Mijanovich, Tod; Dixon, Beth; Abrams, Courtney; Weitzman, Beth; Kersh, Rogan; Auchincloss, Amy H.; Ogedegbe, Gbenga
2013-01-01
Objective Obesity is a pressing public health problem without proven population-wide solutions. Researchers sought to determine whether a city-mandated policy requiring calorie labeling at fast food restaurants was associated with consumer awareness of labels, calories purchased and fast food restaurant visits. Design and Methods Difference-in-differences design, with data collected from consumers outside fast food restaurants and via a random digit dial telephone survey, before (December 2009) and after (June 2010) labeling in Philadelphia (which implemented mandatory labeling) and Baltimore (matched comparison city). Measures included: self-reported use of calorie information, calories purchased determined via fast food receipts, and self-reported weekly fast-food visits. Results The consumer sample was predominantly Black (71%) and high school educated (62%). Post-labeling, 38% of Philadelphia consumers noticed the calorie labels, a 33 percentage point (p<.001) increase relative to Baltimore. Calories purchased and number of fast food visits did not change in either city over time. Conclusions While some consumers reported noticing and using calorie information, no population-level changes were noted in calories purchased or fast food visits. Other controlled studies are needed to examine the longer term impact of labeling as it becomes national law. PMID:24136905
Fast approximate motif statistics.
Nicodème, P
2001-01-01
We present in this article a fast approximate method for computing the statistics of the number of non-self-overlapping matches of motifs in a random text in the nonuniform Bernoulli model. This method is well suited for protein motifs, where the probability of self-overlap of motifs is small. For 96% of the PROSITE motifs, the expectations of occurrences of the motifs in a 7-million-amino-acid random database are computed by the approximate method with less than 1% error when compared with the exact method. Processing of the whole of PROSITE takes about 30 seconds with the approximate method. We apply this new method to a comparison of the C. elegans and S. cerevisiae proteomes. PMID:11535175
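When self-overlap is negligible, the expected occurrence count under the nonuniform Bernoulli model follows from multiplying per-position residue probabilities and applying linearity of expectation over match positions. A toy sketch (PROSITE-style motif represented as per-position allowed-residue sets; the alphabet and frequencies are hypothetical):

```python
def motif_probability(motif, freqs):
    """P(motif matches at one fixed position) in a nonuniform Bernoulli text.
    motif: a list of sets of allowed residues, one set per position."""
    p = 1.0
    for allowed in motif:
        p *= sum(freqs[a] for a in allowed)
    return p

def expected_occurrences(motif, freqs, text_len):
    """Linearity of expectation over the text_len - m + 1 match positions."""
    return (text_len - len(motif) + 1) * motif_probability(motif, freqs)
```

The paper's contribution is the fast approximation of the full distribution, not just this expectation, but the expectation shows why low self-overlap makes positions nearly independent.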
Davis, F.J.; Hurst, G.S.; Reinhardt, P.W.
1959-08-18
An improved proton recoil spectrometer for determining the energy spectrum of a fast neutron beam is described. Instead of discriminating against and thereby "throwing away" the many recoil protons other than those traveling parallel to the neutron beam axis, as conventional spectrometers do, this device utilizes protons scattered over a very wide solid angle. An ovoidal gas-filled recoil chamber is coated on the inside with a scintillator. The ovoidal shape of the sensitive portion of the wall defining the chamber conforms to the envelope of the range of the proton recoils from the radiator disposed within the chamber. A photomultiplier monitors the output of the scintillator, and a counter counts the pulses caused by protons of energy just sufficient to reach the scintillator.
Batzer, T.H.; Cummings, D.B.; Ryan, J.F.
1962-05-22
A high-current, fast-acting switch is designed for utilization as a crowbar switch in a high-current circuit such as that used to generate the magnetic confinement field of a plasma-confining and heating device, e.g., Pyrotron. The device comprises a cylindrical housing containing two stationary, cylindrical contacts between which a movable contact is bridged to close the switch. The movable contact is actuated by a differential-pressure, air-driven piston assembly also within the housing. To absorb the acceleration (and the shock) imparted to the device by the rapidly driven movable contact, an adjustable air buffer assembly is provided, integrally connected to the movable contact and piston assembly. Various safety locks and circuit-synchronizing means are also provided to permit proper cooperation of the invention and the high-current circuit in which it is installed. (AEC)
NASA Astrophysics Data System (ADS)
Martínez-Garaot, S.; Ruschhaupt, A.; Gillet, J.; Busch, Th.; Muga, J. G.
2015-10-01
We work out the theory and applications of a fast quasiadiabatic approach to speed up slow adiabatic manipulations of quantum systems by driving a control parameter as near to the adiabatic limit as possible over the entire protocol duration. We find characteristic time scales, such as the minimal time to achieve fidelity 1, and the optimality of the approach within the iterative superadiabatic sequence. Specifically, we show that the population inversion in a two-level system, the splitting and cotunneling of two interacting bosons, and the stirring of a Tonks-Girardeau gas on a ring to achieve mesoscopic superpositions of many-body rotating and nonrotating states can all be significantly sped up.
Bender, M.; Bennett, F.K.; Kuckes, A.F.
1963-09-17
A fast-acting electric switch is described for rapidly opening a circuit carrying large amounts of electrical power. A thin, conducting foil bridges a gap in this circuit and means are provided for producing a magnetic field and eddy currents in the foil, whereby the foil is rapidly broken to open the circuit across the gap. Advantageously the foil has a hole forming two narrow portions in the foil and the means producing the magnetic field and eddy currents comprises an annular coil having its annulus coaxial with the hole in the foil and turns adjacent the narrow portions of the foil. An electrical current flows through the coil to produce the magnetic field and eddy currents in the foil. (AEC)
Fast Censored Linear Regression
HUANG, YIJIAN
2013-01-01
The weighted log-rank estimating function has become a standard estimation method for the censored linear regression model, or the accelerated failure time model. Although well established statistically, the estimator defined as a consistent root has rather poor computational properties because the estimating function is neither continuous nor, in general, monotone. We propose a computationally efficient estimator through an asymptotics-guided Newton algorithm, in which censored quantile regression methods are tailored to yield an initial consistent estimate and a consistent derivative estimate of the limiting estimating function. We also develop fast interval estimation with a new proposal for sandwich variance estimation. The proposed estimator is asymptotically equivalent to the consistent root estimator and barely distinguishable from it in samples of practical size. However, computation time is typically reduced by two to three orders of magnitude for point estimation alone. Illustrations with clinical applications are provided. PMID:24347802
Fast image restoration without boundary artifacts.
Reeves, Stanley J
2005-10-01
Fast Fourier transform (FFT)-based restorations are fast, but at the expense of assuming that the blurring and deblurring are based on circular convolution. Unfortunately, when the opposite sides of the image do not match up well in intensity, this assumption can create significant artifacts across the image. If the pixels outside the measured image window are modeled as unknown values in the restored image, boundary artifacts are avoided. However, this approach destroys the structure that makes the use of the FFT directly applicable, since the unknown image is no longer the same size as the measured image. Thus, the restoration methods available for this problem no longer have the computational efficiency of the FFT. We propose a new restoration method for the unknown boundary approach that can be implemented in a fast and flexible manner. We decompose the restoration into a sum of two independent restorations. One restoration yields an image that comes directly from a modified FFT-based approach. The other restoration involves a set of unknowns whose number equals that of the unknown boundary values. By summing the two, the artifacts are canceled. Because the second restoration has a significantly reduced set of unknowns, it can be calculated very efficiently even though no circular convolution structure exists. PMID:16238051
A fast SEQUEST cross correlation algorithm.
Eng, Jimmy K; Fischer, Bernd; Grossmann, Jonas; Maccoss, Michael J
2008-10-01
The SEQUEST program was the first and remains one of the most widely used tools for assigning a peptide sequence within a database to a tandem mass spectrum. The cross correlation score is the primary score function implemented within SEQUEST, and it is this score that makes the tool particularly sensitive. Unfortunately, this score is computationally expensive to calculate, and thus, to keep the computation manageable, SEQUEST uses a less sensitive but fast preliminary score and restricts the cross correlation to just the top 500 peptides returned by the preliminary score. Classically, the cross correlation score has been calculated using fast Fourier transforms (FFTs) to generate the full correlation function. We describe an alternate method of calculating the cross correlation score that does not require FFTs and can be computed efficiently in a fraction of the time. The fast calculation allows all candidate peptides to be scored by the cross correlation function, potentially mitigating the need for the preliminary score, and enables an E-value significance calculation based on the cross correlation score distribution calculated on all candidate peptide sequences obtained from a sequence database. PMID:18774840
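The algebra behind such an FFT-free calculation can be illustrated on toy spectra: a score of the form R(0) minus the mean of R(τ) over a shift window equals a single dot product against a background-subtracted spectrum, so the full correlation function is never needed. A sketch with a small window (the widths, spectra, and exact window convention here are illustrative, not SEQUEST's):

```python
def corr(theo, obs, tau):
    """Dot product of theo with obs shifted by tau (zero-padded at the ends)."""
    n = len(obs)
    return sum(theo[i] * obs[i + tau] for i in range(len(theo)) if 0 <= i + tau < n)

def xcorr_classic(theo, obs, w=5):
    """R(0) minus the mean of R(tau) over tau = -w..w: the correlation-function form."""
    background = sum(corr(theo, obs, t) for t in range(-w, w + 1)) / (2 * w + 1)
    return corr(theo, obs, 0) - background

def xcorr_fast(theo, obs, w=5):
    """The same value from one dot product against a background-subtracted spectrum."""
    n = len(obs)
    prepped = [obs[i] - sum(obs[i + t] for t in range(-w, w + 1)
                            if 0 <= i + t < n) / (2 * w + 1)
               for i in range(n)]
    return sum(t * p for t, p in zip(theo, prepped))
```

Because the preprocessing of the observed spectrum is done once, every candidate peptide costs only one sparse dot product.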
Adaptive line enhancers for fast acquisition
NASA Technical Reports Server (NTRS)
Yeh, H.-G.; Nguyen, T. M.
1994-01-01
Three adaptive line enhancer (ALE) algorithms and architectures - namely, conventional ALE, ALE with double filtering, and ALE with coherent accumulation - are investigated for fast carrier acquisition in the time domain. The advantages of these algorithms are their simplicity, flexibility, robustness, and applicability to general situations including the Earth-to-space uplink carrier acquisition and tracking of the spacecraft. In the acquisition mode, these algorithms act as bandpass filters; hence, the carrier-to-noise ratio (CNR) is improved for fast acquisition. In the tracking mode, these algorithms simply act as lowpass filters to improve signal-to-noise ratio; hence, better tracking performance is obtained. It is not necessary to have a priori knowledge of the received signal parameters, such as CNR, Doppler, and carrier sweeping rate. The implementation of these algorithms is in the time domain (as opposed to the frequency domain, such as the fast Fourier transform (FFT)). The carrier frequency estimation can be updated in real time at each time sample (as opposed to the batch processing of the FFT). The carrier frequency to be acquired can be time varying, and the noise can be non-Gaussian, nonstationary, and colored.
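Of the three variants, the conventional ALE can be sketched as an LMS predictor: the input is delayed, and the filter learns to predict the narrowband (carrier) component, which is predictable, while broadband noise is not. A minimal single-tone sketch (filter length, step size, and signal are illustrative, not the paper's configuration):

```python
import math

def ale(x, taps=8, delay=1, mu=0.02):
    """Adaptive line enhancer: an LMS filter predicts x[n] from delayed samples.
    Narrowband (carrier) content is predictable and appears in the output y;
    broadband noise is not, so prediction raises the effective CNR."""
    w = [0.0] * taps
    y = [0.0] * len(x)  # enhanced (predicted) signal
    e = [0.0] * len(x)  # prediction error
    for n in range(delay + taps, len(x)):
        u = [x[n - delay - k] for k in range(taps)]
        y[n] = sum(wk * uk for wk, uk in zip(w, u))
        e[n] = x[n] - y[n]
        for k in range(taps):
            w[k] += 2 * mu * e[n] * u[k]  # LMS weight update
    return y, e

carrier = [math.sin(0.2 * math.pi * n) for n in range(2000)]
y, e = ale(carrier)
```

On a pure tone the prediction error decays toward zero, which is the sense in which the sinusoidal carrier "passes through" while uncorrelated noise would not.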
Maximoff, Sergey N.; Head-Gordon, Martin P.
2009-01-01
A chemicurrent is a flux of fast (kinetic energy ≳ 0.5−1.3 eV) metal electrons caused by moderately exothermic (1−3 eV) chemical reactions over high work function (4−6 eV) metal surfaces. In this report, the relation between chemicurrent and surface chemistry is elucidated with a combination of top-down phenomenology and bottom-up atomic-scale modeling. Examination of catalytic CO oxidation, an example which exhibits a chemicurrent, reveals three constituents of this relation: the localization of some conduction electrons to the surface via a reduction reaction, 0.5 O2 + δe− → Oδ− (Red); the delocalization of some surface electrons into a conduction band in an oxidation reaction, Oδ− + CO → CO2δ− → CO2 + δe− (Ox); and relaxation without charge transfer (Rel). Juxtaposition of Red, Ox, and Rel produces a daunting variety of metal electronic excitations, but only those that originate from CO2 reactive desorption are long-range and fast enough to dominate the chemicurrent. The chemicurrent yield depends on the universality class of the desorption process and the distribution of the desorption thresholds. This analysis implies a power-law relation with exponent 2.66 between the chemicurrent and the heat of adsorption, which is consistent with experimental findings for a range of systems. This picture also applies to other oxidation-reduction reactions over high work function metal surfaces. PMID:19561296
Spatial kinetics in fast reactors
NASA Astrophysics Data System (ADS)
Seleznev, E. F.; Belov, A. A.; Panova, I. S.; Matvienko, I. P.; Zhukov, A. M.
2013-12-01
The analysis of the solution to the spatial nonstationary neutron transport equation is presented using the example of a fast reactor. Experiments in spatial kinetics conducted recently at the complex of critical assemblies (fast physical stand), together with computations of their data using the TIMER code (which solves the nonstationary equation in multidimensional diffusion approximation for direct and inverse problems of reactor kinetics), have shown that the kinetics of fast reactors differs substantially from that of thermal reactors. The difference is connected with the influence of the delayed neutron spectrum on the rates of processes in a fast reactor.
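The role of delayed neutrons can be illustrated with the one-group point-kinetics equations, which the spatial treatment above generalizes (a deliberate simplification: the abstract's point is precisely that spatial and spectral effects matter beyond this model; the constants below are illustrative):

```python
def point_kinetics(rho, beta=0.0065, lam=0.08, Lam=1e-5, dt=1e-4, steps=20000):
    """One-delayed-group point kinetics, explicit Euler:
        dn/dt = ((rho - beta)/Lam) * n + lam * c
        dc/dt = (beta/Lam) * n - lam * c
    n: neutron density, c: precursor concentration, rho: reactivity,
    beta: delayed-neutron fraction, lam: precursor decay constant,
    Lam: neutron generation time. All values are illustrative."""
    n = 1.0
    c = beta * n / (Lam * lam)  # start precursors at equilibrium
    for _ in range(steps):
        dn = ((rho - beta) / Lam) * n + lam * c
        dc = (beta / Lam) * n - lam * c
        n += dt * dn
        c += dt * dc
    return n
```

At zero reactivity the equilibrium is preserved; a small positive reactivity below beta produces growth on the slow, delayed-neutron timescale rather than the prompt one.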
Fast planar segmentation of depth images
NASA Astrophysics Data System (ADS)
Javan Hemmat, Hani; Pourtaherian, Arash; Bondarev, Egor; de With, Peter H. N.
2015-03-01
One of the major challenges for applications dealing with 3D data is real-time execution of the algorithms. Besides this, for indoor environments, perceiving the geometry of surrounding structures plays a prominent role in application performance. Since indoor structures mainly consist of planar surfaces, fast and accurate detection of such features has a crucial impact on the quality and functionality of 3D applications, e.g. decreasing model size (decimation), and enhancing localization, mapping, and semantic reconstruction. The available planar-segmentation algorithms are mostly developed using surface normals and/or curvatures, and are therefore computationally expensive and challenging for real-time performance. In this paper, we introduce a fast planar-segmentation method for depth images that avoids surface normal calculations. Firstly, the proposed method searches for 3D edges in a depth image and finds the lines between identified edges. Secondly, it merges all the points on each pair of intersecting lines into a plane. Finally, various enhancements (e.g. filtering) are applied to improve the segmentation quality. The proposed algorithm is capable of handling VGA-resolution depth images at a frame rate of 6 FPS with a single-thread implementation. Furthermore, due to the multi-threaded design of the algorithm, we achieve a factor of 10 speedup with a GPU implementation.
Optimisation of vectorisation property: A comparative study for a secondary amphipathic peptide.
Konate, Karidia; Lindberg, Mattias F; Vaissiere, Anaïs; Jourdan, Carole; Aldrian, Gudrun; Margeat, Emmanuel; Deshayes, Sébastien; Boisguerin, Prisca
2016-07-25
RNA interference provides a powerful technology for specific gene silencing. Therapeutic applications of small interfering RNA (siRNA), however, require efficient vehicles for stable complexation and intracellular delivery. To enhance their cell delivery, short amphipathic peptides called cell-penetrating peptides (CPPs) have been intensively developed over the last two decades. In this context, the secondary amphipathic peptide CADY has been shown to form stable siRNA complexes and to improve their cellular uptake independently of the endosomal pathway. In the present work, we describe the parameters influencing CADY nanoparticle formation (buffers, excipients, presence of serum, etc.) and follow the CPP:siRNA self-assembly in detail. Once optimal conditions were determined, we compared the ability of seven different CADY analogues to form siRNA-loaded nanoparticles relative to CADY:siRNA. First, we were able to show by biophysical methods that structural polymorphism (α-helix) is an important prerequisite for stable nanoparticle formation, independently of the sequence mutations introduced. Luciferase assays revealed that siRNA complexed to CADY-K (a shorter version) shows better knock-down efficiency in Neuro2a-Luc(+) and B16-F10-Luc(+) cells than CADY:siRNA. Altogether, CADY-K is an ideal candidate for further applications, especially ex vivo or in vivo. PMID:27224007
Fast and Adaptive Lossless Onboard Hyperspectral Data Compression System
NASA Technical Reports Server (NTRS)
Aranki, Nazeeh I.; Keymeulen, Didier; Kimesh, Matthew A.
2012-01-01
Modern hyperspectral imaging systems are able to acquire far more data than can be downlinked from a spacecraft. Onboard data compression helps to alleviate this problem, but requires a system capable of power efficiency and high throughput. Software solutions have limited throughput performance and are power-hungry. Dedicated hardware solutions can provide both high throughput and power efficiency, while taking the load off of the main processor. Thus a hardware compression system was developed. The implementation uses a field-programmable gate array (FPGA). The implementation is based on the fast lossless (FL) compression algorithm reported in Fast Lossless Compression of Multispectral-Image Data (NPO-42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26, which achieves excellent compression performance and has low complexity. This algorithm performs predictive compression using an adaptive filtering method, and uses adaptive Golomb coding. The implementation also packetizes the coded data. The FL algorithm is well suited for implementation in hardware. In the FPGA implementation, one sample is compressed every clock cycle, which makes for a fast and practical realtime solution for space applications. Benefits of this implementation are: 1) The underlying algorithm achieves a combination of low complexity and compression effectiveness that exceeds that of techniques currently in use. 2) The algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. 3) Hardware acceleration provides a throughput improvement of 10 to 100 times vs. the software implementation. A prototype of the compressor is available in software, but it runs at a speed that does not meet spacecraft requirements. The hardware implementation targets the Xilinx Virtex IV FPGAs, and makes the use of this compressor practical for Earth satellites as well as beyond-Earth missions with hyperspectral instruments.
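The FL algorithm's entropy coder is adaptive Golomb coding. A minimal non-adaptive Golomb-Rice sketch of the underlying code (the signed-residual mapping and per-sample parameter adaptation used by FL, and the packetization step, are omitted):

```python
def rice_encode(value, k):
    """Golomb-Rice code: unary quotient, then k remainder bits (nonnegative ints)."""
    q = value >> k
    r = value & ((1 << k) - 1)
    bits = "1" * q + "0"
    if k:
        bits += format(r, "0" + str(k) + "b")
    return bits

def rice_decode(bits, k):
    """Returns (value, number_of_bits_consumed)."""
    q = 0
    while bits[q] == "1":
        q += 1
    pos = q + 1  # skip the terminating '0'
    r = int(bits[pos:pos + k], 2) if k else 0
    return (q << k) | r, pos + k
```

Rice codes need no code table, and encoding one sample is a handful of shifts and masks, which is part of what makes one-sample-per-clock hardware pipelines practical.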
Verification of New Floating Capabilities in FAST v8: Preprint
Wendt, F.; Robertson, A.; Jonkman, J.; Hayman, G.
2015-01-01
In the latest release of NREL's wind turbine aero-hydro-servo-elastic simulation software, FAST v8, several new capabilities and major changes were introduced. FAST has been significantly altered to improve the simulator's modularity and to include new functionalities in the form of modules in the FAST v8 framework. This paper is focused on the improvements made for the modeling of floating offshore wind systems. The most significant change was to the hydrodynamic load calculation algorithms, which are embedded in the HydroDyn module. HydroDyn is now capable of applying strip-theory (via an extension of Morison's equation) at the member level for user-defined geometries. Users may now use a strip-theory-only approach for applying the hydrodynamic loads, as well as the previous potential-flow (radiation/diffraction) approach and a hybrid combination of both methods (radiation/diffraction and the drag component of Morison's equation). Second-order hydrodynamic implementations in both the wave kinematics used by the strip-theory solution and the wave-excitation loads in the potential-flow solution were also added to HydroDyn. The new floating capabilities were verified through a direct code-to-code comparison. We conducted a series of simulations of the International Energy Agency Wind Task 30 Offshore Code Comparison Collaboration Continuation (OC4) floating semisubmersible model and compared the wind turbine response predicted by FAST v8, the corresponding FAST v7 results, and results from other participants in the OC4 project. We found good agreement between FAST v7 and FAST v8 when using the linear radiation/diffraction modeling approach. The strip-theory-based approach inherently differs from the radiation/diffraction approach used in FAST v7 and we identified and characterized the differences. Enabling the second-order effects significantly improved the agreement between FAST v8 and the other OC4 participants.
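The strip-theory option rests on Morison's equation applied member by member. As a rough illustration only (not HydroDyn's actual implementation, which handles user-defined geometries, axial terms, and second-order kinematics; density, diameter, and coefficients below are assumed), the classic per-unit-length load on a circular member is:

```python
import math

def morison_per_length(u, dudt, diameter, rho=1025.0, cm=2.0, cd=1.0):
    """Morison load per unit length on a circular cylinder:
    inertia term  rho * Cm * (pi D^2 / 4) * du/dt
    drag term     0.5 * rho * Cd * D * u * |u|   (quadratic, sign-preserving)
    u, dudt: local fluid velocity and acceleration normal to the member."""
    area = math.pi * diameter ** 2 / 4.0
    return rho * cm * area * dudt + 0.5 * rho * cd * diameter * u * abs(u)
```

The hybrid modeling approach described above combines the drag term of this equation with potential-flow radiation/diffraction loads, since the latter already capture the inertia-type contributions.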
Fast Feedback in Classroom Practice
ERIC Educational Resources Information Center
Emmett, Katrina; Klaassen, Kees; Eijkelhof, Harrie
2009-01-01
In this article we describe one application of the fast feedback method (see Berg 2003 "Aust. Sci. Teach. J." 28-34) in secondary mechanics education. Two teachers tried out a particular sequence twice, in consecutive years, once with and once without the use of fast feedback. We found the method to be successful, and the data that we obtained…
Fast Fuzzy Arithmetic Operations
NASA Technical Reports Server (NTRS)
Hampton, Michael; Kosheleva, Olga
1997-01-01
In engineering applications of fuzzy logic, the main goal is not to simulate the way the experts really think, but to come up with a good engineering solution that would (ideally) be better than the expert's control. In such applications, it makes perfect sense to restrict ourselves to simplified approximate expressions for membership functions. If we need to perform arithmetic operations with the resulting fuzzy numbers, then we can use simple and fast algorithms that are known for operations with simple membership functions. In other applications, especially ones related to the humanities, simulating experts is one of the main goals. In such applications, we must use membership functions that capture every nuance of the expert's opinion; these functions are therefore complicated, and fuzzy arithmetic operations with the corresponding fuzzy numbers become a computational problem. In this paper, we design a new algorithm for performing such operations. This algorithm is applicable in the case when the negative logarithms −log(u(x)) of the membership functions u(x) are convex, and it reduces computation time from O(n^2) to O(n log(n)) (where n is the number of points x at which we know the membership functions u(x)).
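For reference, the O(n^2) baseline the paper improves on is the sup-min (extension-principle) computation on discretised membership functions; a sketch (grid step and triangular shapes are illustrative):

```python
def fuzzy_add(a, b, step=0.5):
    """Zadeh extension-principle (sup-min) addition of discretised fuzzy numbers.
    a, b: dicts mapping grid points to membership values. This is the O(n^2)
    baseline; the paper's method reaches O(n log n) when -log(u(x)) is convex."""
    out = {}
    for x, ma in a.items():
        for y, mb in b.items():
            z = round((x + y) / step) * step  # snap the sum back onto the grid
            out[z] = max(out.get(z, 0.0), min(ma, mb))
    return out

# Triangular fuzzy numbers "about 2" and "about 3" on a 0.5-spaced grid.
about2 = {1.0: 0.0, 1.5: 0.5, 2.0: 1.0, 2.5: 0.5, 3.0: 0.0}
about3 = {2.0: 0.0, 2.5: 0.5, 3.0: 1.0, 3.5: 0.5, 4.0: 0.0}
about5 = fuzzy_add(about2, about3)
```

For these triangular inputs the sup-min sum is the expected triangular number "about 5", peaking with membership 1 at z = 5.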
Responder fast steering mirror
NASA Astrophysics Data System (ADS)
Bullard, Andrew; Shawki, Islam
2013-09-01
Raytheon Space and Airborne Systems (SAS) has designed, built and tested a 3.3-inch diameter fast steering mirror (FSM) for space application. This 2-axis FSM operates over a large angle (a range of over 10 degrees), has a very high servo bandwidth (over 3.3 kHz closed-loop bandwidth), has nanoradian-class noise, and is designed to support microradian-class line-of-sight accuracy. The FSM maintains excellent performance, including wavefront error, over large temperature ranges, and has very high reliability with the help of fully redundant angle sensors and actuator circuits. The FSM is capable of achieving all its design requirements while also being reaction-compensated. The reaction compensation is achieved passively and does not need a separate control loop. The FSM has undergone various environmental tests, including exported force and torque measurements and thermal-vacuum testing, that support the FSM design claims. This paper presents the mechanical design and test results of the mechanism, which satisfies the rigorous vacuum and space application requirements.
NASA Astrophysics Data System (ADS)
Graham, Jeffrey
2005-10-01
A bolometer with microsecond-scale response time is under construction for the Caltech spheromak experiment to measure radiation from a ~20 μs duration plasma discharge emitting ~10^2-10^3 kW/m^2. A gold film several micrometers thick absorbs the radiation and heats up, and the consequent change in resistance can be measured. The film itself is vacuum deposited upon a glass slide. Several geometries for the film are under consideration to optimize the amount of radiation absorbed, the response time, and the signal-to-noise ratio. We measure the change in voltage across the film for a known current driven through it; a short square pulse (3-30 A, ~20 μs) is used to avoid excessive Joule heating. Results from prototypes tested with a UV flashlamp will be presented. After optimizing the bolometer design, the final vacuum-compatible diagnostic would consist of a plasma-facing bolometer and a reference in a camera obscura. This device could provide a design for fast resistive bolometry.
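A back-of-envelope estimate shows why a millivolt-scale signal is plausible for such a film; every number below (film area, resistance, sense current, the gold property values) is assumed for illustration, not taken from the experiment:

```python
# Rough signal estimate for a thin-film resistive bolometer.
alpha = 0.0034      # 1/K, temperature coefficient of resistance of gold (approx.)
rho_gold = 19300.0  # kg/m^3, density of gold
c_gold = 129.0      # J/(kg K), specific heat of gold
thickness = 3e-6    # m ("several micrometers")
area = 1e-4         # m^2 of absorbing film (assumed)
flux = 5e5          # W/m^2, within the quoted ~10^2-10^3 kW/m^2 range
pulse = 20e-6       # s, discharge duration

energy = flux * area * pulse        # J absorbed (perfect absorption assumed)
mass = rho_gold * thickness * area  # kg of film
delta_T = energy / (mass * c_gold)  # K, temperature rise of the film
R, I = 1.0, 10.0                    # ohm film resistance, A sense current (assumed)
delta_V = I * R * alpha * delta_T   # V, signal: dV = I * dR = I * R * alpha * dT
```

With these assumptions the film warms by roughly a kelvin during the discharge, giving a tens-of-millivolts voltage change, comfortably measurable on microsecond scales.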
Implementing a Corporate Weblog for SAP
NASA Astrophysics Data System (ADS)
Broß, Justus; Quasthoff, Matthias; MacNiven, Sean; Zimmermann, Jürgen; Meinel, Christoph
After web 2.0 technologies experienced phenomenal expansion and high acceptance among private users, consideration is now intensifying as to whether they can be equally applicable, beneficially employed and meaningfully implemented in an entrepreneurial context. The fast-paced rise of social software such as weblogs and wikis, and the resulting new form of communication via the Internet, is however viewed ambivalently in the corporate environment. The particular choice of platform or technology to be implemented in this field is therefore strongly dependent on its future business case and field of deployment, and should be carefully considered beforehand, as this paper strongly suggests.
Notes on implementation of sparsely distributed memory
NASA Technical Reports Server (NTRS)
Keeler, J. D.; Denning, P. J.
1986-01-01
The Sparsely Distributed Memory (SDM) developed by Kanerva is an unconventional memory design with very interesting and desirable properties. The memory works in a manner that is closely related to modern theories of human memory. The SDM model is discussed in terms of its implementation in hardware. Two appendices discuss the unconventional approaches of the SDM: Appendix A treats a resistive circuit for fast, parallel address decoding; and Appendix B treats a systolic array for high throughput read and write operations.
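A software sketch of Kanerva's SDM read/write cycle (the dimension, location count, and activation radius below are arbitrary illustrative choices; the `activated` scan is what Appendix A's resistive circuit performs in parallel, and the counter sums are what Appendix B's systolic array streams):

```python
import random

random.seed(0)
DIM, LOCS, RADIUS = 256, 200, 128  # word size, hard locations, activation radius

addresses = [[random.randint(0, 1) for _ in range(DIM)] for _ in range(LOCS)]
counters = [[0] * DIM for _ in range(LOCS)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def activated(addr):
    """All hard locations within RADIUS of addr (parallel decoding in hardware)."""
    return [i for i in range(LOCS) if hamming(addresses[i], addr) <= RADIUS]

def write(addr, data):
    """Add the data word, in bipolar form, into every activated location."""
    for i in activated(addr):
        for j in range(DIM):
            counters[i][j] += 1 if data[j] else -1

def read(addr):
    """Sum counters over activated locations and threshold bitwise."""
    act = activated(addr)
    sums = [sum(counters[i][j] for i in act) for j in range(DIM)]
    return [1 if s > 0 else 0 for s in sums]
```

With a single stored word, reading at the same address recovers it exactly, since every activated location voted the same way; robustness to noisy cues comes from the overlap of activation sets.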
Opportunity's Fast Progress Southward
NASA Technical Reports Server (NTRS)
2005-01-01
[figures removed for brevity, see original site] Opportunity's Traverse from Landing through Sol 413
As of the Mars Exploration Rover Opportunity's 413th martian day, or sol (March 23, 2005), the robot had driven a total of 4.62 kilometers (2.87 miles) since landing. The red line on this image traces the rover's route. The base image is a mosaic combining images from the Mars Orbiter Camera on NASA's Mars Global Surveyor orbiter, the Thermal Emission Imaging System on NASA's Mars Odyssey orbiter, and Opportunity's own Descent Image Motion Estimation System.
The rover has been making rapid progress southward since it finished examining its jettisoned heat shield on sol 357 (Jan. 24, 2005, one year after landing). Scientists are eager for Opportunity to reach an area to the south called the 'Etched Terrain,' which appears mottled in the map's base images and might offer access to different layers of bedrock than what the rover has seen so far. See figure 1.
As of the Mars Exploration Rover Opportunity's 414th martian day, or sol (March 24, 2005), the robot had driven a total of 4.81 kilometers (2.99 miles) since landing. In the preceding two-month period, Opportunity drove 2.69 kilometers (1.67 miles). As landmarks along the route, it used craters that the rover team informally named for ships of historic voyages of exploration. See figure 2. Figures 1 and 2 are traverse maps overlaid on a mosaic of images from NASA's Mars Global Surveyor and Mars Odyssey orbiters and from Opportunity's descent camera. The scale bar in figure 1 at lower left is 2 kilometers (1.24 miles) long and the scale bar in figure 2 is 1 kilometer (0.62 mile) long.
Fast axis servo for the fast and precise machining of non-rotational symmetric optics
NASA Astrophysics Data System (ADS)
Tian, Fujing; Yin, Ziqiang; Li, Shengyi
2014-08-01
A new long-range tool servo, the fast axis servo (FAS), is developed for fabricating non-rotationally symmetric optical surfaces with millimeters of sag. The mechanical design, motion modeling and development of the FAS device are studied. The FAS consists of a linear motor, aerostatic bearings, a high-resolution encoder and a motion controller. A control strategy consisting of a proportional-integral-derivative (PID) feedback controller and a velocity/acceleration feedforward controller is implemented to achieve the required control performance. Experimental tests have been carried out to verify the performance of the FAS system.
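The control strategy described (PID feedback plus velocity/acceleration feedforward) can be sketched as a discrete-time law; the gains, sample time, and interface below are placeholders, not the authors' tuning:

```python
class PidFeedforward:
    """Discrete PID feedback plus velocity/acceleration feedforward.
    u = Kp*e + Ki*integral(e) + Kd*de/dt + Kv*ref_vel + Ka*ref_acc"""
    def __init__(self, kp, ki, kd, kv, ka, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.kv, self.ka, self.dt = kv, ka, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, ref_pos, ref_vel, ref_acc, meas_pos):
        err = ref_pos - meas_pos
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        feedback = self.kp * err + self.ki * self.integral + self.kd * deriv
        feedforward = self.kv * ref_vel + self.ka * ref_acc
        return feedback + feedforward
```

The feedforward terms supply most of the drive effort for a known tool trajectory, leaving the PID loop to correct only residual error, which is what makes fast, accurate tracking of a non-rotationally symmetric surface profile feasible.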
Fast food: unfriendly and unhealthy.
Stender, S; Dyerberg, J; Astrup, A
2007-06-01
Although nutrition experts might be able to navigate the menus of fast-food restaurant chains and, based on the nutritional information, compose apparently 'healthy' meals, there are still many reasons why frequent fast-food consumption at most chains is unhealthy and contributes to weight gain, obesity, type 2 diabetes and coronary artery disease. Fast food generally has a high energy density, which, together with large portion sizes, induces overconsumption of calories. In addition, we have found it to be a myth that the typical fast-food meal is the same worldwide. Chemical analyses of 74 samples of fast-food menus consisting of French fries and fried chicken (nuggets/hot wings) bought in McDonald's and KFC outlets in 35 countries in 2005-2006 showed that the total fat content of the same menu varies from 41 to 65 g at McDonald's and from 42 to 74 g at KFC. In addition, fast food from major chains in most countries still contains unacceptably high levels of industrially produced trans-fatty acids (IP-TFA). IP-TFA have powerful biological effects and may contribute to increased weight gain, abdominal obesity, type 2 diabetes and coronary artery disease. The food quality and portion sizes need to be improved before it is safe to eat frequently at most fast-food chains. PMID:17452996
[Fast food promotes weight gain].
Stender, Steen; Dyerberg, Jørn; Astrup, Arne V
2007-05-01
The total amount of fat in a fast food menu consisting of French fries and fried Chicken Nuggets from McDonald's and KFC, respectively, bought in 35 different countries varies from 41 to 71 grams. In most countries the menu contained unacceptably high amounts of industrially produced trans fat, which contributes to an increased risk of ischaemic heart disease, weight gain, abdominal fat accumulation and type 2 diabetes. The quality of the ingredients in fast food ought to be better and the portions smaller and less energy-dense, so that frequent fast food meals do not increase the risk of obesity and disease among customers. PMID:17537359
[Artificial nutrition and preoperative fasting].
Francq, B; Sohawon, S; Perlot, I; Sekkat, H; Noordally, S O
2012-01-01
Preoperative fasting has been a widely adopted measure since Mendelson's report identifying aspiration pneumonia as a cause of death following general anesthesia. From a metabolic point of view, fasting is detrimental because surgery in itself causes a state of hypercatabolism and hyperglycemia as a result of insulin resistance. Preoperative fasting has become almost obsolete in certain elective surgical procedures. In these cases the use of clear liquids is now well established, and this paper focuses on the safe use of clear fluids, postoperative insulin resistance, patient comfort and postoperative outcome, as well as the effect on length of stay. PMID:22812052
The fast debris evolution model
NASA Astrophysics Data System (ADS)
Lewis, H. G.; Swinerd, G. G.; Newland, R. J.; Saunders, A.
2009-09-01
The 'particles-in-a-box' (PIB) model introduced by Talent [Talent, D.L. Analytic model for orbital debris environmental management. J. Spacecraft Rocket, 29 (4), 508-513, 1992.] removed the need for computer-intensive Monte Carlo simulation to predict the gross characteristics of an evolving debris environment. The PIB model was described using a differential equation that allows the stability of the low Earth orbit (LEO) environment to be tested by a straightforward analysis of the equation's coefficients. As part of an ongoing research effort to investigate more efficient approaches to evolutionary modelling and to develop a suite of educational tools, a new PIB model has been developed. The model, entitled Fast Debris Evolution (FADE), employs a first-order differential equation to describe the rate at which new objects ⩾10 cm are added and removed from the environment. Whilst Talent [Talent, D.L. Analytic model for orbital debris environmental management. J. Spacecraft Rocket, 29 (4), 508-513, 1992.] based the collision theory for the PIB approach on collisions between gas particles and adopted specific values for the parameters of the model from a number of references, the form and coefficients of the FADE model equations can be inferred from the outputs of future projections produced by high-fidelity models, such as the DAMAGE model. The FADE model has been implemented as a client-side, web-based service using JavaScript embedded within an HTML document. Due to the simple nature of the algorithm, FADE can deliver the results of future projections immediately in a graphical format, with complete user-control over key simulation parameters. Historical and future projections for the ⩾10 cm LEO debris environment under a variety of different scenarios are possible, including business as usual, no future launches, post-mission disposal and remediation. A selection of results is presented with comparisons with predictions made using the DAMAGE environment model.
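A PIB-style model of this kind reduces to integrating a scalar first-order ODE of the general form dN/dt = A + BN + CN² (deposition, linear removal, and pairwise collision terms), which is why it can run instantly even in a browser. A hedged sketch (the coefficients below are illustrative, not FADE's DAMAGE-derived values):

```python
def fade(n0, a, b, c, t_end, dt=0.01):
    """Integrate dN/dt = A + B*N + C*N^2 by explicit Euler.
    A: deposition from launches, B: linear removal (e.g. decay/drag),
    C: pairwise collision term. Signs and magnitudes are scenario-dependent
    and, in a FADE-style model, inferred from high-fidelity model outputs."""
    n = float(n0)
    for _ in range(int(round(t_end / dt))):
        n += dt * (a + b * n + c * n * n)
    return n
```

Talent's stability analysis corresponds to examining the roots of A + BN + CN²: a stable equilibrium population exists when the quadratic has a root that the trajectory approaches from below.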
The Fast Debris Evolution Model
NASA Astrophysics Data System (ADS)
Lewis, Hugh G.; Swinerd, Graham; Newland, Rebecca; Saunders, Arrun
The ‘Particles-in-a-box' (PIB) model introduced by Talent (1992) removed the need for computerintensive Monte Carlo simulation to predict the gross characteristics of an evolving debris environment. The PIB model was described using a differential equation that allows the stability of the low Earth orbit (LEO) environment to be tested by a straightforward analysis of the equation's coefficients. As part of an ongoing research effort to investigate more efficient approaches to evolutionary modelling and to develop a suite of educational tools, a new PIB model has been developed. The model, entitled Fast Debris Evolution (FaDE), employs a first-order differential equation to describe the rate at which new objects (˜ 10 cm) are added and removed from the environment. Whilst Talent (1992) based the collision theory for the PIB approach on collisions between gas particles and adopted specific values for the parameters of the model from a number of references, the form and coefficients of the FaDE model equations can be inferred from the outputs of future projections produced by high-fidelity models, such as the DAMAGE model. The FaDE model has been implemented as a client-side, web-based service using Javascript embedded within a HTML document. Due to the simple nature of the algorithm, FaDE can deliver the results of future projections immediately in a graphical format, with complete user-control over key simulation parameters. Historical and future projections for the ˜ 10 cm low Earth orbit (LEO) debris environment under a variety of different scenarios are possible, including business as usual, no future launches, post-mission disposal and remediation. A selection of results is presented with comparisons with predictions made using the DAMAGE environment model. The results demonstrate that the FaDE model is able to capture comparable time-series of collisions and number of objects as predicted by DAMAGE in several scenarios. Further, and perhaps more importantly
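Since a FADE-style model reduces to a single first-order ODE, its behaviour is easy to reproduce. The sketch below integrates a generic PIB-style equation dN/dt = a + bN + cN^2 with a forward-Euler step; the coefficient values are illustrative placeholders only, not figures taken from FADE or DAMAGE.

```python
# Minimal sketch of a FADE-style particles-in-a-box model.
# Coefficients below are illustrative placeholders, not values from
# the FADE or DAMAGE models.

def fade_step(n, a=100.0, b=-0.02, c=1e-7, dt=1.0):
    """One forward-Euler year-step of dN/dt = a + b*N + c*N**2.

    a : intrinsic deposition rate (launches, explosions), objects/year
    b : removal-rate coefficient (drag, disposal), 1/year
    c : collision growth coefficient, 1/(objects*year)
    """
    return n + dt * (a + b * n + c * n * n)

def project(n0, years, **coeffs):
    """Project the >=10 cm population forward in time."""
    history = [n0]
    for _ in range(years):
        history.append(fade_step(history[-1], **coeffs))
    return history

pop = project(15000.0, 50)   # 50-year projection from a notional population
```

A stability analysis in the spirit of the PIB model then amounts to inspecting the sign of a + bN + cN^2 over the population range of interest.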
Fast and thermal neutron radiography
NASA Astrophysics Data System (ADS)
Cremer, Jay T.; Piestrup, Melvin A.; Wu, Xizeng
2005-09-01
There is a need for high-brightness neutron sources that are portable, relatively inexpensive, and capable of neutron radiography in short imaging times. Fast and thermal neutron radiography is an excellent method to penetrate high-density, high-Z objects and thick objects and to image their interior contents, especially hydrogen-based materials. In this paper we model the expected imaging performance characteristics and limitations of fast and thermal radiography systems employing a Rose-model-based transfer analysis. Plastic fiber array scintillators or liquid-scintillator-filled capillary arrays are employed for fast neutron detection, and 6Li-doped ZnS(Cu) phosphors are employed for thermal neutron detection. These simulations can provide guidance in the design, construction, and testing of neutron imaging systems. In particular, we determined, for a range of slab thicknesses, the range of thicknesses of embedded cracks (air-filled or filled with a material such as water) that can be detected and imaged.
Fast Access Data Acquisition System
Dr. Vladimir Katsman
1998-03-17
Our goal in this program is to develop a Fast Access Data Acquisition System (FADAS) by combining the flexibility of Multilink's GaAs and InP electronics and electro-optics with an extremely high data rate for the efficient handling and transfer of collider experimental data. This novel solution is based on Multilink's and Los Alamos National Laboratory's (LANL) unique components and technologies for extremely fast data transfer, storage, and processing.
Fast-Tracking Colostomy Closures.
Nanavati, Aditya J; Prabhakar, Subramaniam
2015-12-01
There have been very few studies on applying fast-track principles to colostomy closures. We believe that outcomes may be significantly improved with multimodal interventions in the peri-operative care of patients undergoing this procedure. A retrospective study was carried out comparing patients who had undergone colostomy closures under the fast-track and traditional care protocols at our centre. We intended to analyse the peri-operative period and recovery in colostomy closures to confirm that fast-track surgery principles improve outcomes. Twenty-six patients in the fast-track arm and 24 patients in the traditional care arm had undergone colostomy closures. Both groups were comparable in terms of their baseline parameters. Patients in the fast-track group were ambulatory and accepted oral feeding earlier. There was a significant reduction in the duration of stay (4.73 ± 1.43 days vs. 7.21 ± 1.38 days, p = 0.0000). We did not observe a rise in complications or 30-day re-admissions. Fast-track surgery can safely be applied to colostomy closures. It shows earlier ambulation and a reduction in length of hospital stay. PMID:27011527
Fast Hadamard Spectroscopic Imaging Techniques
NASA Astrophysics Data System (ADS)
Goelman, G.
1994-07-01
Fast Hadamard spectroscopic imaging (HSI) techniques are presented. These techniques combine transverse and longitudinal encoding to obtain multiple-volume localization. The fast techniques are optimized for nuclei with short T2 and long T1 relaxation times and are therefore suitable for in vivo 31P spectroscopy. When volume coils are used in fast HSI techniques, the signal-to-noise ratio per unit time (SNRT) is equal to the SNRT in regular HSI techniques. When surface coils are used, fast HSI techniques give a significant improvement in SNRT over conventional HSI. Several fast techniques that differ in total experimental time and pulse demands are presented. When the number of acquisitions in a single repetition time is not higher than two, fast HSI techniques can be used with surface coils and the B1 inhomogeneity does not affect the localization. Surface-coil experiments on phantoms and on human calf muscles in vivo are presented. In addition, it is shown that the localization obtained by the HSI techniques is independent of the repetition time.
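The encoding idea behind HSI can be sketched in a few lines: volumes are excited with the ± phase patterns given by the rows of a Hadamard matrix, and individual volume signals are recovered by applying the (scaled) transpose. This is a generic Hadamard-multiplexing illustration, not the pulse-sequence implementation of the paper.

```python
import numpy as np

# Generic Hadamard multiplexing sketch: n volumes are encoded with +/-
# patterns from a Hadamard matrix and decoded with H.T / n. Signal
# values are arbitrary illustrative numbers.

def hadamard(n):
    """Sylvester construction; n must be a power of two."""
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

n = 4
volumes = np.array([3.0, 1.0, 4.0, 1.5])   # true signal of each volume
H = hadamard(n)
acquired = H @ volumes                      # n encoded acquisitions
decoded = H.T @ acquired / n                # recover individual volumes
```

Because every acquisition contains signal from every volume, the per-volume noise averages down, which is the source of the SNRT advantage mentioned above.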
DC Stromswold; AJ Peurrung; RR Hansen; PL Reeder
2000-01-18
Direct fast-neutron detection is the detection of fast neutrons before they are moderated to thermal energy. We have investigated two approaches for using proton recoil in plastic scintillators to detect fast neutrons and distinguish them from gamma-ray interactions. Both approaches use the difference in travel speed between neutrons and gamma rays as the basis for separating the types of events. In the first method, we examined the pulses generated during scattering in a plastic scintillator to see if they provide a means for distinguishing fast-neutron events from gamma-ray events. The slower speed of neutrons compared to gamma rays results in the production of broader pulses when neutrons scatter several times within a plastic scintillator. In contrast, gamma-ray interactions should produce narrow pulses, even if multiple scattering takes place, because the time between successive scatterings is small. Experiments using a fast scintillator confirmed the presence of broader pulses from neutrons than from gamma rays. However, the difference in pulse widths between neutrons and gamma rays using the best commercially available scintillators was not sufficiently large to provide a practical means for distinguishing fast neutrons and gamma rays on a pulse-by-pulse basis. A faster scintillator is needed, and such a scintillator may become available. Results of the pulse-width studies were presented in a previous report (Peurrung et al. 1998), and they are only summarized here.
IMPLEMENTATION REVIEW LETTERS, 2002
The following letters provide a summary of the Environmental Protection Agency's comments regarding the 2002 Implementation Review of nineteen estuary programs in the National Estuary Program. Various strengths within the programs included use of implementation progress and tracking s...
Implementing Student Information Systems
ERIC Educational Resources Information Center
Sullivan, Laurie; Porter, Rebecca
2006-01-01
Implementing an enterprise resource planning system is a complex undertaking. Careful planning, management, communication, and staffing can make the difference between a successful and unsuccessful implementation. (Contains 3 tables.)
MATLAB tensor classes for fast algorithm prototyping.
Bader, Brett William; Kolda, Tamara Gibson
2004-10-01
Tensors (also known as multidimensional arrays or N-way arrays) are used in a variety of applications ranging from chemometrics to psychometrics. We describe four MATLAB classes for tensor manipulations that can be used for fast algorithm prototyping. The tensor class extends the functionality of MATLAB's multidimensional arrays by supporting additional operations such as tensor multiplication. The tensor_as_matrix class supports the 'matricization' of a tensor, i.e., the conversion of a tensor to a matrix (and vice versa), a commonly used operation in many algorithms. Two additional classes represent tensors stored in decomposed formats: cp_tensor and tucker_tensor. We describe all of these classes and then demonstrate their use by showing how to implement several tensor algorithms that have appeared in the literature.
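As a rough illustration of the matricization operation described above, the following NumPy sketch unfolds a tensor along a chosen mode and folds it back; the function names are ours, not the paper's MATLAB classes.

```python
import numpy as np

# Sketch of 'matricization' (mode-n unfolding) and its inverse, using
# plain NumPy rather than the paper's MATLAB classes. Function names
# (unfold/fold) are our own.

def unfold(tensor, mode):
    """Mode-n unfolding: the chosen mode becomes the rows, all other
    modes are flattened into the columns."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def fold(matrix, mode, shape):
    """Inverse of unfold for a tensor of the given shape."""
    moved = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(matrix.reshape(moved), 0, mode)

t = np.arange(24).reshape(2, 3, 4)
m = unfold(t, 1)          # a 3 x 8 matrix
```

Many tensor algorithms reduce to ordinary matrix algebra on such unfoldings, which is why the operation is singled out above.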
Durandal: fast exact clustering of protein decoys.
Berenger, Francois; Shrestha, Rojan; Zhou, Yong; Simoncini, David; Zhang, Kam Y J
2012-02-01
In protein folding, clustering is commonly used as one way to identify the best decoy produced. Initializing the pairwise distance matrix for a large decoy set is computationally expensive. We have proposed a fast method that works even on large decoy sets. This method is implemented in software called Durandal. Durandal has been shown to be consistently faster than other software performing fast exact clustering. In some cases, Durandal can even outperform the speed of an approximate method. Durandal uses the triangular inequality to accelerate exact clustering, without compromising the distance function. Recently, we have further enhanced the performance of Durandal by incorporating a quaternion-based characteristic polynomial method that has increased the speed of Durandal by between 13% and 27% compared with the previous version. Durandal source code is available under the GNU General Public License at http://www.riken.jp/zhangiru/software/durandal_released_qcp.tgz. Alternatively, a compiled version of Durandal is also distributed with the nightly builds of the Phenix (http://www.phenix-online.org/) crystallographic software suite (Adams et al., Acta Crystallogr Sect D 2010, 66, 213). PMID:22120171
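The triangular-inequality idea can be sketched as follows: once distances from every decoy to a reference are known, lower and upper bounds on each pairwise distance often settle "within the cutoff?" without computing that distance exactly. Euclidean distance stands in here for the RMSD used on real decoys, and the single-reference scheme is a simplification of Durandal's actual bookkeeping.

```python
import numpy as np

# Triangle-inequality pruning sketch: with d(ref,a) and d(ref,b) known,
# |d(ref,a)-d(ref,b)| <= d(a,b) <= d(ref,a)+d(ref,b). If a bound already
# decides "d(a,b) < cutoff?", the exact distance is never computed.
# Euclidean distance stands in for RMSD; one reference is a simplification.

def cluster_edges(points, cutoff):
    n = len(points)
    d_ref = np.linalg.norm(points - points[0], axis=1)  # n-1 exact distances
    edges, exact = set(), 0
    for a in range(n):
        for b in range(a + 1, n):
            lo = abs(d_ref[a] - d_ref[b])
            hi = d_ref[a] + d_ref[b]
            if lo >= cutoff:          # provably too far apart
                continue
            if hi < cutoff:           # provably within the cutoff
                edges.add((a, b))
                continue
            exact += 1                # bounds inconclusive: compute exactly
            if np.linalg.norm(points[a] - points[b]) < cutoff:
                edges.add((a, b))
    return edges, exact
```

The result is exact (no distance is ever approximated), which mirrors the "without compromising the distance function" claim above.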
Observing pulsars and fast transients with LOFAR
NASA Astrophysics Data System (ADS)
Stappers, B. W.; Hessels, J. W. T.; Alexov, A.; Anderson, K.; Coenen, T.; Hassall, T.; Karastergiou, A.; Kondratiev, V. I.; Kramer, M.; van Leeuwen, J.; Mol, J. D.; Noutsos, A.; Romein, J. W.; Weltevrede, P.; Fender, R.; Wijers, R. A. M. J.; Bähren, L.; Bell, M. E.; Broderick, J.; Daw, E. J.; Dhillon, V. S.; Eislöffel, J.; Falcke, H.; Griessmeier, J.; Law, C.; Markoff, S.; Miller-Jones, J. C. A.; Scheers, B.; Spreeuw, H.; Swinbank, J.; Ter Veen, S.; Wise, M. W.; Wucknitz, O.; Zarka, P.; Anderson, J.; Asgekar, A.; Avruch, I. M.; Beck, R.; Bennema, P.; Bentum, M. J.; Best, P.; Bregman, J.; Brentjens, M.; van de Brink, R. H.; Broekema, P. C.; Brouw, W. N.; Brüggen, M.; de Bruyn, A. G.; Butcher, H. R.; Ciardi, B.; Conway, J.; Dettmar, R.-J.; van Duin, A.; van Enst, J.; Garrett, M.; Gerbers, M.; Grit, T.; Gunst, A.; van Haarlem, M. P.; Hamaker, J. P.; Heald, G.; Hoeft, M.; Holties, H.; Horneffer, A.; Koopmans, L. V. E.; Kuper, G.; Loose, M.; Maat, P.; McKay-Bukowski, D.; McKean, J. P.; Miley, G.; Morganti, R.; Nijboer, R.; Noordam, J. E.; Norden, M.; Olofsson, H.; Pandey-Pommier, M.; Polatidis, A.; Reich, W.; Röttgering, H.; Schoenmakers, A.; Sluman, J.; Smirnov, O.; Steinmetz, M.; Sterks, C. G. M.; Tagger, M.; Tang, Y.; Vermeulen, R.; Vermaas, N.; Vogt, C.; de Vos, M.; Wijnholds, S. J.; Yatawatta, S.; Zensus, A.
2011-06-01
Low frequency radio waves, while challenging to observe, are a rich source of information about pulsars. The LOw Frequency ARray (LOFAR) is a new radio interferometer operating in the lowest four octaves of the ionospheric "radio window" (10-240 MHz) that will greatly facilitate observing pulsars at low radio frequencies. Through its huge collecting area, long baselines, and flexible digital hardware, it is expected that LOFAR will revolutionize radio astronomy at the lowest frequencies visible from Earth. LOFAR is a next-generation radio telescope and a pathfinder to the Square Kilometre Array (SKA), in that it incorporates advanced multi-beaming techniques between thousands of individual elements. We discuss the motivation for low-frequency pulsar observations in general and the potential of LOFAR in addressing these science goals. We present LOFAR as it is designed to perform high-time-resolution observations of pulsars and other fast transients, and outline the various relevant observing modes and data reduction pipelines that are already or will soon be implemented to facilitate these observations. A number of results obtained from commissioning observations are presented to demonstrate the exciting potential of the telescope. This paper outlines the case for low frequency pulsar observations and is also intended to serve as a reference for upcoming pulsar/fast transient science papers with LOFAR.
Fast Multipole Methods for Particle Dynamics.
Kurzak, Jakub; Pettitt, Bernard M.
2006-08-30
The research described in this product was performed in part in the Environmental Molecular Sciences Laboratory, a national scientific user facility sponsored by the Department of Energy's Office of Biological and Environmental Research and located at Pacific Northwest National Laboratory. The growth of simulations of particle systems has been aided by advances in computer speed and algorithms. The adoption of O(N) algorithms to solve N-body simulation problems has been less rapid due to the fact that such scaling was only competitive for relatively large N. Our work seeks to find algorithmic modifications and practical implementations for intermediate values of N in typical use for molecular simulations. This article reviews fast multipole techniques for calculation of electrostatic interactions in molecular systems. The basic mathematics behind fast summations applied to long ranged forces is presented along with advanced techniques for accelerating the solution, including our most recent developments. The computational efficiency of the new methods facilitates both simulations of large systems as well as longer and therefore more realistic simulations of smaller systems.
Fast swept-volume distance for robust collision detection
Xavier, P.G.
1997-04-01
The need for collision detection arises in several robotics areas, including motion-planning, online collision avoidance, and simulation. At the heart of most current methods are algorithms for interference detection and/or distance computation. A few recent algorithms and implementations are very fast, but to use them for accurate collision detection, very small step sizes can be necessary, reducing their effective efficiency. We present a fast, implemented technique for doing exact distance computation and interference detection for translationally-swept bodies. For rotationally swept bodies, we adapt this technique to improve accuracy, for any given step size, in distance computation and interference detection. We present preliminary experiments that show that the combination of basic and swept-body calculations holds much promise for faster accurate collision detection.
Fast Hybrid Silicon Double-Quantum-Dot Qubit
NASA Astrophysics Data System (ADS)
Shi, Zhan; Simmons, C. B.; Prance, J. R.; Gamble, John King; Koh, Teck Seng; Shim, Yun-Pil; Hu, Xuedong; Savage, D. E.; Lagally, M. G.; Eriksson, M. A.; Friesen, Mark; Coppersmith, S. N.
2012-04-01
We propose a quantum dot qubit architecture that has an attractive combination of speed and fabrication simplicity. It consists of a double quantum dot with one electron in one dot and two electrons in the other. The qubit itself is a set of two states with total spin quantum numbers S^2 = 3/4 (S = 1/2) and S_z = -1/2, with the two different states being singlet and triplet in the doubly occupied dot. Gate operations can be implemented electrically and the qubit is highly tunable, enabling fast implementation of one- and two-qubit gates in a simpler geometry and with fewer operations than in other proposed quantum dot qubit architectures with fast operations. Moreover, the system has potentially long decoherence times. These are all extremely attractive properties for use in quantum information processing devices.
Implementation of polyatomic MCTDHF capability
NASA Astrophysics Data System (ADS)
Haxton, Daniel; Jones, Jeremiah; Rescigno, Thomas; McCurdy, C. William; Ibrahim, Khaled; Williams, Sam; Vecharynski, Eugene; Rouet, Francois-Henry; Li, Xiaoye; Yang, Chao
2015-05-01
The implementation of the Multiconfiguration Time-Dependent Hartree-Fock method for polyatomic molecules using a cartesian product grid of sinc basis functions will be discussed. The focus will be on two key components of the method: first, the use of a resolution-of-the-identity approximation; second, the use of established techniques for triple Toeplitz matrix algebra using fast Fourier transform over distributed memory architectures (MPI 3D FFT). The scaling of two-electron matrix element transformations is converted from O(N^4) to O(N log N) by including these components. Here N = n^3, with n the number of points on a side. We test the preliminary implementation by calculating absorption spectra of small hydrocarbons, using approximately 16-512 points on a side. This work is supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, under the Early Career program, and by the offices of BES and Advanced Scientific Computing Research, under the SciDAC program.
Implementation of global energy sustainability
Grob, G.R.
1998-02-01
The term energy sustainability emerged from the UN Conference on Environment and Development in Rio 1992, when Agenda 21 was formulated and the Global Energy Charter proclaimed. Emission reductions, total energy costing, improved energy efficiency, and sustainable energy systems are the four fundamental principles of the charter. These principles can be implemented in the proposed financial, legal, technical, and education framework. Much has been done in many countries toward the implementation of the Global Energy Charter, but progress has not been fast enough to ease the disastrous effects of the too many ill-conceived energy systems on the environment, climate, and health. Global warming is accelerating, and pollution is worsening, especially in developing countries with their hunger for energy to meet the needs of economic development. Asian cities are now beating all pollution records, and greenhouse gases are visibly changing the climate with rising sea levels, retracting glaciers, and record weather disasters. This article presents why and how energy investments and research money have to be rechanneled into sustainable energy, rather than into the business-as-usual of depleting, unsustainable energy concepts exceeding one trillion dollars per year. This largest of all investment sectors needs much more attention.
Fast sampling algorithm for Lie-Trotter products.
Predescu, Cristian
2005-04-01
A fast algorithm for path sampling in path-integral Monte Carlo simulations is proposed. The algorithm utilizes the Lévy-Ciesielski implementation of Lie-Trotter products to achieve a mathematically proven computational cost of n log_2(n) in the number of time slices n, despite the fact that each path variable is updated separately, for reasons of optimality. In this respect, we demonstrate that updating a group of random variables simultaneously results in a loss of efficiency. PMID:15903719
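A minimal sketch of the Lévy-Ciesielski (midpoint) construction referenced above: the interior variables of a Brownian bridge are generated level by level, each midpoint conditioned only on its two endpoints, which is what allows individual path variables to be updated cheaply. This is a generic illustration of the construction, not the paper's sampling algorithm itself.

```python
import numpy as np

# Levy-Ciesielski / midpoint construction of a standard Brownian bridge
# on [0, 1] with both endpoints pinned to zero. Each midpoint is drawn
# conditional on its interval endpoints: mean = average of endpoints,
# variance = (interval length) / 4.

def sample_bridge(k, rng):
    """Return a bridge path sampled on 2**k + 1 equally spaced grid points."""
    n = 2 ** k
    path = np.zeros(n + 1)
    step = n
    while step > 1:
        half = step // 2
        for left in range(0, n, step):
            mid, right = left + half, left + step
            mean = 0.5 * (path[left] + path[right])
            var = half / (2.0 * n)        # = (step/n) / 4, interval length / 4
            path[mid] = mean + np.sqrt(var) * rng.standard_normal()
        step = half
    return path
```

Because each variable depends only on its two bracketing points, a single-variable update touches O(log n) conditional structure rather than the whole path.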
Fast feature identification for holographic tracking: the orientation alignment transform.
Krishnatreya, Bhaskar Jyoti; Grier, David G
2014-06-01
The concentric fringe patterns created by features in holograms may be associated with a complex-valued orientational order field. Convolution with an orientational alignment operator then identifies centers of symmetry that correspond to the two-dimensional positions of the features. Feature identification through orientational alignment is reminiscent of voting algorithms such as Hough transforms, but may be implemented with fast convolution methods, and so can be orders of magnitude faster. PMID:24921472
Fast Transform Decoding Of Nonsystematic Reed-Solomon Codes
NASA Technical Reports Server (NTRS)
Truong, Trieu-Kie; Cheung, Kar-Ming; Shiozaki, A.; Reed, Irving S.
1992-01-01
Fast, efficient Fermat number transform used to compute F'(x) analogous to computation of syndrome in conventional decoding scheme. Eliminates polynomial multiplications and reduces number of multiplications in reconstruction of F'(x) to n log (n). Euclidean algorithm used to evaluate F(x) directly, without going through intermediate steps of solving error-locator and error-evaluator polynomials. Algorithm suitable for implementation in very-large-scale integrated circuits.
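To illustrate what a Fermat number transform looks like, here is a toy transform over the Fermat prime F3 = 257, where 3 is a primitive root, so w = 3**(256//n) mod 257 is a principal n-th root of unity. Practical decoders use larger moduli and fast butterfly structures; the direct O(n^2) sum below is only for clarity.

```python
# Toy Fermat number transform over F3 = 2**8 + 1 = 257. All arithmetic
# is exact integer arithmetic mod 257; n must divide 256. A real
# implementation would use an FFT-style butterfly, not this direct sum.

P = 257  # Fermat prime 2**8 + 1; 3 is a primitive root mod 257

def fnt(a, inverse=False):
    n = len(a)                       # n must divide 256
    w = pow(3, 256 // n, P)          # principal n-th root of unity mod P
    if inverse:
        w = pow(w, P - 2, P)         # modular inverse of w (Fermat's little thm)
    out = [sum(a[j] * pow(w, i * j, P) for j in range(n)) % P
           for i in range(n)]
    if inverse:
        n_inv = pow(n, P - 2, P)
        out = [(v * n_inv) % P for v in out]
    return out

data = [5, 0, 3, 7]
roundtrip = fnt(fnt(data), inverse=True)   # recovers data exactly
```

Because every operation is modular integer arithmetic, there is no rounding error, which is the attraction of such transforms for decoding.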
FASTGAS: Fast Gas Sampling for palladium exchange tests
Malinowski, M.E.; Stewart, K.D.; VerBerkmoes, A.A.
1991-06-01
A mass spectrometric technique for measuring the composition of gas flows in rapid H/D exchange reactions in palladium compacts has been developed. This method, called FASTGAS (Fast Gas Sampling), has been used at atmospheric pressures and above with a time response of better than 100 ms. The current implementation of the FASTGAS technique is described in detail and examples of its application to palladium hydride exchange tests are given. 12 refs., 10 figs.
Fast Poisson, Fast Helmholtz and fast linear elastostatic solvers on rectangular parallelepipeds
Wiegmann, A.
1999-06-01
FFT-based fast Poisson and fast Helmholtz solvers on rectangular parallelepipeds for periodic boundary conditions in one-, two and three space dimensions can also be used to solve Dirichlet and Neumann boundary value problems. For non-zero boundary conditions, this is the special, grid-aligned case of jump corrections used in the Explicit Jump Immersed Interface method. Fast elastostatic solvers for periodic boundary conditions in two and three dimensions can also be based on the FFT. From the periodic solvers we derive fast solvers for the new 'normal' boundary conditions and essential boundary conditions on rectangular parallelepipeds. The periodic case allows a simple proof of existence and uniqueness of the solutions to the discretization of normal boundary conditions. Numerical examples demonstrate the efficiency of the fast elastostatic solvers for non-periodic boundary conditions. More importantly, the fast solvers on rectangular parallelepipeds can be used together with the Immersed Interface Method to solve problems on non-rectangular domains with general boundary conditions. Details of this are reported in the preprint The Explicit Jump Immersed Interface Method for 2D Linear Elastostatics by the author.
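The periodic building block is easy to demonstrate: in Fourier space, -u'' = f becomes u_hat = f_hat / k^2. The 1D sketch below is a generic FFT Poisson solver, not the paper's code; the 2D and 3D cases apply the same division with k^2 replaced by the squared wavenumber magnitude.

```python
import numpy as np

# Generic FFT-based Poisson solver on a periodic 1D domain:
# solve -u'' = f, returning the zero-mean solution (the k = 0 mode is
# undetermined for periodic BCs and is set to zero).

def poisson_periodic(f, length=2 * np.pi):
    n = len(f)
    k = 2 * np.pi * np.fft.fftfreq(n, d=length / n)  # angular wavenumbers
    f_hat = np.fft.fft(f)
    u_hat = np.zeros_like(f_hat)
    nonzero = k != 0
    u_hat[nonzero] = f_hat[nonzero] / k[nonzero] ** 2
    return np.fft.ifft(u_hat).real

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
u = poisson_periodic(np.sin(3 * x))   # exact solution is sin(3x) / 9
```

The jump corrections described above are what let this periodic solver serve Dirichlet and Neumann problems as well.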
Automated measurement of fast mitochondrial transport in neurons
Miller, Kyle E.; Liu, Xin-An; Puthanveettil, Sathyanarayanan V.
2015-01-01
There is growing recognition that fast mitochondrial transport in neurons is disrupted in multiple neurological diseases and psychiatric disorders. However, a major constraint in identifying novel therapeutics based on mitochondrial transport is that the large-scale analysis of fast transport is time consuming. Here we describe methodologies for the automated analysis of fast mitochondrial transport from data acquired using a robotic microscope. We focused on addressing questions of measurement precision, speed, reliability, workflow ease, statistical processing, and presentation. We used optical flow and particle tracking algorithms, implemented in ImageJ, to measure mitochondrial movement in primary cultured cortical and hippocampal neurons. With it, we are able to generate complete descriptions of movement profiles in an automated fashion of hundreds of thousands of mitochondria with a processing time of approximately one hour. We describe the calibration of the parameters of the tracking algorithms and demonstrate that they are capable of measuring the fast transport of a single mitochondrion. We then show that the methods are capable of reliably measuring the inhibition of fast mitochondrial transport induced by the disruption of microtubules with the drug nocodazole in both hippocampal and cortical neurons. This work lays the foundation for future large-scale screens designed to identify compounds that modulate mitochondrial motility. PMID:26578890
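As a rough illustration of the linking step in particle tracking, the sketch below greedily matches centroids between consecutive frames by nearest neighbour within a maximum displacement. This is generic linking logic, not the ImageJ plugins used in the paper.

```python
import numpy as np

# Greedy nearest-neighbour linking of detected centroids between two
# consecutive frames. Each centroid is used at most once, and links
# longer than max_disp are rejected. A generic sketch, not the paper's
# ImageJ implementation.

def link_frames(prev, curr, max_disp):
    """Return a list of (index_in_prev, index_in_curr) links."""
    d = np.linalg.norm(prev[:, None, :] - curr[None, :, :], axis=2)
    pairs = sorted(((i, j) for i in range(len(prev)) for j in range(len(curr))),
                   key=lambda p: d[p])
    used_prev, used_curr, links = set(), set(), []
    for i, j in pairs:
        if i in used_prev or j in used_curr or d[i, j] > max_disp:
            continue
        used_prev.add(i)
        used_curr.add(j)
        links.append((i, j))
    return links

links = link_frames(np.array([[0.0, 0.0], [10.0, 0.0]]),
                    np.array([[1.0, 0.0], [11.0, 0.0]]), max_disp=3.0)
```

Chaining such links across frames yields per-mitochondrion trajectories from which speeds and run lengths can be computed.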
Mechanism for fast radio bursts
NASA Astrophysics Data System (ADS)
Romero, G. E.; del Valle, M. V.; Vieyro, F. L.
2016-01-01
Fast radio bursts are mysterious transient sources likely located at cosmological distances. The derived brightness temperatures exceed by many orders of magnitude the self-absorption limit of incoherent synchrotron radiation, implying the operation of a coherent emission process. We propose a radiation mechanism for fast radio bursts where the emission arises from collisionless bremsstrahlung in strong plasma turbulence excited by relativistic electron beams. We discuss possible astrophysical scenarios in which this process might operate. The emitting region is a turbulent plasma hit by a relativistic jet, where Langmuir plasma waves produce a concentration of intense electrostatic soliton-like regions (cavitons). The resulting radiation is coherent and, under some physical conditions, can be polarized and have a power-law distribution in energy. We obtain radio luminosities in agreement with the inferred values for fast radio bursts. The time scale of the radio flare in some cases can be extremely fast, of the order of 10^-3 s. The mechanism we present here can explain the main features of fast radio bursts and is plausible in different astrophysical sources, such as gamma-ray bursts and some active galactic nuclei.
Mumford and Shah functional: VLSI analysis and implementation.
Martina, Maurizio; Masera, Guido
2006-03-01
This paper describes the analysis of the Mumford and Shah functional from the implementation point of view. Our goal is to show complexity results for real-time applications of the Mumford and Shah functional, such as motion estimation based on segmentation techniques. Moreover, the sensitivity to finite-precision representation is addressed, a fast VLSI architecture is described, and results obtained for its complete implementation in a 0.13 μm standard-cell technology are presented. PMID:16526435
Fast reactors and nuclear nonproliferation
Avrorin, E.N.; Rachkov, V.I.; Chebeskov, A.N.
2013-07-01
Problems are discussed regarding the resistance of the fast-reactor nuclear fuel cycle to nuclear proliferation risk, which arises from the potential use in military programs of the knowledge, technologies and materials gained from peaceful nuclear power applications. Advantages are addressed for fast reactors in the creation of a more reliable nonproliferation regime in the closed nuclear fuel cycle, in comparison with the existing fully open and partially closed fuel cycles of thermal reactors. Advantages and shortcomings are also discussed, from the point of view of nonproliferation, of starting fast reactors on plutonium from thermal-reactor spent fuel and on enriched uranium fuel, with a gradual transition to fuelling them with their own plutonium. (authors)
HI Intensity Mapping with FAST
NASA Astrophysics Data System (ADS)
Bigot-Sazy, M.-A.; Ma, Y.-Z.; Battye, R. A.; Browne, I. W. A.; Chen, T.; Dickinson, C.; Harper, S.; Maffei, B.; Olivari, L. C.; Wilkinson, P. N.
2016-02-01
We discuss the detectability of large-scale HI intensity fluctuations using the FAST telescope. We present forecasts for the accuracy of measuring the Baryonic Acoustic Oscillations and constraining the properties of dark energy. The FAST 19-beam L-band receivers (1.05-1.45 GHz) can provide constraints on the matter power spectrum and dark energy equation of state parameters (w0, wa) that are comparable to the BINGO and CHIME experiments. For one year of integration time we find that the optimal survey area is 6000 deg^2. However, observing with larger frequency coverage at higher redshift (0.95-1.35 GHz) improves the projected error bars on the HI power spectrum at more than the 2σ confidence level. The combined constraints from FAST, CHIME, BINGO and Planck CMB observations can provide reliable, stringent constraints on the dark energy equation of state.
Future Assets, Student Talent (FAST)
NASA Technical Reports Server (NTRS)
1992-01-01
Future Assets, Student Talent (FAST) motivates and prepares talented students with disabilities to further their education and achieve High Tech and professional employment. The FAST program is managed by local professionals, business, and industry leaders; it is modeled after High School High Tech project TAKE CHARGE started in Los Angeles in 1983. Through cooperative efforts of Alabama Department of Education, Vocational Rehabilitation, Adult and Children Services, and the President's Committee on Employment of People with Disabilities, north central Alabama was chosen as the second site for a High School High Tech project. In 1986 local business, industry, education, government agencies, and rehabilitation representatives started FAST. The program objectives and goals, results and accomplishments, and survey results are included.
Fast computation of recurrences in long time series
NASA Astrophysics Data System (ADS)
Rawald, Tobias; Sips, Mike; Marwan, Norbert; Dransch, Doris
2014-05-01
The quadratic time complexity of calculating basic RQA measures (doubling the size of the input time series quadruples the number of operations) impairs the fast computation of RQA in many application scenarios. As an example, we analyze the Potsdamer Reihe, an uninterrupted hourly temperature record maintained since 1893, consisting of 1,043,112 data points. Using an optimized single-threaded CPU implementation this analysis requires about six hours; our approach conducts RQA for the Potsdamer Reihe in five minutes. We automatically split a long time series into smaller chunks (Divide) and distribute the computation of RQA measures across multiple GPU devices. To guarantee valid RQA results, we employ carryover buffers that allow sharing information between pairs of chunks (Recombine). We demonstrate the capabilities of our Divide and Recombine approach to process long time series by comparing the runtime of our implementation to existing RQA tools. We support a variety of platforms by employing the computing framework OpenCL. Our current implementation supports the computation of standard RQA measures (recurrence rate, determinism, laminarity, ratio, average diagonal line length, trapping time, longest diagonal line, longest vertical line, divergence, entropy, trend) and also calculates recurrence times. To utilize the potential of our approach for a number of applications, we plan to release our implementation under an Open Source software license. It will be available at http://www.gfz-potsdam.de/fast-rqa/. Since our approach allows RQA measures for a long time series to be computed quickly, we plan to extend our implementation to support multi-scale RQA.
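The Divide step can be illustrated with the simplest RQA measure: the recurrence rate is accumulated chunk pair by chunk pair, so the full N x N distance matrix never has to be held in memory at once. The chunk size and threshold below are arbitrary illustrative choices; the actual implementation distributes these chunk pairs across GPUs via OpenCL.

```python
import numpy as np

# Chunked computation of the recurrence rate of a scalar time series:
# the fraction of pairs (i, j) with |x_i - x_j| < eps. Each chunk pair
# contributes independently (Divide), and the counts are summed into
# one global measure (Recombine). Parameters are illustrative.

def recurrence_rate(series, eps, chunk=1024):
    x = np.asarray(series, dtype=float)
    n = len(x)
    recurrent = 0
    for i in range(0, n, chunk):          # Divide: iterate over chunk pairs
        for j in range(0, n, chunk):
            d = np.abs(x[i:i + chunk, None] - x[None, j:j + chunk])
            recurrent += np.count_nonzero(d < eps)
    return recurrent / n ** 2             # Recombine: one global measure

rng = np.random.default_rng(1)
series = rng.normal(size=3000)
rr = recurrence_rate(series, eps=0.1, chunk=512)
```

Line-based measures such as determinism need the carryover buffers mentioned above, because diagonal lines can cross chunk boundaries; this sketch omits that complication.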
MILAMIN 2 - Fast MATLAB FEM solver
NASA Astrophysics Data System (ADS)
Dabrowski, Marcin; Krotkiewski, Marcin; Schmid, Daniel W.
2013-04-01
MILAMIN is a free and efficient MATLAB-based two-dimensional FEM solver utilizing unstructured meshes [Dabrowski et al., G-cubed (2008)]. The code consists of steady-state thermal diffusion and incompressible Stokes flow solvers implemented in approximately 200 lines of native MATLAB code. The brevity makes the code easily customizable. An important quality of MILAMIN is speed - it can handle millions of nodes within minutes on one CPU core of a standard desktop computer, and is faster than many commercial solutions. The new MILAMIN 2 allows three-dimensional modeling. It is designed as a set of functional modules that can be used as building blocks for efficient FEM simulations using MATLAB. The utilities are largely implemented as native MATLAB functions. For performance-critical parts we use MUTILS - a suite of compiled MEX functions optimized for shared-memory multi-core computers. The most important features of MILAMIN 2 are:
1. Modular approach to defining, tracking, and discretizing the geometry of the model
2. Interfaces to external mesh generators (e.g., Triangle, Fade2d, T3D) and mesh utilities (e.g., element type conversion, fast point location, boundary extraction)
3. Efficient computation of the stiffness matrix for a wide range of element types, anisotropic materials, and three-dimensional problems
4. Fast global matrix assembly using a dedicated MEX function
5. Automatic integration rules
6. Flexible prescription (spatial, temporal, and field functions) and efficient application of Dirichlet, Neumann, and periodic boundary conditions
7. Treatment of transient and non-linear problems
8. Various iterative and multi-level solution strategies
9. Post-processing tools (e.g., numerical integration)
10. Visualization primitives using MATLAB, and VTK export functions
We provide a large number of examples that show how to implement a custom FEM solver using the MILAMIN 2 framework. The examples are MATLAB scripts of increasing complexity that address a given
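Fast global matrix assembly (item 4 above) typically relies on triplet-based construction, where duplicate (row, col) entries are summed by the sparse constructor. The following is a rough Python/SciPy sketch of that idea, not MILAMIN's MEX implementation:

```python
import numpy as np
from scipy.sparse import coo_matrix

def assemble_global(elems, n_nodes, local_mats):
    """Triplet (COO) assembly of a global FEM matrix.

    elems      : (n_el, n_per_el) node indices of each element
    local_mats : (n_el, n_per_el, n_per_el) element matrices
    Duplicate (row, col) triplets are summed by the sparse constructor,
    which is what makes this assembly style fast in vectorized code.
    """
    n_el, npe = elems.shape
    rows = np.repeat(elems, npe, axis=1).ravel()
    cols = np.tile(elems, (1, npe)).ravel()
    vals = local_mats.reshape(n_el, -1).ravel()
    return coo_matrix((vals, (rows, cols)), shape=(n_nodes, n_nodes)).tocsr()

# Two 1-D linear elements sharing node 1; each element matrix is
# [[1, -1], [-1, 1]], and contributions sum at the shared node.
elems = np.array([[0, 1], [1, 2]])
k_el = np.array([[1.0, -1.0], [-1.0, 1.0]])
K = assemble_global(elems, 3, np.stack([k_el, k_el]))
```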
The Sacramento Peak fast microphotometer
NASA Technical Reports Server (NTRS)
Arrambide, M. R.; Dunn, R. B.; Healy, A. W.; Porter, R.; Widener, A. L.; November, L. J.; Spence, G. E.
1984-01-01
The Sacramento Peak Observatory Fast Microphotometer translates an optical system that includes a laser and photodiode detector across the film to scan the Y direction. A stepping motor moves the film gate in the X direction. This arrangement affords high positional accuracy, low noise (0.002 RMS density units), modest speed (5000 points/second), large dynamic range (4.5 density units), high stability (0.005 density units), and low scattered light. The Fast Microphotometer is interfaced to the host computer by a 6502 microprocessor.
New AGS fast extraction system
Weng, W.T.
1980-09-01
Both the high energy physics program and ISA injection require an improved fast extraction system for the AGS. The proposed new system consists of a fast kicker at H5 and an ejector magnet at H10. The H5 kicker is capable of producing a 1.2 mrad deflection and rising to 99% strength in 150 nsec with flat-top ripple within ±1%. It is found that the focusing strengths and positions of UQ3-UQ7 have to be modified to achieve an achromatic condition at the end of the 8°-bend. The conceptual design of the H5 magnet and the pulser system is also discussed.
[Preoperative fasting guidelines: an update].
López Muñoz, A C; Busto Aguirreurreta, N; Tomás Braulio, J
2015-03-01
Anesthesiology societies have issued various guidelines on preoperative fasting since 1990, not only to decrease the incidence of pulmonary aspiration and anesthetic morbidity, but also to increase patient comfort prior to anesthesia. Some of these societies have been updating their guidelines, such that, since 2010, we have had two evidence-based preoperative fasting guidelines available. In this article, an attempt is made to review these updated guidelines, as well as the current instructions for more controversial patients, such as infants and the obese, and for a particular type of ophthalmic surgery. PMID:25443866
Fast generation of stereolithographic models.
Raic, K; Jansen, T; von Rymon-Lipinski, B; Tille, C; Seitz, H; Keeve, E
2002-01-01
In this paper we present a work-in-progress method for the fast and efficient generation of stereolithographic models. The overall approach is embedded in our general software framework Julius, which runs on high-end graphics systems as well as on low-end PCs. The design of the support structures needed for the stereolithographic process will allow semiautomatic generation of the model. We have produced support structures for stereolithographic models with this fast data processing pipeline and show future perspectives in this paper. PMID:12451779
Campbell, J.A.
1984-01-01
This book provides a description of the use of Prolog - an increasingly important programming language. The commentary treats open questions in Prolog research, such as intelligent backtracking and the handling of infinite terms. The book is divided into five sections covering history; case studies and examples; questions of implementation; two formal models, or 'spectacles'; and lastly the future direction of Prolog and extensions to the language. Contents - Past and Present. Studies of Implementations. Open Questions of Prolog Implementation. Theoretical Frameworks for Present Implementations. Proposals for the Future.
Fast semivariogram computation using FPGA architectures
NASA Astrophysics Data System (ADS)
Lagadapati, Yamuna; Shirvaikar, Mukul; Dong, Xuanliang
2015-02-01
The semivariogram is a statistical measure of the spatial distribution of data and is based on Markov Random Fields (MRFs). Semivariogram analysis is a computationally intensive algorithm that has typically seen applications in the geosciences and remote sensing areas. Recently, applications in the area of medical imaging have been investigated, resulting in the need for efficient real-time implementation of the algorithm. The semivariogram is a plot of semivariances for different lag distances between pixels. A semivariance, γ(h), is defined as half of the expected squared difference of pixel values between any two data locations separated by a lag distance h. Due to the need to examine each pair of pixels in the image or sub-image being processed, the base algorithm complexity for an image window with n pixels is O(n^2). Field Programmable Gate Arrays (FPGAs) are an attractive solution for such demanding applications due to their parallel processing capability. FPGAs tend to operate at relatively modest clock rates, measured in a few hundred megahertz, but they can perform tens of thousands of calculations per clock cycle while operating at low power. This paper presents a technique for the fast computation of the semivariogram using two custom FPGA architectures. The design consists of several modules dedicated to the constituent computational tasks. A modular architecture approach is chosen to allow for replication of processing units. This allows for high throughput due to concurrent processing of pixel pairs. The current implementation is focused on isotropic semivariogram computations only. Anisotropic semivariogram implementation is anticipated to be an extension of the current architecture, ostensibly based on refinements to the current modules. The algorithm is benchmarked using VHDL on a Xilinx XUPV5-LX110T Development Kit, which utilizes the Virtex-5 FPGA. Medical image data from MRI scans are utilized for the experiments.
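As a software reference for the quantity the FPGA pipeline computes, here is a minimal empirical semivariogram estimator for a 1-D transect. It only illustrates the definition of γ(h); the paper's design processes 2-D image windows in hardware:

```python
import numpy as np

def semivariogram_1d(values, max_lag):
    """Empirical semivariances gamma(h) = 0.5 * mean((z[i+h] - z[i])^2)
    for integer lags h = 1..max_lag along a 1-D transect."""
    gamma = np.empty(max_lag)
    for h in range(1, max_lag + 1):
        d = values[h:] - values[:-h]          # all pairs at lag h
        gamma[h - 1] = 0.5 * np.mean(d * d)
    return gamma

# Alternating series: maximal variation at lag 1, none at lag 2.
g = semivariogram_1d(np.array([0.0, 1.0, 0.0, 1.0]), 2)
```

The O(n^2) cost quoted above comes from the 2-D case, where every pixel pair within the window must be visited; the FPGA architecture hides that cost by processing many pixel pairs per clock cycle.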
Fast Atom Bombardment Mass Spectrometry.
ERIC Educational Resources Information Center
Rinehart, Kenneth L., Jr.
1982-01-01
Discusses reactions and characteristics of fast atom bombardment (FAB) mass spectrometry, in which samples are ionized in a condensed state by bombardment with xenon or argon atoms, yielding positive/negative secondary ions. Includes applications of FAB to structural problems and considers future developments using the technique. (Author/JN)
Fast Langmuir probe sweeping circuit
Milnes, K.A.; Ehlers, K.W.; Leung, K.N.; Owren, H.M.; Williams, M.D.
1980-06-01
An inexpensive, simple, and fast Langmuir probe sweeping circuit is presented. This sweeper completes a probe trace in 1.4 ms and has a maximum probe current capability of 5 A. It is suitable for pulsed-mode plasma operation with densities greater than 10^12 ions/cm^3.
NASA Technical Reports Server (NTRS)
Patel, Umesh D.; DellaTorre, Edward; Day, John H. (Technical Monitor)
2001-01-01
A fast differential equation approach for the DOK model has been extended to the CMH model. A cobweb technique for calculating the CMH model is also presented. The two techniques are contrasted from the point of view of flexibility and computation time.
NASA Technical Reports Server (NTRS)
Fogal, G. L.
1977-01-01
Wall structure keeps chambers at constant, uniform temperature, yet allows them to be cooled rapidly if necessary. Wall structure, used in fast-response cloud chamber, has surface heater and coolant shell separated by foam insulation. It is lightweight and requires relatively little power.
Fast Neutron Sensitivity with HPGe
Seifert, Allen; Hensley, Walter K.; Siciliano, Edward R.; Pitts, W. K.
2008-01-22
In addition to being excellent gamma-ray detectors, germanium detectors are also sensitive to fast neutrons. Incident neutrons undergo inelastic scattering {Ge(n,n')Ge*} off germanium nuclei, and the resulting excited states emit gamma rays or conversion electrons. The response of a standard 140% high-purity germanium (HPGe) detector with a bismuth germanate (BGO) anti-coincidence shield was measured for several neutron sources to characterize the ability of the HPGe detector to detect fast neutrons. For a sensitivity calculation performed using the characteristic fast neutron response peak that occurs at 692 keV, the 140% germanium detector system exhibited a sensitivity of ~175 counts per kg of WGPu metal in 1000 seconds at a source-detector distance of 1 meter with 4 in. of lead shielding between source and detector. Theoretical work also indicates that it might be possible to use the shape of the fast-neutron inelastic scattering signatures (specifically, the end-point energy of the long high-energy tail of the resulting asymmetric peak) to gain additional information about the energy distribution of the incident neutron spectrum. However, the experimentally observed end-point energies appear to be almost identical for each of the fast neutron sources counted. Detailed MCNP calculations show that the neutron energy distributions impinging on the detector for these sources are very similar in this experimental configuration, due to neutron scattering in the lead shield (placed between the neutron source and HPGe detector to reduce the gamma-ray flux), the BGO anti-coincidence detector, and the concrete floor.
ERIC Educational Resources Information Center
Brooks, D. Christopher
2014-01-01
While the use of analytics to promote student success is gaining in popularity, basic questions remain about what IPAS is and about the issues institutions face during implementation and integration. The "IPAS Implementation Handbook" catalogs the experiences, observations, and practical advice from 19 institutions engaged in IPAS implementation…
The implementation of POSTGRES
NASA Technical Reports Server (NTRS)
Stonebraker, Michael; Rowe, Lawrence A.; Hirohama, Michael
1990-01-01
The design and implementation decisions made for the three-dimensional data manager POSTGRES are discussed. Attention is restricted to the DBMS backend functions. The POSTGRES data model and query language, the rules system, the storage system, the POSTGRES implementation, and the current status and performance are discussed.
Measuring Curriculum Implementation
ERIC Educational Resources Information Center
Huntley, Mary Ann
2009-01-01
Using curriculum-specific tools for measuring fidelity of implementation is an essential yet often overlooked aspect of examining relationships among textbooks, teaching, and student learning. This "Brief Report" describes the variety of ways that curriculum implementation is measured and argues that there is an urgent need to develop…
Implementing technology assessments
NASA Technical Reports Server (NTRS)
Kasper, R. G. (Editor); Logsdon, J. M. (Editor); Mottur, E. R. (Editor)
1975-01-01
Five case studies of specific technology assessments and the ways in which they influenced (or did not influence) the development of the assessed technology are discussed. Automotive air pollution and problems of implementing technology assessment are considered. The assessment-acceptance-implementation process is discussed in detail using the five case studies as examples.
Scott, D.M.
1996-12-31
Computational Pipeline Monitoring (CPM, a newly developed API term) is still often called leak detection. It is a complex application of software technology. This paper considers a number of CPM issues that a pipeline operator should consider in implementing a CPM system. The contents are based upon the experience of Interprovincial Pipe Line Company and other companies that have implemented CPM systems.
Environmental protection Implementation Plan
R. C. Holland
1999-12-01
This "Environmental Protection Implementation Plan" is intended to ensure that the environmental program objectives of Department of Energy Order 5400.1 are achieved at SNL/California. This document states SNL/California's commitment to conduct its operations in an environmentally safe and responsible manner. The "Environmental Protection Implementation Plan" helps management and staff comply with applicable environmental responsibilities.
NASA Astrophysics Data System (ADS)
Zhang, Sumei; Wang, Lihe
2013-07-01
This study proposes a pricing model that allows for a stochastic interest rate and stochastic volatility in the double exponential jump-diffusion setting. The characteristic function of the proposed model is then derived. Fast numerical solutions for European call and put option pricing, based on the characteristic function and the fast Fourier transform (FFT) technique, are developed. Simulations show that our numerical technique is accurate, fast, and easy to implement, and that the proposed model is suitable for modeling long-term real-market changes. The model and the proposed option pricing method are useful for empirical analysis of asset returns and for risk management in firms.
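Characteristic-function/FFT option pricing of this kind is commonly implemented with the Carr-Madan algorithm. The sketch below applies that algorithm using the Black-Scholes characteristic function as an illustrative stand-in (the paper's jump-diffusion characteristic function is more involved), which allows the result to be checked against the closed-form price:

```python
import numpy as np
from scipy.stats import norm

def carr_madan_call(S0, K, r, T, phi, alpha=1.5, N=4096, eta=0.25):
    """Price a European call via the Carr-Madan FFT method.

    phi(u) must be the risk-neutral characteristic function of ln(S_T).
    alpha is the damping factor; eta the Fourier-grid spacing.
    """
    lam = 2 * np.pi / (N * eta)          # log-strike grid spacing
    b = N * lam / 2                      # half-width of log-strike grid
    u = np.arange(N) * eta
    # Fourier transform of the damped call price (Carr & Madan, 1999).
    psi = np.exp(-r * T) * phi(u - 1j * (alpha + 1)) / (
        alpha**2 + alpha - u**2 + 1j * (2 * alpha + 1) * u)
    # Simpson's rule weights over the integration grid.
    j = np.arange(N)
    w = eta / 3 * (3 + (-1.0) ** (j + 1))
    w[0] = eta / 3
    k = -b + lam * j                     # log-strike grid
    calls = np.exp(-alpha * k) / np.pi * np.real(
        np.fft.fft(np.exp(1j * b * u) * psi * w))
    return float(np.interp(np.log(K), k, calls))

def bs_char_fn(u, S0, r, sigma, T):
    """Black-Scholes characteristic function of ln(S_T) -- a stand-in
    here for the jump-diffusion model's characteristic function."""
    mu = np.log(S0) + (r - 0.5 * sigma**2) * T
    return np.exp(1j * u * mu - 0.5 * sigma**2 * u**2 * T)

S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
fft_price = carr_madan_call(S0, K, r, T,
                            lambda u: bs_char_fn(u, S0, r, sigma, T))
```

Swapping in the proposed model's characteristic function would price under the stochastic-rate, stochastic-volatility jump-diffusion instead, with the FFT machinery unchanged.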
Fast and efficient QTL mapper for thousands of molecular phenotypes
Ongen, Halit; Buil, Alfonso; Brown, Andrew Anand; Dermitzakis, Emmanouil T.; Delaneau, Olivier
2016-01-01
Motivation: In order to discover quantitative trait loci, multi-dimensional genomic datasets combining DNA-seq and ChIP-/RNA-seq require methods that rapidly correlate tens of thousands of molecular phenotypes with millions of genetic variants while appropriately controlling for multiple testing. Results: We have developed FastQTL, a method that implements a popular cis-QTL mapping strategy in a user- and cluster-friendly tool. FastQTL also proposes an efficient permutation procedure to control for multiple testing. The outcome of permutations is modeled using beta distributions trained from a few permutations, from which adjusted P-values can be estimated at any level of significance with little computational cost. The Geuvadis & GTEx pilot datasets can now easily be analyzed an order of magnitude faster than with previous approaches. Availability and implementation: Source code, binaries and comprehensive documentation of FastQTL are freely available to download at http://fastqtl.sourceforge.net/ Contact: emmanouil.dermitzakis@unige.ch or olivier.delaneau@unige.ch Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26708335
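The beta-approximation idea can be sketched as follows. This is an illustrative reimplementation in SciPy, not FastQTL's actual maximum-likelihood fitting code, and the Beta(1, 10) null used in the demo is made up for illustration:

```python
import numpy as np
from scipy import stats

def beta_adjusted_pvalue(nominal_p, perm_pvalues):
    """Estimate a permutation-adjusted p-value by fitting a Beta
    distribution to a modest number of permutation null p-values
    (the spirit of FastQTL's scheme; its exact ML fit may differ).
    Once fitted, adjusted p-values at any significance level cost
    only a CDF evaluation instead of more permutations."""
    a, b, _, _ = stats.beta.fit(perm_pvalues, floc=0, fscale=1)
    return float(stats.beta.cdf(nominal_p, a, b))

# Null p-values drawn from Beta(1, 10), i.e. roughly the minimum over
# ~10 effectively independent tests -- purely illustrative numbers.
rng = np.random.default_rng(1)
perm = rng.beta(1.0, 10.0, size=2000)
adj_small = beta_adjusted_pvalue(0.01, perm)
adj_big = beta_adjusted_pvalue(0.10, perm)
```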
Application of Fast Multipole Methods to the NASA Fast Scattering Code
NASA Technical Reports Server (NTRS)
Dunn, Mark H.; Tinetti, Ana F.
2008-01-01
The NASA Fast Scattering Code (FSC) is a versatile noise prediction program designed to conduct aeroacoustic noise reduction studies. The equivalent source method is used to solve an exterior Helmholtz boundary value problem with an impedance type boundary condition. The solution process in FSC v2.0 requires direct manipulation of a large, dense system of linear equations, limiting the applicability of the code to small scales and/or moderate excitation frequencies. Recent advances in the use of Fast Multipole Methods (FMM) for solving scattering problems, coupled with sparse linear algebra techniques, suggest that a substantial reduction in computer resource utilization over conventional solution approaches can be obtained. Implementation of the single level FMM (SLFMM) and a variant of the Conjugate Gradient Method (CGM) into the FSC is discussed in this paper. The culmination of this effort, FSC v3.0, was used to generate solutions for three configurations of interest. Benchmarking against previously obtained simulations indicates that a twenty-fold reduction in computational memory and up to a four-fold reduction in computer time have been achieved on a single processor.
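The reason FMM pairs naturally with an iterative solver is that the solver only ever needs matrix-vector products, never the dense matrix. The toy below shows that matrix-free pattern with SciPy; a simple symmetric positive-definite operator stands in for the FMM-accelerated Helmholtz matvec, and plain CG stands in for the CGM variant used in FSC v3.0:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 200

def matvec(v):
    # A tridiagonal SPD operator (1-D Laplacian) applied matrix-free:
    # no n x n matrix is ever stored, only the action on a vector.
    out = 2.0 * v
    out[:-1] -= v[1:]
    out[1:] -= v[:-1]
    return out

# LinearOperator exposes the matvec to Krylov solvers; an FMM kernel
# evaluation would slot in here in place of the stencil above.
A = LinearOperator((n, n), matvec=matvec, dtype=float)
b = np.ones(n)
x, info = cg(A, b)
```

For the dense N-unknown system the direct approach stores and factors O(N^2) entries, while the iterative/FMM route needs only a handful of vectors plus the fast matvec, which is the source of the memory reduction reported above.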
Fast-track for fast times: catching and keeping generation Y in the nursing workforce.
Walker, Kim
2007-04-01
There is little doubt we find ourselves in challenging times: never before has there been such generational diversity in the nursing workforce. Currently, nurses from four distinct (and now well recognised and discussed) generational groups jostle for primacy of recognition and reward. Equally significant is the acute realisation that our ageing profession must find ways to sustain itself in the wake of huge attrition as the 'baby boomer' nurses start retiring over the next ten to fifteen years. These realities impel us to become ever more strategic in our thinking about how best to manage the workforce of the future. This paper presents two exciting and original innovations currently in train at one of Australia's leading Catholic health care providers: firstly, a new fast-track bachelor of nursing program for fee-paying domestic students. This is a collaborative venture between St Vincent's and Mater Health, Sydney (SV&MHS) and the University of Tasmania (UTas); as far as we know, it is unprecedented in Australia. As well, the two private facilities of SV&MHS, St Vincent's Private (SVPH) and the Mater Hospitals, have developed and implemented a unique 'accelerated progression pathway' (APP) to enable registered nurses with talent and ambition to fast-track their careers through a competency and merit based system of performance management and reward. Both these initiatives are aimed squarely at the gen Y demographic and provide the potential to significantly augment our capacity to recruit and retain quality people well into the future. PMID:17563323
A fast meteor detection algorithm
NASA Astrophysics Data System (ADS)
Gural, P.
2016-01-01
A low-latency meteor detection algorithm for use with fast steering mirrors was previously developed to track and telescopically follow meteors in real time (Gural, 2007). It has since been rewritten as a generic clustering and tracking software module for meteor detection that meets the demanding throughput requirements of a Raspberry Pi while maintaining a high probability of detection. The software interface is generalized to work with various forms of front-end video pre-processing and provides a rich product set of parameterized line detection metrics. Discussion will include the Maximum Temporal Pixel (MTP) compression technique as a fast thresholding option for feeding the detection module, the detection algorithm trades for maximum processing throughput, details of the clustering and tracking methodology, processing products, performance metrics, and a general interface description.
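The MTP compression idea can be sketched directly: collapse a block of frames into one image keeping each pixel's maximum over time, so a brief bright streak survives into the single frame that feeds the detector. Frame shapes and intensities below are made up for illustration:

```python
import numpy as np

def mtp_compress(frames):
    """Maximum Temporal Pixel (MTP) compression: reduce a (t, h, w)
    stack of video frames to a single (h, w) image holding each
    pixel's maximum intensity over the block, so a fast transient
    such as a meteor streak is preserved for thresholding."""
    return np.max(frames, axis=0)

# Eight dark 4x4 frames with one bright transient in frame 3.
frames = np.zeros((8, 4, 4), dtype=np.uint8)
frames[3, 1, 2] = 200
mtp = mtp_compress(frames)
```

A companion per-pixel argmax image (np.argmax over the same axis) recovers when each maximum occurred, which is what lets line fitting over the compressed frame retain timing information.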
NASA Technical Reports Server (NTRS)
Hauschildt, P. H.
1992-01-01
A fast method for the solution of the radiative transfer equation in rapidly moving spherical media, based on an approximate Lambda-operator iteration, is described. The method uses the short characteristic method and a tridiagonal approximate Lambda-operator to achieve fast convergence. The convergence properties and the CPU time requirements of the method are discussed for the test problem of a two-level atom with background continuum absorption and Thomson scattering. Details of the actual implementation for fast vector and parallel computers are given. The method is accurate and fast enough to be incorporated in radiation-hydrodynamic calculations.
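A toy version of the accelerated Lambda iteration can be sketched with a diagonal approximate operator (a simplification of the tridiagonal operator described above). The kernel, grid size, and parameters below are illustrative, not a real model atmosphere:

```python
import numpy as np

# Two-level-atom source function equation: S = (1 - eps) * Lambda[S] + eps * B.
# Plain Lambda iteration converges slowly for small eps; ALI preconditions
# each update with a cheap approximate operator -- here diag(Lambda).
n, eps = 50, 1e-3
idx = np.arange(n)
L = np.exp(-np.abs(idx[:, None] - idx[None, :]))   # toy Lambda kernel
L /= L.sum(axis=1, keepdims=True) * 1.05           # row sums < 1 (photon escape)
B = np.ones(n)                                     # Planck source term
lam_star = np.diag(L).copy()                       # diagonal approximate operator

S = B.copy()
for _ in range(500):
    # Residual of the source-function equation at the current iterate...
    residual = (1 - eps) * (L @ S) + eps * B - S
    # ...amplified by the approximate-operator preconditioner.
    S += residual / (1.0 - (1 - eps) * lam_star)
```

In a production code the Lambda operation (`L @ S` here) is the expensive formal solution along short characteristics, while the approximate operator stays cheap to invert, which is where the speedup comes from.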