Paranoia.Ada: A diagnostic program to evaluate Ada floating-point arithmetic
NASA Technical Reports Server (NTRS)
Hjermstad, Chris
1986-01-01
Many essential software functions in the mission-critical computer resource application domain depend on floating-point arithmetic. Numerically intensive functions associated with the Space Station project, such as ephemeris generation or the implementation of Kalman filters, are likely to employ the floating-point facilities of Ada. Paranoia.Ada appears to be a valuable program to ensure that Ada environments and their underlying hardware exhibit the precision and correctness required to satisfy mission computational requirements. As a diagnostic tool, Paranoia.Ada reveals many essential characteristics of an Ada floating-point implementation. Equipped with such knowledge, programmers need not tremble before the complex task of floating-point computation.
Floating-Point Units and Algorithms for field-programmable gate arrays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Underwood, Keith D.; Hemmert, K. Scott
2005-11-01
The software that we are attempting to copyright is a package of floating-point unit descriptions and example algorithm implementations using those units for use in FPGAs. The floating-point units are best-in-class implementations of add, multiply, divide, and square root floating-point operations. The algorithm implementations are sample (not highly flexible) implementations of FFT, matrix multiply, matrix-vector multiply, and dot product. Together, one could think of the collection as an implementation of parts of the BLAS library or something similar to the FFTW packages (without the flexibility) for FPGAs. Results from this work have been published multiple times and we are working on a publication to discuss the techniques we use to implement the floating-point units. For some more background, FPGAs are programmable hardware. "Programs" for this hardware are typically created using a hardware description language (examples include Verilog, VHDL, and JHDL). Our floating-point unit descriptions are written in JHDL, which allows them to include placement constraints that make them highly optimized relative to some other implementations of floating-point units. Many vendors (Nallatech from the UK, SRC Computers in the US) have similar implementations, but our implementations seem to be somewhat higher performance. Our algorithm implementations are written in VHDL, and models of the floating-point units are provided in VHDL as well. FPGA "programs" make multiple "calls" (hardware instantiations) to libraries of intellectual property (IP), such as the floating-point unit library described here. These programs are then compiled using a tool called a synthesizer (such as a tool from Synplicity, Inc.). The compiled file is a netlist of gates and flip-flops. This netlist is then mapped to a particular type of FPGA by a mapper and then a place-and-route tool. These tools assign the gates in the netlist to specific locations on the specific type of FPGA chip used and construct the required routes between them. The result is a "bitstream" that is analogous to a compiled binary. The bitstream is loaded into the FPGA to create a specific hardware configuration.
Floating point arithmetic in future supercomputers
NASA Technical Reports Server (NTRS)
Bailey, David H.; Barton, John T.; Simon, Horst D.; Fouts, Martin J.
1989-01-01
Considerations in the floating-point design of a supercomputer are discussed. Particular attention is given to word size, hardware support for extended precision, format, and accuracy characteristics. These issues are discussed from the perspective of the Numerical Aerodynamic Simulation Systems Division at NASA Ames. The features believed to be most important for a future supercomputer floating-point design include: (1) a 64-bit IEEE floating-point format with 11 exponent bits, 52 mantissa bits, and one sign bit and (2) hardware support for reasonably fast double-precision arithmetic.
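To make the recommended 64-bit layout concrete, the following Python sketch (not part of the report) unpacks the 1 sign bit, 11 exponent bits and 52 mantissa bits of an IEEE double:

```python
import struct

def decode_double(x):
    """Split an IEEE 754 double into its 1 sign bit, 11 exponent bits and 52 mantissa bits."""
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]  # raw 64-bit pattern
    sign = bits >> 63
    exponent = (bits >> 52) & 0x7FF          # 11-bit biased exponent
    mantissa = bits & ((1 << 52) - 1)        # 52-bit fraction field
    return sign, exponent, mantissa

sign, exp, frac = decode_double(-6.25)
# For normal numbers: value = (-1)**sign * (1 + frac/2**52) * 2**(exp - 1023)
print(sign, exp - 1023, 1 + frac / 2**52)   # -> 1 2 1.5625, i.e. -1.5625 * 2**2 = -6.25
```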
A hardware-oriented algorithm for floating-point function generation
NASA Technical Reports Server (NTRS)
O'Grady, E. Pearse; Young, Baek-Kyu
1991-01-01
An algorithm is presented for performing accurate, high-speed, floating-point function generation for univariate functions defined at arbitrary breakpoints. Rapid identification of the breakpoint interval, which includes the input argument, is shown to be the key operation in the algorithm. A hardware implementation which makes extensive use of read/write memories is used to illustrate the algorithm.
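The paper's hardware scheme is not reproduced here, but a rough software analogue of the two steps it names, identification of the breakpoint interval containing the argument followed by evaluation inside that interval, can be sketched as follows (piecewise-linear evaluation and the sample breakpoints are illustrative assumptions):

```python
import bisect

def make_function_generator(breakpoints, values):
    """Piecewise-linear generator for a univariate function defined at arbitrary breakpoints."""
    def f(x):
        # Identify the breakpoint interval that contains x (the key operation in the paper).
        i = bisect.bisect_right(breakpoints, x) - 1
        i = max(0, min(i, len(breakpoints) - 2))
        x0, x1 = breakpoints[i], breakpoints[i + 1]
        y0, y1 = values[i], values[i + 1]
        return y0 + (y1 - y0) * (x - x0) / (x1 - x0)   # interpolate inside the interval
    return f

sqrt_approx = make_function_generator([0.0, 1.0, 4.0, 9.0, 16.0], [0.0, 1.0, 2.0, 3.0, 4.0])
print(sqrt_approx(6.5))   # interpolates between the breakpoints at 4 and 9 -> 2.5
```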
The Unified Floating Point Vector Coprocessor for Reconfigurable Hardware
NASA Astrophysics Data System (ADS)
Kathiara, Jainik
There has been an increased interest recently in using embedded cores on FPGAs. Many of the applications that make use of these cores have floating-point operations. Due to the complexity and expense of floating-point hardware, these algorithms are usually converted to fixed-point operations or implemented using floating-point emulation in software. As the technology advances, more and more homogeneous computational resources and fixed-function embedded blocks are added to FPGAs, and hence implementation of floating-point hardware becomes a feasible option. In this research we have implemented a high-performance, autonomous floating-point vector coprocessor (FPVC) that works independently within an embedded processor system. We have presented a unified approach to vector and scalar computation, using a single register file for both scalar operands and vector elements. The hybrid vector/SIMD computational model of the FPVC results in greater overall performance for most applications along with improved peak performance compared to other approaches. By parameterizing vector length and the number of vector lanes, we can design an application-specific FPVC and take optimal advantage of the FPGA fabric. For this research we have also begun designing a software library for various computational kernels, each of which adapts the FPVC's configuration and provides maximal performance. The kernels implemented are from the area of linear algebra and include matrix multiplication and QR and Cholesky decomposition. We have demonstrated the operation of the FPVC on a Xilinx Virtex 5 using the embedded PowerPC.
Shen, Chongfei; Liu, Hongtao; Xie, Xb; Luk, Keith Dk; Hu, Yong
2007-01-01
The adaptive noise canceller (ANC) has been used to improve the signal-to-noise ratio (SNR) of the somatosensory evoked potential (SEP). To apply the ANC efficiently in a hardware system, an ANC based on a fixed-point algorithm allows fast, cost-efficient construction and low power consumption in an FPGA design. However, it remains questionable whether the SNR improvement achieved by the fixed-point algorithm is as good as that achieved by the floating-point algorithm. This study compares the outputs of floating-point and fixed-point ANC algorithms when applied to SEP signals. The selection of the step-size parameter (μ) was found to differ between the fixed-point and floating-point algorithms. In this simulation study, the outputs of the fixed-point ANC showed higher distortion from the real SEP signals than those of the floating-point ANC. However, the difference decreased with increasing μ. With an optimal selection of μ, the fixed-point ANC can produce results as good as the floating-point algorithm.
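As a rough illustration of the comparison being made (assuming an LMS-type ANC; this is not the authors' FPGA design, and the step size, tap count and signals below are made up), the same canceller can be run with floating-point weights and again with weights rounded to a fixed-point grid:

```python
import random

def lms_anc(primary, reference, mu, n_taps=4, frac_bits=None):
    """LMS adaptive noise canceller. If frac_bits is given, weights and intermediate
    results are rounded to a grid of 2**-frac_bits, mimicking a fixed-point datapath."""
    q = (lambda v: round(v * 2**frac_bits) / 2**frac_bits) if frac_bits else (lambda v: v)
    w = [0.0] * n_taps
    out = []
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps + 1:n + 1][::-1]      # tap-delay line of the noise reference
        y = q(sum(wi * xi for wi, xi in zip(w, x)))    # estimate of the noise in the primary input
        e = q(primary[n] - y)                          # error signal = cleaned output
        w = [q(wi + mu * e * xi) for wi, xi in zip(w, x)]
        out.append(e)
    return out

random.seed(0)
noise = [random.gauss(0.0, 1.0) for _ in range(2000)]
signal = [0.5 if 900 < n < 950 else 0.0 for n in range(2000)]   # toy stand-in for an SEP waveform
primary = [s + 0.8 * v for s, v in zip(signal, noise)]          # signal buried in correlated noise
print(lms_anc(primary, noise, mu=0.05)[-3:])                    # floating point: residual near zero
print(lms_anc(primary, noise, mu=0.05, frac_bits=10)[-3:])      # 10 fractional bits: extra distortion
```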
A CPU benchmark for protein crystallographic refinement.
Bourne, P E; Hendrickson, W A
1990-01-01
The CPU time required to complete a cycle of restrained least-squares refinement of a protein structure from X-ray crystallographic data using the FORTRAN codes PROTIN and PROLSQ is reported for 48 different processors, ranging from single-user workstations to supercomputers. Sequential, vector, VLIW, multiprocessor, and RISC hardware architectures are compared using both a small and a large protein structure. Representative compile times for each hardware type are also given, and the improvement in run time when coding for a specific hardware architecture is considered. The benchmarks involve scalar integer and vector floating-point arithmetic and are representative of the calculations performed in many scientific disciplines.
A High-Level Formalization of Floating-Point Numbers in PVS
NASA Technical Reports Server (NTRS)
Boldo, Sylvie; Munoz, Cesar
2006-01-01
We develop a formalization of floating-point numbers in PVS based on a well-known formalization in Coq. We first describe the definitions of all the needed notions, e.g., floating-point number, format, rounding modes, etc.; then, we present an application to polynomial evaluation for elementary function evaluation. The application already existed in Coq, but our formalization shows a clear improvement in the quality of the result due to the automation provided by PVS. We finally integrate our formalization into a PVS hardware-level formalization of the IEEE-854 standard previously developed at NASA.
Hardware math for the 6502 microprocessor
NASA Technical Reports Server (NTRS)
Kissel, R.; Currie, J.
1985-01-01
A floating-point arithmetic unit is described which is being used in the Ground Facility of Large Space Structures Control Verification (GF/LSSCV). The experiment uses two complete inertial measurement units and a set of three gimbal torquers in a closed loop to control the structural vibrations in a flexible test article (beam). A 6502 (8-bit) microprocessor controls four AMD 9511A floating-point arithmetic units to do all the computation in 20 milliseconds.
DSS 13 Microprocessor Antenna Controller
NASA Technical Reports Server (NTRS)
Gosline, R. M.
1984-01-01
A microprocessor-based antenna controller system developed as part of the unattended station project for DSS 13 is described. Both the hardware and software top-level designs are presented and the major problems encountered are discussed. Developments useful to related projects include a JPL standard 15-line interface using a single-board computer, a general-purpose parser, a fast floating point to ASCII conversion technique, and experience gained in using off-board floating point processors with the 8080 CPU.
Fixed-Rate Compressed Floating-Point Arrays.
Lindstrom, Peter
2014-12-01
Current compression schemes for floating-point data commonly take fixed-precision values and compress them to a variable-length bit stream, complicating memory management and random access. We present a fixed-rate, near-lossless compression scheme that maps small blocks of 4^d values in d dimensions to a fixed, user-specified number of bits per block, thereby allowing read and write random access to compressed floating-point data at block granularity. Our approach is inspired by fixed-rate texture compression methods widely adopted in graphics hardware, but has been tailored to the high dynamic range and precision demands of scientific applications. Our compressor is based on a new, lifted, orthogonal block transform and embedded coding, allowing each per-block bit stream to be truncated at any point if desired, thus facilitating bit rate selection using a single compression scheme. To avoid compression or decompression upon every data access, we employ a software write-back cache of uncompressed blocks. Our compressor has been designed with computational simplicity and speed in mind to allow for the possibility of a hardware implementation, and uses only a small number of fixed-point arithmetic operations per compressed value. We demonstrate the viability and benefits of lossy compression in several applications, including visualization, quantitative data analysis, and numerical simulation.
Kole, J S; Beekman, F J
2006-02-21
Statistical reconstruction methods offer possibilities to improve image quality as compared with analytical methods, but current reconstruction times prohibit routine application in clinical and micro-CT. In particular, for cone-beam x-ray CT, the use of graphics hardware has been proposed to accelerate the forward and back-projection operations, in order to reduce reconstruction times. In the past, wide application of this texture hardware mapping approach was hampered owing to limited intrinsic accuracy. Recently, however, floating point precision has become available in the latest generation commodity graphics cards. In this paper, we utilize this feature to construct a graphics hardware accelerated version of the ordered subset convex reconstruction algorithm. The aims of this paper are (i) to study the impact of using graphics hardware acceleration for statistical reconstruction on the reconstructed image accuracy and (ii) to measure the speed increase one can obtain by using graphics hardware acceleration. We compare the unaccelerated algorithm with the graphics hardware accelerated version, and for the latter we consider two different interpolation techniques. A simulation study of a micro-CT scanner with a mathematical phantom shows that at almost preserved reconstructed image accuracy, speed-ups of a factor 40 to 222 can be achieved, compared with the unaccelerated algorithm, and depending on the phantom and detector sizes. Reconstruction from physical phantom data reconfirms the usability of the accelerated algorithm for practical cases.
An embedded controller for a 7-degree of freedom prosthetic arm.
Tenore, Francesco; Armiger, Robert S; Vogelstein, R Jacob; Wenstrand, Douglas S; Harshbarger, Stuart D; Englehart, Kevin
2008-01-01
We present results from an embedded real-time hardware system capable of decoding surface myoelectric signals (sMES) to control a seven degree of freedom upper limb prosthesis. This is one of the first hardware implementations of sMES decoding algorithms and the most advanced controller to-date. We compare decoding results from the device to simulation results from a real-time PC-based operating system. Performance of both systems is shown to be similar, with decoding accuracy greater than 90% for the floating point software simulation and 80% for fixed point hardware and software implementations.
Design of permanent magnet synchronous motor speed control system based on SVPWM
NASA Astrophysics Data System (ADS)
Wu, Haibo
2017-04-01
The control system realizes a permanent magnet synchronous motor speed control system based on the TMS320F28335 and applies it to all-electric injection molding machines. The control method uses SVPWM: by sampling the motor current and the position information from the rotating transformer (resolver), it realizes double closed-loop control of speed and current. The hardware floating-point core of the TMS320F28335 allows the permanent magnet synchronous motor algorithms to be implemented in floating-point arithmetic, replacing the earlier fixed-point algorithms and improving the efficiency of the code.
1983-04-01
...diagnostic/fault isolation devices... (with diagnostic software based on a "fault tree" representation of the M65 ThS) to bridge the gap in diagnostics capability was demonstrated in 1980 and... identification friend or foe (IFF), which has much lower reliability than TSQ-73-peculiar hardware). Thus, as in other examples, reported readiness does not reflect...
Floating-point scaling technique for sources separation automatic gain control
NASA Astrophysics Data System (ADS)
Fermas, A.; Belouchrani, A.; Ait-Mohamed, O.
2012-07-01
Based on the floating-point representation and taking advantage of the scaling factor indetermination in blind source separation (BSS) processing, we propose a scaling technique applied to the separation matrix to avoid saturation or weakness in the recovered source signals. This technique performs an automatic gain control in an on-line BSS environment. We demonstrate the effectiveness of this technique using the implementation of a division-free BSS algorithm with two inputs and two outputs. The proposed technique is computationally cheap and efficient for a hardware implementation compared with Euclidean normalisation.
NASA Astrophysics Data System (ADS)
Qiu, Mo; Yu, Simin; Wen, Yuqiong; Lü, Jinhu; He, Jianbin; Lin, Zhuosheng
In this paper, a novel design methodology and its FPGA hardware implementation for a universal chaotic signal generator are proposed via a Verilog HDL fixed-point algorithm and state machine control. According to the continuous-time or discrete-time chaotic equations, a Verilog HDL fixed-point algorithm and its corresponding digital system are first designed. In the FPGA hardware platform, each operation step of the Verilog HDL fixed-point algorithm is then controlled by a state machine. The generality of this method is that, for any given chaotic equation, it can be decomposed into four basic operation procedures, i.e. nonlinear function calculation, iterative sequence operation, right shifting and ceiling of the iterative values, and output of the chaotic iterative sequences, each of which corresponds to one state via state machine control. Compared with a Verilog HDL floating-point algorithm, the Verilog HDL fixed-point algorithm saves FPGA hardware resources and improves operation efficiency. FPGA-based hardware experimental results validate the feasibility and reliability of the proposed approach.
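The Verilog HDL design itself is not shown in the abstract; as a minimal illustration of the fixed-point arithmetic it describes (multiply, then right-shift to restore the scale), here is a chaotic logistic-map iteration in Q16 fixed point, written in Python rather than Verilog and using an illustrative map rather than the paper's equations:

```python
FRAC = 16                       # Q16 fixed-point format: value = integer / 2**16
ONE = 1 << FRAC

def to_fix(x):
    return int(round(x * ONE))

def fix_mul(a, b):
    return (a * b) >> FRAC      # multiply, then right-shift to restore the fixed-point scale

def logistic_fixed(x0, r, steps):
    """Iterate the logistic map x <- r*x*(1-x) entirely in integer (fixed-point) arithmetic."""
    x, r = to_fix(x0), to_fix(r)
    seq = []
    for _ in range(steps):
        x = fix_mul(r, fix_mul(x, ONE - x))   # nonlinear function calculation + rescaling
        seq.append(x / ONE)                   # convert back to a real value for output
    return seq

print(logistic_fixed(0.3, 3.99, 5))
```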
A Real-Time Marker-Based Visual Sensor Based on a FPGA and a Soft Core Processor
Tayara, Hilal; Ham, Woonchul; Chong, Kil To
2016-01-01
This paper introduces a real-time marker-based visual sensor architecture for mobile robot localization and navigation. A hardware acceleration architecture for post video processing system was implemented on a field-programmable gate array (FPGA). The pose calculation algorithm was implemented in a System on Chip (SoC) with an Altera Nios II soft-core processor. For every frame, single pass image segmentation and Feature Accelerated Segment Test (FAST) corner detection were used for extracting the predefined markers with known geometries in FPGA. Coplanar PosIT algorithm was implemented on the Nios II soft-core processor supplied with floating point hardware for accelerating floating point operations. Trigonometric functions have been approximated using Taylor series and cubic approximation using Lagrange polynomials. Inverse square root method has been implemented for approximating square root computations. Real time results have been achieved and pixel streams have been processed on the fly without any need to buffer the input frame for further implementation. PMID:27983714
A Real-Time Marker-Based Visual Sensor Based on a FPGA and a Soft Core Processor.
Tayara, Hilal; Ham, Woonchul; Chong, Kil To
2016-12-15
This paper introduces a real-time marker-based visual sensor architecture for mobile robot localization and navigation. A hardware acceleration architecture for post video processing system was implemented on a field-programmable gate array (FPGA). The pose calculation algorithm was implemented in a System on Chip (SoC) with an Altera Nios II soft-core processor. For every frame, single pass image segmentation and Feature Accelerated Segment Test (FAST) corner detection were used for extracting the predefined markers with known geometries in FPGA. Coplanar PosIT algorithm was implemented on the Nios II soft-core processor supplied with floating point hardware for accelerating floating point operations. Trigonometric functions have been approximated using Taylor series and cubic approximation using Lagrange polynomials. Inverse square root method has been implemented for approximating square root computations. Real time results have been achieved and pixel streams have been processed on the fly without any need to buffer the input frame for further implementation.
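The abstract does not give the exact approximations used; one widely known inverse-square-root approximation of the kind referred to (a bit-level initial guess refined by a Newton step, often attributed to Quake III) looks like this in Python:

```python
import struct

def fast_inv_sqrt(x):
    """Approximate 1/sqrt(x) with the classic single-precision bit trick plus one Newton step."""
    i = struct.unpack(">I", struct.pack(">f", x))[0]   # reinterpret the float bits as an integer
    i = 0x5F3759DF - (i >> 1)                          # magic-constant initial guess
    y = struct.unpack(">f", struct.pack(">I", i))[0]
    y = y * (1.5 - 0.5 * x * y * y)                    # one Newton-Raphson refinement
    return y

print(fast_inv_sqrt(2.0), 2.0 ** -0.5)   # ~0.7069 vs 0.7071
```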
Hardware-Independent Proofs of Numerical Programs
NASA Technical Reports Server (NTRS)
Boldo, Sylvie; Nguyen, Thi Minh Tuyen
2010-01-01
On recent architectures, a numerical program may give different answers depending on the execution hardware and the compilation. Our goal is to formally prove properties about numerical programs that are true for multiple architectures and compilers. We propose an approach that states the rounding error of each floating-point computation whatever the environment. This approach is implemented in the Frama-C platform for static analysis of C code. Small case studies using this approach are entirely and automatically proved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demmel, James W.
This project addresses both communication-avoiding algorithms, and reproducible floating-point computation. Communication, i.e. moving data, either between levels of memory or processors over a network, is much more expensive per operation than arithmetic (measured in time or energy), so we seek algorithms that greatly reduce communication. We developed many new algorithms for both dense and sparse, and both direct and iterative linear algebra, attaining new communication lower bounds, and getting large speedups in many cases. We also extended this work in several ways: (1) We minimize writes separately from reads, since writes may be much more expensive than reads on emerging memory technologies, like Flash, sometimes doing asymptotically fewer writes than reads. (2) We extend the lower bounds and optimal algorithms to arbitrary algorithms that may be expressed as perfectly nested loops accessing arrays, where the array subscripts may be arbitrary affine functions of the loop indices (e.g. A(i), B(i,j+k, k+3*m-7, …) etc.). (3) We extend our communication-avoiding approach to some machine learning algorithms, such as support vector machines. This work has won a number of awards. We also address reproducible floating-point computation. We define reproducibility to mean getting bitwise identical results from multiple runs of the same program, perhaps with different hardware resources or other changes that should ideally not change the answer. Many users depend on reproducibility for debugging or correctness. However, dynamic scheduling of parallel computing resources, combined with nonassociativity of floating point addition, makes attaining reproducibility a challenge even for simple operations like summing a vector of numbers, or more complicated operations like the Basic Linear Algebra Subprograms (BLAS). We describe an algorithm that computes a reproducible sum of floating point numbers, independent of the order of summation. The algorithm depends only on a subset of the IEEE Floating Point Standard 754-2008, uses just 6 words to represent a "reproducible accumulator," and requires just one read-only pass over the data, or one reduction in parallel. New instructions based on this work are being considered for inclusion in the future IEEE 754-2018 floating-point standard, and new reproducible BLAS are being considered for the next version of the BLAS standard.
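The 6-word reproducible accumulator itself is not reproduced here; a much simpler way to see why order independence matters, and how it can be obtained in principle (accumulate exactly, round once at the end), is the following Python sketch:

```python
from fractions import Fraction
import random

def reproducible_sum(values):
    """Order-independent summation: each double is converted to an exact rational,
    the rationals are summed exactly, and the result is rounded back once.
    (An illustration of reproducibility, not the project's 6-word accumulator.)"""
    total = sum((Fraction(v) for v in values), Fraction(0))
    return float(total)

random.seed(1)
data = [random.uniform(-1e16, 1e16) for _ in range(10000)] + [1.0]
shuffled = data[:]
random.shuffle(shuffled)
print(sum(data) == sum(shuffled))                            # naive sums usually differ after reordering
print(reproducible_sum(data) == reproducible_sum(shuffled))  # always True
```

Exact rational accumulation is of course far slower than the fixed-size accumulator the project describes, but it makes the order-independence property itself obvious.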
Measuring FLOPS Using Hardware Performance Counter Technologies on LC systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahn, D H
2008-09-05
FLOPS (FLoating-point Operations Per Second) is a commonly used performance metric for scientific programs that rely heavily on floating-point (FP) calculations. The metric is based on the number of FP operations rather than instructions, thereby facilitating a fair comparison between different machines. A well-known use of this metric is the LINPACK benchmark that is used to generate the Top500 list. It measures how fast a computer solves a dense N by N system of linear equations Ax=b, which requires a known number of FP operations, and reports the result in millions of FP operations per second (MFLOPS). While running a benchmark with known FP workloads can provide insightful information about the efficiency of a machine's FP pipelines in relation to other machines, measuring the FLOPS of an arbitrary scientific application in a platform-independent manner is nontrivial. The goal of this paper is twofold. First, we explore the FP microarchitectures of key processors that are underpinning the LC machines. Second, we present the hardware performance monitoring counter-based measurement techniques that a user can use to get the native FLOPS of his or her program, which are practical solutions readily available on LC platforms. By nature, however, these native FLOPS metrics are not directly comparable across different machines, mainly because FP operations are not consistent across microarchitectures. Thus, the first goal of this paper represents the base reference by which a user can interpret the measured FLOPS more judiciously.
Pfeil, Thomas; Potjans, Tobias C; Schrader, Sven; Potjans, Wiebke; Schemmel, Johannes; Diesmann, Markus; Meier, Karlheinz
2012-01-01
Large-scale neuromorphic hardware systems typically bear the trade-off between detail level and required chip resources. Especially when implementing spike-timing dependent plasticity, reduction in resources leads to limitations as compared to floating point precision. By design, a natural modification that saves resources would be reducing synaptic weight resolution. In this study, we give an estimate for the impact of synaptic weight discretization on different levels, ranging from random walks of individual weights to computer simulations of spiking neural networks. The FACETS wafer-scale hardware system offers a 4-bit resolution of synaptic weights, which is shown to be sufficient within the scope of our network benchmark. Our findings indicate that increasing the resolution may not even be useful in light of further restrictions of customized mixed-signal synapses. In addition, variations due to production imperfections are investigated and shown to be uncritical in the context of the presented study. Our results represent a general framework for setting up and configuring hardware-constrained synapses. We suggest how weight discretization could be considered for other backends dedicated to large-scale simulations. Thus, our proposition of a good hardware verification practice may give rise to synergy effects between hardware developers and neuroscientists.
Pfeil, Thomas; Potjans, Tobias C.; Schrader, Sven; Potjans, Wiebke; Schemmel, Johannes; Diesmann, Markus; Meier, Karlheinz
2012-01-01
Large-scale neuromorphic hardware systems typically bear the trade-off between detail level and required chip resources. Especially when implementing spike-timing dependent plasticity, reduction in resources leads to limitations as compared to floating point precision. By design, a natural modification that saves resources would be reducing synaptic weight resolution. In this study, we give an estimate for the impact of synaptic weight discretization on different levels, ranging from random walks of individual weights to computer simulations of spiking neural networks. The FACETS wafer-scale hardware system offers a 4-bit resolution of synaptic weights, which is shown to be sufficient within the scope of our network benchmark. Our findings indicate that increasing the resolution may not even be useful in light of further restrictions of customized mixed-signal synapses. In addition, variations due to production imperfections are investigated and shown to be uncritical in the context of the presented study. Our results represent a general framework for setting up and configuring hardware-constrained synapses. We suggest how weight discretization could be considered for other backends dedicated to large-scale simulations. Thus, our proposition of a good hardware verification practice may give rise to synergy effects between hardware developers and neuroscientists. PMID:22822388
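As a minimal sketch of the discretization being studied (not the FACETS hardware mapping itself; the weight range and values are illustrative), continuous weights can be clipped and rounded onto the 16 levels of a 4-bit scale:

```python
def discretize_weights(weights, bits=4, w_max=1.0):
    """Map continuous synaptic weights onto the 2**bits levels available in hardware."""
    levels = 2**bits - 1
    out = []
    for w in weights:
        w = min(max(w, 0.0), w_max)              # clip to the supported range
        step = round(w / w_max * levels)         # nearest of the 16 discrete levels
        out.append(step * w_max / levels)
    return out

weights = [0.03, 0.37, 0.52, 0.99]
print(discretize_weights(weights))   # e.g. 0.37 -> 0.4 (level 6 of 15), 0.03 -> 0.0
```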
Arnold, Jeffrey
2018-05-14
Floating-point computations are at the heart of much of the computing done in high energy physics. The correctness, speed and accuracy of these computations are of paramount importance. The lack of any of these characteristics can mean the difference between new, exciting physics and an embarrassing correction. This talk will examine practical aspects of IEEE 754-2008 floating-point arithmetic as encountered in HEP applications. After describing the basic features of IEEE floating-point arithmetic, the presentation will cover: common hardware implementations (SSE, x87); techniques for improving the accuracy of summation, multiplication and data interchange; compiler options for gcc and icc affecting floating-point operations; and hazards to be avoided. About the speaker: Jeffrey M Arnold is a Senior Software Engineer in the Intel Compiler and Languages group at Intel Corporation. He has been part of the Digital->Compaq->Intel compiler organization for nearly 20 years; part of that time, he worked on both low- and high-level math libraries. Prior to that, he was in the VMS Engineering organization at Digital Equipment Corporation. In the late 1980s, Jeff spent 2½ years at CERN as part of the CERN/Digital Joint Project. In 2008, he returned to CERN to spend 10 weeks working with CERN/openlab. Since that time, he has returned to CERN multiple times to teach at openlab workshops and consult with various LHC experiments. Jeff received his Ph.D. in physics from Case Western Reserve University.
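One standard technique for improving the accuracy of summation, of the kind the talk covers (the example below is mine, not the speaker's material), is Kahan compensated summation:

```python
def kahan_sum(values):
    """Compensated (Kahan) summation: carry the rounding error of each addition forward."""
    s = 0.0
    c = 0.0                  # running compensation for lost low-order bits
    for x in values:
        y = x - c
        t = s + y            # low-order bits of y may be lost here...
        c = (t - s) - y      # ...but are recovered algebraically into c
        s = t
    return s

data = [1e16] + [1.0] * 1000 + [-1e16]
print(sum(data), kahan_sum(data))   # naive: 0.0, compensated: 1000.0
```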
CT image reconstruction with half precision floating-point values.
Maaß, Clemens; Baer, Matthias; Kachelrieß, Marc
2011-07-01
Analytic CT image reconstruction is a computationally demanding task. Currently, the even more demanding iterative reconstruction algorithms find their way into clinical routine because their image quality is superior to analytic image reconstruction. The authors thoroughly analyze a so far unconsidered but valuable tool of tomorrow's reconstruction hardware (CPU and GPU) that allows implementing the forward projection and backprojection steps, which are the computationally most demanding parts of any reconstruction algorithm, much more efficiently. Instead of the standard 32-bit floating-point values (float), a recently standardized floating-point format with 16 bits (half) is adopted for data representation in the image domain and in the rawdata domain. The reduction in the total data amount reduces the traffic on the memory bus, which is the bottleneck of today's high-performance algorithms, by 50%. In CT simulations and CT measurements, float reconstructions (gold standard) and half reconstructions are visually compared via difference images and by quantitative image quality evaluation. This is done for analytical reconstruction (filtered backprojection) and iterative reconstruction (ordered subset SART). The magnitude of quantization noise, which is caused by a reduction in the data precision of both rawdata and image data during image reconstruction, is negligible. This is clearly shown for filtered backprojection and iterative ordered subset SART reconstruction. In filtered backprojection, the implementation of the backprojection should be optimized for low data precision if the image data are represented in half format. In ordered subset SART image reconstruction, no adaptations are necessary and the convergence speed remains unchanged. Half precision floating-point values allow to speed up CT image reconstruction without compromising image quality.
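The effect of a half-precision round-trip can be reproduced in a few lines of NumPy (a simplified stand-in for the authors' reconstruction pipeline; the value range below is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
image = (100.0 + rng.random((256, 256)) * 900.0).astype(np.float32)   # stand-in image data

# Round-trip through half precision, as if image/rawdata buffers were stored as 16-bit floats.
half = image.astype(np.float16).astype(np.float32)

quantization_noise = half - image
print(np.abs(quantization_noise).max())            # worst-case absolute error
print(np.abs(quantization_noise / image).max())    # relative error stays near 2**-11 ~ 5e-4
```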
40 CFR 63.1063 - Floating roof requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... the point of refloating the floating roof shall be continuous and shall be performed as soon as... 40 Protection of Environment 10 2010-07-01 2010-07-01 false Floating roof requirements. 63.1063...) National Emission Standards for Storage Vessels (Tanks)-Control Level 2 § 63.1063 Floating roof...
NASA Astrophysics Data System (ADS)
Zinke, Stephan
2017-02-01
Memory-sensitive applications for remote sensing data require memory-optimized data types in remote sensing products. Hierarchical Data Format version 5 (HDF5) offers user-defined floating point numbers and integers and the n-bit filter to create data types optimized for memory consumption. The European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) applies a compaction scheme to the disseminated products of the Day and Night Band (DNB) data of the Suomi National Polar-orbiting Partnership (S-NPP) satellite's instrument Visible Infrared Imager Radiometer Suite (VIIRS) through the EUMETSAT Advanced Retransmission Service, converting the original 32-bit floating point numbers to user-defined floating point numbers in combination with the n-bit filter for the radiance dataset of the product. The radiance dataset requires a floating point representation due to the high dynamic range of the DNB. A compression factor of 1.96 is reached by using an automatically determined exponent size and an 8-bit trailing significand, thus reducing the bandwidth requirements for dissemination. It is shown how the parameters needed for user-defined floating point numbers are derived or determined automatically based on the data present in a product.
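The HDF5 user-defined datatype and n-bit filter configuration are not shown here; the underlying idea, sizing the exponent field from the data's dynamic range and keeping only an 8-bit trailing significand, can be sketched in plain Python (the sizing rule and the sample radiance values are illustrative assumptions, not the paper's procedure):

```python
import math
import struct

def required_exponent_bits(values):
    """One possible way to size an exponent field: cover the binary-exponent span of the data,
    reserving the all-zeros/all-ones codes as IEEE-style formats do."""
    exps = [math.frexp(v)[1] for v in values if v != 0.0]
    span = max(exps) - min(exps) + 1
    bits = 1
    while (1 << bits) - 2 < span:
        bits += 1
    return bits

def truncate_significand(x, keep_bits=8):
    """Keep only `keep_bits` of the 23-bit trailing significand of a 32-bit float."""
    i = struct.unpack(">I", struct.pack(">f", x))[0]
    mask = ~((1 << (23 - keep_bits)) - 1) & 0xFFFFFFFF
    return struct.unpack(">f", struct.pack(">I", i & mask))[0]

radiances = [3.2e-9, 4.7e-7, 1.1e-5, 2.5e-4]        # hypothetical DNB-like dynamic range
print(required_exponent_bits(radiances))            # exponent field needed for this range
print(truncate_significand(4.7e-7), 4.7e-7)         # value after dropping the low-order bits
```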
Numerical Integration with Graphical Processing Unit for QKD Simulation
2014-03-27
Windows system application programming interface (API) timer. The problem sizes studied produce speedups greater than 60x on the NVIDIA Tesla C2075... CUDA API... CUDA and NVIDIA GPU Hardware... Theoretical Floating-Point Operations per Second for Intel CPUs and NVIDIA GPUs [3
Software feedback for monochromator tuning at UNICAT (abstract)
NASA Astrophysics Data System (ADS)
Jemian, Pete R.
2002-03-01
Automatic tuning of double-crystal monochromators presents an interesting challenge in software. The goal is to either maximize, or hold constant, the throughput of the monochromator. An additional goal of the software feedback is to disable itself when there is no beam and then, at the user's discretion, re-enable itself when the beam returns. These and other routine goals, such as adherence to limits of travel for positioners, are maintained by software controls. Many solutions exist to lock in and maintain a fixed throughput. Among these are a hardware solution involving a waveform generator and a lock-in amplifier to autocorrelate the movement of a piezoelectric transducer (PZT) providing fine adjustment of the second-crystal Bragg angle. This solution does not work when the positioner is a slow-acting device such as a stepping motor. Proportional-integral-derivative (PID) loops have been used to provide feedback through software, but additional controls must be provided to maximize the monochromator throughput. Presented here is a software variation of the PID loop which meets the above goals. Using two floating point variables as inputs, representing the intensity of x rays measured before and after the monochromator, it attempts to maximize (or hold constant) the ratio of these two inputs by adjusting an output floating point variable. These floating point variables are connected to hardware channels corresponding to detectors and positioners. When the inputs go out of range, the software will stop making adjustments to the control output. Not limited to monochromator feedback, the software could be used, with beam-steering positioners, to maintain a measure of beam position. Advantages of this software feedback are the flexibility of its various components. It has been used with stepping motors and PZTs as positioners. Various devices such as ion chambers, scintillation counters, photodiodes, and photoelectron collectors have been used as detectors. The software provides significant cost savings over hardware feedback methods. Presently implemented in EPICS, the software is sufficiently general for use with any automated instrument control system.
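A minimal sketch of one ratio-holding feedback step with the out-of-range lockout described above might look like the following (this is an illustration only, not the EPICS implementation; the gain, limits and intensity ranges are made up):

```python
def feedback_step(i_before, i_after, output, setpoint, gain=0.1,
                  valid_range=(1e-3, 1e12), output_limits=(-10.0, 10.0)):
    """One iteration of a ratio-holding feedback loop.

    i_before, i_after : intensities measured before/after the monochromator
    output            : current positioner setting (e.g. a PZT voltage)
    Returns the new output; leaves it untouched when either input is out of range."""
    if not (valid_range[0] <= i_before <= valid_range[1] and
            valid_range[0] <= i_after <= valid_range[1]):
        return output                        # beam is gone: stop making adjustments
    error = setpoint - i_after / i_before    # hold the throughput ratio at the setpoint
    new_output = output + gain * error       # integral-like correction
    return min(max(new_output, output_limits[0]), output_limits[1])

out = 0.0
for i0, i1 in [(100.0, 70.0), (100.0, 74.0), (0.0, 0.0), (100.0, 79.0)]:
    out = feedback_step(i0, i1, out, setpoint=0.8)
    print(round(out, 4))    # the third pair is out of range, so the output is left unchanged
```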
NULL Convention Floating Point Multiplier
Albert, Anitha Juliette; Ramachandran, Seshasayanan
2015-01-01
Floating point multiplication is a critical part in high dynamic range and computational intensive digital signal processing applications which require high precision and low power. This paper presents the design of an IEEE 754 single precision floating point multiplier using asynchronous NULL convention logic paradigm. Rounding has not been implemented to suit high precision applications. The novelty of the research is that it is the first ever NULL convention logic multiplier, designed to perform floating point multiplication. The proposed multiplier offers substantial decrease in power consumption when compared with its synchronous version. Performance attributes of the NULL convention logic floating point multiplier, obtained from Xilinx simulation and Cadence, are compared with its equivalent synchronous implementation. PMID:25879069
NULL convention floating point multiplier.
Albert, Anitha Juliette; Ramachandran, Seshasayanan
2015-01-01
Floating point multiplication is a critical part in high dynamic range and computational intensive digital signal processing applications which require high precision and low power. This paper presents the design of an IEEE 754 single precision floating point multiplier using asynchronous NULL convention logic paradigm. Rounding has not been implemented to suit high precision applications. The novelty of the research is that it is the first ever NULL convention logic multiplier, designed to perform floating point multiplication. The proposed multiplier offers substantial decrease in power consumption when compared with its synchronous version. Performance attributes of the NULL convention logic floating point multiplier, obtained from Xilinx simulation and Cadence, are compared with its equivalent synchronous implementation.
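The NULL convention logic circuit is not reproduced here, but the arithmetic that any single-precision multiplier performs, XOR the signs, add the exponents, multiply the significands, normalize, and (as in the design above) truncate rather than round, can be traced with integer operations in Python (normal, finite inputs only):

```python
import struct

def fp32_fields(x):
    b = struct.unpack(">I", struct.pack(">f", x))[0]
    return b >> 31, (b >> 23) & 0xFF, b & 0x7FFFFF

def fp32_multiply(a, b):
    """Multiply two normal single-precision floats using only integer operations:
    XOR the signs, add the biased exponents, multiply the 24-bit significands, normalize,
    then truncate the low-order bits instead of rounding."""
    sa, ea, ma = fp32_fields(a)
    sb, eb, mb = fp32_fields(b)
    sign = sa ^ sb
    exp = ea + eb - 127                      # remove one copy of the bias
    sig = (ma | 1 << 23) * (mb | 1 << 23)    # 24x24-bit product (48 bits), implicit 1 restored
    if sig & (1 << 47):                      # product in [2, 4): shift right, bump the exponent
        sig >>= 1
        exp += 1
    frac = (sig >> 23) & 0x7FFFFF            # drop the implicit bit, truncate low-order bits
    bits = (sign << 31) | (exp << 23) | frac
    return struct.unpack(">f", struct.pack(">I", bits))[0]

print(fp32_multiply(1.5, -2.5), 1.5 * -2.5)   # -3.75 -3.75
```

Zeros, infinities, NaNs and subnormals are deliberately left out; a real unit adds special-case handling around the same core steps.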
20-GFLOPS QR processor on a Xilinx Virtex-E FPGA
NASA Astrophysics Data System (ADS)
Walke, Richard L.; Smith, Robert W. M.; Lightbody, Gaye
2000-11-01
Adaptive beamforming can play an important role in sensor array systems in countering directional interference. In high-sample-rate systems, such as radar and communications, the calculation of adaptive weights is a very computationally demanding task that requires highly parallel solutions. For systems where low power consumption and volume are important, the only viable implementation is as an Application Specific Integrated Circuit (ASIC). However, the rapid advancement of Field Programmable Gate Array (FPGA) technology is enabling highly credible re-programmable solutions. In this paper we present the implementation of a scalable linear array processor for weight calculation using QR decomposition. We employ floating-point arithmetic with mantissa size optimized to the target application to minimize component size, and implement the operators as relationally placed macros (RPMs) on Xilinx Virtex FPGAs to achieve predictable dense layout and high-speed operation. We present results that show that 20 GFLOPS of sustained computation on a single XCV3200E-8 Virtex-E FPGA is possible. We also describe the parameterized implementation of the floating-point operators and QR processor, and the design methodology that enables us to rapidly generate complex FPGA implementations using the industry-standard hardware description language VHDL.
Implementing direct, spatially isolated problems on transputer networks
NASA Technical Reports Server (NTRS)
Ellis, Graham K.
1988-01-01
Parametric studies were performed on transputer networks of up to 40 processors to determine how to implement and maximize the performance of the solution of problems where no processor-to-processor data transfer is required for the problem solution (spatially isolated problems). Two types of problems are investigated: a computationally intensive problem where the solution required the transmission of 160 bytes of data through the parallel network, and a communication-intensive example that required the transmission of 3 Mbytes of data through the network. This data consists of solutions being sent back to the host processor and not intermediate results for another processor to work on. Studies were performed on both integer and floating-point transputers. The latter features an on-chip floating-point math unit and offers approximately an order of magnitude performance increase over the integer transputer on real-valued computations. The results indicate that a minimum amount of work is required on each node per communication to achieve high network speedups (efficiencies). The floating-point processor requires approximately an order of magnitude more work per communication than the integer processor because of the floating-point unit's increased computing capacity.
Komorkiewicz, Mateusz; Kryjak, Tomasz; Gorgon, Marek
2014-01-01
This article presents an efficient hardware implementation of the Horn-Schunck algorithm that can be used in an embedded optical flow sensor. An architecture is proposed that realises the iterative Horn-Schunck algorithm in a pipelined manner. This modification allows a data throughput of 175 MPixels/s to be achieved and makes processing of a Full HD video stream (1920 × 1080 @ 60 fps) possible. The structure of the optical flow module as well as the pre- and post-filtering blocks and a flow reliability computation unit is described in detail. Three versions of the optical flow module, with different numerical precision, working frequency and result accuracy, are proposed. The errors caused by switching from floating- to fixed-point computations are also evaluated. The described architecture was tested on popular sequences from the optical flow dataset of Middlebury University. It achieves state-of-the-art results among hardware implementations of single-scale methods. The designed fixed-point architecture achieves a performance of 418 GOPS with a power efficiency of 34 GOPS/W. The proposed floating-point module achieves 103 GFLOPS, with a power efficiency of 24 GFLOPS/W. Moreover, a 100 times speedup compared to a modern CPU with SIMD support is reported. A complete, working vision system realized on a Xilinx VC707 evaluation board is also presented. It is able to compute optical flow for a Full HD video stream received from an HDMI camera in real time. The obtained results prove that FPGA devices are an ideal platform for embedded vision systems. PMID:24526303
Komorkiewicz, Mateusz; Kryjak, Tomasz; Gorgon, Marek
2014-02-12
This article presents an efficient hardware implementation of the Horn-Schunck algorithm that can be used in an embedded optical flow sensor. An architecture is proposed that realises the iterative Horn-Schunck algorithm in a pipelined manner. This modification allows a data throughput of 175 MPixels/s to be achieved and makes processing of a Full HD video stream (1920 × 1080 @ 60 fps) possible. The structure of the optical flow module as well as the pre- and post-filtering blocks and a flow reliability computation unit is described in detail. Three versions of the optical flow module, with different numerical precision, working frequency and result accuracy, are proposed. The errors caused by switching from floating- to fixed-point computations are also evaluated. The described architecture was tested on popular sequences from the optical flow dataset of Middlebury University. It achieves state-of-the-art results among hardware implementations of single-scale methods. The designed fixed-point architecture achieves a performance of 418 GOPS with a power efficiency of 34 GOPS/W. The proposed floating-point module achieves 103 GFLOPS, with a power efficiency of 24 GFLOPS/W. Moreover, a 100 times speedup compared to a modern CPU with SIMD support is reported. A complete, working vision system realized on a Xilinx VC707 evaluation board is also presented. It is able to compute optical flow for a Full HD video stream received from an HDMI camera in real time. The obtained results prove that FPGA devices are an ideal platform for embedded vision systems.
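For reference, the Horn-Schunck iteration that such a pipeline implements can be written compactly in NumPy (a plain software version, not the FPGA architecture; alpha, the boundary handling and the iteration count are arbitrary choices):

```python
import numpy as np

def horn_schunck(im1, im2, alpha=1.0, n_iter=100):
    """Reference (software) Horn-Schunck optical flow: iteratively refine (u, v)
    from local flow averages and the image gradients fx, fy, ft."""
    im1 = im1.astype(np.float64)
    im2 = im2.astype(np.float64)
    fx = (np.gradient(im1, axis=1) + np.gradient(im2, axis=1)) / 2.0
    fy = (np.gradient(im1, axis=0) + np.gradient(im2, axis=0)) / 2.0
    ft = im2 - im1
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    # 4-neighbour average of the flow field (periodic boundaries, for brevity)
    avg = lambda f: (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                     np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0
    for _ in range(n_iter):
        u_avg, v_avg = avg(u), avg(v)
        update = (fx * u_avg + fy * v_avg + ft) / (alpha**2 + fx**2 + fy**2)
        u = u_avg - fx * update
        v = v_avg - fy * update
    return u, v

# Tiny synthetic test: a bright square shifted one pixel to the right between frames.
frame1 = np.zeros((32, 32)); frame1[12:20, 12:20] = 1.0
frame2 = np.roll(frame1, 1, axis=1)
u, v = horn_schunck(frame1, frame2)
print(u[14:18, 14:18].mean())   # predominantly positive horizontal flow
```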
Design of a reversible single precision floating point subtractor.
Anantha Lakshmi, Av; Sudha, Gf
2014-01-04
In recent years, reversible logic has emerged as a major area of research due to its ability to reduce power dissipation, which is the main requirement in low-power digital circuit design. It has wide applications in low-power CMOS design, nanotechnology, digital signal processing, communication, DNA computing and optical computing. Floating-point operations are needed very frequently in nearly all computing disciplines, and studies have shown floating-point addition/subtraction to be the most used floating-point operation. However, few designs exist for efficient reversible BCD subtractors and there is no prior work on a reversible floating-point subtractor. In this paper, an efficient reversible single-precision floating-point subtractor is presented. The proposed design requires reversible designs of an 8-bit and a 24-bit comparator unit, an 8-bit and a 24-bit subtractor, and a normalization unit. For normalization, a 24-bit reversible leading zero detector and a 24-bit reversible shift register are implemented to shift the mantissas. To realize a reversible 1-bit comparator, two new 3x3 reversible gates are proposed. The proposed reversible 1-bit comparator is better and optimized in terms of the number of reversible gates used, the transistor count and the number of garbage outputs. The proposed work is analysed in terms of the number of reversible gates, garbage outputs, constant inputs and quantum cost. Using these modules, an efficient design of a reversible single-precision floating-point subtractor is proposed. The proposed circuits have been simulated using ModelSim and synthesized using a Xilinx Virtex5vlx30tff665-3. The total on-chip power consumed by the proposed 32-bit reversible floating-point subtractor is 0.410 W.
Potential of minicomputer/array-processor system for nonlinear finite-element analysis
NASA Technical Reports Server (NTRS)
Strohkorb, G. A.; Noor, A. K.
1983-01-01
The potential of using a minicomputer/array-processor system for the efficient solution of large-scale, nonlinear, finite-element problems is studied. A Prime 750 is used as the host computer, and a software simulator residing on the Prime is employed to assess the performance of the Floating Point Systems AP-120B array processor. Major hardware characteristics of the system such as virtual memory and parallel and pipeline processing are reviewed, and the interplay between various hardware components is examined. Effective use of the minicomputer/array-processor system for nonlinear analysis requires the following: (1) proper selection of the computational procedure and the capability to vectorize the numerical algorithms; (2) reduction of input-output operations; and (3) overlapping host and array-processor operations. A detailed discussion is given of techniques to accomplish each of these tasks. Two benchmark problems with 1715 and 3230 degrees of freedom, respectively, are selected to measure the anticipated gain in speed obtained by using the proposed algorithms on the array processor.
Neighbour lists for smoothed particle hydrodynamics on GPUs
NASA Astrophysics Data System (ADS)
Winkler, Daniel; Rezavand, Massoud; Rauch, Wolfgang
2018-04-01
The efficient iteration of neighbouring particles is a performance critical aspect of any high performance smoothed particle hydrodynamics (SPH) solver. SPH solvers that implement a constant smoothing length generally divide the simulation domain into a uniform grid to reduce the computational complexity of the neighbour search. Based on this method, particle neighbours are either stored per grid cell or for each individual particle, denoted as Verlet list. While the latter approach has significantly higher memory requirements, it has the potential for a significant computational speedup. A theoretical comparison is performed to estimate the potential improvements of the method based on unknown hardware dependent factors. Subsequently, the computational performance of both approaches is empirically evaluated on graphics processing units. It is shown that the speedup differs significantly for different hardware, dimensionality and floating point precision. The Verlet list algorithm is implemented as an alternative to the cell linked list approach in the open-source SPH solver DualSPHysics and provided as a standalone software package.
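A minimal sketch of the uniform-grid (cell linked list) search, from which per-particle Verlet lists can be built, is shown below in Python; it illustrates the data structures being compared, not the DualSPHysics GPU implementation, and uses 2D points and an arbitrary smoothing length:

```python
import math
import random
from collections import defaultdict

def build_cell_list(positions, h):
    """Bin particles into a uniform grid of cell size h (the smoothing length)."""
    cells = defaultdict(list)
    for idx, (x, y) in enumerate(positions):
        cells[(int(x // h), int(y // h))].append(idx)
    return cells

def verlet_lists(positions, h):
    """Build a per-particle neighbour (Verlet) list by scanning the 3x3 surrounding cells."""
    cells = build_cell_list(positions, h)
    neighbours = [[] for _ in positions]
    for i, (x, y) in enumerate(positions):
        cx, cy = int(x // h), int(y // h)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for j in cells.get((cx + dx, cy + dy), []):
                    if j != i and math.dist(positions[i], positions[j]) <= h:
                        neighbours[i].append(j)
    return neighbours

random.seed(2)
pts = [(random.random(), random.random()) for _ in range(500)]
nb = verlet_lists(pts, h=0.05)
print(sum(len(n) for n in nb) / len(nb))   # average neighbour count
```

Storing the per-particle lists trades the extra memory discussed above for the ability to iterate neighbours without re-scanning the grid at every interaction step.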
On the use of inexact, pruned hardware in atmospheric modelling
Düben, Peter D.; Joven, Jaume; Lingamneni, Avinash; McNamara, Hugh; De Micheli, Giovanni; Palem, Krishna V.; Palmer, T. N.
2014-01-01
Inexact hardware design, which advocates trading the accuracy of computations in exchange for significant savings in area, power and/or performance of computing hardware, has received increasing prominence in several error-tolerant application domains, particularly those involving perceptual or statistical end-users. In this paper, we evaluate inexact hardware for its applicability in weather and climate modelling. We expand previous studies on inexact techniques, in particular probabilistic pruning, to floating point arithmetic units and derive several simulated set-ups of pruned hardware with reasonable levels of error for applications in atmospheric modelling. The set-up is tested on the Lorenz ‘96 model, a toy model for atmospheric dynamics, using software emulation for the proposed hardware. The results show that large parts of the computation tolerate the use of pruned hardware blocks without major changes in the quality of short- and long-time diagnostics, such as forecast errors and probability density functions. This could open the door to significant savings in computational cost and to higher resolution simulations with weather and climate models. PMID:24842031
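For reference, the Lorenz '96 toy model used as the test bed can be integrated in a few lines (full double precision here; the forcing, state size and step count are the usual illustrative choices, not necessarily those of the paper):

```python
def lorenz96_rhs(x, forcing=8.0):
    """Right-hand side of the Lorenz '96 model: dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F."""
    n = len(x)
    return [(x[(i + 1) % n] - x[(i - 2) % n]) * x[(i - 1) % n] - x[i] + forcing
            for i in range(n)]

def step_rk4(x, dt=0.01):
    """One fourth-order Runge-Kutta step."""
    k1 = lorenz96_rhs(x)
    k2 = lorenz96_rhs([xi + dt / 2 * ki for xi, ki in zip(x, k1)])
    k3 = lorenz96_rhs([xi + dt / 2 * ki for xi, ki in zip(x, k2)])
    k4 = lorenz96_rhs([xi + dt * ki for xi, ki in zip(x, k3)])
    return [xi + dt / 6 * (a + 2 * b + 2 * c + d)
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]

state = [8.0] * 40
state[0] += 0.01                 # small perturbation to trigger chaotic behaviour
for _ in range(1000):
    state = step_rk4(state)
print(state[:3])
```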
NASA Astrophysics Data System (ADS)
Morrison, R. E.; Robinson, S. H.
A continuous wave Doppler radar system has been designed which is portable, easily deployed, and remotely controlled. The heart of this system is a DSP/control board using Analog Devices ADSP-21020 40-bit floating point digital signal processor (DSP) microprocessor. Two 18-bit audio A/D converters provide digital input to the DSP/controller board for near real time target detection. Program memory for the DSP is dual ported with an Intel 87C51 microcontroller allowing DSP code to be up-loaded or down-loaded from a central controlling computer. The 87C51 provides overall system control for the remote radar and includes a time-of-day/day-of-year real time clock, system identification (ID) switches, and input/output (I/O) expansion by an Intel 82C55 I/O expander.
Optimized design of embedded DSP system hardware supporting complex algorithms
NASA Astrophysics Data System (ADS)
Li, Yanhua; Wang, Xiangjun; Zhou, Xinling
2003-09-01
The paper presents an optimized design method for a flexible and economical embedded DSP system that can implement complex processing algorithms such as biometric recognition and real-time image processing. It consists of a floating-point DSP, 512 Kbytes of data RAM, 1 Mbyte of FLASH program memory, a CPLD for flexible logic control of the input channel and an RS-485 transceiver for local network communication. Because a high performance-price-ratio DSP, the TMS320C6712, and a large FLASH are employed in the design, this system permits loading and performing complex algorithms with little algorithm optimization and code reduction. The CPLD provides flexible logic control for the whole DSP board, especially for the input channel, and allows a convenient interface between different sensors and the DSP system. The transceiver circuit can transfer data between the DSP and the host computer. The paper also introduces some key technologies that make the whole system work efficiently. Because of the characteristics referred to above, the hardware is a perfect platform for multi-channel data collection, image processing, and other signal processing with high performance and adaptability. The application section of this paper presents how this hardware is adapted for a biometric identification system with high identification precision. The result reveals that this hardware is easy to interface with a CMOS imager and is capable of carrying out complex biometric identification algorithms, which require real-time processing.
DSP Implementation of the Retinex Image Enhancement Algorithm
NASA Technical Reports Server (NTRS)
Hines, Glenn; Rahman, Zia-Ur; Jobson, Daniel; Woodell, Glenn
2004-01-01
The Retinex is a general-purpose image enhancement algorithm that is used to produce good visual representations of scenes. It performs a non-linear spatial/spectral transform that synthesizes strong local contrast enhancement and color constancy. A real-time, video frame rate implementation of the Retinex is required to meet the needs of various potential users. Retinex processing contains a relatively large number of complex computations, thus to achieve real-time performance using current technologies requires specialized hardware and software. In this paper we discuss the design and development of a digital signal processor (DSP) implementation of the Retinex. The target processor is a Texas Instruments TMS320C6711 floating point DSP. NTSC video is captured using a dedicated frame-grabber card, Retinex processed, and displayed on a standard monitor. We discuss the optimizations used to achieve real-time performance of the Retinex and also describe our future plans on using alternative architectures.
NASA Astrophysics Data System (ADS)
Ould Bachir, Tarek
The real-time simulation of electrical networks has gained vivid industrial interest in recent years, motivated by the substantial development cost reduction that such a prototyping approach can offer. Real-time simulation allows the progressive inclusion of real hardware during its development, allowing it to be tested under realistic conditions. However, CPU-based simulations suffer from certain limitations, such as the difficulty of reaching time steps of a few microseconds, an important challenge brought by modern power converters. Hence, industrial practitioners have adopted the FPGA as a platform of choice for the implementation of calculation engines dedicated to the rapid real-time simulation of electrical networks. The reconfigurable technology broke the 5 kHz switching frequency barrier that is characteristic of CPU-based simulations. Moreover, FPGA-based real-time simulation offers many advantages, including the reduced latency of the simulation loop that is obtained thanks to direct access to sensors and actuators. The fixed-point format is paradigmatic to FPGA-based digital signal processing. However, the format imposes a time penalty in the development process, since the designer has to assess the required precision of all model variables. This fact has brought an important research effort on the use of the floating-point format for the simulation of electrical networks. One of the main challenges in the use of the floating-point format is the long latency required by the elementary arithmetic operators, particularly when an adder is used as an accumulator, an important building block for the implementation of integration rules such as the trapezoidal method. Hence, single-cycle floating-point accumulation forms the core of this research work. Our results help build such operators as accumulators, multiply-accumulators (MACs), and dot-product (DP) operators. These operators play a key role in the implementation of the proposed calculation engines. Therefore, this thesis contributes to the realm of FPGA-based real-time simulation in many ways. The research work proposes a new summation algorithm, which is a generalization of the so-called self-alignment technique. The new formulation is broader and simpler in its expression and hardware implementation. Our research helps formulate criteria to guarantee good accuracy, the criteria being established on a theoretical as well as an empirical basis. Moreover, the thesis offers a comprehensive analysis of the use of the redundant high-radix carry-save (HRCS) format. The HRCS format is used to perform rapid additions of large mantissas. Two new HRCS operators are also proposed, namely an endomorphic adder and an HRCS-to-conventional converter. Once the means to single-cycle accumulation is defined as a combination of the self-alignment technique and the HRCS format, the research focuses on the FPGA implementation of SIMD calculation engines using parallel floating-point MACs or DPs. The proposed operators are characterized by low latencies, allowing the engines to reach very low time steps. The document finally discusses the modelling of power electronic circuits, and concludes with the presentation of a versatile calculation engine capable of simulating power converters with arbitrary topologies and up to 24 switches, while achieving time steps below 1 μs and allowing switching frequencies in the range of tens of kilohertz. The latter realization has led to the commercialization of a product by our industrial partner.
rpe v5: an emulator for reduced floating-point precision in large numerical simulations
NASA Astrophysics Data System (ADS)
Dawson, Andrew; Düben, Peter D.
2017-06-01
This paper describes the rpe (reduced-precision emulator) library which has the capability to emulate the use of arbitrary reduced floating-point precision within large numerical models written in Fortran. The rpe software allows model developers to test how reduced floating-point precision affects the result of their simulations without having to make extensive code changes or port the model onto specialized hardware. The software can be used to identify parts of a program that are problematic for numerical precision and to guide changes to the program to allow a stronger reduction in precision. The development of rpe was motivated by the strong demand for more computing power. If numerical precision can be reduced for an application under consideration while still achieving results of acceptable quality, computational cost can be reduced, since a reduction in numerical precision may allow an increase in performance or a reduction in power consumption. For simulations with weather and climate models, savings due to a reduction in precision could be reinvested to allow model simulations at higher spatial resolution or complexity, or to increase the number of ensemble members to improve predictions. rpe was developed with a particular focus on the community of weather and climate modelling, but the software could be used with numerical simulations from other domains.
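As a rough illustration of what such an emulator does, the following Python sketch (a conceptual stand-in; the rpe library itself is Fortran and operates on Fortran derived types) rounds a double so that only a chosen number of significand bits survive.

    # Conceptual sketch, not the rpe API: keep only `bits` significand bits of x
    # by rounding to nearest, which mimics a reduced-precision significand.
    import math

    def reduce_precision(x, bits):
        if x == 0.0 or not math.isfinite(x):
            return x
        m, e = math.frexp(x)               # x == m * 2**e with 0.5 <= |m| < 1
        scale = 2.0 ** bits
        return math.ldexp(round(m * scale) / scale, e)

    print(reduce_precision(math.pi, 10))   # pi retained to about 10 bits: 3.140625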
Evaluation of the FIR Example using Xilinx Vivado High-Level Synthesis Compiler
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Zheming; Finkel, Hal; Yoshii, Kazutomo
Compared to central processing units (CPUs) and graphics processing units (GPUs), field programmable gate arrays (FPGAs) have major advantages in reconfigurability and performance achieved per watt. The traditional FPGA development flow has been augmented with a high-level synthesis (HLS) flow that can convert programs written in a high-level programming language to Hardware Description Language (HDL). Using high-level programming languages such as C, C++, and OpenCL for FPGA-based development could allow software developers, who have little FPGA knowledge, to take advantage of FPGA-based application acceleration. This improves developer productivity and makes FPGA-based acceleration accessible to hardware and software developers. The Xilinx Vivado HLS compiler is a high-level synthesis tool that enables C, C++ and SystemC specifications to be directly targeted to Xilinx FPGAs without the need to create RTL manually. The white paper [1] published recently by Xilinx uses a finite impulse response (FIR) example to demonstrate the variable-precision features in the Vivado HLS compiler and the resource and power benefits of converting floating point to fixed point for a design. To get a better understanding of variable-precision features in terms of resource usage and performance, this report presents the experimental results of evaluating the FIR example using Vivado HLS 2017.1 and a Kintex Ultrascale FPGA. In addition, we evaluated the half-precision floating-point data type against the double-precision and single-precision data types and present the detailed results.
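For readers unfamiliar with the floating-to-fixed conversion the white paper demonstrates, the sketch below (plain Python, purely illustrative and unrelated to the Vivado HLS code) shows the same FIR filter computed in floating point and in a simple Q15-style fixed-point form.

    # Illustrative sketch of an FIR filter in floating point and in fixed point
    # (Q15-style scaling); the resource/power trade-off of this conversion is
    # what the report measures on the FPGA.
    def fir_float(coeffs, samples):
        return [sum(c * samples[n - k] for k, c in enumerate(coeffs) if n - k >= 0)
                for n in range(len(samples))]

    def fir_fixed(coeffs, samples, frac_bits=15):
        scale = 1 << frac_bits
        c_q = [int(round(c * scale)) for c in coeffs]      # quantized coefficients
        x_q = [int(round(x * scale)) for x in samples]     # quantized input
        y_q = [sum(c * x_q[n - k] for k, c in enumerate(c_q) if n - k >= 0) >> frac_bits
               for n in range(len(x_q))]
        return [y / scale for y in y_q]

    coeffs = [0.25, 0.5, 0.25]
    samples = [0.0, 1.0, 0.0, -1.0, 0.0]
    print(fir_float(coeffs, samples))
    print(fir_fixed(coeffs, samples))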
Theorem Proving in Intel Hardware Design
NASA Technical Reports Server (NTRS)
O'Leary, John
2009-01-01
For the past decade, a framework combining model checking (symbolic trajectory evaluation) and higher-order logic theorem proving has been in production use at Intel. Our tools and methodology have been used to formally verify execution cluster functionality (including floating-point operations) for a number of Intel products, including the Pentium(Registered TradeMark)4 and Core(TradeMark)i7 processors. Hardware verification in 2009 is much more challenging than it was in 1999 - today's CPU chip designs contain many processor cores and significant firmware content. This talk will attempt to distill the lessons learned over the past ten years, discuss how they apply to today's problems, and outline some future directions.
Floating-to-Fixed-Point Conversion for Digital Signal Processors
NASA Astrophysics Data System (ADS)
Menard, Daniel; Chillet, Daniel; Sentieys, Olivier
2006-12-01
Digital signal processing applications are specified with floating-point data types but they are usually implemented in embedded systems with fixed-point arithmetic to minimise cost and power consumption. Thus, methodologies which automatically establish the fixed-point specification are required to reduce the application time-to-market. In this paper, a new methodology for the floating-to-fixed-point conversion is proposed for software implementations. The aim of our approach is to determine the fixed-point specification which minimises the code execution time for a given accuracy constraint. Compared to previous methodologies, our approach takes into account the DSP architecture to optimise the fixed-point formats and the floating-to-fixed-point conversion process is coupled with the code generation process. The fixed-point data types and the position of the scaling operations are optimised to reduce the code execution time. To evaluate the fixed-point computation accuracy, an analytical approach is used to reduce the optimisation time compared to the existing methods based on simulation. The methodology stages are described and several experimental results are presented to underline the efficiency of this approach.
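A small Python sketch of one sub-problem such a methodology automates, choosing a fixed-point format from a variable's known dynamic range (the 16-bit target and function names are illustrative assumptions, not the paper's tool):

    # Hedged sketch: pick (integer bits, fractional bits) for a signed fixed-point
    # type so that values up to max_abs_value fit without overflow.
    import math

    def fixed_point_format(max_abs_value, total_bits=16):
        int_bits = max(0, math.ceil(math.log2(max_abs_value)))  # magnitude bits
        frac_bits = total_bits - 1 - int_bits                   # one bit for the sign
        return int_bits, frac_bits

    print(fixed_point_format(6.2))   # a variable bounded by +/-6.2 -> (3, 12)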
High-precision arithmetic in mathematical physics
Bailey, David H.; Borwein, Jonathan M.
2015-05-12
For many scientific calculations, particularly those involving empirical data, IEEE 32-bit floating-point arithmetic produces results of sufficient accuracy, while for other applications IEEE 64-bit floating-point is more appropriate. But for some very demanding applications, even higher levels of precision are often required. This article discusses the challenge of high-precision computation, in the context of mathematical physics, and highlights what facilities are required to support future computation, in light of emerging developments in computer architecture.
Real-time orthorectification by FPGA-based hardware acceleration
NASA Astrophysics Data System (ADS)
Kuo, David; Gordon, Don
2010-10-01
Orthorectification, which corrects the perspective distortion of remote sensing imagery and provides accurate geolocation and ease of correlation to other images, is a valuable first step in image processing for information extraction. However, the large amount of metadata and the floating-point matrix transformations required to operate on each pixel make this a computation- and I/O (Input/Output)-intensive process. As a result, much imagery is either left unprocessed or loses time-sensitive value in the long processing cycle. However, the computation on each pixel can be reduced substantially by using computational results of the neighboring pixels, and accelerated by one to two orders of magnitude with a special pipelined hardware architecture. A specialized coprocessor that is implemented inside an FPGA (Field Programmable Gate Array) chip and surrounded by vendor-supported hardware IP (Intellectual Property) shares the computation workload with the CPU through a PCI-Express interface. The ultimate speed of one pixel per clock (125 MHz) is achieved by the pipelined systolic array architecture. The optimal partition between software and hardware, the timing profile among image I/O and computation, and the highly automated GUI (Graphical User Interface) that fully exploits this speed increase to maximize overall image production throughput will also be discussed. The software that runs on a workstation with the acceleration hardware orthorectifies 16 Megapixels per second, which is 16 times faster than without the hardware. It turns the production time from months to days. A real-life success story of an imaging satellite company that adopted such workstations for their orthorectified imagery production will be presented. The potential candidacy of image processing computations that can be accelerated more efficiently by the same approach will also be analyzed.
Support for Diagnosis of Custom Computer Hardware
NASA Technical Reports Server (NTRS)
Molock, Dwaine S.
2008-01-01
The Coldfire SDN Diagnostics software is a flexible means of exercising, testing, and debugging custom computer hardware. The software is a set of routines that, collectively, serve as a common software interface through which one can gain access to various parts of the hardware under test and/or cause the hardware to perform various functions. The routines can be used to construct tests to exercise, and verify the operation of, various processors and hardware interfaces. More specifically, the software can be used to gain access to memory, to execute timer delays, to configure interrupts, and configure processor cache, floating-point, and direct-memory-access units. The software is designed to be used on diverse NASA projects, and can be customized for use with different processors and interfaces. The routines are supported, regardless of the architecture of a processor that one seeks to diagnose. The present version of the software is configured for Coldfire processors on the Subsystem Data Node processor boards of the Solar Dynamics Observatory. There is also support for the software with respect to Mongoose V, RAD750, and PPC405 processors or their equivalents.
Performance of FORTRAN floating-point operations on the Flex/32 multicomputer
NASA Technical Reports Server (NTRS)
Crockett, Thomas W.
1987-01-01
A series of experiments has been run to examine the floating-point performance of FORTRAN programs on the Flex/32 (Trademark) computer. The experiments are described, and the timing results are presented. The time required to execute a floating-point operation is found to vary considerably depending on a number of factors. One factor of particular interest from an algorithm design standpoint is the difference in speed between common memory accesses and local memory accesses. Common memory accesses were found to be slower, and guidelines are given for determining when it may be cost effective to copy data from common to local memory.
The use of ZFP lossy floating point data compression in tornado-resolving thunderstorm simulations
NASA Astrophysics Data System (ADS)
Orf, L.
2017-12-01
In the field of atmospheric science, numerical models are used to produce forecasts of weather and climate and serve as virtual laboratories for scientists studying atmospheric phenomena. In both operational and research arenas, atmospheric simulations exploiting modern supercomputing hardware can produce a tremendous amount of data. During model execution, the transfer of floating point data from memory to the file system is often a significant bottleneck where I/O can dominate wallclock time. One way to reduce the I/O footprint is to compress the floating point data, which reduces the amount of data saved to the file system. In this presentation we introduce LOFS, a file system developed specifically for use in three-dimensional numerical weather models that are run on massively parallel supercomputers. LOFS utilizes the core (in-memory buffered) HDF5 driver and includes compression options, including ZFP, a lossy floating point data compression algorithm. ZFP offers several mechanisms for specifying the amount of lossy compression to be applied to floating point data, including the ability to specify the maximum absolute error allowed in each compressed 3D array. We explore different maximum error tolerances in a tornado-resolving supercell thunderstorm simulation for model variables including cloud and precipitation, temperature, wind velocity and vorticity magnitude. We find that average compression ratios exceeding 20:1 in scientifically interesting regions of the simulation domain produce visually identical results to uncompressed data in visualizations and plots. Since LOFS splits the model domain across many files, compression ratios for a given error tolerance can be compared across different locations within the model domain. We find that regions of high spatial variability (which tend to be where scientifically interesting things are occurring) show the lowest compression ratios, whereas regions of the domain with little spatial variability compress extremely well. We observe that the overhead for compressing data with ZFP is low, and that compressing data in memory reduces the amount of memory overhead needed to store the virtual files before they are flushed to disk.
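To make the error-tolerance knob concrete, here is a toy Python quantizer (emphatically not ZFP, which uses block transforms and embedded coding) that shows the kind of guarantee an absolute error bound gives:

    # Toy absolute-error-bounded quantizer, for illustration only: each value is
    # snapped to a multiple of 2*tol, so the reconstruction error stays within tol.
    def quantize(values, tol):
        return [round(v / (2 * tol)) for v in values]       # small integer codes

    def reconstruct(codes, tol):
        return [c * 2 * tol for c in codes]

    data = [293.15, 293.17, 293.40, 295.02]                 # e.g. temperatures in K
    codes = quantize(data, tol=0.1)
    recon = reconstruct(codes, tol=0.1)
    print(max(abs(a - b) for a, b in zip(data, recon)))     # stays within the 0.1 tolerance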
Development of a 32-bit UNIX-based ELAS workstation
NASA Technical Reports Server (NTRS)
Spiering, Bruce A.; Pearson, Ronnie W.; Cheng, Thomas D.
1987-01-01
A mini/microcomputer UNIX-based image analysis workstation has been designed and is being implemented to use the Earth Resources Laboratory Applications Software (ELAS). The hardware system includes a MASSCOMP 5600 computer, which is a 32-bit UNIX-based system (compatible with the AT&T System V and Berkeley 4.2 BSD operating systems), a floating point accelerator, a 474-megabyte fixed disk, a tri-density magnetic tape drive, and an 1152 by 910 by 12-plane color graphics/image interface. The software conversion includes reconfiguring the ELAS driver Master Task, and recompiling and then testing the converted application modules. This hardware and software configuration is a self-sufficient image analysis workstation which can be used as a stand-alone system, or networked with other compatible workstations.
International Conference on Stiff Computation Held at Park City, Utah on April 12, 13 and 14, 1982.
1983-05-31
algorithm should be designed which can analyse a system description and find out for the user to which class of problems his system belongs... Dove: ...processors designed to implement a specific solution process. Byrne: the IEEE floating point chip design used by Intel and others is an example (Kahan)... the hardware specialist has designed his computer such that the parallel features can be addressed conveniently and efficiently, and... the software
RRTMGP: A High-Performance Broadband Radiation Code for the Next Decade
2014-09-30
Hardware counters were used to measure several performance metrics, including the number of double-precision (DP) floating-point operations (FLOPs)... 0.2 DP FLOPs per CPU cycle. Experience with production science code is that it is possible to achieve execution rates in the range of 0.5 to 1.0 DP FLOPs per cycle. Looking at the ratio of vectorized DP FLOPs to total DP FLOPs we see (Figure PROF) that for most of the execution time the
Atmospheric Modeling And Sensor Simulation (AMASS) study
NASA Technical Reports Server (NTRS)
Parker, K. G.
1985-01-01
A 4800 baud synchronous communications link was established between the Perkin-Elmer (P-E) 3250 Atmospheric Modeling and Sensor Simulation (AMASS) system and the Cyber 205 located at the Goddard Space Flight Center. An extension study of off-the-shelf array processors offering a standard interface to the Perkin-Elmer was conducted to determine which would meet the computational requirements of the division. A Floating Point Systems AP-120B was borrowed from another Marshall Space Flight Center laboratory for evaluation. It was determined that available array processors did not offer significantly more capabilities than the borrowed unit, although at least three other vendors indicated that standard Perkin-Elmer interfaces would be marketed in the future. Therefore, the recommendation was made to continue to utilize the AP-120B and to keep monitoring the AP market. Hardware necessary to support the requirements of the ASD as well as to enhance system performance was specified and procured. Filters were implemented on the Harris/McIDAS system including two-dimensional lowpass, gradient, Laplacian, and bicubic interpolation routines.
A Mathematical Approach for Compiling and Optimizing Hardware Implementations of DSP Transforms
2010-08-01
[Figure residue; only the captions and axis labels are recoverable: performance plots of a DFT 64 (floating point) on a Xilinx Virtex-6 FPGA, with axes for throughput (billion samples per second), performance (Gflop/s), and area (slices).]
Program Converts VAX Floating-Point Data To UNIX
NASA Technical Reports Server (NTRS)
Alves, Marcos; Chapman, Bruce; Chu, Eugene
1996-01-01
VAX Floating Point to Host Floating Point Conversion (VAXFC) software converts non-ASCII files to the unformatted floating-point representation of a UNIX machine. This is done by reading bytes bit by bit, converting them to floating-point numbers, then writing the results to another file. Useful when data files created by a VAX computer must be used on other machines. Written in the C language.
The Efficiency and the Scalability of an Explicit Operator on an IBM POWER4 System
NASA Technical Reports Server (NTRS)
Frumkin, Michael; Biegel, Bryan A. (Technical Monitor)
2002-01-01
We present an evaluation of the efficiency and the scalability of an explicit CFD operator on an IBM POWER4 system. The POWER4 architecture exhibits a common trend in HPC architectures: boosting CPU processing power by increasing the number of functional units, while hiding the latency of memory access by increasing the depth of the memory hierarchy. The overall machine performance depends on the ability of the caches-buses-fabric-memory to feed the functional units with the data to be processed. In this study we evaluate the efficiency and scalability of one explicit CFD operator on an IBM POWER4. This operator performs computations at the points of a Cartesian grid and involves a few dozen floating point numbers and on the order of 100 floating point operations per grid point. The computations in all grid points are independent. Specifically, we estimate the efficiency of the RHS operator (SP of NPB) on a single processor as the observed/peak performance ratio. Then we estimate the scalability of the operator on a single chip (2 CPUs), a single MCM (8 CPUs), 16 CPUs, and the whole machine (32 CPUs). Then we perform the same measurements for a cache-optimized version of the RHS operator. For our measurements we use the HPM (Hardware Performance Monitor) counters available on the POWER4. These counters allow us to analyze the obtained performance results.
Learning to assign binary weights to binary descriptor
NASA Astrophysics Data System (ADS)
Huang, Zhoudi; Wei, Zhenzhong; Zhang, Guangjun
2016-10-01
Constructing robust binary local feature descriptors is receiving increasing interest due to their binary nature, which enables fast processing while requiring significantly less memory than their floating-point competitors. To bridge the performance gap between binary and floating-point descriptors without increasing the computational cost of computing and matching, optimal binary weights are learned and assigned to the binary descriptor, since each bit might contribute differently to the distinctiveness and robustness. Technically, a large-scale regularized optimization method is applied to learn float weights for each bit of the binary descriptor. Furthermore, binary approximation of the float weights is performed by utilizing an efficient alternating greedy strategy, which can significantly improve the discriminative power while preserving the fast matching advantage. Extensive experimental results on two challenging datasets (Brown dataset and Oxford dataset) demonstrate the effectiveness and efficiency of the proposed method.
Gomez-Pulido, Juan A; Cerrada-Barrios, Jose L; Trinidad-Amado, Sebastian; Lanza-Gutierrez, Jose M; Fernandez-Diaz, Ramon A; Crawford, Broderick; Soto, Ricardo
2016-08-31
Metaheuristics are widely used to solve large combinatorial optimization problems in bioinformatics because of the huge set of possible solutions. Two representative problems are gene selection for cancer classification and biclustering of gene expression data. In most cases, these metaheuristics, as well as other non-linear techniques, apply a fitness function to each possible solution with a size-limited population, and that step involves higher latencies than other parts of the algorithms, which is the reason why the execution time of the applications mainly depends on the execution time of the fitness function. In addition, it is usual to find floating-point arithmetic formulations for the fitness functions. This way, a careful parallelization of these functions using reconfigurable hardware technology will accelerate the computation, especially if they are applied in parallel to several solutions of the population. A fine-grained parallelization of two floating-point fitness functions of different complexities and features, involved in biclustering of gene expression data and gene selection for cancer classification, allowed for obtaining higher speedups and power-reduced computation compared with usual microprocessors. The results show better performances using reconfigurable hardware technology instead of usual microprocessors, in terms of computing time and power consumption, not only because of the parallelization of the arithmetic operations, but also thanks to the concurrent fitness evaluation for several individuals of the population in the metaheuristic. This is a good basis for building accelerated and low-energy solutions for intensive computing scenarios.
40 CFR 63.685 - Standards: Tanks.
Code of Federal Regulations, 2010 CFR
2010-07-01
... in paragraph (c)(2)(i) of this section when a tank is used as an interim transfer point to transfer... fixed-roof tank equipped with an internal floating roof in accordance with the requirements specified in paragraph (e) of this section; (2) A tank equipped with an external floating roof in accordance with the...
Fpga based L-band pulse doppler radar design and implementation
NASA Astrophysics Data System (ADS)
Savci, Kubilay
As its name implies, RADAR (Radio Detection and Ranging) is an electromagnetic sensor used for detecting and locating targets from their return signals. Radar systems propagate electromagnetic energy from the antenna, which is in part intercepted by an object. Objects reradiate a portion of the energy, which is captured by the radar receiver. The received signal is then processed for information extraction. Radar systems are widely used for surveillance, air security, navigation, weather hazard detection, as well as remote sensing applications. In this work, an FPGA based L-band Pulse Doppler radar prototype, which is used for target detection, localization and velocity calculation, has been built and a general-purpose Pulse Doppler radar processor has been developed. This radar is a ground based stationary monopulse radar, which transmits a short pulse with a certain pulse repetition frequency (PRF). Return signals from the target are processed and information about their location and velocity is extracted. Discrete components are used for the transmitter and receiver chain. The hardware solution is based on a Xilinx Virtex-6 ML605 FPGA board, responsible for the control of the radar system and the digital signal processing of the received signal, which involves Constant False Alarm Rate (CFAR) detection and Pulse Doppler processing. The algorithm is implemented in MATLAB/SIMULINK using the Xilinx System Generator for DSP tool. The field programmable gate array (FPGA) implementation of the radar system provides the flexibility of changing parameters such as the PRF and pulse length; therefore, it can be used with different radar configurations as well. A VHDL design has been developed for a 1 Gbit Ethernet connection to transfer the digitized return signal and detection results to a PC. A-Scope software has been developed with the C# programming language to display time domain radar signals and detection results on the PC. Data are processed both in the FPGA chip and on the PC. The FPGA uses fixed-point arithmetic operations, as this is fast and reduces resource requirements, since it consumes less hardware than floating-point arithmetic operations. The software uses floating-point arithmetic operations, which ensure precision in processing at the expense of speed. The functionality of the radar system has been tested for experimental validation in the field with a moving car, and the submodules have been validated with synthetic data simulated in MATLAB.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-24
... changed so that the restricted area could be marked with a signed floating buoy line instead of a signed floating barrier. That change has been made to the final rule. Procedural Requirements a. Review Under...; thence easterly along the Marinette Marine Corporation pier to the point of origin. The restricted area...
Multi-input and binary reproducible, high bandwidth floating point adder in a collective network
Chen, Dong; Eisley, Noel A.; Heidelberger, Philip; Steinmacher-Burow, Burkhard
2016-11-15
To add floating point numbers in a parallel computing system, a collective logic device receives the floating point numbers from computing nodes. The collective logic device converts the floating point numbers to integer numbers. The collective logic device adds the integer numbers and generates a summation of the integer numbers. The collective logic device converts the summation to a floating point number. The collective logic device performs the receiving, the converting of the floating point numbers, the adding, the generating and the converting of the summation in one pass. One pass indicates that the computing nodes send inputs only once to the collective logic device and receive outputs only once from the collective logic device.
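A software analogue of the idea (a hedged sketch, not the patented collective-network hardware, which uses dedicated wide datapaths) can be written in a few lines of Python: put every input on a shared binary scale, add exactly as integers, and convert the total back.

    # Illustrative software analogue: order-independent floating-point summation
    # by converting inputs to integers on a common scale, adding exactly, and
    # converting back. (Assumes the scaled values do not overflow the exponent range.)
    import math

    def reproducible_sum(values):
        nonzero = [v for v in values if v != 0.0]
        if not nonzero:
            return 0.0
        lsb = min(math.frexp(v)[1] for v in nonzero) - 53       # weight of the smallest ULP
        total = sum(int(math.ldexp(v, -lsb)) for v in nonzero)  # each conversion is exact
        return math.ldexp(float(total), lsb)

    data = [1e16, 1.0, -1e16, 1.0]
    print(reproducible_sum(data))          # 2.0
    print(reproducible_sum(data[::-1]))    # 2.0, independent of summation order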
Area- and energy-efficient CORDIC accelerators in deep sub-micron CMOS technologies
NASA Astrophysics Data System (ADS)
Vishnoi, U.; Noll, T. G.
2012-09-01
The COordinate Rotation DIgital Computer (CORDIC) algorithm is a well known versatile approach and is widely applied in today's SoCs, especially for but not restricted to digital communications. Dedicated CORDIC blocks can be implemented in deep sub-micron CMOS technologies at very low area and energy costs and are attractive to be used as hardware accelerators for Application Specific Instruction Processors (ASIPs), thereby overcoming the well known energy vs. flexibility conflict. Optimizing Global Navigation Satellite System (GNSS) receivers to reduce the hardware complexity is an important research topic at present. In such receivers CORDIC accelerators can be used for digital baseband processing (fixed-point) and in Position-Velocity-Time estimation (floating-point). A micro architecture well suited to such applications is presented. This architecture is parameterized according to the wordlengths as well as the number of iterations and can be easily extended to a floating point data format. Moreover, area can be traded for throughput by partially or even fully unrolling the iterations, whereby the degree of pipelining is organized with one CORDIC iteration per cycle. From the architectural description, the macro layout can be generated fully automatically using an in-house datapath generator tool. Since the adders and shifters play an important role in optimizing the CORDIC block, they must be carefully optimized for high area and energy efficiency in the underlying technology. For this purpose, carry-select adders and logarithmic shifters have been chosen. Device dimensioning was automatically optimized with respect to dynamic and static power, area and performance using the in-house tool. The fully sequential CORDIC block for fixed-point digital baseband processing features a wordlength of 16 bits, requires 5232 transistors, is implemented in a 40-nm CMOS technology and occupies a silicon area of only 1560 μm². The maximum clock frequency from circuit simulation of the extracted netlist is 768 MHz under typical, and 463 MHz under worst case technology and application corner conditions, respectively. Simulated dynamic power dissipation is 0.24 μW/MHz at 0.9 V; static power is 38 μW in the slow corner, 65 μW in the typical corner and 518 μW in the fast corner, respectively. The latter can be reduced by 43% in a 40-nm CMOS technology using 0.5 V reverse back-bias. These features are compared with the results from different design styles as well as with an implementation in 28-nm CMOS technology. It is interesting that in the latter case area scales as expected, but worst case performance and energy do not scale well anymore.
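For orientation, a minimal rotation-mode CORDIC in Python (floating point here for readability; the actual block is fixed-point hardware, and nothing below reflects the paper's micro architecture):

    # Minimal rotation-mode CORDIC sketch: shift-and-add style iterations that
    # converge to (cos z, sin z); pre-scaling x by the inverse CORDIC gain
    # avoids a final correction step.
    import math

    def cordic_sin_cos(z, iterations=16):
        angles = [math.atan(2.0 ** -i) for i in range(iterations)]
        gain = 1.0
        for a in angles:
            gain *= math.cos(a)                 # 1/K, the inverse CORDIC gain
        x, y = gain, 0.0
        for i in range(iterations):
            d = 1.0 if z >= 0.0 else -1.0       # rotate toward zero residual angle
            x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
            z -= d * angles[i]
        return x, y

    print(cordic_sin_cos(math.pi / 6))          # approximately (0.866, 0.5)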
NASA Technical Reports Server (NTRS)
Jones, W. C.
1973-01-01
The space shuttle solid rocket boosters (SRB's) will be jettisoned to impact in the ocean within a 200-mile radius of the launch site. Tests were conducted at Long Beach, California, using a 12-inch diameter Titan 3C model to simulate the full-scale characteristics of the prototype SRB during retrieval operations. The objectives of the towing tests were to investigate and assess the following: (1) floating and towing characteristics of the SRB; (2) need for plugging the SRB nozzle prior to tow; (3) attach point locations on the SRB; (4) effects of varying the SRB configuration; (5) towing hardware; and (6) difficulty of attaching a tow line to the SRB in the open sea. The model was towed in various sea states using four different types and varying lengths of tow line at various speeds. Three attach point locations were tested. Test data was recorded on magnetic tape for the tow line loads and for model pitch, roll, and yaw characteristics and was reduced by computer to tabular printouts and X-Y plots. Profile and movie photography provided documentary test data.
Multi-input and binary reproducible, high bandwidth floating point adder in a collective network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Dong; Eisley, Noel A; Heidelberger, Philip
To add floating point numbers in a parallel computing system, a collective logic device receives the floating point numbers from computing nodes. The collective logic device converts the floating point numbers to integer numbers. The collective logic device adds the integer numbers and generates a summation of the integer numbers. The collective logic device converts the summation to a floating point number. The collective logic device performs the receiving, the converting of the floating point numbers, the adding, the generating and the converting of the summation in one pass. One pass indicates that the computing nodes send inputs only once to the collective logic device and receive outputs only once from the collective logic device.
76 FR 71322 - Taking and Importing Marine Mammals; U.S. Navy Training in the Hawaii Range Complex
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-17
..., most operationally sound method of initiating a demolition charge on a floating mine or mine at depth...; require building/ deploying an improvised, bulky, floating system for the receiver; and add another 180 ft... charge initiating device are taken to the detonation point. Military forms of C-4 are used as the...
Gigaflop architecture, a hardware perspective
NASA Technical Reports Server (NTRS)
Feierbach, G. F.
1978-01-01
Any super computer built in the early 1980s will use components that are available by fall 1978. The architecture of such a system cannot depart radically from current super computers if the software experience painfully acquired from these computers in the 70's is to apply. Given the above constraints, 10 billion floating point operations per second (BFLOPS) are attainable and a problem memory of 512 million (64 bit) words could be supported by the technology of the time. In contrast to this, industry is likely to respond with commercially available machines with a performance of less than 150 MFLOPS. This is due to self-imposed constraints on the manufacturers to provide upward compatible architectures (same instruction set) and systems which can be sold in significant volumes. Since this computing speed is inadequate to meet the demands of computational fluid dynamics, a special processor is required. Issues which are felt to be significant in the pursuit of maximum compute capability in this special processor are discussed.
Evaluation of floating-point sum or difference of products in carry-save domain
NASA Technical Reports Server (NTRS)
Wahab, A.; Erdogan, S.; Premkumar, A. B.
1992-01-01
An architecture to evaluate a 24-bit floating-point sum or difference of products using modified sequential carry-save multipliers with extensive pipelining is described. The basic building block of the architecture is a carry-save multiplier with built-in mantissa alignment for the summation during the multiplication cycles. A carry-save adder, capable of mantissa alignment, correctly positions products with the current carry-save sum. Carry propagation in individual multipliers is avoided and is only required once to produce the final result.
Design of an FPGA-Based Algorithm for Real-Time Solutions of Statistics-Based Positioning
DeWitt, Don; Johnson-Williams, Nathan G.; Miyaoka, Robert S.; Li, Xiaoli; Lockhart, Cate; Lewellen, Tom K.; Hauck, Scott
2010-01-01
We report on the implementation of an algorithm and hardware platform to allow real-time processing of the statistics-based positioning (SBP) method for continuous miniature crystal element (cMiCE) detectors. The SBP method allows an intrinsic spatial resolution of ~1.6 mm FWHM to be achieved using our cMiCE design. Previous SBP solutions have required a postprocessing procedure due to the computation and memory intensive nature of SBP. This new implementation takes advantage of a combination of algebraic simplifications, conversion to fixed-point math, and a hierarchical search technique to greatly accelerate the algorithm. For the presented seven stage, 127 × 127 bin LUT implementation, these algorithm improvements result in a reduction from >7 × 10^6 floating-point operations per event for an exhaustive search to <5 × 10^3 integer operations per event. Simulations show nearly identical FWHM positioning resolution for this accelerated SBP solution, and positioning differences of <0.1 mm from the exhaustive search solution. A pipelined field programmable gate array (FPGA) implementation of this optimized algorithm is able to process events in excess of 250 K events per second, which is greater than the maximum expected coincidence rate for an individual detector. In contrast with all detectors being processed at a centralized host, as in the current system, a separate FPGA is available at each detector, thus dividing the computational load. These methods allow SBP results to be calculated in real-time and to be presented to the image generation components in real-time. A hardware implementation has been developed using a commercially available prototype board. PMID:21197135
30 CFR 250.428 - What must I do in certain cementing and casing situations?
Code of Federal Regulations, 2010 CFR
2010-07-01
... point. (h) Need to use less than required cement for the surface casing during floating drilling... permafrost zone uncemented Fill the annulus with a liquid that has a freezing point below the minimum...
Environment parameters and basic functions for floating-point computation
NASA Technical Reports Server (NTRS)
Brown, W. S.; Feldman, S. I.
1978-01-01
A language-independent proposal for environment parameters and basic functions for floating-point computation is presented. Basic functions are proposed to analyze, synthesize, and scale floating-point numbers. The model provides a small set of parameters and a small set of axioms along with sharp measures of roundoff error. The parameters and functions can be used to write portable and robust codes that deal intimately with the floating-point representation. Subject to underflow and overflow constraints, a number can be scaled by a power of the floating-point radix inexpensively and without loss of precision. A specific representation for FORTRAN is included.
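Python exposes close analogues of such basic functions, which gives a concrete, if loose, illustration of analyzing, synthesizing, and scaling; the mapping to the paper's exact function set is only suggestive.

    # Loose illustration using Python's analogues: frexp "analyzes" a float into
    # fraction and exponent, ldexp "synthesizes" it back or scales by a power of
    # the radix without any rounding error (barring overflow or underflow).
    import math

    x = 0.15625
    fraction, exponent = math.frexp(x)       # x == fraction * 2**exponent
    print(fraction, exponent)                # 0.625 -2
    print(math.ldexp(fraction, exponent))    # exact reconstruction: 0.15625
    print(math.ldexp(x, 10))                 # exact scaling by 2**10: 160.0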
Identification of mothball powder composition by float tests and melting point tests.
Tang, Ka Yuen
2018-07-01
The aim of the study was to identify the composition, as either camphor, naphthalene, or paradichlorobenzene, of mothballs in the form of powder or tiny fragments by float tests and melting point tests. Naphthalene, paradichlorobenzene and camphor mothballs were blended into powder and tiny fragments (with sizes <1/10 of the size of an intact mothball). In the float tests, the mothball powder and tiny fragments were placed in water, saturated salt solution and 50% dextrose solution (D50), and the extent to which they floated or sank in the liquids was observed. In the melting point tests, the mothball powder and tiny fragments were placed in hot water with a temperature between 53 and 80 °C, and the extent to which they melted was observed. Both the float and melting point tests were then repeated using intact mothballs. Three emergency physicians blinded to the identities of samples and solutions visually evaluated each sample. In the float tests, paradichlorobenzene powder partially floated and partially sank in all three liquids, while naphthalene powder partially floated and partially sank in water. Naphthalene powder did not sink in D50 or saturated salt solution. Camphor powder floated in all three liquids. Float tests identified the compositions of intact mothball accurately. In the melting point tests, paradichlorobenzene powder melted completely in hot water within 1 min while naphthalene powder and camphor powder did not melt. The melted portions of paradichlorobenzene mothballs were sometimes too small to be observed in 1 min but the mothballs either partially or completely melted in 5 min. Both camphor and naphthalene intact mothballs did not melt in hot water. For mothball powder, the melting point tests were more accurate than the float tests in differentiating between paradichlorobenzene and non-paradichlorobenzene (naphthalene or camphor). For intact mothballs, float tests performed better than melting point tests. Float tests can identify camphor mothballs but melting point tests cannot. We suggest melting point tests for identifying mothball powder and tiny fragments while float tests are recommended for intact mothball and large fragments.
Gschwind, Michael K [Chappaqua, NY
2011-03-01
Mechanisms for implementing a floating point only single instruction multiple data instruction set architecture are provided. A processor is provided that comprises an issue unit, an execution unit coupled to the issue unit, and a vector register file coupled to the execution unit. The execution unit has logic that implements a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA). The floating point vector registers of the vector register file store both scalar and floating point values as vectors having a plurality of vector elements. The processor may be part of a data processing system.
Fast and efficient compression of floating-point data.
Lindstrom, Peter; Isenburg, Martin
2006-01-01
Large scale scientific simulation codes typically run on a cluster of CPUs that write/read time steps to/from a single file system. As data sets are constantly growing in size, this increasingly leads to I/O bottlenecks. When the rate at which data is produced exceeds the available I/O bandwidth, the simulation stalls and the CPUs are idle. Data compression can alleviate this problem by using some CPU cycles to reduce the amount of data that needs to be transferred. Most compression schemes, however, are designed to operate offline and seek to maximize compression, not throughput. Furthermore, they often require quantizing floating-point values onto a uniform integer grid, which disqualifies their use in applications where exact values must be retained. We propose a simple scheme for lossless, online compression of floating-point data that transparently integrates into the I/O of many applications. A plug-in scheme for data-dependent prediction makes our scheme applicable to a wide variety of data used in visualization, such as unstructured meshes, point sets, images, and voxel grids. We achieve state-of-the-art compression rates and speeds, the latter in part due to an improved entropy coder. We demonstrate that this significantly accelerates I/O throughput in real simulation runs. Unlike previous schemes, our method also adapts well to variable-precision floating-point and integer data.
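The flavor of prediction-based lossless coding can be sketched in a few lines of Python (this is only a conceptual stand-in; the paper's predictors and entropy coder are more sophisticated):

    # Conceptual sketch: predict each value from its predecessor, XOR the raw
    # 64-bit patterns, and count the leading zero bits of the residual; runs of
    # leading zeros are what an entropy coder exploits.
    import struct

    def to_bits(x):
        return struct.unpack('<Q', struct.pack('<d', x))[0]

    def residual_leading_zeros(values):
        prev, out = 0.0, []
        for v in values:
            residual = to_bits(v) ^ to_bits(prev)    # exact and fully reversible
            out.append(64 - residual.bit_length() if residual else 64)
            prev = v                                  # simple last-value predictor
        return out

    print(residual_leading_zeros([1.000, 1.001, 1.002, 1.003]))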
Improvements in floating point addition/subtraction operations
Farmwald, P.M.
1984-02-24
Apparatus is described for decreasing the latency time associated with floating point addition and subtraction in a computer, using a novel bifurcated, pre-normalization/post-normalization approach that distinguishes between differences of floating point exponents.
Mobile high-performance computing (HPC) for synthetic aperture radar signal processing
NASA Astrophysics Data System (ADS)
Misko, Joshua; Kim, Youngsoo; Qi, Chenchen; Sirkeci, Birsen
2018-04-01
The importance of mobile high-performance computing has emerged in numerous battlespace applications at the tactical edge in hostile environments. Energy efficient computing power is a key enabler for diverse areas ranging from real-time big data analytics and atmospheric science to network science. However, the design of tactical mobile data centers is dominated by power, thermal, and physical constraints. Presently, it is very difficult to achieve the required processing power by aggregating emerging heterogeneous many-core processing platforms consisting of CPU, Field Programmable Gate Array and Graphics Processor cores constrained by power and performance. To address these challenges, we performed a Synthetic Aperture Radar case study for Automatic Target Recognition (ATR) using Deep Neural Networks (DNNs). However, these DNN models are typically trained using GPUs with gigabytes of external memory and make heavy use of 32-bit floating point operations. As a result, DNNs do not run efficiently on hardware appropriate for low power or mobile applications. To address this limitation, we proposed a framework for compressing DNN models for ATR suited to deployment on resource constrained hardware. This proposed compression framework utilizes promising DNN compression techniques including pruning and weight quantization while also focusing on processor features common to modern low-power devices. Following this methodology as a guideline produced a DNN for ATR tuned to maximize classification throughput, minimize power consumption, and minimize memory footprint on a low-power device.
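As a concrete and deliberately simplified illustration of the weight-quantization step, the Python sketch below applies uniform symmetric 8-bit quantization to a weight vector; the compression framework in the paper is not limited to this scheme.

    # Hedged sketch of post-training weight quantization: store 8-bit integers
    # plus one floating-point scale per tensor instead of 32-bit floats.
    def quantize_weights(weights, bits=8):
        qmax = (1 << (bits - 1)) - 1                      # 127 for 8 bits
        scale = max(abs(w) for w in weights) / qmax or 1.0
        q = [max(-qmax, min(qmax, round(w / scale))) for w in weights]
        return q, scale

    def dequantize(q, scale):
        return [v * scale for v in q]

    w = [0.12, -0.5, 0.031, 0.0, 0.49]
    q, s = quantize_weights(w)
    print(q)                     # small integers
    print(dequantize(q, s))      # close to the original weights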
Bifurcated method and apparatus for floating point addition with decreased latency time
Farmwald, Paul M.
1987-01-01
Apparatus for decreasing the latency time associated with floating point addition and subtraction in a computer, using a novel bifurcated, pre-normalization/post-normalization approach that distinguishes between differences of floating point exponents.
Design of crossed-mirror array to form floating 3D LED signs
NASA Astrophysics Data System (ADS)
Yamamoto, Hirotsugu; Bando, Hiroki; Kujime, Ryousuke; Suyama, Shiro
2012-03-01
3D representation of digital signage improves its impact and enables rapid notification of important points. Our goal is to realize floating 3D LED signs. The problem is that there is no suitable device to form floating 3D images from LEDs. LED lamp size is around 1 cm, including wiring and substrates. Such a large pitch increases display size and sometimes spoils image quality. The purpose of this paper is to develop an optical device to meet the three requirements and to demonstrate floating 3D arrays of LEDs. We analytically investigate image formation by a crossed mirror structure with aerial apertures, called a CMA (crossed-mirror array). A CMA contains dihedral corner reflectors at each aperture. After double reflection, light rays emitted from an LED converge onto the corresponding image point. We have fabricated a CMA for a 3D array of LEDs. One CMA unit contains 20 x 20 apertures that are located diagonally. A floating image of LEDs was formed over a wide range of incident angles. The image size of the focused beam agreed with the apparent aperture size. When LEDs were located three-dimensionally (LEDs at three depths), the focused distances were the same as the distance between the real LED and the CMA.
NASA Astrophysics Data System (ADS)
Elkurdi, Yousef; Fernández, David; Souleimanov, Evgueni; Giannacopoulos, Dennis; Gross, Warren J.
2008-04-01
The Finite Element Method (FEM) is a computationally intensive scientific and engineering analysis tool that has diverse applications ranging from structural engineering to electromagnetic simulation. The trends in floating-point performance are moving in favor of Field-Programmable Gate Arrays (FPGAs), hence increasing interest has grown in the scientific community to exploit this technology. We present an architecture and implementation of an FPGA-based sparse matrix-vector multiplier (SMVM) for use in the iterative solution of large, sparse systems of equations arising from FEM applications. FEM matrices display specific sparsity patterns that can be exploited to improve the efficiency of hardware designs. Our architecture exploits FEM matrix sparsity structure to achieve a balance between performance and hardware resource requirements by relying on external SDRAM for data storage while utilizing the FPGA's computational resources in a stream-through systolic approach. The architecture is based on a pipelined linear array of processing elements (PEs) coupled with a hardware-oriented matrix striping algorithm and a partitioning scheme which enables it to process arbitrarily large matrices without changing the number of PEs in the architecture. Therefore, this architecture is only limited by the amount of external RAM available to the FPGA. The implemented SMVM-pipeline prototype contains 8 PEs and is clocked at 110 MHz, obtaining a peak performance of 1.76 GFLOPS. For 8 GB/s of memory bandwidth, typical of recent FPGA systems, this architecture can achieve 1.5 GFLOPS sustained performance. Using multiple instances of the pipeline, linear scaling of the peak and sustained performance can be achieved. Our stream-through architecture provides the added advantage of enabling an iterative implementation of the SMVM computation required by iterative solution techniques such as the conjugate gradient method, avoiding initialization time due to data loading and setup inside the FPGA internal memory.
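For reference, the y = A·x kernel that the pipeline streams is, in plain software form, the familiar CSR sparse matrix-vector product (the Python sketch below is illustrative and contains none of the striping or partitioning scheme):

    # Plain CSR sparse matrix-vector multiply: the multiply-accumulate stream
    # per row is what the systolic array of processing elements pipelines.
    def csr_spmv(row_ptr, col_idx, vals, x):
        y = []
        for r in range(len(row_ptr) - 1):
            acc = 0.0
            for k in range(row_ptr[r], row_ptr[r + 1]):
                acc += vals[k] * x[col_idx[k]]
            y.append(acc)
        return y

    # 3x3 example:  [[4, 0, 1],
    #                [0, 3, 0],
    #                [2, 0, 5]]
    row_ptr = [0, 2, 3, 5]
    col_idx = [0, 2, 1, 0, 2]
    vals    = [4.0, 1.0, 3.0, 2.0, 5.0]
    print(csr_spmv(row_ptr, col_idx, vals, [1.0, 1.0, 1.0]))   # [5.0, 3.0, 7.0]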
NASA Astrophysics Data System (ADS)
Suarez, Hernan; Zhang, Yan R.
2015-05-01
New radar applications need to perform complex algorithms and process large quantities of data to generate useful information for the users. This situation has motivated the search for better processing solutions that include low-power high-performance processors, efficient algorithms, and high-speed interfaces. In this work, hardware implementations of adaptive pulse compression for real-time transceiver optimization are presented; they are based on a System-on-Chip architecture for Xilinx devices. This study also evaluates the performance of dedicated coprocessors as hardware accelerator units to speed up and improve the computation of computing-intensive tasks such as matrix multiplication and matrix inversion, which are essential units to solve the covariance matrix. The tradeoffs between latency and hardware utilization are also presented. Moreover, the system architecture takes advantage of the embedded processor, which is interconnected with the logic resources through the high performance AXI buses, to perform floating-point operations, control the processing blocks, and communicate with an external PC through a customized software interface. The overall system functionality is demonstrated and tested for real-time operations using a Ku-band testbed together with a low-cost channel emulator for different types of waveforms.
Exploiting data representation for fault tolerance
Hoemmen, Mark Frederick; Elliott, J.; Sandia National Lab.; ...
2015-01-06
Incorrect computer hardware behavior may corrupt intermediate computations in numerical algorithms, possibly resulting in incorrect answers. Prior work models misbehaving hardware by randomly flipping bits in memory. We start by accepting this premise, and present an analytic model for the error introduced by a bit flip in an IEEE 754 floating-point number. We then relate this finding to the linear algebra concepts of normalization and matrix equilibration. In particular, we present a case study illustrating that normalizing both vector inputs of a dot product minimizes the probability of a single bit flip causing a large error in the dot product's result. Moreover, the absolute error is either less than one or very large, which allows detection of large errors. Then, we apply this to the GMRES iterative solver. We count all possible errors that can be introduced through faults in arithmetic in the computationally intensive orthogonalization phase of GMRES, and show that when the matrix is equilibrated, the absolute error is bounded above by one.
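The premise can be reproduced in a few lines of Python (a toy experiment in the spirit of the model, not the authors' analysis or the GMRES study):

    # Flip one bit of an IEEE 754 double and look at the induced error: for a
    # normalized input below one, low mantissa bits give tiny errors while high
    # exponent bits give enormous ones, so large errors are easy to detect.
    import struct

    def flip_bit(x, bit):
        (u,) = struct.unpack('<Q', struct.pack('<d', x))
        return struct.unpack('<d', struct.pack('<Q', u ^ (1 << bit)))[0]

    x = 0.75
    print(flip_bit(x, 0) - x)     # lowest mantissa bit: error around 1e-16
    print(flip_bit(x, 62) - x)    # highest exponent bit: error around 1e308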
Nakamura, N; Nakano, K; Sugiura, N; Matsumura, M
2003-12-01
A process using a floating carrier for immobilization of cyanobacteriolytic bacteria, B. cereus N-14, was proposed to realize effective in situ control of natural floating cyanobacterial blooms. The critical concentrations of the cyanobacteriolytic substance and B. cereus N-14 cells required to exhibit cyanobacteriolytic activity were investigated. The results indicated the necessity of cell growth to produce sufficiently high amounts of the cyanobacteriolytic substance to exhibit its activity, and also of conditions enabling good contact between high concentrations of the cyanobacteriolytic substance and cyanobacteria. Floating biodegradable plastics made of starch were applied as a carrier material to maintain close contact between the immobilized cyanobacteriolytic bacteria and floating cyanobacteria. The floating starch carriers could eliminate 99% of floating cyanobacteria in 4 d. Since B. cereus N-14 could produce the cyanobacteriolytic substance in the presence of starch and some amino acids, the cyanobacteriolytic activity could be attributed to the carbon source fed from the starch carrier and amino acids eluted from lysed cyanobacteria. Therefore, the effect of using a floating starch carrier was confirmed from both viewpoints, as a carrier for immobilization and as a nutrient source to stimulate cyanobacteriolytic activity. The new concept of applying a floating carrier immobilizing useful microorganisms for intensive treatment of a nuisance floating target was demonstrated.
A floating-point digital receiver for MRI.
Hoenninger, John C; Crooks, Lawrence E; Arakawa, Mitsuaki
2002-07-01
A magnetic resonance imaging (MRI) system requires the highest possible signal fidelity and stability for clinical applications. Quadrature analog receivers have problems with channel matching, dc offset and analog-to-digital linearity. Fixed-point digital receivers (DRs) reduce all of these problems. We have demonstrated that a floating-point DR using large (order 124 to 512) FIR low-pass filters also overcomes these problems, automatically provides long word length and has low latency between signals. A preloaded table of finite impulse response (FIR) filter coefficients provides fast switching between one of 129 different one-stage and two-stage multirate FIR low-pass filters with bandwidths between 4 KHz and 125 KHz. This design has been implemented on a dual channel circuit board for a commercial MRI system.
High-performance floating-point image computing workstation for medical applications
NASA Astrophysics Data System (ADS)
Mills, Karl S.; Wong, Gilman K.; Kim, Yongmin
1990-07-01
The medical imaging field relies increasingly on imaging and graphics techniques in diverse applications with needs similar to (or more stringent than) those of the military, industrial and scientific communities. However, most image processing and graphics systems available for use in medical imaging today are either expensive, specialized, or in most cases both. High performance imaging and graphics workstations which can provide real-time results for a number of applications, while maintaining affordability and flexibility, can facilitate the application of digital image computing techniques in many different areas. This paper describes the hardware and software architecture of a medium-cost floating-point image processing and display subsystem for the NeXT computer, and its applications as a medical imaging workstation. Medical imaging applications of the workstation include use in a Picture Archiving and Communications System (PACS), as a multimodal image processing and 3-D graphics workstation for a broad range of imaging modalities, and as an electronic alternator utilizing its multiple monitor display capability and large and fast frame buffer. The subsystem provides a 2048 x 2048 x 32-bit frame buffer (16 Mbytes of image storage) and supports both 8-bit gray scale and 32-bit true color images. When used to display 8-bit gray scale images, up to four different 256-color palettes may be used for each of four 2K x 2K x 8-bit image frames. Three of these image frames can be used simultaneously to provide pixel selectable region of interest display. A 1280 x 1024 pixel screen with 1:1 aspect ratio can be windowed into the frame buffer for display of any portion of the processed image or images. In addition, the system provides hardware support for integer zoom and an 82-color cursor. This subsystem is implemented on an add-in board occupying a single slot in the NeXT computer. Up to three boards may be added to the NeXT for multiple display capability (e.g., three 1280 x 1024 monitors, each with a 16-Mbyte frame buffer). Each add-in board provides an expansion connector to which an optional image computing coprocessor board may be added. Each coprocessor board supports up to four processors for a peak performance of 160 MFLOPS. The coprocessors can execute programs from external high-speed microcode memory as well as built-in internal microcode routines. The internal microcode routines provide support for 2-D and 3-D graphics operations, matrix and vector arithmetic, and image processing in integer, IEEE single-precision floating point, or IEEE double-precision floating point. In addition to providing a library of C functions which links the NeXT computer to the add-in board and supports its various operational modes, algorithms and medical imaging application programs are being developed and implemented for image display and enhancement. As an extension to the built-in algorithms of the coprocessors, 2-D Fast Fourier Transform (FFT), 2-D Inverse FFT, convolution, warping and other algorithms (e.g., Discrete Cosine Transform) which exploit the parallel architecture of the coprocessor board are being implemented.
33 CFR 161.18 - Reporting requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... call. H HOTEL Date, time and point of entry system Entry time expressed as in (B) and into the entry... KILO Date, time and point of exit from system Exit time expressed as in (B) and exit position expressed....; for a dredge or floating plant: configuration of pipeline, mooring configuration, number of assist...
Defining the IEEE-854 floating-point standard in PVS
NASA Technical Reports Server (NTRS)
Miner, Paul S.
1995-01-01
A significant portion of the ANSI/IEEE-854 Standard for Radix-Independent Floating-Point Arithmetic is defined in PVS (Prototype Verification System). Since IEEE-854 is a generalization of the ANSI/IEEE-754 Standard for Binary Floating-Point Arithmetic, the definition of IEEE-854 in PVS also formally defines much of IEEE-754. This collection of PVS theories provides a basis for machine-checked verification of floating-point systems. This formal definition illustrates that formal specification techniques are sufficiently advanced that it is reasonable to consider their use in the development of future standards.
Analysis of the Space Shuttle main engine simulation
NASA Technical Reports Server (NTRS)
Deabreu-Garcia, J. Alex; Welch, John T.
1993-01-01
This is a final report on an analysis of the Space Shuttle Main Engine Program, a digital simulator code written in Fortran. The research was undertaken in ultimate support of future design studies of a shuttle life-extending Intelligent Control System (ICS). These studies are to be conducted by NASA Lewis Research Center. The primary purpose of the analysis was to define the means to achieve a faster running simulation, and to determine if additional hardware would be necessary for speeding up simulations for the ICS project. In particular, the analysis was to consider the use of custom integrators based on the Matrix Stability Region Placement (MSRP) method. In addition to speed of execution, other qualities of the software were to be examined. Among these are the accuracy of computations, the usability of the simulation system, and the maintainability of the program and data files. Accuracy involves control of the truncation error of the methods, and roundoff error induced by floating point operations. It also involves the requirement that the user be fully aware of the model that the simulator is implementing.
Active vibration control of a full scale aircraft wing using a reconfigurable controller
NASA Astrophysics Data System (ADS)
Prakash, Shashikala; Renjith Kumar, T. G.; Raja, S.; Dwarakanathan, D.; Subramani, H.; Karthikeyan, C.
2016-01-01
This work highlights the design of a Reconfigurable Active Vibration Control (AVC) System for aircraft structures using adaptive techniques. The AVC system with a multichannel capability is realized using the Filtered-X Least Mean Square (FxLMS) algorithm on a Xilinx Virtex-4 Field Programmable Gate Array (FPGA) platform in the Very High Speed Integrated Circuit Hardware Description Language (VHDL). The HDL design is based on a Finite State Machine (FSM) model with floating point Intellectual Property (IP) cores for arithmetic operations. The use of an FPGA makes it possible to modify the system parameters even at runtime as the user's requirements change. The locations of the control actuators are optimized based on a dynamic modal strain approach using a genetic algorithm (GA). The developed system has been successfully deployed for the AVC testing of the full-scale wing of an all-composite two-seater transport aircraft. Several closed loop configurations like single-channel and multi-channel control have been tested. The experimental results from the studies presented here are very encouraging. They demonstrate the usefulness of the system's reconfigurability for real time applications.
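The weight-update at the core of an FxLMS controller is compact enough to sketch in a few lines of C. The following single-channel sketch is illustrative only: the filter lengths, step size, secondary-path estimate, and test signals are assumptions, and for brevity the error sensor sees the control output directly rather than through the physical secondary path; it is not the authors' FPGA design.

```c
/* Minimal single-channel FxLMS sketch (illustrative only).
 * x: reference signal, d: disturbance at the error sensor,
 * s_hat: FIR estimate of the secondary path, w: adaptive control filter. */
#include <stdio.h>

#define L  8     /* control filter length (assumed) */
#define LS 4     /* secondary-path model length (assumed) */
#define N  256   /* number of samples in this demo */

int main(void) {
    double w[L] = {0};                         /* adaptive weights */
    double x[N], d[N];                         /* reference and disturbance */
    double xbuf[L] = {0}, fxbuf[L] = {0};      /* reference and filtered-reference history */
    double sbuf[LS] = {0};
    double s_hat[LS] = {0.5, 0.3, 0.1, 0.05};  /* assumed secondary-path estimate */
    double mu = 0.01;                          /* LMS step size (assumed) */

    for (int n = 0; n < N; n++) {              /* synthetic square-wave disturbance */
        x[n] = (n % 16 < 8) ? 1.0 : -1.0;
        d[n] = 0.8 * x[n];
    }

    for (int n = 0; n < N; n++) {
        /* shift delay lines */
        for (int k = L - 1; k > 0; k--)  { xbuf[k] = xbuf[k - 1]; fxbuf[k] = fxbuf[k - 1]; }
        for (int k = LS - 1; k > 0; k--) sbuf[k] = sbuf[k - 1];
        xbuf[0] = sbuf[0] = x[n];

        /* filtered reference x'(n) = s_hat * x(n) */
        double fx = 0.0;
        for (int k = 0; k < LS; k++) fx += s_hat[k] * sbuf[k];
        fxbuf[0] = fx;

        /* control output y(n) = w * x(n); the secondary path acting on y is
           idealized away here, so the residual is simply e(n) = d(n) - y(n) */
        double y = 0.0;
        for (int k = 0; k < L; k++) y += w[k] * xbuf[k];
        double e = d[n] - y;

        /* FxLMS weight update: gradient uses the *filtered* reference */
        for (int k = 0; k < L; k++) w[k] += mu * e * fxbuf[k];

        if (n % 64 == 63) printf("n=%3d  |e|=%.4f\n", n, e < 0 ? -e : e);
    }
    return 0;
}
```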
NASA Astrophysics Data System (ADS)
Cavaglieri, Daniele; Bewley, Thomas
2015-04-01
Implicit/explicit (IMEX) Runge-Kutta (RK) schemes are effective for time-marching ODE systems with both stiff and nonstiff terms on the RHS; such schemes implement an (often A-stable or better) implicit RK scheme for the stiff part of the ODE, which is often linear, and, simultaneously, a (more convenient) explicit RK scheme for the nonstiff part of the ODE, which is often nonlinear. Low-storage RK schemes are especially effective for time-marching high-dimensional ODE discretizations of PDE systems on modern (cache-based) computational hardware, in which memory management is often the most significant computational bottleneck. In this paper, we develop and characterize eight new low-storage implicit/explicit RK schemes which have higher accuracy and better stability properties than the only low-storage implicit/explicit RK scheme available previously, the venerable second-order Crank-Nicolson/Runge-Kutta-Wray (CN/RKW3) algorithm that has dominated the DNS/LES literature for the last 25 years, while requiring similar storage (two, three, or four registers of length N) and comparable floating-point operations per timestep.
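The storage argument can be made concrete with a two-register ("2N") formulation of an explicit third-order RK scheme, sketched below in C for a generic ODE du/dt = f(u). The coefficients are the commonly cited Williamson-type values and stand in only for the explicit half of such IMEX pairs; they are assumptions for illustration, not the new schemes developed in the paper.

```c
/* Two-register ("2N") low-storage explicit RK3 sketch for du/dt = f(u).
 * Only u and q (each of length N) are stored, which is why such schemes
 * suit memory-bound PDE discretizations. Coefficients are the commonly
 * cited Williamson-type values (assumed, not the paper's new schemes). */
#include <stdio.h>
#include <math.h>

#define N 4

static void f(const double *u, double *du) {   /* simple linear test RHS */
    for (int i = 0; i < N; i++) du[i] = -u[i];
}

int main(void) {
    const double A[3] = {0.0, -5.0 / 9.0, -153.0 / 128.0};
    const double B[3] = {1.0 / 3.0, 15.0 / 16.0, 8.0 / 15.0};
    double u[N], q[N] = {0}, du[N];
    double t = 0.0, dt = 0.01;

    for (int i = 0; i < N; i++) u[i] = 1.0;

    for (int step = 0; step < 100; step++) {
        for (int s = 0; s < 3; s++) {
            f(u, du);
            for (int i = 0; i < N; i++) {
                q[i] = A[s] * q[i] + dt * du[i];  /* register 1 updated in place */
                u[i] += B[s] * q[i];              /* register 2 updated in place */
            }
        }
        t += dt;
    }
    /* exact solution of du/dt = -u is exp(-t) */
    printf("u[0] = %.8f, exact = %.8f\n", u[0], exp(-t));
    return 0;
}
```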
Yang, Chen; Li, Bingyi; Chen, Liang; Wei, Chunpeng; Xie, Yizhuang; Chen, He; Yu, Wenyue
2017-06-24
With the development of satellite load technology and very large scale integrated (VLSI) circuit technology, onboard real-time synthetic aperture radar (SAR) imaging systems have become a solution for allowing rapid response to disasters. A key goal of the onboard SAR imaging system design is to achieve high real-time processing performance with severe size, weight, and power consumption constraints. In this paper, we analyse the computational burden of the commonly used chirp scaling (CS) SAR imaging algorithm. To reduce the system hardware cost, we propose a partial fixed-point processing scheme. The fast Fourier transform (FFT), which is the most computation-sensitive operation in the CS algorithm, is processed with fixed-point, while other operations are processed with single precision floating-point. With the proposed fixed-point processing error propagation model, the fixed-point processing word length is determined. The fidelity and accuracy relative to conventional ground-based software processors is verified by evaluating both the point target imaging quality and the actual scene imaging quality. As a proof of concept, a field-programmable gate array-application-specific integrated circuit (FPGA-ASIC) hybrid heterogeneous parallel accelerating architecture is designed and realized. The customized fixed-point FFT is implemented using the 130 nm complementary metal oxide semiconductor (CMOS) technology as a co-processor of the Xilinx xc6vlx760t FPGA. A single processing board requires 12 s and consumes 21 W to focus a 50-km swath width, 5-m resolution stripmap SAR raw data with a granularity of 16,384 × 16,384.
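The heart of such a partial fixed-point pipeline is the conversion of single-precision samples into a chosen word length before the FFT stage and back afterwards. The C sketch below shows signed Q1.15 quantization with rounding and saturation; the word length and scaling are illustrative choices, not the values derived from the paper's error-propagation model.

```c
/* Illustrative fixed-point quantization for a partial fixed-point pipeline:
 * floats are converted to signed 16-bit Q1.15 values (assumed word length)
 * before an integer FFT stage and converted back afterwards. */
#include <stdio.h>
#include <stdint.h>
#include <math.h>

#define FRAC_BITS 15   /* Q1.15: 1 sign bit, 15 fractional bits (assumed) */

static int16_t float_to_q15(float x) {
    float scaled = x * (float)(1 << FRAC_BITS);
    if (scaled >  32767.0f) scaled =  32767.0f;   /* saturate */
    if (scaled < -32768.0f) scaled = -32768.0f;
    return (int16_t)lrintf(scaled);               /* round to nearest */
}

static float q15_to_float(int16_t q) {
    return (float)q / (float)(1 << FRAC_BITS);
}

int main(void) {
    float samples[4] = {0.50f, -0.25f, 0.9999f, -1.0f};
    for (int i = 0; i < 4; i++) {
        int16_t q = float_to_q15(samples[i]);
        float back = q15_to_float(q);
        printf("x=% .6f  q=%6d  back=% .6f  err=% .2e\n",
               samples[i], q, back, back - samples[i]);
    }
    return 0;
}
```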
Verification of floating-point software
NASA Technical Reports Server (NTRS)
Hoover, Doug N.
1990-01-01
Floating point computation presents a number of problems for formal verification. Should one treat the actual details of floating point operations, accept them as imprecisely defined, or ignore round-off error altogether and behave as if floating point operations were perfectly accurate? There is the further problem that a numerical algorithm usually only approximately computes some mathematical function, and we often do not know just how good the approximation is, even in the absence of round-off error. ORA has developed a theory of asymptotic correctness which allows one to verify floating point software with minimal entanglement in these problems. This theory and its implementation in the Ariel C verification system are described. The theory is illustrated using a simple program which finds a zero of a given function by bisection. This paper is presented in viewgraph form.
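For reference, the kind of program used as the running example can be sketched in a few lines of C: a bisection routine only approximately locates a zero, which is exactly the gap an asymptotic-correctness argument must bridge. The test function and tolerance below are arbitrary choices, not ORA's example code.

```c
/* Bisection zero finder: returns a point within tol of a sign change of f.
 * In exact arithmetic the bracket [lo, hi] always contains a zero; in
 * floating point the result is only an approximation, which is the kind
 * of property asymptotic-correctness verification addresses. */
#include <stdio.h>
#include <math.h>

static double f(double x) { return x * x - 2.0; }   /* zero at sqrt(2) */

static double bisect(double lo, double hi, double tol) {
    double flo = f(lo);
    while (hi - lo > tol) {
        double mid = lo + 0.5 * (hi - lo);   /* avoids overflow of (lo+hi)/2 */
        double fmid = f(mid);
        if ((flo < 0.0) == (fmid < 0.0)) {   /* zero lies in the right half */
            lo = mid;
            flo = fmid;
        } else {                             /* zero lies in the left half */
            hi = mid;
        }
    }
    return lo + 0.5 * (hi - lo);
}

int main(void) {
    double z = bisect(0.0, 2.0, 1e-12);
    printf("approximate zero: %.15f (sqrt(2) = %.15f)\n", z, sqrt(2.0));
    return 0;
}
```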
40 CFR 63.653 - Monitoring, recordkeeping, and implementation plan for emissions averaging.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) For each emission point included in an emissions average, the owner or operator shall perform testing, monitoring, recordkeeping, and reporting equivalent to that required for Group 1 emission points complying... internal floating roof, external roof, or a closed vent system with a control device, as appropriate to the...
NASA Astrophysics Data System (ADS)
Moon, Hongsik
What is the impact of multicore and associated advanced technologies on computational software for science? Most researchers and students have multicore laptops or desktops for their research and they need computing power to run computational software packages. Computing power was initially derived from Central Processing Unit (CPU) clock speed. That changed when increases in clock speed became constrained by power requirements. Chip manufacturers turned to multicore CPU architectures and associated technological advancements to create the CPUs for the future. Most software applications benefited by the increased computing power the same way that increases in clock speed helped applications run faster. However, for Computational ElectroMagnetics (CEM) software developers, this change was not an obvious benefit - it appeared to be a detriment. Developers were challenged to find a way to correctly utilize the advancements in hardware so that their codes could benefit. The solution was parallelization and this dissertation details the investigation to address these challenges. Prior to multicore CPUs, advanced computer technologies were compared on performance using benchmark software, and the metric was Floating-point Operations Per Second (FLOPS), which indicates system performance for scientific applications that make heavy use of floating-point calculations. Is FLOPS an effective metric for parallelized CEM simulation tools on new multicore systems? Parallel CEM software needs to be benchmarked not only by FLOPS but also by the performance of other parameters related to type and utilization of the hardware, such as CPU, Random Access Memory (RAM), hard disk, network, etc. The codes need to be optimized for more than just FLOPS, and new parameters must be included in benchmarking. In this dissertation, the parallel CEM software named High Order Basis Based Integral Equation Solver (HOBBIES) is introduced. This code was developed to address the needs of the changing computer hardware platforms in order to provide fast, accurate and efficient solutions to large, complex electromagnetic problems. The research in this dissertation proves that the performance of parallel code is intimately related to the configuration of the computer hardware and can be maximized for different hardware platforms. To benchmark and optimize the performance of parallel CEM software, a variety of large, complex projects are created and executed on a variety of computer platforms. The computer platforms used in this research are detailed in this dissertation. The projects run as benchmarks are also described in detail and results are presented. The parameters that affect parallel CEM software on High Performance Computing Clusters (HPCC) are investigated. This research demonstrates methods to maximize the performance of parallel CEM software code.
Evaluation of GPUs as a level-1 track trigger for the High-Luminosity LHC
NASA Astrophysics Data System (ADS)
Mohr, H.; Dritschler, T.; Ardila, L. E.; Balzer, M.; Caselle, M.; Chilingaryan, S.; Kopmann, A.; Rota, L.; Schuh, T.; Vogelgesang, M.; Weber, M.
2017-04-01
In this work, we investigate the use of GPUs as a way of realizing a low-latency, high-throughput track trigger, using CMS as a showcase example. The CMS detector at the Large Hadron Collider (LHC) will undergo a major upgrade after the long shutdown from 2024 to 2026 when it will enter the high luminosity era. During this upgrade, the silicon tracker will have to be completely replaced. In the High Luminosity operation mode, luminosities of 5-7 × 10^34 cm^-2 s^-1 and pileups averaging 140 events, with a maximum of up to 200 events, will be reached. These changes will require a major update of the triggering system. The demonstrated systems rely on dedicated hardware such as associative memory ASICs and FPGAs. We investigate the use of GPUs as an alternative way of realizing the requirements of the L1 track trigger. To this end we implemented a Hough transformation track-finding step on GPUs and established a low-latency RDMA connection using the PCIe bus. To showcase the benefits of floating point operations, made possible by the use of GPUs, we present a modified algorithm. It uses hexagonal bins for the parameter space and leads to a more truthful representation of the possible track parameters of the individual hits in Hough space. This leads to fewer duplicate candidates and reduces fake track candidates compared to the regular approach. With data-transfer latencies of 2 μs and processing times for the Hough transformation as low as 3.6 μs, we can show that latencies are not as critical as expected. However, computing throughput proves to be challenging due to hardware limitations.
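The Hough transformation step itself is simple to express. The C sketch below accumulates straight-line votes for a handful of hits in a conventional rectangular (theta, r) grid, purely to illustrate the binning that the paper replaces with hexagonal bins for track parameters; the grid sizes and hit list are made up, and this is not the GPU trigger code.

```c
/* Conceptual Hough transform: each hit (x, y) votes for all parameter bins
 * (theta, r) with r = x*cos(theta) + y*sin(theta). Track candidates appear
 * as bins with many votes. Bin counts and hits are illustrative only. */
#include <stdio.h>
#include <math.h>

#define NTHETA 64
#define NR     64
#define RMAX   10.0

int main(void) {
    const double PI = 3.14159265358979323846;
    static int acc[NTHETA][NR] = {{0}};
    /* five collinear "hits" on the line y = 0.5*x + 1 plus one noise hit */
    double hits[][2] = {{0, 1}, {1, 1.5}, {2, 2}, {3, 2.5}, {4, 3}, {2.5, -1}};
    int nhits = (int)(sizeof hits / sizeof hits[0]);

    for (int h = 0; h < nhits; h++) {
        for (int t = 0; t < NTHETA; t++) {
            double theta = PI * t / NTHETA;
            double r = hits[h][0] * cos(theta) + hits[h][1] * sin(theta);
            int rbin = (int)((r + RMAX) / (2.0 * RMAX) * NR);
            if (rbin >= 0 && rbin < NR) acc[t][rbin]++;
        }
    }

    /* report the most-voted bin; the collinear hits pile up in one bin */
    int best = 0, bt = 0, br = 0;
    for (int t = 0; t < NTHETA; t++)
        for (int r = 0; r < NR; r++)
            if (acc[t][r] > best) { best = acc[t][r]; bt = t; br = r; }
    printf("best bin: theta index %d, r index %d, votes %d\n", bt, br, best);
    return 0;
}
```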
Wu, Jun; Hu, Xie-he; Chen, Sheng; Chu, Jian
2003-01-01
The closed-loop stability issue of finite-precision realizations was investigated for digital controllers implemented in block-floating-point format. The controller coefficient perturbation resulting from the use of a finite word length (FWL) block-floating-point representation scheme was analyzed. A block-floating-point FWL closed-loop stability measure was derived which considers both the dynamic range and precision. To facilitate the design of optimal finite-precision controller realizations, a computationally tractable block-floating-point FWL closed-loop stability measure was then introduced and the method of computing the value of this measure for a given controller realization was developed. The optimal controller realization is defined as the solution that maximizes the corresponding measure, and a numerical optimization approach was adopted to solve the resulting optimal realization problem. A numerical example was used to illustrate the design procedure and to compare the optimal controller realization with the initial realization.
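Block-floating-point keeps one shared exponent per block of mantissas, which is the representation whose finite-word-length effects the stability measure quantifies. The C sketch below normalizes a block of values to a common exponent and quantizes the mantissas; the word length is an arbitrary illustrative choice, unrelated to the paper's controller realizations.

```c
/* Block-floating-point sketch: all values in a block share one exponent and
 * are stored as W-bit signed mantissas. W is an arbitrary illustrative choice,
 * not the word length analyzed in the paper. */
#include <stdio.h>
#include <math.h>

#define BLOCK 4
#define W     12   /* mantissa word length, including sign (assumed) */

int main(void) {
    double x[BLOCK] = {0.731, -0.0042, 0.25, -0.9};
    long   m[BLOCK];
    int    e;

    /* shared exponent taken from the largest magnitude in the block */
    double maxabs = 0.0;
    for (int i = 0; i < BLOCK; i++)
        if (fabs(x[i]) > maxabs) maxabs = fabs(x[i]);
    frexp(maxabs, &e);                        /* maxabs = f * 2^e, 0.5 <= f < 1 */

    /* quantize: m[i] * 2^(e - (W-1)) approximates x[i] */
    for (int i = 0; i < BLOCK; i++) {
        m[i] = lround(ldexp(x[i], (W - 1) - e));
        if (m[i] >  (1L << (W - 1)) - 1) m[i] =  (1L << (W - 1)) - 1;  /* clamp */
        if (m[i] < -(1L << (W - 1)))     m[i] = -(1L << (W - 1));
    }

    printf("shared exponent e = %d\n", e);
    for (int i = 0; i < BLOCK; i++) {
        double back = ldexp((double)m[i], e - (W - 1));
        printf("x=% .6f  mantissa=%5ld  back=% .6f  err=% .2e\n",
               x[i], m[i], back, back - x[i]);
    }
    return 0;
}
```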
2015-01-01
crafts on floating ice sheets near McMurdo, Antarctica (Katona and Vaudrey 1973; Katona 1974; Vaudrey 1977). To comply with the first criterion, one...Nomographs for operating wheeled aircraft on sea- ice runways: McMurdo Station, Antarctica . In Proceedings of the Offshore Mechanics and Arctic Engineering... Ice Thickness Requirements for Vehicles and Heavy Equipment at McMurdo Station, Antarctica . CRREL Project Report 04- 09, “Safe Sea Ice for Vehicle
Floating-point geometry: toward guaranteed geometric computations with approximate arithmetics
NASA Astrophysics Data System (ADS)
Bajard, Jean-Claude; Langlois, Philippe; Michelucci, Dominique; Morin, Géraldine; Revol, Nathalie
2008-08-01
Geometric computations can fail because of inconsistencies due to floating-point inaccuracy. For instance, the computed intersection point between two curves does not lie on the curves: this is unavoidable when the intersection point coordinates are non-rational, and thus not representable using floating-point arithmetic. A popular heuristic approach tests equalities and nullities up to a tolerance ε. But transitivity of equality is lost: we can have A ≈ B and B ≈ C, but A ≉ C (where A ≈ B means ||A - B|| < ε for two floating-point values A and B). Interval arithmetic is another, self-validated, alternative; the difficulty is to limit the growth of the width of intervals as computations proceed. Unfortunately, interval arithmetic cannot decide equality or nullity, even in cases where they are decidable by other means. A new approach, developed in this paper, consists in modifying the geometric problems and algorithms to account for the undecidability of the equality test and unavoidable inaccuracy. In particular, all curves come with a non-zero thickness, so two curves (generically) cut in a region with non-zero area, an inner and outer representation of which is computable. This last approach no longer assumes that an equality or nullity test is available. The question which arises is: which geometric problems can still be solved with this last approach, and which cannot? This paper begins with the description of some cases where every known arithmetic fails in practice. Then, for each arithmetic, some properties of the problems they can solve are given. We end this work by proposing the bases of a new approach which aims to fulfill the requirements of geometric computations.
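The loss of transitivity described here is easy to reproduce. The short C program below (with an arbitrary tolerance, not taken from the paper) exhibits values where A ≈ B and B ≈ C both pass the tolerance test while A ≈ C fails.

```c
/* Tolerance-based equality is not transitive: approx(a,b) and approx(b,c)
 * can both hold while approx(a,c) fails. Tolerance value is arbitrary. */
#include <stdio.h>
#include <math.h>

#define EPS 1e-6

static int approx(double a, double b) { return fabs(a - b) < EPS; }

int main(void) {
    double A = 0.0, B = 0.9e-6, C = 1.8e-6;  /* neighbours within EPS, ends are not */
    printf("A~B: %d  B~C: %d  A~C: %d\n", approx(A, B), approx(B, C), approx(A, C));
    return 0;
}
```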
Floating-point system quantization errors in digital control systems
NASA Technical Reports Server (NTRS)
Phillips, C. L.
1973-01-01
The results of research into the effects of signal quantization on the operation of a digital control system are reported. The investigation considered digital controllers (filters) operating in floating-point arithmetic in either open-loop or closed-loop systems. An error analysis technique is developed and is implemented by a digital computer program that is based on a digital simulation of the system. As output, the program gives the programming form required for minimum system quantization errors (either maximum or rms errors), and the maximum and rms errors that appear in the system output for a given bit configuration. The program can be integrated into existing digital simulations of a system.
NASA Astrophysics Data System (ADS)
Azimi, Ehsan; Behrad, Alireza; Ghaznavi-Ghoushchi, Mohammad Bagher; Shanbehzadeh, Jamshid
2016-11-01
The projective model is an important mapping function for the calculation of global transformation between two images. However, its hardware implementation is challenging because of a large number of coefficients with different required precisions for fixed point representation. A VLSI hardware architecture is proposed for the calculation of a global projective model between input and reference images and refining false matches using random sample consensus (RANSAC) algorithm. To make the hardware implementation feasible, it is proved that the calculation of the projective model can be divided into four submodels comprising two translations, an affine model and a simpler projective mapping. This approach makes the hardware implementation feasible and considerably reduces the required number of bits for fixed point representation of model coefficients and intermediate variables. The proposed hardware architecture for the calculation of a global projective model using the RANSAC algorithm was implemented using Verilog hardware description language and the functionality of the design was validated through several experiments. The proposed architecture was synthesized by using an application-specific integrated circuit digital design flow utilizing 180-nm CMOS technology as well as a Virtex-6 field programmable gate array. Experimental results confirm the efficiency of the proposed hardware architecture in comparison with software implementation.
Apparatus and method for implementing power saving techniques when processing floating point values
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Young Moon; Park, Sang Phill
An apparatus and method are described for reducing power when reading and writing graphics data. For example, one embodiment of an apparatus comprises: a graphics processor unit (GPU) to process graphics data including floating point data; a set of registers, at least one of the registers of the set partitioned to store the floating point data; and encode/decode logic to reduce a number of binary 1 values being read from the at least one register by causing a specified set of bit positions within the floating point data to be read out as 0s rather than 1s.
Automatic Estimation of Verified Floating-Point Round-Off Errors via Static Analysis
NASA Technical Reports Server (NTRS)
Moscato, Mariano; Titolo, Laura; Dutle, Aaron; Munoz, Cesar A.
2017-01-01
This paper introduces a static analysis technique for computing formally verified round-off error bounds of floating-point functional expressions. The technique is based on a denotational semantics that computes a symbolic estimation of floating-point round-off errors along with a proof certificate that ensures its correctness. The symbolic estimation can be evaluated on concrete inputs using rigorous enclosure methods to produce formally verified numerical error bounds. The proposed technique is implemented in the prototype research tool PRECiSA (Program Round-off Error Certifier via Static Analysis) and used in the verification of floating-point programs of interest to NASA.
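The flavor of the bounds such a tool certifies can be illustrated with the standard model of floating-point arithmetic, fl(x op y) = (x op y)(1 + d) with |d| <= u. The C sketch below propagates a first-order bound of this kind through the expression (a + b) * c and checks it against a higher-precision evaluation; it is a hand-rolled illustration, not PRECiSA's semantics or certificates.

```c
/* First-order round-off bound for (a + b) * c using the standard model
 * fl(x op y) = (x op y)(1 + d), |d| <= u, with u = 2^-53 for binary64.
 * Hand-rolled illustration of the kind of bound a static analyzer
 * certifies; not the tool's actual semantics. */
#include <stdio.h>
#include <math.h>

int main(void) {
    const double u = ldexp(1.0, -53);          /* unit roundoff for double */
    double a = 0.1, b = 0.2, c = 3.0;

    double computed = (a + b) * c;             /* two rounded operations */
    long double exact = ((long double)a + (long double)b) * (long double)c;

    /* two roundings contribute at most (2u + u^2) relative error; keep 2u */
    double bound = 2.0 * u * fabsl(exact);
    double err = fabsl((long double)computed - exact);

    printf("computed = %.17g\n", computed);
    printf("|error|  = %.3e\n", err);
    printf("bound    = %.3e  (error within bound: %s)\n",
           bound, err <= bound ? "yes" : "no");
    return 0;
}
```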
A simplified Integer Cosine Transform and its application in image compression
NASA Technical Reports Server (NTRS)
Costa, M.; Tong, K.
1994-01-01
A simplified version of the integer cosine transform (ICT) is described. For practical reasons, the transform is considered jointly with the quantization of its coefficients. It differs from conventional ICT algorithms in that the combined factors for normalization and quantization are approximated by powers of two. In conventional algorithms, the normalization/quantization stage typically requires as many integer divisions as the number of transform coefficients. By restricting the factors to powers of two, these divisions can be performed by variable shifts in the binary representation of the coefficients, with speed and cost advantages to the hardware implementation of the algorithm. The error introduced by the factor approximations is compensated for in the inverse ICT operation, executed with floating point precision. The simplified ICT algorithm has potential applications in image-compression systems with disparate cost and speed requirements in the encoder and decoder ends. For example, in deep space image telemetry, the image processors on board the spacecraft could take advantage of the simplified, faster encoding operation, which would be adjusted on the ground, with high-precision arithmetic. A dual application is found in compressed video broadcasting. Here, a fast, high-performance processor at the transmitter would precompensate for the factor approximations in the inverse ICT operation, to be performed in real time, at a large number of low-cost receivers.
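The cost argument can be seen directly in code: when the combined normalization/quantization factor is a power of two, the per-coefficient integer division collapses to an arithmetic shift. The C fragment below contrasts the two; the coefficients and factors are made up for illustration and it is not the ICT implementation itself.

```c
/* Quantization by a general integer factor needs a division per coefficient;
 * restricting the factor to a power of two reduces it to an arithmetic shift.
 * Coefficients and factors below are made up for illustration. Note that
 * shifting a negative value rounds toward minus infinity, while / truncates
 * toward zero; the inverse transform can compensate for such differences. */
#include <stdio.h>

int main(void) {
    int coeff[8] = {1023, -517, 96, -33, 12, -7, 3, -1};  /* transform outputs */
    int q_general = 37;   /* arbitrary combined normalization/quantization factor */
    int shift = 5;        /* power-of-two factor 2^5 = 32 */

    for (int i = 0; i < 8; i++) {
        int by_division = coeff[i] / q_general;   /* one integer division */
        int by_shift = coeff[i] >> shift;         /* one arithmetic shift */
        printf("%5d  /37 -> %4d   >>5 -> %4d\n", coeff[i], by_division, by_shift);
    }
    return 0;
}
```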
An Investigation of a Design for a Finite-Difference Time Domain (FDTD) Hardware Accelerator
1991-12-01
D PTR), accumulators A and B ( ACCA & ACCB), and the third fixed incrementer (IN3). The stack file in the floating-point unit is untouched. The first...of data. REGISTERS: R1, R2, R4, R5, R7, R8, R9, R11, R12, R13, ACCA , ACCB, MBR. MAR, STAT POINTERS: APT, BPT, CPT, DPT, AIN, BIN, CIN, DIN, IN3 LINES...BBUS MARh-2 READ BACTL, 12 R2 =En(lj) R2=D MAR+2 READ BAACT; 13 MBR = En+I(1j) ACCA = En(0,j-1) + En(lj-1) BU=R2 BL=R2 CD C=R1 MBR=D FP++ a=CBUS b=BBUS
An adaptable neuromorphic model of orientation selectivity based on floating gate dynamics
Gupta, Priti; Markan, C. M.
2014-01-01
The biggest challenge that the neuromorphic community faces today is to build systems that can be considered truly cognitive. Adaptation and self-organization are the two basic principles that underlie any cognitive function that the brain performs. If we can replicate this behavior in hardware, we move a step closer to our goal of having cognitive neuromorphic systems. Adaptive feature selectivity is a mechanism by which nature optimizes resources so as to have greater acuity for more abundant features. Developing neuromorphic feature maps can help design generic machines that can emulate this adaptive behavior. Most neuromorphic models that have attempted to build self-organizing systems follow the approach of modeling abstract theoretical frameworks in hardware. While this is good from a modeling and analysis perspective, it may not lead to the most efficient hardware. On the other hand, exploiting hardware dynamics to build adaptive systems, rather than forcing the hardware to behave like mathematical equations, seems to be a more robust methodology when it comes to developing actual hardware for real world applications. In this paper we use a novel time-staggered Winner Take All circuit, which exploits the adaptation dynamics of floating gate transistors, to model an adaptive cortical cell that demonstrates Orientation Selectivity, a well-known biological phenomenon observed in the visual cortex. The cell performs competitive learning, refining its weights in response to input patterns resembling different oriented bars, becoming selective to a particular oriented pattern. Different analyses performed on the cell, such as orientation tuning, application of abnormal inputs, and response to spatial frequency and periodic patterns, reveal close similarity between our cell and its biological counterpart. Embedded in an RC grid, these cells interact diffusively, exhibiting cluster formation and making way for adaptively building orientation selective maps in silicon. PMID:24765062
Fine pointing control for free-space optical communication
NASA Technical Reports Server (NTRS)
Portillo, A. A.; Ortiz, G. G.; Racho, C.
2000-01-01
Free-Space Optical Communications requires precise, stable laser pointing to maintain operating conditions. This paper describes the software and hardware implementation of Fine Pointing Control based on the Optical Communications Demonstrator architecture.
Floating-point performance of ARM cores and their efficiency in classical molecular dynamics
NASA Astrophysics Data System (ADS)
Nikolskiy, V.; Stegailov, V.
2016-02-01
Supercomputing of the exascale era is going to be inevitably limited by power efficiency. Nowadays different possible variants of CPU architectures are considered. Recently the development of ARM processors has come to the point when their floating point performance can be seriously considered for a range of scientific applications. In this work we present the analysis of the floating point performance of the latest ARM cores and their efficiency for the algorithms of classical molecular dynamics.
On decoding of multi-level MPSK modulation codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Gupta, Alok Kumar
1990-01-01
The decoding problem of multi-level block modulation codes is investigated. The hardware design of a soft-decision Viterbi decoder for some short length 8-PSK block modulation codes is presented. An effective way to reduce the hardware complexity of the decoder by reducing the branch metric and path metric, using a non-uniform floating-point to integer mapping scheme, is proposed and discussed. The simulation results of the design are presented. The multi-stage decoding (MSD) of multi-level modulation codes is also investigated. The cases of soft-decision and hard-decision MSD are considered and their performance is evaluated for several codes of different lengths and different minimum squared Euclidean distances. It is shown that the soft-decision MSD reduces the decoding complexity drastically and that it is suboptimum. The hard-decision MSD further simplifies the decoding while still maintaining a reasonable coding gain over the uncoded system, if the component codes are chosen properly. Finally, some basic 3-level 8-PSK modulation codes using BCH codes as component codes are constructed and their coding gains are found for hard decision multistage decoding.
A universal data access and protocol integration mechanism for smart home
NASA Astrophysics Data System (ADS)
Shao, Pengfei; Yang, Qi; Zhang, Xuan
2013-03-01
With communication interfaces in home electronics that are either non-standardized or missing entirely, there is no perfect solution that addresses every aspect of smart homes based on existing protocols and technologies. In addition, a central control unit (CCU) of a smart home system that works point-to-point between the multiple application interfaces and the underlying hardware interfaces suffers from a complicated architecture and poor performance. A flexible data access and protocol integration mechanism is required. The current paper offers a universal, comprehensive data access and protocol integration mechanism for a smart home. The universal mechanism works as a middleware adapter with unified agreements on the communication interfaces and protocols, offering an abstraction of the application level from the hardware specifics and decoupling the hardware interface modules from the application level. Further abstraction of the application interfaces and the underlying hardware interfaces is performed in an adaptation layer to provide unified interfaces for more flexible user applications and hardware protocol integration. This new universal mechanism fundamentally changes the architecture of the smart home and meets the practical requirements of smart homes in a more flexible and desirable way.
46 CFR 160.027-3 - Additional requirements for life floats.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 6 2010-10-01 2010-10-01 false Additional requirements for life floats. 160.027-3..., CONSTRUCTION, AND MATERIALS: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Life Floats for Merchant Vessels § 160.027-3 Additional requirements for life floats. (a) Each life float must have a platform designed...
46 CFR 160.027-3 - Additional requirements for life floats.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 6 2011-10-01 2011-10-01 false Additional requirements for life floats. 160.027-3..., CONSTRUCTION, AND MATERIALS: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Life Floats for Merchant Vessels § 160.027-3 Additional requirements for life floats. (a) Each life float must have a platform designed...
46 CFR 160.027-3 - Additional requirements for life floats.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 6 2014-10-01 2014-10-01 false Additional requirements for life floats. 160.027-3..., CONSTRUCTION, AND MATERIALS: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Life Floats for Merchant Vessels § 160.027-3 Additional requirements for life floats. (a) Each life float must have a platform designed...
46 CFR 160.027-3 - Additional requirements for life floats.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 6 2013-10-01 2013-10-01 false Additional requirements for life floats. 160.027-3..., CONSTRUCTION, AND MATERIALS: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Life Floats for Merchant Vessels § 160.027-3 Additional requirements for life floats. (a) Each life float must have a platform designed...
46 CFR 160.027-3 - Additional requirements for life floats.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 46 Shipping 6 2012-10-01 2012-10-01 false Additional requirements for life floats. 160.027-3..., CONSTRUCTION, AND MATERIALS: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Life Floats for Merchant Vessels § 160.027-3 Additional requirements for life floats. (a) Each life float must have a platform designed...
Li, Xin-Wei; Shao, Xiao-Mei; Tan, Ke-Ping; Fang, Jian-Qiao
2013-04-01
To compare the efficacy difference in the treatment of supraspinous ligament injury between floating acupuncture at Tianying point and the conventional warm needling therapy. Ninety patients were randomized into a floating acupuncture group and a warm needling group, 45 cases in each one. In the floating acupuncture group, the floating needling technique was adopted at Tianying point. In the warm needling group, the conventional warm needling therapy was applied at Tianying point as the chief point in the prescription. The treatment was given 3 times a week and 6 treatments made one session. The visual analogue scale (VAS) was adopted for pain comparison before and after treatment in the two groups, and the efficacy in the two groups was assessed. The curative and remarkably effective rate was 81.8% (36/44) in the floating acupuncture group and the total effective rate was 95.5% (42/44), which were superior to 44.2% (19/43) and 79.1% (34/43) in the warm needling group, respectively (P < 0.01, P < 0.05). The VAS score was lower than before treatment in both groups (both P < 0.01), and the score in the floating acupuncture group was lower than that in the warm needling group after treatment (P < 0.01). Thirty-six cases were cured or remarkably effective in the floating acupuncture group after treatment, of which 28 cases were cured or remarkably effective within 3 treatments, accounting for 77.8% (28/36), which was apparently higher than 26.3% (5/19) in the warm needling group (P < 0.01). Floating acupuncture at Tianying point achieves quick and definite efficacy on supraspinous ligament injury and presents an apparent analgesic effect. The efficacy is superior to the conventional warm needling therapy.
Fixed-point image orthorectification algorithms for reduced computational cost
NASA Astrophysics Data System (ADS)
French, Joseph Clinton
Imaging systems have been applied to many new applications in recent years. With the advent of low-cost, low-power focal planes and more powerful, lower cost computers, remote sensing applications have become more widespread. Many of these applications require some form of geolocation, especially when relative distances are desired. However, when greater global positional accuracy is needed, orthorectification becomes necessary. Orthorectification is the process of projecting an image onto a Digital Elevation Map (DEM), which removes terrain distortions and corrects the perspective distortion by changing the viewing angle to be perpendicular to the projection plane. Orthorectification is used in disaster tracking, landscape management, wildlife monitoring and many other applications. However, orthorectification is a computationally expensive process due to floating point operations and divisions in the algorithm. To reduce the computational cost of on-board processing, two novel algorithm modifications are proposed. One modification is projection utilizing fixed-point arithmetic. Fixed point arithmetic removes the floating point operations and reduces the processing time by operating only on integers. The second modification is replacement of the division inherent in projection with a multiplication of the inverse. The inverse must operate iteratively. Therefore, the inverse is replaced with a linear approximation. As a result of these modifications, the processing time of projection is reduced by a factor of 1.3x with an average pixel position error of 0.2% of a pixel size for 128-bit integer processing and over 4x with an average pixel position error of less than 13% of a pixel size for 64-bit integer processing. A secondary inverse function approximation is also developed that replaces the linear approximation with a quadratic. The quadratic approximation produces a more accurate approximation of the inverse, allowing for an integer multiplication calculation to be used in place of the traditional floating point division. This method increases the throughput of the orthorectification operation by 38% when compared to floating point processing. Additionally, this method improves the accuracy of the existing integer-based orthorectification algorithms in terms of average pixel distance, increasing the accuracy of the algorithm by more than 5x. The quadratic function reduces the pixel position error to 2% and is still 2.8x faster than the 128-bit floating point algorithm.
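The division-replacement idea can be sketched compactly: approximate 1/d over the expected range of the denominator with a linear function and replace each division by a multiplication. The C sketch below uses a first-order expansion of the reciprocal around the midpoint of an assumed interval; it illustrates the accuracy/speed trade-off only and does not reproduce the dissertation's fixed-point formulation or its quadratic refinement.

```c
/* Replacing division by multiplication with an approximate reciprocal.
 * 1/d is approximated linearly around the midpoint m of an assumed
 * denominator range [a, b]:  1/d ~= 2/m - d/m^2. */
#include <stdio.h>
#include <math.h>

int main(void) {
    double a = 0.8, b = 1.2;                    /* assumed denominator range */
    double m = 0.5 * (a + b);
    double c0 = 2.0 / m, c1 = -1.0 / (m * m);   /* 1/d ~= c0 + c1*d */

    double x = 3.7;                             /* numerator of the projection step */
    for (double d = a; d <= b + 1e-12; d += 0.1) {
        double exact = x / d;                   /* true division */
        double approx = x * (c0 + c1 * d);      /* multiply by linear reciprocal */
        printf("d=%.2f  x/d=%.6f  approx=%.6f  rel.err=%.2e\n",
               d, exact, approx, fabs(approx - exact) / exact);
    }
    return 0;
}
```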
MSFC Skylab attitude and pointing control system mission evaluation
NASA Technical Reports Server (NTRS)
Chubb, W. B.
1974-01-01
The results of detailed performance analyses of the attitude and pointing control system in-orbit hardware and software on Skylab are reported. Performance is compared with requirements, test results, and prelaunch predictions. A brief history of the altitude and pointing control system evolution leading to the launch configuration is presented. The report states that the attitude and pointing system satisfied all requirements.
NASA Technical Reports Server (NTRS)
Parkinson, J. B.; House, R. O.
1938-01-01
Tests were made in the NACA tank and in the NACA 7 by 10 foot wind tunnel on two models of transverse step floats and three models of pointed step floats considered to be suitable for use with single float seaplanes. The object of the program was the reduction of water resistance and spray of single float seaplanes without reducing the angle of dead rise believed to be necessary for the satisfactory absorption of the shock loads. The results indicated that all the models have less resistance and spray than the model of the Mark V float and that the pointed step floats are somewhat superior to the transverse step floats in these respects. Models 41-D, 61-A, and 73 were tested by the general method over a wide range of loads and speeds. The results are presented in the form of curves and charts for use in design calculations.
A hardware implementation of the discrete Pascal transform for image processing
NASA Astrophysics Data System (ADS)
Goodman, Thomas J.; Aburdene, Maurice F.
2006-02-01
The discrete Pascal transform is a polynomial transform with applications in pattern recognition, digital filtering, and digital image processing. It already has been shown that the Pascal transform matrix can be decomposed into a product of binary matrices. Such a factorization leads to a fast and efficient hardware implementation without the use of multipliers, which consume large amounts of hardware. We recently developed a field-programmable gate array (FPGA) implementation to compute the Pascal transform. Our goal was to demonstrate the computational efficiency of the transform while keeping hardware requirements at a minimum. Images are uploaded into memory from a remote computer prior to processing, and the transform coefficients can be offloaded from the FPGA board for analysis. Design techniques like as-soon-as-possible scheduling and adder sharing allowed us to develop a fast and efficient system. An eight-point, one-dimensional transform completes in 13 clock cycles and requires only four adders. An 8x8 two-dimensional transform completes in 240 cycles and requires only a top-level controller in addition to the one-dimensional transform hardware. Finally, through minor modifications to the controller, the transform operations can be pipelined to achieve 100% utilization of the four adders, allowing one eight-point transform to complete every seven clock cycles.
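The multiplier-free property rests on the fact that the binomial (Pascal) matrix factors into bidiagonal matrices of ones, so a Pascal-type transform can be applied with additions alone. The C sketch below applies the lower-triangular Pascal matrix to a vector using repeated in-place additions; it is a generic illustration of that principle, not the authors' FPGA datapath, and it omits the sign/scaling pattern of the actual discrete Pascal transform.

```c
/* The lower-triangular Pascal matrix L (L[i][j] = C(i,j)) factors into
 * bidiagonal matrices of ones, so y = L*x can be computed with additions
 * only: N-1 passes of adjacent in-place additions. */
#include <stdio.h>

#define N 8

int main(void) {
    /* impulse at index 2: the output is column 2 of L, i.e. C(i,2) */
    int x[N] = {0, 0, 1, 0, 0, 0, 0, 0};

    for (int k = 1; k < N; k++)            /* N-1 passes ... */
        for (int i = N - 1; i >= k; i--)   /* ... of adjacent additions only */
            x[i] += x[i - 1];

    for (int i = 0; i < N; i++) printf("%d ", x[i]);  /* 0 0 1 3 6 10 15 21 */
    printf("\n");
    return 0;
}
```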
Algorithm XXX: functions to support the IEEE standard for binary floating-point arithmetic.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cody, W. J.; Mathematics and Computer Science
1993-12-01
This paper describes C programs for the support functions copysign(x,y), logb(x), scalb(x,n), nextafter(x,y), finite(x), and isnan(x) recommended in the Appendix to the IEEE Standard for Binary Floating-Point Arithmetic. In the case of logb, the modified definition given in the later IEEE Standard for Radix-Independent Floating-Point Arithmetic is followed. These programs should run without modification on most systems conforming to the binary standard.
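Equivalents of these support functions were later standardized in C99's <math.h>, so their behavior can be demonstrated directly. The short program below exercises copysign, logb, scalbn, nextafter, isfinite, and isnan using the standard library versions; the paper's own portable C implementations are not reproduced here.

```c
/* Demonstration of the IEEE-recommended support functions via their
 * C99 <math.h> counterparts. */
#include <stdio.h>
#include <math.h>

int main(void) {
    double x = -3.5, y = 2.0;

    printf("copysign(%g, %g) = %g\n", x, y, copysign(x, y));   /* magnitude of x, sign of y */
    printf("logb(%g)         = %g\n", x, logb(x));             /* unbiased exponent: 1 */
    printf("scalbn(%g, 3)    = %g\n", x, scalbn(x, 3));        /* x * 2^3 = -28 */
    printf("nextafter(1, 2)  = %.17g\n", nextafter(1.0, 2.0)); /* 1 + 2^-52 */
    printf("isfinite(1/0.0)  = %d\n", isfinite(1.0 / 0.0));    /* 0: infinity is not finite */
    printf("isnan(0/0.0)     = %d\n", isnan(0.0 / 0.0));       /* 1: 0/0 produces NaN */
    return 0;
}
```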
46 CFR 46.10-45 - Nonsubmergence subdivision load lines in salt water.
Code of Federal Regulations, 2010 CFR
2010-10-01
... which the vessel is floating but not for the weight of fuel, water, etc., required for consumption between the point of departure and the open sea, and no allowance is to be made for bilge or ballast water...
Efficient volume computation for three-dimensional hexahedral cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dukowicz, J.K.
1988-02-01
Currently, algorithms for computing the volume of hexahedral cells with ''ruled'' surfaces require a minimum of 122 FLOPs (floating point operations) per cell. A new algorithm is described which reduces the operation count to 57 FLOPs per cell. copyright 1988 Academic Press, Inc.
Scientific Application Requirements for Leadership Computing at the Exascale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahern, Sean; Alam, Sadaf R; Fahey, Mark R
2007-12-01
The Department of Energy's Leadership Computing Facility, located at Oak Ridge National Laboratory's National Center for Computational Sciences, recently polled scientific teams that had large allocations at the center in 2007, asking them to identify computational science requirements for future exascale systems (capable of an exaflop, or 10^18 floating point operations per second). These requirements are necessarily speculative, since an exascale system will not be realized until the 2015-2020 timeframe, and are expressed where possible relative to a recent petascale requirements analysis of similar science applications [1]. Our initial findings, which beg further data collection, validation, and analysis, did in fact align with many of our expectations and existing petascale requirements, yet they also contained some surprises, complete with new challenges and opportunities. First and foremost, the breadth and depth of science prospects and benefits on an exascale computing system are striking. Without a doubt, they justify a large investment, even with its inherent risks. The possibilities for return on investment (by any measure) are too large to let us ignore this opportunity. The software opportunities and challenges are enormous. In fact, as one notable computational scientist put it, the scale of questions being asked at the exascale is tremendous and the hardware has gotten way ahead of the software. We are in grave danger of failing because of a software crisis unless concerted investments and coordinating activities are undertaken to reduce and close this hardware-software gap over the next decade. Key to success will be a rigorous requirement for natural mapping of algorithms to hardware in a way that complements (rather than competes with) compilers and runtime systems. The level of abstraction must be raised, and more attention must be paid to functionalities and capabilities that incorporate intent into data structures, are aware of memory hierarchy, possess fault tolerance, exploit asynchronism, and are power-consumption aware. On the other hand, we must also provide application scientists with the ability to develop software without having to become experts in the computer science components. Numerical algorithms are scattered broadly across science domains, with no one particular algorithm being ubiquitous and no one algorithm going unused. Structured grids and dense linear algebra continue to dominate, but other algorithm categories will become more common. A significant increase is projected for Monte Carlo algorithms, unstructured grids, sparse linear algebra, and particle methods, and a relative decrease foreseen in fast Fourier transforms. These projections reflect the expectation of much higher architecture concurrency and the resulting need for very high scalability. The new algorithm categories that application scientists expect to be increasingly important in the next decade include adaptive mesh refinement, implicit nonlinear systems, data assimilation, agent-based methods, parameter continuation, and optimization. The attributes of leadership computing systems expected to increase most in priority over the next decade are (in order of importance) interconnect bandwidth, memory bandwidth, mean time to interrupt, memory latency, and interconnect latency. The attributes expected to decrease most in relative priority are disk latency, archival storage capacity, disk bandwidth, wide area network bandwidth, and local storage capacity.
These choices by application developers reflect the expected needs of applications or the expected reality of available hardware. One interpretation is that the increasing priorities reflect the desire to increase computational efficiency to take advantage of increasing peak flops [floating point operations per second], while the decreasing priorities reflect the expectation that computational efficiency will not increase. Per-core requirements appear to be relatively static, while aggregate requirements will grow with the system. This projection is consistent with a relatively small increase in performance per core with a dramatic increase in the number of cores. Leadership system software must face and overcome issues that will undoubtedly be exacerbated at the exascale. The operating system (OS) must be as unobtrusive as possible and possess more stability, reliability, and fault tolerance during application execution. As applications will be more likely at the exascale to experience loss of resources during an execution, the OS must mitigate such a loss with a range of responses. New fault tolerance paradigms must be developed and integrated into applications. Just as application input and output must not be an afterthought in hardware design, job management, too, must not be an afterthought in system software design. Efficient scheduling of those resources will be a major obstacle faced by leadership computing centers at the exascale...
Real-time synchronized multiple-sensor IR/EO scene generation utilizing the SGI Onyx2
NASA Astrophysics Data System (ADS)
Makar, Robert J.; O'Toole, Brian E.
1998-07-01
An approach to utilize the symmetric multiprocessing environment of the Silicon Graphics Inc. (SGI) Onyx2 has been developed to support the generation of IR/EO scenes in real-time. This development, supported by the Naval Air Warfare Center Aircraft Division (NAWC/AD), focuses on high frame rate hardware-in-the-loop testing of multiple sensor avionics systems. In the past, real-time IR/EO scene generators have been developed as custom architectures that were often expensive and difficult to maintain. Previous COTS scene generation systems, designed and optimized for visual simulation, could not be adapted for accurate IR/EO sensor stimulation. The new Onyx2 connection mesh architecture made it possible to develop a more economical system while maintaining the fidelity needed to stimulate actual sensors. An SGI based Real-time IR/EO Scene Simulator (RISS) system was developed to utilize the Onyx2's fast multiprocessing hardware to perform real-time IR/EO scene radiance calculations. During real-time scene simulation, the multiprocessors are used to update polygon vertex locations and compute radiometrically accurate floating point radiance values. The output of this process can be utilized to drive a variety of scene rendering engines. Recent advancements in COTS graphics systems, such as the Silicon Graphics InfiniteReality, make a total COTS solution possible for some classes of sensors. This paper will discuss the critical technologies that apply to infrared scene generation and hardware-in-the-loop testing using SGI compatible hardware. Specifically, the application of RISS high-fidelity real-time radiance algorithms on the SGI Onyx2's multiprocessing hardware will be discussed. Also, issues relating to external real-time control of multiple synchronized scene generation channels will be addressed.
Robust Fuzzy Controllers Using FPGAs
NASA Technical Reports Server (NTRS)
Monroe, Gene S., Jr.
2007-01-01
Electro-mechanical device controllers typically come in one of three forms: proportional (P), Proportional Derivative (PD), and Proportional Integral Derivative (PID). Two methods of control are discussed in this paper: (1) the classical technique that requires an in-depth mathematical use of poles and zeros, and (2) the fuzzy logic (FL) technique that is similar to the way humans think and make decisions. FL controllers are used in multiple industries; examples include control engineering, computer vision, pattern recognition, statistics, and data analysis. Presented is a study on the development of a PD motor controller written in very high speed hardware description language (VHDL) and implemented in FL. Four distinct abstractions compose the FL controller: the fuzzifier, the rule-base, the fuzzy inference system (FIS), and the defuzzifier. FL is similar to, but different from, Boolean logic: the output value may be equal to 0 or 1, but it can also take any value between them. This controller is unique because of its VHDL implementation, which uses integer mathematics. To compensate for VHDL's inability to synthesize floating point numbers, a scale factor equal to 10^(N/4) is utilized, where N is equal to the data word size. The scaling factor shifts the decimal digits to the left of the decimal point for increased precision. PD controllers are ideal for use with servo motors, where position control is effective. This paper discusses control methods for motion-base platforms where a constant velocity equivalent to a spectral resolution of 0.25 cm^-1 is required; however, the control capability of this controller extends to various other platforms.
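The integer-scaling trick is straightforward to show in software form. The C sketch below evaluates a PD control law entirely in scaled-integer arithmetic with a decimal scale factor in the spirit of 10^(N/4); the gains, setpoint, word size, and samples are illustrative assumptions, and no FPGA or fuzzy inference is involved.

```c
/* PD control law evaluated in scaled-integer arithmetic: gains and signals
 * are multiplied by a decimal scale factor (10^4 here, mirroring the
 * 10^(N/4) idea for a 16-bit style word) so that only integer operations
 * are needed. Gains, setpoint, and samples are illustrative only. */
#include <stdio.h>
#include <stdint.h>

#define SCALE 10000L                              /* 10^(N/4) with N = 16 */

int main(void) {
    const int64_t kp = (int64_t)(1.50 * SCALE);   /* proportional gain, scaled */
    const int64_t kd = (int64_t)(0.02 * SCALE);   /* derivative gain, scaled */
    const int64_t setpoint = 100 * SCALE;         /* target position, scaled */
    const int64_t dt_inv = 100;                   /* 1 / (0.01 s), exact integer */

    int64_t measured[5] = {0, 20 * SCALE, 50 * SCALE, 80 * SCALE, 95 * SCALE};
    int64_t prev_err = setpoint - measured[0];

    for (int k = 0; k < 5; k++) {
        int64_t err = setpoint - measured[k];
        /* u = kp*err + kd*d(err)/dt; each product carries SCALE^2, so one
           SCALE is divided back out to keep u at a single SCALE factor */
        int64_t u = (kp * err) / SCALE + (kd * (err - prev_err) * dt_inv) / SCALE;
        prev_err = err;
        printf("k=%d  err=%6.2f  u=%8.2f\n", k, (double)err / SCALE, (double)u / SCALE);
    }
    return 0;
}
```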
NASA Astrophysics Data System (ADS)
Zasso, A.; Argentini, T.; Bayati, I.; Belloli, M.; Rocchi, D.
2017-12-01
The super long fjord crossings in E39 Norwegian project pose new challenges to long span bridge design and construction technology. Proposed solutions should consider the adoption of bridge deck with super long spans or floating solutions for at least one of the towers, due to the relevant fjord depth. At the same time, the exposed fjord environment, possibly facing the open ocean, calls for higher aerodynamic stability performances. In relation to this scenario, the present paper addresses two topics: 1) the aerodynamic advantages of multi-box deck sections in terms of aeroelastic stability, and 2) an experimental setup in a wind tunnel able to simulate the aeroelastic bridge response including the wave forcing on the floating.
On the design of a radix-10 online floating-point multiplier
NASA Astrophysics Data System (ADS)
McIlhenny, Robert D.; Ercegovac, Milos D.
2009-08-01
This paper describes an approach to design and implement a radix-10 online floating-point multiplier. An online approach is considered because it offers computational flexibility not available with conventional arithmetic. The design was coded in VHDL and compiled, synthesized, and mapped onto a Virtex 5 FPGA to measure cost in terms of LUTs (look-up-tables) as well as the cycle time and total latency. The routing delay which was not optimized is the major component in the cycle time. For a rough estimate of the cost/latency characteristics, our design was compared to a standard radix-2 floating-point multiplier of equivalent precision. The results demonstrate that even an unoptimized radix-10 online design is an attractive implementation alternative for FPGA floating-point multiplication.
Enabling Co-Design of Multi-Layer Exascale Storage Architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carothers, Christopher
Growing demands for computing power in applications such as energy production, climate analysis, computational chemistry, and bioinformatics have propelled computing systems toward the exascale: systems with 10^18 floating-point operations per second. These systems, to be designed and constructed over the next decade, will create unprecedented challenges in component counts, power consumption, resource limitations, and system complexity. Data storage and access are an increasingly important and complex component in extreme-scale computing systems, and significant design work is needed to develop successful storage hardware and software architectures at exascale. Co-design of these systems will be necessary to find the best possible design points for exascale systems. The goal of this work has been to enable the exploration and co-design of exascale storage systems by providing a detailed, accurate, and highly parallel simulation of exascale storage and the surrounding environment. Specifically, this simulation has (1) portrayed realistic application checkpointing and analysis workloads, (2) captured the complexity, scale, and multilayer nature of exascale storage hardware and software, and (3) executed in a timeframe that enables "what if" exploration of design concepts. We developed models of the major hardware and software components in an exascale storage system, as well as the application I/O workloads that drive them. We used our simulation system to investigate critical questions in reliability and concurrency at exascale, helping guide the design of future exascale hardware and software architectures. Additionally, we provided this system to interested vendors and researchers so that others can explore the design space. We validated the capabilities of our simulation environment by configuring the simulation to represent the Argonne Leadership Computing Facility Blue Gene/Q system and comparing simulation results for application I/O patterns to the results of executions of these I/O kernels on the actual system.
A Flexible VHDL Floating Point Module for Control Algorithm Implementation in Space Applications
NASA Astrophysics Data System (ADS)
Padierna, A.; Nicoleau, C.; Sanchez, J.; Hidalgo, I.; Elvira, S.
2012-08-01
The implementation of control loops for space applications is an area with great potential. However, the characteristics of this kind of system, such as its wide dynamic range of numeric values, make the use of fixed-point algorithms inadequate. Because the generic chips available for the treatment of floating point data are, in general, not qualified to operate in space environments, and because using an IP module in an FPGA/ASIC qualified for space is not viable due to the low number of logic cells available in this type of device, it is necessary to find a viable alternative. For these reasons, a VHDL Floating Point Module is presented in this paper. This proposal allows floating point algorithms to be designed and executed with acceptable occupancy in FPGAs/ASICs qualified for space environments.
Data reduction programs for a laser radar system
NASA Technical Reports Server (NTRS)
Badavi, F. F.; Copeland, G. E.
1984-01-01
The listing and description of software routines which were used to analyze the analog data obtained from the LIDAR system are given. All routines are written in FORTRAN IV on an HP-1000/F minicomputer which serves as the heart of the data acquisition system for the LIDAR program. This particular system has 128 kilobytes of high-speed memory and is equipped with a Vector Instruction Set (VIS) firmware package, which is used in all the routines to handle quick execution of different long loops. The system handles floating point arithmetic in hardware in order to enhance the speed of execution. This computer is a 2177 C/F series version of the HP-1000 RTE-IVB data acquisition computer system, which is designed for real-time data capture/analysis and a disk/tape mass storage environment.
40 CFR 426.50 - Applicability; description of the float glass manufacturing subcategory.
Code of Federal Regulations, 2011 CFR
2011-07-01
... float glass manufacturing subcategory. 426.50 Section 426.50 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS GLASS MANUFACTURING POINT SOURCE CATEGORY Float Glass Manufacturing Subcategory § 426.50 Applicability; description of the float glass...
40 CFR 426.50 - Applicability; description of the float glass manufacturing subcategory.
Code of Federal Regulations, 2010 CFR
2010-07-01
... float glass manufacturing subcategory. 426.50 Section 426.50 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS GLASS MANUFACTURING POINT SOURCE CATEGORY Float Glass Manufacturing Subcategory § 426.50 Applicability; description of the float glass...
2010-04-08
S131-E-008357 (9 April 2010) --- NASA astronaut Dorothy Metcalf-Lindenburger, STS-131 mission specialist, finds floating room hard to come by inside the multi-purpose logistics module Leonardo, which is filled with supplies and hardware for the International Space Station, to which it is temporarily docked.
A Very High Order, Adaptable MESA Implementation for Aeroacoustic Computations
NASA Technical Reports Server (NTRS)
Dydson, Roger W.; Goodrich, John W.
2000-01-01
Since computational efficiency and wave resolution scale with accuracy, the ideal would be infinitely high accuracy for problems with widely varying wavelength scales. Currently, many of the computational aeroacoustics methods are limited to 4th order accurate Runge-Kutta methods in time, which limits their resolution and efficiency. However, a new procedure for implementing the Modified Expansion Solution Approximation (MESA) schemes, based upon Hermitian divided differences, is presented which extends the effective accuracy of the MESA schemes to 57th order in space and time when using 128 bit floating point precision. This new approach has the advantages of reducing round-off error, being easy to program, and being more computationally efficient when compared to previous approaches. Its accuracy is limited only by the floating point hardware. The advantages of this new approach are demonstrated by solving the linearized Euler equations in an open bi-periodic domain. A 500th order MESA scheme can now be created in seconds, making these schemes ideally suited for the next generation of high performance 256-bit (double quadruple) or higher precision computers. This ease of creation makes it possible to adapt the algorithm to the mesh in time instead of its converse: this is ideal for resolving varying wavelength scales which occur in noise generation simulations. Finally, the sources of round-off error which affect the very high order methods are examined, and remedies are provided that effectively increase the accuracy of the MESA schemes while using current computer technology.
Program Correctness, Verification and Testing for Exascale (Corvette)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, Koushik; Iancu, Costin; Demmel, James W
The goal of this project is to provide tools to assess the correctness of parallel programs written using hybrid parallelism. There is a dire lack of both theoretical and engineering know-how in the area of finding bugs in hybrid or large scale parallel programs, which our research aims to change. In the project we have demonstrated novel approaches in several areas: 1. Low overhead automated and precise detection of concurrency bugs at scale. 2. Using low overhead bug detection tools to guide speculative program transformations for performance. 3. Techniques to reduce the concurrency required to reproduce a bug using partial program restart/replay. 4. Techniques to provide reproducible execution of floating point programs. 5. Techniques for tuning the floating point precision used in codes.
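The last item, precision tuning, can be illustrated with a toy experiment: run the same reduction in double and in single precision and compare the results. This is only a hedged sketch of the kind of sensitivity check such tools automate, not the project's actual tooling; the array contents and size below are arbitrary.

```python
import numpy as np

# Toy precision-sensitivity check: does demoting this reduction to float32
# change the answer more than the application can tolerate?
rng = np.random.default_rng(0)
x = rng.random(10_000_000)            # arbitrary positive data

s64 = np.sum(x)                       # double-precision reference
s32 = np.sum(x.astype(np.float32))    # same reduction in single precision

rel_diff = abs(float(s32) - s64) / s64
print(f"relative difference: {rel_diff:.2e}")
```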
Gschwind, Michael K
2013-04-16
Mechanisms for generating and executing programs for a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA) are provided. A computer program product comprising a computer recordable medium having a computer readable program recorded thereon is provided. The computer readable program, when executed on a computing device, causes the computing device to receive one or more instructions and execute the one or more instructions using logic in an execution unit of the computing device. The logic implements a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA), based on data stored in a vector register file of the computing device. The vector register file is configured to store both scalar and floating point values as vectors having a plurality of vector elements.
NASA Technical Reports Server (NTRS)
Logan, Cory; Maida, James; Goldsby, Michael; Clark, Jim; Wu, Liew; Prenger, Henk
1993-01-01
The Space Station Freedom (SSF) Data Management System (DMS) consists of distributed hardware and software which monitor and control the many onboard systems. Virtual environment and off-the-shelf computer technologies can be used at critical points in project development to aid in objectives and requirements development. Geometric models (images) coupled with off-the-shelf hardware and software technologies were used in the Space Station Mockup and Trainer Facility (SSMTF) Crew Operational Assessment Project. Rapid prototyping is shown to be a valuable tool for developing operational procedures and system hardware and software requirements. The project objectives, hardware and software technologies used, data gained, current activities, and future development and training objectives are discussed. The importance of defining prototyping objectives and staying focused while maintaining schedules is discussed, along with project pitfalls.
Recent advances in lossy compression of scientific floating-point data
NASA Astrophysics Data System (ADS)
Lindstrom, P.
2017-12-01
With a continuing exponential trend in supercomputer performance, ever larger data sets are being generated through numerical simulation. Bandwidth and storage capacity are, however, not keeping pace with this increase in data size, causing significant data movement bottlenecks in simulation codes and substantial monetary costs associated with archiving vast volumes of data. Worse yet, ever smaller fractions of data generated can be stored for further analysis, where scientists frequently rely on decimating or averaging large data sets in time and/or space. One way to mitigate these problems is to employ data compression to reduce data volumes. However, lossless compression of floating-point data can achieve only very modest size reductions on the order of 10-50%. We present ZFP and FPZIP, two state-of-the-art lossy compressors for structured floating-point data that routinely achieve one to two orders of magnitude reduction with little to no impact on the accuracy of visualization and quantitative data analysis. We provide examples of the use of such lossy compressors in climate and seismic modeling applications to effectively accelerate I/O and reduce storage requirements. We further discuss how the design decisions behind these and other compressors impact error distributions and other statistical and differential properties, including derived quantities of interest relevant to each science application.
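As a hedged illustration of fixed-accuracy lossy compression of a smooth floating-point field, the sketch below uses the zfpy Python bindings to ZFP; the field, the tolerance, and the assumption that zfpy is installed and exposes compress_numpy/decompress_numpy as documented are all illustrative, and FPZIP is not shown.

```python
import numpy as np
import zfpy  # Python bindings to the ZFP library (assumed installed, e.g. via pip)

# Smooth stand-in for a simulation field; smooth data is where ZFP compresses well.
field = np.fromfunction(
    lambda i, j, k: np.sin(i / 16.0) * np.cos(j / 16.0) * np.sin(k / 16.0),
    (128, 128, 128),
)

buf = zfpy.compress_numpy(field, tolerance=1e-4)   # fixed-accuracy mode
approx = zfpy.decompress_numpy(buf)

print("compression ratio:", field.nbytes / len(buf))
print("max abs error:    ", np.max(np.abs(field - approx)))
```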
Improving energy efficiency in handheld biometric applications
NASA Astrophysics Data System (ADS)
Hoyle, David C.; Gale, John W.; Schultz, Robert C.; Rakvic, Ryan N.; Ives, Robert W.
2012-06-01
With improved smartphone and tablet technology, it is becoming increasingly feasible to implement powerful biometric recognition algorithms on portable devices. Typical iris recognition algorithms, such as Ridge Energy Direction (RED), utilize two-dimensional convolution in their implementation. This paper explores the energy consumption implications of 12 different methods of implementing two-dimensional convolution on a portable device. Typically, convolution is implemented using floating point operations. If a given algorithm implemented integer convolution vice floating point convolution, it could drastically reduce the energy consumed by the processor. The 12 methods compared include 4 major categories: Integer C, Integer Java, Floating Point C, and Floating Point Java. Each major category is further divided into 3 implementations: variable size looped convolution, static size looped convolution, and unrolled looped convolution. All testing was performed using the HTC Thunderbolt with energy measured directly using a Tektronix TDS5104B Digital Phosphor oscilloscope. Results indicate that energy savings as high as 75% are possible by using Integer C versus Floating Point C. Considering the relative proportion of processing time that convolution is responsible for in a typical algorithm, the savings in energy would likely result in significantly greater time between battery charges.
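The arithmetic substitution at the heart of the comparison, replacing floating-point convolution with integer convolution and a single rescale at the end, can be sketched as follows. This is a desktop NumPy/SciPy illustration of the idea only; it says nothing about energy, which the study measured on an HTC Thunderbolt with an oscilloscope, and the frame size and kernel are arbitrary.

```python
import numpy as np
from scipy.signal import convolve2d

frame_f = np.random.rand(480, 640).astype(np.float32)   # arbitrary test frame
frame_i = (frame_f * 255).astype(np.int32)               # integer version of the same frame

kern_f = np.ones((9, 9), dtype=np.float32) / 81.0         # floating-point averaging kernel
kern_i = np.ones((9, 9), dtype=np.int32)                   # scaled integer kernel

out_f = convolve2d(frame_f, kern_f, mode="same")           # floating-point path
out_i = convolve2d(frame_i, kern_i, mode="same") // 81     # integer path, rescaled once at the end
```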
50 CFR 679.94 - Economic data report (EDR) for the Amendment 80 sector.
Code of Federal Regulations, 2010 CFR
2010-10-01
...: NMFS, Alaska Fisheries Science Center, Economic Data Reports, 7600 Sand Point Way NE, F/AKC2, Seattle... Operation Description of code Code NMFS Alaska region ADF&G FCP Catcher/processor Floating catcher processor. FLD Mothership Floating domestic mothership. IFP Stationary Floating Processor Inshore floating...
Hardware Development Process for Human Research Facility Applications
NASA Technical Reports Server (NTRS)
Bauer, Liz
2000-01-01
The simple goal of the Human Research Facility (HRF) is to conduct human research experiments on International Space Station (ISS) astronauts during long-duration missions. This is accomplished by providing integration and operation of the necessary hardware and software capabilities. A typical hardware development flow consists of five stages: functional inputs and requirements definition, market research, design life cycle through hardware delivery, crew training, and mission support. The purpose of this presentation is to guide the audience through the early hardware development process: requirement definition through selecting a development path. Specific HRF equipment is used to illustrate the hardware development paths. The source of hardware requirements is the science community and the HRF program. The HRF Science Working Group, consisting of scientists from various medical disciplines, defined a basic set of equipment with functional requirements. This established the performance requirements of the hardware. HRF program requirements focus on making the hardware safe and operational in a space environment. This includes structural, thermal, human factors, and material requirements. Science and HRF program requirements are defined in a hardware requirements document which includes verification methods. Once the hardware is fabricated, requirements are verified by inspection, test, analysis, or demonstration. All data is compiled and reviewed to certify the hardware for flight. Obviously, the basis for all hardware development activities is requirement definition. Full and complete requirement definition is ideal prior to initiating the hardware development. However, this is generally not the case, but the hardware team typically has functional inputs as a guide. The first step is for engineers to conduct market research based on the functional inputs provided by scientists. Commercially available products are evaluated against the science requirements as well as the modifications needed to meet program requirements. Options are consolidated and the hardware development team reaches a hardware development decision point. Within budget and schedule constraints, the team must decide whether to complete the hardware as an in-house development, a subcontract with a vendor, or a commercial-off-the-shelf (COTS) procurement. An in-house development indicates NASA personnel or a contractor builds the hardware at a NASA site. A subcontract development is completed off-site by a commercial company. A COTS item is a vendor product available by ordering a specific part number. The team evaluates the pros and cons of each development path. For example, in-house developments utilize existing corporate knowledge regarding how to build equipment for use in space. However, technical expertise would be required to fully understand the medical equipment capabilities, such as for an ultrasound system. It may require additional time and funding to gain the expertise that commercially exists. The major benefit of subcontracting a hardware development is that the product is delivered as an end item and commercial expertise is utilized. On the other hand, NASA has limited control over schedule delays. The final option of COTS or modified COTS equipment is a compromise between in-house and subcontracts. A vendor product may exist that meets all functional requirements but requires in-house modifications for successful operation in a space environment.
The HRF utilizes equipment developed using all of the paths described: in-house, subcontract, and modified COTS.
50 CFR 86.13 - What is boating infrastructure?
Code of Federal Regulations, 2010 CFR
2010-10-01
..., currents, etc., that provide a temporary safe anchorage point or harbor of refuge during storms); (f) Floating docks and fixed piers; (g) Floating and fixed breakwaters; (h) Dinghy docks (floating or fixed...
NASA Astrophysics Data System (ADS)
Lovejoy, McKenna R.; Wickert, Mark A.
2017-05-01
A known problem with infrared imaging devices is their non-uniformity. This non-uniformity is the result of dark current and amplifier mismatch as well as the individual photo response of the detectors. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration techniques use linear or piecewise-linear models to approximate the non-uniform gain and offset characteristics as well as the nonlinear response. Piecewise-linear models perform better than the one- and two-point models, but in many cases require storing an unmanageable number of correction coefficients. Most nonlinear NUC algorithms use a second-order polynomial to improve performance and allow for a minimal number of stored coefficients. However, advances in technology now make higher-order polynomial NUC algorithms feasible. This study comprehensively tests higher-order polynomial NUC algorithms targeted at short-wave infrared (SWIR) imagers. Using data collected from actual SWIR cameras, the nonlinear techniques and corresponding performance metrics are compared with current linear methods, including the standard one- and two-point algorithms. Machine learning, including principal component analysis, is explored for identifying and replacing bad pixels. The data sets are analyzed and the impact of hardware implementation is discussed. Average floating point results show 30% less non-uniformity in post-corrected data when using a third-order polynomial correction algorithm rather than a second-order algorithm. To maximize overall performance, a trade-off analysis on polynomial order and coefficient precision is performed. Comprehensive testing, across multiple data sets, provides next-generation model validation and performance benchmarks for higher-order polynomial NUC methods.
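A minimal sketch of the per-pixel higher-order polynomial correction idea is shown below, assuming a set of flat-field calibration frames at known reference radiances; the toy response model, array sizes, and noise level are invented for illustration and are not the study's data or exact procedure.

```python
import numpy as np

# Simulated calibration: flat-field exposures of an H x W array at known radiances L.
H, W, nframes = 8, 8, 12
L = np.linspace(0.1, 1.0, nframes)                                   # reference radiances
raw = (0.8 * L[:, None] + 0.05 * L[:, None] ** 2                      # toy nonlinear response
       + 0.01 * np.random.randn(nframes, H * W))                      # per-pixel noise, pixels flattened

# Fit a third-order polynomial per pixel mapping raw response -> radiance.
coeffs = np.stack([np.polyfit(raw[:, p], L, 3) for p in range(H * W)])   # shape (H*W, 4)

def correct(frame):
    """Apply each pixel's own polynomial to a raw frame of shape (H, W)."""
    flat = frame.reshape(-1)
    out = np.array([np.polyval(coeffs[p], flat[p]) for p in range(flat.size)])
    return out.reshape(H, W)
```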
EOS-AM precision pointing verification
NASA Technical Reports Server (NTRS)
Throckmorton, A.; Braknis, E.; Bolek, J.
1993-01-01
The Earth Observing System (EOS) AM mission requires tight pointing knowledge to meet scientific objectives, in a spacecraft with low-frequency flexible appendage modes. As the spacecraft controller reacts to various disturbance sources, and as the inherent appendage modes are excited by this control action, verification of precision pointing knowledge becomes particularly challenging for the EOS-AM mission. As presently conceived, this verification includes a complementary set of multi-disciplinary analyses, hardware tests, and real-time computer-in-the-loop simulations, followed by collection and analysis of hardware test and flight data, supported by a comprehensive database repository for validated program values.
Microsoft Producer: A Software Tool for Creating Multimedia PowerPoint[R] Presentations
ERIC Educational Resources Information Center
Leffingwell, Thad R.; Thomas, David G.; Elliott, William H.
2007-01-01
Microsoft[R] Producer[R] is a powerful yet user-friendly PowerPoint companion tool for creating on-demand multimedia presentations. Instructors can easily distribute these presentations via compact disc or streaming media over the Internet. We describe the features of the software, system requirements, and other required hardware. We also describe…
Crawford, D C; Bell, D S; Bamber, J C
1993-01-01
A systematic method to compensate for nonlinear amplification of individual ultrasound B-scanners has been investigated in order to optimise performance of an adaptive speckle reduction (ASR) filter for a wide range of clinical ultrasonic imaging equipment. Three potential methods have been investigated: (1) a method involving an appropriate selection of the speckle recognition feature was successful when the scanner signal processing executes simple logarithmic compressions; (2) an inverse transform (decompression) of the B-mode image was effective in correcting for the measured characteristics of image data compression when the algorithm was implemented in full floating point arithmetic; (3) characterising the behaviour of the statistical speckle recognition feature under conditions of speckle noise was found to be the method of choice for implementation of the adaptive speckle reduction algorithm in limited precision integer arithmetic. In this example, the statistical features of variance and mean were investigated. The third method may be implemented on commercially available fast image processing hardware and is also better suited for transfer into dedicated hardware to facilitate real-time adaptive speckle reduction. A systematic method is described for obtaining ASR calibration data from B-mode images of a speckle producing phantom.
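For orientation, a generic local-statistics (Lee-type) adaptive speckle filter built from the local mean and variance is sketched below; it is not the authors' ASR algorithm or their calibration procedure, and the window size and noise variance are placeholders.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, win=7, noise_var=0.05):
    """Local-statistics adaptive speckle filter: smooth where speckle dominates,
    preserve detail where local variance exceeds the assumed speckle noise level."""
    mean = uniform_filter(img, win)
    mean_sq = uniform_filter(img * img, win)
    var = np.maximum(mean_sq - mean * mean, 0.0)
    gain = var / (var + noise_var)          # ~0 in uniform speckle, ~1 at structure
    return mean + gain * (img - mean)

filtered = lee_filter(np.random.rand(256, 256).astype(np.float32))
```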
ERIC Educational Resources Information Center
Baehr, Marie
1994-01-01
Provides a problem where students are asked to find the point at which a soda can floating in some liquid changes its equilibrium between stable and unstable as the soda is removed from the can. Requires use of Newton's first law, center of mass, Archimedes' principle, stable and unstable equilibrium, and buoyant force position. (MVL)
Rapid Design of Gravity Assist Trajectories
NASA Technical Reports Server (NTRS)
Carrico, J.; Hooper, H. L.; Roszman, L.; Gramling, C.
1991-01-01
Several International Solar Terrestrial Physics (ISTP) missions require the design of complex gravity-assisted trajectories in order to investigate the interaction of the solar wind with the Earth's magnetic field. These trajectories present a formidable trajectory design and optimization problem. The philosophy and methodology that enable an analyst to design and analyze such trajectories are discussed. The so-called 'floating end point' targeting, which allows the inherently nonlinear multiple-body problem to be solved with simple linear techniques, is described. The combination of floating end point targeting and analytic approximations with a Newton-method targeter to achieve trajectory design goals quickly, even for the very sensitive double lunar swingby trajectories used by the ISTP missions, is demonstrated. A multiconic orbit integration scheme allows fast and accurate orbit propagation. A prototype software tool, Swingby, built for trajectory design and launch window analysis, is described.
Space shuttle low cost/risk avionics study
NASA Technical Reports Server (NTRS)
1971-01-01
All work breakdown structure elements containing any avionics related effort were examined for pricing the life cycle costs. The analytical, testing, and integration efforts are included for the basic onboard avionics and electrical power systems. The design and procurement of special test equipment and maintenance and repair equipment are considered. Program management associated with these efforts is described. Flight test spares and labor and materials associated with the operations and maintenance of the avionics systems throughout the horizontal flight test are examined. It was determined that cost savings can be achieved by using existing hardware, maximizing orbiter-booster commonality, specifying new equipment to MIL quality standards, basing redundancy on cost effective analysis, minimizing software complexity and reducing cross strapping and computer-managed functions, utilizing compilers and floating point computers, and evolving the design as dictated by the horizontal flight test schedules.
Performance Analysis of GYRO: A Tool Evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Worley, P.; Roth, P.; Candy, J.
2005-06-26
The performance of the Eulerian gyrokinetic-Maxwell solver code GYRO is analyzed on five high performance computing systems. First, a manual approach is taken, using custom scripts to analyze the output of embedded wall clock timers, floating point operation counts collected using hardware performance counters, and traces of user and communication events collected using the profiling interface to Message Passing Interface (MPI) libraries. Parts of the analysis are then repeated or extended using a number of sophisticated performance analysis tools: IPM, KOJAK, SvPablo, TAU, and the PMaC modeling tool suite. The paper briefly discusses what has been discovered via this manual analysis process, what performance analyses are inconvenient or infeasible to attempt manually, and to what extent the tools show promise in accelerating or significantly extending the manual performance analyses.
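The "manual" style of analysis, wall-clock timers combined with nominal floating-point operation counts, can be mimicked in a few lines; this hedged sketch uses a dense matrix multiply as a stand-in workload and a 2n^3 flop count, and has nothing to do with GYRO itself or with hardware performance counters.

```python
import time
import numpy as np

n = 2000
a = np.random.rand(n, n)
b = np.random.rand(n, n)

t0 = time.perf_counter()
c = a @ b                               # stand-in compute kernel
dt = time.perf_counter() - t0

flops = 2.0 * n ** 3                    # nominal flop count for a dense matmul
print(f"{flops / dt / 1e9:.1f} GFLOP/s over {dt:.3f} s of wall-clock time")
```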
33 CFR 147.815 - ExxonMobil Hoover Floating OCS Facility safety zone.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false ExxonMobil Hoover Floating OCS... Floating OCS Facility safety zone. (a) Description. The ExxonMobil Hoover Floating OCS Facility, Alaminos... (1640.4 feet) from each point on the structure's outer edge is a safety zone. (b) Regulation. No vessel...
Yu, Hui; Qi, Dan; Li, Heng-da; Xu, Ke-xin; Yuan, Wei-jie
2012-03-01
Weak signal, low instrument signal-to-noise ratio, continuous variation of the human physiological environment, and interference from other components in blood make it difficult to extract blood glucose information from the near-infrared spectrum in noninvasive blood glucose measurement. The floating-reference method, which analyzes the effect of glucose concentration variation on the absorption and scattering coefficients, acquires spectra at a reference point, where the light-intensity variations from absorption and scattering cancel each other, and at a measurement point, where they are largest. By using the spectrum from the reference point as a reference, the floating-reference method can reduce the interference from variations in the physiological environment and the experimental circumstances. In the present paper, the effectiveness of the floating-reference method in improving prediction precision and stability was assessed through application experiments. A comparison was made between models whose data were processed with and without the floating-reference method. The results showed that the root mean square error of prediction (RMSEP) decreased by 34.7% at most. The floating-reference method can reduce the influence of changes in the samples' state, instrument noise, and drift, and effectively improve the models' prediction precision and stability.
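The reported figure of merit is straightforward to compute; the sketch below shows RMSEP and the relative improvement between two models, with invented prediction and reference vectors used purely for illustration.

```python
import numpy as np

def rmsep(y_pred, y_ref):
    """Root mean square error of prediction."""
    y_pred, y_ref = np.asarray(y_pred, float), np.asarray(y_ref, float)
    return float(np.sqrt(np.mean((y_pred - y_ref) ** 2)))

# Illustrative numbers only: reference glucose values and two sets of predictions.
y_ref = np.array([4.8, 5.6, 6.1, 7.0, 8.2])
y_without = np.array([5.3, 5.0, 6.9, 6.3, 8.9])   # model without the floating reference
y_with = np.array([5.1, 5.4, 6.5, 6.7, 8.5])      # model with the floating reference

improvement = 1.0 - rmsep(y_with, y_ref) / rmsep(y_without, y_ref)
print(f"relative RMSEP reduction: {improvement:.1%}")
```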
Single crystal growth of 67%BiFeO 3 -33%BaTiO 3 solution by the floating zone method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rong, Y.; Zheng, H.; Krogstad, M. J.
The growth conditions and the resultant grain morphologies and phase purities from floating-zone growth of 67%BiFeO3-33%BaTiO3 (BF-33BT) single crystals are reported. We find two formidable challenges for the growth. First, a low-melting point constituent leads to a pre-melt zone in the feed-rod that adversely affects growth stability. Second, constitutional super-cooling (CSC), which was found to lead to dendritic and columnar features in the grain morphology, necessitates slow traveling rates during growth. Both challenges were addressed by modifications to the floating-zone furnace that steepened the temperature gradient at the melt-solid interfaces. Slow growth was also required to counter the effects of CSC. Single crystals with typical dimensions of hundreds of microns have been obtained which possess high quality and are suitable for detailed structural studies.
Single crystal growth of 67%BiFeO3-33%BaTiO3 solution by the floating zone method
NASA Astrophysics Data System (ADS)
Rong, Y.; Zheng, H.; Krogstad, M. J.; Mitchell, J. F.; Phelan, D.
2018-01-01
The growth conditions and the resultant grain morphologies and phase purities from floating-zone growth of 67%BiFeO3-33%BaTiO3 (BF-33BT) single crystals are reported. We find two formidable challenges for the growth. First, a low-melting point constituent leads to a pre-melt zone in the feed-rod that adversely affects growth stability. Second, constitutional super-cooling (CSC), which was found to lead to dendritic and columnar features in the grain morphology, necessitates slow traveling rates during growth. Both challenges were addressed by modifications to the floating-zone furnace that steepened the temperature gradient at the melt-solid interfaces. Slow growth was also required to counter the effects of CSC. Single crystals with typical dimensions of hundreds of microns have been obtained which possess high quality and are suitable for detailed structural studies.
2008-11-26
S126-E-011534 (26 Nov. 2008) --- Astronaut Eric Boe, STS-126 pilot, floats near the hatchway of the multi-purpose logistics module Leonardo, temporarily docked with the International Space Station to aid in the transfer of supplies and hardware. Leonardo, like Boe and the rest of the Endeavour crew, will return to Earth over the coming weekend.
Analysis of Static Spacecraft Floating Potential at Low Earth Orbit (LEO)
NASA Technical Reports Server (NTRS)
Herr, Joel L.; Hwang, K. S.; Wu, S. T.
1995-01-01
Spacecraft floating potential is the charge on the external surfaces of an orbiting spacecraft relative to the surrounding space. Charging is caused by unequal negative and positive currents to spacecraft surfaces. The charging process continues until the accelerated particles can be collected rapidly enough to balance the currents, at which point the spacecraft has reached its equilibrium, or floating, potential. In low-inclination, low Earth orbit (LEO), the collection of positive ions and negative electrons in a particular direction is typically not equal. The level of charging required for equilibrium to be established is influenced by the characteristics of the ambient plasma environment, by the spacecraft motion, and by the geometry of the spacecraft. Using kinetic theory, a statistical approach for studying the interaction is developed. The approach used to study the spacecraft floating potential depends on which phenomena are being applied and on the properties of the plasma, especially the density and temperature. The results from the kinetic theory derivation are applied to determine the charging level and the electric potential distribution at an infinite flat plate perpendicular to a streaming plasma using a finite-difference scheme.
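The current-balance idea can be illustrated by equating an orbit-limited electron thermal current with an ion ram current and solving for the surface potential; the sketch below uses standard textbook expressions and assumed LEO plasma parameters, not the paper's kinetic-theory treatment or its finite-difference solution.

```python
import numpy as np
from scipy.optimize import brentq

e, me, k = 1.602e-19, 9.109e-31, 1.381e-23   # charge, electron mass, Boltzmann constant

n_e = 1.0e11      # plasma density, m^-3 (assumed typical LEO value)
T_e = 1500.0      # electron temperature, K (assumed)
v_orb = 7500.0    # orbital (ram) speed, m/s

def net_current_density(V):
    # Retarded electron thermal flux (for V < 0) minus ion ram flux, per unit area.
    j_e = e * n_e * np.sqrt(k * T_e / (2.0 * np.pi * me)) * np.exp(e * V / (k * T_e))
    j_i = e * n_e * v_orb
    return j_e - j_i

V_float = brentq(net_current_density, -5.0, 0.0)   # potential at which the currents balance
print(f"floating potential ~ {V_float:.2f} V")      # a few tenths of a volt negative
```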
ERIC Educational Resources Information Center
Richardson, William H., Jr.
2006-01-01
Computational precision is sometimes given short shrift in a first programming course. Treating this topic requires discussing integer and floating-point number representations and inaccuracies that may result from their use. An example of a moderately simple programming problem from elementary statistics was examined. It forced students to…
Code of Federal Regulations, 2010 CFR
2010-07-01
...(h)(2); or (b) Equip with a floating roof that meets the equipment specifications of § 60.693(a)(1)(i... and other points of access to a conveyance system. c A fixed roof may have openings necessary for...
40 CFR 65.44 - External floating roof (EFR).
Code of Federal Regulations, 2010 CFR
2010-07-01
... design requirements. The owner or operator who elects to control storage vessel regulated material emissions by using an external floating roof shall comply with the design requirements listed in paragraphs (a)(1) through (3) of this section. (1) The external floating roof shall be designed to float on the...
40 CFR 65.44 - External floating roof (EFR).
Code of Federal Regulations, 2011 CFR
2011-07-01
... design requirements. The owner or operator who elects to control storage vessel regulated material emissions by using an external floating roof shall comply with the design requirements listed in paragraphs (a)(1) through (3) of this section. (1) The external floating roof shall be designed to float on the...
Fast Image Texture Classification Using Decision Trees
NASA Technical Reports Server (NTRS)
Thompson, David R.
2011-01-01
Texture analysis would permit improved autonomous, onboard science data interpretation for adaptive navigation, sampling, and downlink decisions. These analyses would assist with terrain analysis and instrument placement in both macroscopic and microscopic image data products. Unfortunately, most state-of-the-art texture analysis demands computationally expensive convolutions of filters involving many floating-point operations. This makes them infeasible for radiation- hardened computers and spaceflight hardware. A new method approximates traditional texture classification of each image pixel with a fast decision-tree classifier. The classifier uses image features derived from simple filtering operations involving integer arithmetic. The texture analysis method is therefore amenable to implementation on FPGA (field-programmable gate array) hardware. Image features based on the "integral image" transform produce descriptive and efficient texture descriptors. Training the decision tree on a set of training data yields a classification scheme that produces reasonable approximations of optimal "texton" analysis at a fraction of the computational cost. A decision-tree learning algorithm employing the traditional k-means criterion of inter-cluster variance is used to learn tree structure from training data. The result is an efficient and accurate summary of surface morphology in images. This work is an evolutionary advance that unites several previous algorithms (k-means clustering, integral images, decision trees) and applies them to a new problem domain (morphology analysis for autonomous science during remote exploration). Advantages include order-of-magnitude improvements in runtime, feasibility for FPGA hardware, and significant improvements in texture classification accuracy.
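The "integral image" transform mentioned above reduces each box-filter response to four table lookups and adds, which is what makes the features cheap and FPGA-friendly; the sketch below shows the transform and one difference-of-boxes feature of the general kind a decision tree might threshold, with arbitrary window coordinates.

```python
import numpy as np

def integral_image(img):
    """Summed-area table padded with a zero row/column so box sums need 4 lookups."""
    return np.pad(img, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)

def box_mean(ii, r0, c0, r1, c1):
    """Mean of img[r0:r1, c0:c1] in O(1) via the integral image."""
    s = ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]
    return s / ((r1 - r0) * (c1 - c0))

img = np.random.rand(64, 64)
ii = integral_image(img)
# One candidate texture feature: the difference of two adjacent box means.
feature = box_mean(ii, 10, 10, 20, 20) - box_mean(ii, 20, 10, 30, 20)
```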
Real object-based 360-degree integral-floating display using multiple depth camera
NASA Astrophysics Data System (ADS)
Erdenebat, Munkh-Uchral; Dashdavaa, Erkhembaatar; Kwon, Ki-Chul; Wu, Hui-Ying; Yoo, Kwan-Hee; Kim, Young-Seok; Kim, Nam
2015-03-01
A novel 360-degree integral-floating display based on a real object is proposed. The general procedure of the display system is similar to that of conventional 360-degree integral-floating displays. Unlike previously presented 360-degree displays, the proposed system displays a 3D image generated from a real object in the 360-degree viewing zone. In order to display the real object in the 360-degree viewing zone, multiple depth cameras have been utilized to acquire the depth information around the object. Then, 3D point cloud representations of the real object are reconstructed according to the acquired depth information. By using a special point cloud registration method, the multiple virtual 3D point cloud representations captured by each depth camera are combined into a single synthetic 3D point cloud model, and the elemental image arrays are generated for the newly synthesized 3D point cloud model from the given anamorphic optic system's angular step. The theory has been verified experimentally, and the results show that the proposed 360-degree integral-floating display can be an excellent way to display a real object in the 360-degree viewing zone.
Laparoscopic surgery in weightlessness
NASA Technical Reports Server (NTRS)
Campbell, M. R.; Billica, R. D.; Jennings, R.; Johnston, S. 3rd
1996-01-01
BACKGROUND: Performing a surgical procedure in weightlessness has been shown not to be any more difficult than in a 1g environment if the requirements for the restraint of the patient, operator, and surgical hardware are observed. The feasibility of performing a laparoscopic surgical procedure in weightlessness, however, has been questionable. Concerns have included the impaired visualization from the lack of gravitational retraction of the bowel and from floating debris such as blood. METHODS: In this project, laparoscopic surgery was performed on a porcine animal model in the weightlessness of parabolic flight. RESULTS: Visualization was unaffected due to the tethering of the bowel by the elastic mesentery and the strong tendency for debris and blood to adhere to the abdominal wall due to surface tension forces. CONCLUSIONS: There are advantages to performing a laparoscopic instead of an open surgical procedure in a weightless environment. These will become important as the laparoscopic support hardware is miniaturized from its present form, as laparoscopic technology becomes more advanced, and as more surgically capable crew medical officers are present in future long-duration space-exploration missions.
Instabilities caused by floating-point arithmetic quantization.
NASA Technical Reports Server (NTRS)
Phillips, C. L.
1972-01-01
It is shown that an otherwise stable digital control system can be made unstable by signal quantization when the controller operates on floating-point arithmetic. Sufficient conditions of instability are determined, and an example of loss of stability is treated when only one quantizer is operated.
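As a hedged illustration of the underlying phenomenon (not of the paper's sufficient conditions for instability), the sketch below runs a discrete integrator in double precision and in half precision; rounding every operation to the shorter format makes the recursive state drift visibly from the exact value.

```python
import numpy as np

# Discrete integrator y[n] = y[n-1] + u, executed in two floating-point precisions.
u = 0.1
y64 = np.float64(0.0)
y16 = np.float16(0.0)

for _ in range(1000):
    y64 = y64 + np.float64(u)
    y16 = np.float16(y16 + np.float16(u))   # every operation rounded to half precision

print(float(y64), float(y16))   # exact answer is 100.0; the half-precision state drifts noticeably
```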
Earth to Moon Transfer: Direct vs Via Libration Points (L1, L2)
NASA Technical Reports Server (NTRS)
Condon, Gerald L.; Wilson, Samuel W.
2004-01-01
For some three decades, the Apollo-style mission has served as a proven baseline technique for transporting flight crews to the Moon and back with expendable hardware. This approach provides an optimal design for expeditionary missions, emphasizing operational flexibility in terms of safely returning the crew in the event of a hardware failure. However, its application is limited essentially to low-latitude lunar sites, and it leaves much to be desired as a model for exploratory and evolutionary programs that employ reusable space-based hardware. This study compares the performance requirements for a lunar orbit rendezvous mission type with one using the cislunar libration point (L1) as a stopover and staging point for access to arbitrary sites on the lunar surface. For selected constraints and mission objectives, it contrasts the relative uniformity of performance cost when the L1 staging point is used with the wide variation of cost for the Apollo-style lunar orbit rendezvous.
Optimized Latching Control of Floating Point Absorber Wave Energy Converter
NASA Astrophysics Data System (ADS)
Gadodia, Chaitanya; Shandilya, Shubham; Bansal, Hari Om
2018-03-01
There is an increasing demand for energy in today's world. The main energy resources are currently fossil fuels, which will eventually run out, and the emissions produced from them contribute to global warming. For a sustainable future, these fossil fuels should be replaced with renewable and green energy sources. Sea waves are a vast and largely untapped energy resource, and the potential for extracting energy from waves is considerable. To capture this energy, wave energy converters (WECs) are needed, and there is a need to increase the energy output and decrease the cost of existing WECs. This paper presents a method which uses prediction as part of the control scheme to increase the energy efficiency of floating point-absorber WECs. A Kalman filter is used for estimation, coupled with latching control in both regular and irregular sea waves. Modelling and simulation results are also included.
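A scalar Kalman filter of the general kind used for such estimation is sketched below under a random-walk state model; the process and measurement noise values are placeholders, and this is not the paper's wave-excitation estimator or its latching logic.

```python
import numpy as np

def kalman_1d(z, x0=0.0, P0=1.0, q=1e-3, r=0.1):
    """Scalar Kalman filter with a random-walk state model: predict, then update."""
    x, P, est = x0, P0, []
    for zk in z:
        P = P + q                      # predict: state unchanged, uncertainty grows
        K = P / (P + r)                # Kalman gain
        x = x + K * (zk - x)           # update with measurement zk
        P = (1.0 - K) * P
        est.append(x)
    return np.array(est)

t = np.linspace(0.0, 20.0, 400)
measurements = np.sin(0.6 * t) + 0.2 * np.random.randn(t.size)   # noisy wave-like signal
smoothed = kalman_1d(measurements)
```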
Real-time simulation of thermal shadows with EMIT
NASA Astrophysics Data System (ADS)
Klein, Andreas; Oberhofer, Stefan; Schätz, Peter; Nischwitz, Alfred; Obermeier, Paul
2016-05-01
Modern missile systems use infrared imaging for tracking or target detection algorithms. The development and validation processes of these missile systems need high-fidelity simulations capable of stimulating the sensors in real time with infrared image sequences from a synthetic 3D environment. The Extensible Multispectral Image Generation Toolset (EMIT) is a modular software library developed at MBDA Germany for the generation of physics-based infrared images in real time. EMIT is able to render radiance images in full 32-bit floating point precision using state-of-the-art computer graphics cards and advanced shader programs. An important functionality of an infrared image generation toolset is the simulation of thermal shadows, as these may cause matching errors in tracking algorithms. However, for real-time simulations, such as hardware-in-the-loop (HWIL) simulations of infrared seekers, thermal shadows are often neglected or precomputed, as they require a four-dimensional thermal balance calculation (3D geometry over a time history extending up to several hours into the past). In this paper we will show the novel real-time thermal simulation of EMIT. Our thermal simulation is capable of simulating thermal effects in real-time environments, such as thermal shadows resulting from the occlusion of direct and indirect irradiance. We conclude our paper with the practical use of EMIT in a missile HWIL simulation.
Onboard Data Processors for Planetary Ice-Penetrating Sounding Radars
NASA Astrophysics Data System (ADS)
Tan, I. L.; Friesenhahn, R.; Gim, Y.; Wu, X.; Jordan, R.; Wang, C.; Clark, D.; Le, M.; Hand, K. P.; Plaut, J. J.
2011-12-01
Among the many concerns faced by outer planetary missions, science data storage and transmission hold special significance. Such missions must contend with limited onboard storage, brief data downlink windows, and low downlink bandwidths. A potential solution to these issues lies in employing onboard data processors (OBPs) to convert raw data into products that are smaller and closely capture relevant scientific phenomena. In this paper, we present the implementation of two OBP architectures for ice-penetrating sounding radars tasked with exploring Europa and Ganymede. Our first architecture utilizes an unfocused processing algorithm extended from the Mars Advanced Radar for Subsurface and Ionosphere Sounding (MARSIS, Jordan et al. 2009). Compared to downlinking raw data, we are able to reduce data volume by approximately 100 times through OBP usage. To ensure the viability of our approach, we have implemented, simulated, and synthesized this architecture using both VHDL and Matlab models (with fixed-point and floating-point arithmetic) in conjunction with Modelsim. Creation of a VHDL model of our processor is the principal step in transitioning to actual digital hardware, whether in a FPGA (field-programmable gate array) or an ASIC (application-specific integrated circuit), and successful simulation and synthesis strongly indicate feasibility. In addition, we examined the tradeoffs faced in the OBP between fixed-point accuracy, resource consumption, and data product fidelity. Our second architecture is based upon a focused fast back projection (FBP) algorithm that requires a modest amount of computing power and on-board memory while yielding high along-track resolution and improved slope detection capability. We present an overview of the algorithm and details of our implementation, also in VHDL. With the appropriate tradeoffs, the use of OBPs can significantly reduce data downlink requirements without sacrificing data product fidelity. Through the development, simulation, and synthesis of two different OBP architectures, we have proven the feasibility and efficacy of an OBP for planetary ice-penetrating radars.
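The fixed-point versus floating-point trade-off mentioned above can be explored in a few lines by quantizing a sample trace to different fractional word lengths and checking the resulting signal-to-quantization-noise ratio; the sketch below is generic and illustrative, with an invented sinusoidal stand-in for radar data, and is unrelated to the actual VHDL implementations.

```python
import numpy as np

def quantize_fixed(x, frac_bits):
    """Round to a fixed-point grid with the given number of fractional bits."""
    scale = 2.0 ** frac_bits
    return np.round(x * scale) / scale

x = np.sin(2 * np.pi * 37 * np.linspace(0.0, 1.0, 4096))    # stand-in echo trace in [-1, 1]
for fb in (7, 11, 15):
    err = x - quantize_fixed(x, fb)
    sqnr = 10.0 * np.log10(np.mean(x ** 2) / np.mean(err ** 2))
    print(f"{fb:2d} fractional bits: SQNR ~ {sqnr:.1f} dB")
```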
A Self-Adaptive Model-Based Wi-Fi Indoor Localization Method.
Tuta, Jure; Juric, Matjaz B
2016-12-06
This paper presents a novel method for indoor localization, developed with the main aim of making it useful for real-world deployments. Many indoor localization methods exist, yet they have several disadvantages in real-world deployments-some are static, which is not suitable for long-term usage; some require costly human recalibration procedures; and others require special hardware such as Wi-Fi anchors and transponders. Our method is self-calibrating and self-adaptive thus maintenance free and based on Wi-Fi only. We have employed two well-known propagation models-free space path loss and ITU models-which we have extended with additional parameters for better propagation simulation. Our self-calibrating procedure utilizes one propagation model to infer parameters of the space and the other to simulate the propagation of the signal without requiring any additional hardware beside Wi-Fi access points, which is suitable for real-world usage. Our method is also one of the few model-based Wi-Fi only self-adaptive approaches that do not require the mobile terminal to be in the access-point mode. The only input requirements of the method are Wi-Fi access point positions, and positions and properties of the walls. Our method has been evaluated in single- and multi-room environments, with measured mean error of 2-3 and 3-4 m, respectively, which is similar to existing methods. The evaluation has proven that usable localization accuracy can be achieved in real-world environments solely by the proposed Wi-Fi method that relies on simple hardware and software requirements.
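For reference, the free-space path loss model named above has a closed form, shown below with frequency in Hz and distance in metres; the wall-attenuation term and its 5 dB default are purely illustrative stand-ins for the extra parameters the method calibrates, not the paper's model.

```python
import numpy as np

def fspl_db(d_m, f_hz):
    """Free-space path loss in dB: 20*log10(d) + 20*log10(f) - 147.55 (d in m, f in Hz)."""
    return 20.0 * np.log10(d_m) + 20.0 * np.log10(f_hz) - 147.55

def predicted_rssi(d_m, tx_dbm=15.0, f_hz=2.4e9, n_walls=0, wall_db=5.0):
    # Hypothetical indoor extension: free-space loss plus a per-wall attenuation term.
    return tx_dbm - fspl_db(d_m, f_hz) - n_walls * wall_db

print(predicted_rssi(np.array([1.0, 5.0, 10.0]), n_walls=1))
```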
A Self-Adaptive Model-Based Wi-Fi Indoor Localization Method
Tuta, Jure; Juric, Matjaz B.
2016-01-01
This paper presents a novel method for indoor localization, developed with the main aim of making it useful for real-world deployments. Many indoor localization methods exist, yet they have several disadvantages in real-world deployments—some are static, which is not suitable for long-term usage; some require costly human recalibration procedures; and others require special hardware such as Wi-Fi anchors and transponders. Our method is self-calibrating and self-adaptive thus maintenance free and based on Wi-Fi only. We have employed two well-known propagation models—free space path loss and ITU models—which we have extended with additional parameters for better propagation simulation. Our self-calibrating procedure utilizes one propagation model to infer parameters of the space and the other to simulate the propagation of the signal without requiring any additional hardware beside Wi-Fi access points, which is suitable for real-world usage. Our method is also one of the few model-based Wi-Fi only self-adaptive approaches that do not require the mobile terminal to be in the access-point mode. The only input requirements of the method are Wi-Fi access point positions, and positions and properties of the walls. Our method has been evaluated in single- and multi-room environments, with measured mean error of 2–3 and 3–4 m, respectively, which is similar to existing methods. The evaluation has proven that usable localization accuracy can be achieved in real-world environments solely by the proposed Wi-Fi method that relies on simple hardware and software requirements. PMID:27929453
Non-uniqueness of the point of application of the buoyancy force
NASA Astrophysics Data System (ADS)
Kliava, Janis; Mégel, Jacques
2010-07-01
Even though the buoyancy force (also known as the Archimedes force) has always been an important topic of academic studies in physics, its point of application has not been explicitly identified yet. We present a quantitative approach to this problem based on the concept of the hydrostatic energy, considered here for a general shape of the cross-section of a floating body and for an arbitrary angle of heel. We show that the location of the point of application of the buoyancy force essentially depends (i) on the type of motion experienced by the floating body and (ii) on the definition of this point. In a rolling/pitching motion, considerations involving the rotational moment lead to a particular dynamical point of application of the buoyancy force, and for some simple shapes of the floating body this point coincides with the well-known metacentre. On the other hand, from the work-energy relation it follows that in the rolling/pitching motion the energetical point of application of this force is rigidly connected to the centre of buoyancy; in contrast, in a vertical translation this point is rigidly connected to the centre of gravity of the body. Finally, we consider the location of the characteristic points of the floating bodies for some particular shapes of immersed cross-sections. The paper is intended for higher education level physics teachers and students.
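For context, the standard textbook relations for the buoyancy force and the metacentre that the discussion builds on can be written as follows; these are the conventional naval-architecture forms, not the paper's more general cross-section treatment.

```latex
% Buoyancy force on a floating body of displaced volume V in a fluid of density \rho:
F_B = \rho \, g \, V
% Metacentric radius (I_{wp}: second moment of area of the waterplane about the heel axis):
\overline{BM} = \frac{I_{wp}}{V}
% Metacentric height, with KB and KG the heights of the centre of buoyancy
% and of the centre of gravity above the keel:
\overline{GM} = \overline{KB} + \overline{BM} - \overline{KG}
```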
NASA Astrophysics Data System (ADS)
Tanaka, S.; Hasegawa, K.; Okamoto, N.; Umegaki, R.; Wang, S.; Uemura, M.; Okamoto, A.; Koyamada, K.
2016-06-01
We propose a method for the precise 3D see-through imaging, or transparent visualization, of the large-scale and complex point clouds acquired via the laser scanning of 3D cultural heritage objects. Our method is based on a stochastic algorithm and directly uses the 3D points, which are acquired using a laser scanner, as the rendering primitives. This method achieves the correct depth feel without requiring depth sorting of the rendering primitives along the line of sight. Eliminating this need allows us to avoid long computation times when creating natural and precise 3D see-through views of laser-scanned cultural heritage objects. The opacity of each laser-scanned object is also flexibly controllable. For a laser-scanned point cloud consisting of more than 10^7 or 10^8 3D points, the pre-processing requires only a few minutes, and the rendering can be executed at interactive frame rates. Our method enables the creation of cumulative 3D see-through images of time-series laser-scanned data. It also offers the possibility of fused visualization for observing a laser-scanned object behind a transparent high-quality photographic image placed in the 3D scene. We demonstrate the effectiveness of our method by applying it to festival floats of high cultural value. These festival floats have complex outer and inner 3D structures and are suitable for see-through imaging.
Verification of IEEE Compliant Subtractive Division Algorithms
NASA Technical Reports Server (NTRS)
Miner, Paul S.; Leathrum, James F., Jr.
1996-01-01
A parameterized definition of subtractive floating point division algorithms is presented and verified using PVS. The general algorithm is proven to satisfy a formal definition of an IEEE standard for floating point arithmetic. The utility of the general specification is illustrated using a number of different instances of the general algorithm.
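A minimal radix-2 restoring (subtractive) division on normalized significands, one hedged instance of the family of algorithms such a general specification covers, is sketched below; the bit count is arbitrary and the IEEE rounding step, which would use the final remainder, is omitted.

```python
def restoring_divide(x, y, bits=24):
    """Radix-2 restoring division of significands x, y in [1, 2).
    Returns (quotient, final_remainder); the quotient approximates x / y to `bits` bits."""
    assert 1.0 <= x < 2.0 and 1.0 <= y < 2.0
    q, r, weight = 0.0, x, 1.0
    for _ in range(bits):
        if r >= y:            # trial subtraction succeeds: emit a 1 bit
            r -= y
            q += weight
        r *= 2.0              # shift the partial remainder for the next quotient bit
        weight /= 2.0
    return q, r

q, r = restoring_divide(1.5, 1.25)
print(q)    # ~1.2
```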
VLSI Design Techniques for Floating-Point Computation
1988-11-18
J. C. Gibson, The Gibson Mix, IBM Systems Development Division Tech. Report (June 1970). [Heni83] A. Heninger, The Zilog Z8070 Floating-Point...
[Figure 7.2: Clock distribution between the broadcast clock generator, divide-by-N module, and clock communication bus; remaining figure content not recoverable.]
Media processors using a new microsystem architecture designed for the Internet era
NASA Astrophysics Data System (ADS)
Wyland, David C.
1999-12-01
The demands of digital image processing, communications and multimedia applications are growing more rapidly than traditional design methods can fulfill them. Previously, only custom hardware designs could provide the performance required to meet the demands of these applications. However, hardware design has reached a crisis point. Hardware design can no longer deliver a product with the required performance and cost in a reasonable time for a reasonable risk. Software based designs running on conventional processors can deliver working designs in a reasonable time and with low risk but cannot meet the performance requirements. What is needed is a media processing approach that combines very high performance, a simple programming model, complete programmability, short time to market and scalability. The Universal Micro System (UMS) is a solution to these problems. The UMS is a completely programmable (including I/O) system on a chip that combines hardware performance with the fast time to market, low cost and low risk of software designs.
NASA Technical Reports Server (NTRS)
Alexander, J. Iwan D.
1991-01-01
Work was completed on all aspects of the following tasks: order of magnitude estimates; thermo-capillary convection - two-dimensional (fixed planar surface); thermo-capillary convection - three-dimensional and axisymmetric; liquid bridge/floating zone sensitivity; transport in closed containers; interaction: design and development stages; interaction: testing flight hardware; and reporting. Results are included in the Appendices.
Underway Recovery Test 6 (URT-6) - Day 1 Activities
2018-01-17
A test article of Orion floats in 6 feet of water in the well deck of the USS Anchorage. The NASA Recovery Team from Kennedy Space Center is working with the U.S. Navy to improve recovery procedures and hardware ahead of Orion’s next flight, Exploration Mission-1, when it splashes down in the Pacific Ocean.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-04
... floating processor landing reporting requirements; and to consolidate CQE Program eligibility by community... determine their annual reporting requirements. CQE Floating Processor Landing Report Requirements This action revises the recordkeeping and reporting regulations at Sec. 679.5(e) for CQE floating processors...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-06
... clarify the CQE floating processor landing reporting requirements; and to consolidate CQE Program... their annual reporting requirements. CQE Floating Processor Landing Report Requirements This action would revise the recordkeeping and reporting regulations at Sec. 679.5(e) for CQE floating processors...
40 CFR 63.695 - Inspection and monitoring requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... monitoring procedures required to perform the following: (1) To inspect tank fixed roofs and floating roofs... and floating roof inspection requirements. (1) Owners and operators that use a tank equipped with an internal floating roof in accordance with the provisions of § 63.685(e) of this subpart shall meet the...
Pushing the Limits of Cubesat Attitude Control: A Ground Demonstration
NASA Technical Reports Server (NTRS)
Sanders, Devon S.; Heater, Daniel L.; Peeples, Steven R.; Sules, James K.; Huang, Po-Hao Adam
2013-01-01
A cubesat attitude control system (ACS) was designed at the NASA Marshall Space Flight Center (MSFC) to provide sub-degree pointing capabilities using low cost, COTS attitude sensors, COTS miniature reaction wheels, and a developmental micro-propulsion system. The ACS sensors and actuators were integrated onto a 3D-printed plastic 3U cubesat breadboard (10 cm x 10 cm x 30 cm) with a custom designed instrument board and typical cubesat COTS hardware for the electrical, power, and data handling and processing systems. In addition to the cubesat development, a low-cost air bearing was designed and 3D printed in order to float the cubesat in the test environment. Systems integration and verification were performed at the MSFC Small Projects Rapid Integration & Test Environment laboratory. Using a combination of both the miniature reaction wheels and the micro-propulsion system, the open and closed loop control capabilities of the ACS were tested in the Flight Robotics Laboratory. The testing demonstrated the desired sub-degree pointing capability of the ACS and also revealed the challenges of creating a relevant environment for development testing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Zheming; Yoshii, Kazutomo; Finkel, Hal
Open Computing Language (OpenCL) is a high-level language that enables software programmers to explore Field Programmable Gate Arrays (FPGAs) for application acceleration. The Intel FPGA software development kit (SDK) for OpenCL allows a user to specify applications at a high level and explore the performance of low-level hardware acceleration. In this report, we present the FPGA performance and power consumption results of the single-precision floating-point vector add OpenCL kernel using the Intel FPGA SDK for OpenCL on the Nallatech 385A FPGA board. The board features an Arria 10 FPGA. We evaluate the FPGA implementations using the compute unit duplication and kernel vectorization optimization techniques. On the Nallatech 385A FPGA board, the maximum compute kernel bandwidth we achieve is 25.8 GB/s, approximately 76% of the peak memory bandwidth. The power consumption of the FPGA device when running the kernels ranges from 29W to 42W.
NASA Technical Reports Server (NTRS)
Peterson, Victor L.; Kim, John; Holst, Terry L.; Deiwert, George S.; Cooper, David M.; Watson, Andrew B.; Bailey, F. Ron
1992-01-01
Report evaluates supercomputer needs of five key disciplines: turbulence physics, aerodynamics, aerothermodynamics, chemistry, and mathematical modeling of human vision. Predicts these fields will require computer speed greater than 10^18 floating-point operations per second (FLOPS) and memory capacity greater than 10^15 words. Also, new parallel computer architectures and new structured numerical methods will make necessary speed and capacity available.
Code of Federal Regulations, 2010 CFR
2010-07-01
... floating roof that meets the equipment specifications of § 60.693 (a)(1)(i), (a)(1)(ii), (a)(2), (a)(3... and other points of access to a conveyance system. c Applies to tanks with capacities of 38 m3 or...
NASA Technical Reports Server (NTRS)
Solis, Eduardo; Meyn, Larry
2016-01-01
Calibrating the internal, multi-component balance mounted in the Tiltrotor Test Rig (TTR) required photogrammetric measurements to determine the location and orientation of forces applied to the balance. The TTR, with the balance and calibration hardware attached, was mounted in a custom calibration stand. Calibration loads were applied using eleven hydraulic actuators, operating in tension only, that were attached to the forward frame of the calibration stand and the TTR calibration hardware via linkages with in-line load cells. Before the linkages were installed, photogrammetry was used to determine the location of the linkage attachment points on the forward frame and on the TTR calibration hardware. Photogrammetric measurements were used to determine the displacement of the linkage attachment points on the TTR due to deflection of the hardware under applied loads. These measurements represent the first photogrammetric deflection measurements to be made to support 6-component rotor balance calibration. This paper describes the design of the TTR and the calibration hardware, and presents the development, set-up and use of the photogrammetry system, along with some selected measurement results.
Progress Towards a Rad-Hydro Code for Modern Computing Architectures LA-UR-10-02825
NASA Astrophysics Data System (ADS)
Wohlbier, J. G.; Lowrie, R. B.; Bergen, B.; Calef, M.
2010-11-01
We are entering an era of high performance computing where data movement is the overwhelming bottleneck to scalable performance, as opposed to the speed of floating-point operations per processor. All multi-core hardware paradigms, whether heterogeneous or homogeneous, be it the Cell processor, GPGPU, or multi-core x86, share this common trait. In multi-physics applications such as inertial confinement fusion or astrophysics, one may be solving multi-material hydrodynamics with tabular equation of state data lookups, radiation transport, nuclear reactions, and charged particle transport in a single time cycle. The algorithms are intensely data dependent (e.g., on EOS, opacity, and nuclear data lookups), and multi-core hardware memory restrictions are forcing code developers to rethink code and algorithm design. For the past two years LANL has been funding a small effort referred to as Multi-Physics on Multi-Core to explore ideas for code design as pertaining to inertial confinement fusion and astrophysics applications. The near-term goals of this project are to have a multi-material radiation hydrodynamics capability, with tabular equation of state lookups, on Cartesian and curvilinear block-structured meshes. In the longer term we plan to add fully implicit multi-group radiation diffusion and material heat conduction, and block-structured AMR. We will report on our progress to date.
Hall, Matthew; Goupee, Andrew; Jonkman, Jason
2017-08-24
Hybrid modeling, combining physical testing and numerical simulation in real time, opens new opportunities in floating wind turbine research. Wave basin testing is an important validation step for floating support structure design, but the conventional approaches that use physical wind above the basin are limited by scaling problems in the aerodynamics. Applying wind turbine loads with an actuation system that is controlled by a simulation responding to the basin test in real time offers a way to avoid scaling problems and reduce cost barriers for floating wind turbine design validation in realistic coupled wind and wave conditions. This paper demonstrates the development of performance specifications for a system that couples a wave basin experiment with a wind turbine simulation. Two different points for the hybrid coupling are considered: the tower-base interface and the aero-rotor interface (the boundary between aerodynamics and the rotor structure). Analyzing simulations of three floating wind turbine designs across seven load cases reveals the motion and force requirements of the coupling system. By simulating errors in the hybrid coupling system, the sensitivity of the floating wind turbine response to coupling quality can be quantified. The sensitivity results can then be used to determine tolerances for motion tracking errors, force actuation errors, bandwidth limitations, and latency in the hybrid coupling system. These tolerances can guide the design of hybrid coupling systems to achieve desired levels of accuracy. An example demonstrates how the developed methods can be used to generate performance specifications for a system at 1:50 scale. Results show that sensitivities vary significantly between support structure designs and that coupling at the aero-rotor interface has less stringent requirements than those for coupling at the tower base. As a result, the methods and results presented here can inform design of future hybrid coupling systems and enhance understanding of how test results are affected by hybrid coupling quality.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hall, Matthew; Goupee, Andrew; Jonkman, Jason
Hybrid modeling, combining physical testing and numerical simulation in real time, opens new opportunities in floating wind turbine research. Wave basin testing is an important validation step for floating support structure design, but the conventional approaches that use physical wind above the basin are limited by scaling problems in the aerodynamics. Applying wind turbine loads with an actuation system that is controlled by a simulation responding to the basin test in real time offers a way to avoid scaling problems and reduce cost barriers for floating wind turbine design validation in realistic coupled wind and wave conditions. This paper demonstrates the development of performance specifications for a system that couples a wave basin experiment with a wind turbine simulation. Two different points for the hybrid coupling are considered: the tower-base interface and the aero-rotor interface (the boundary between aerodynamics and the rotor structure). Analyzing simulations of three floating wind turbine designs across seven load cases reveals the motion and force requirements of the coupling system. By simulating errors in the hybrid coupling system, the sensitivity of the floating wind turbine response to coupling quality can be quantified. The sensitivity results can then be used to determine tolerances for motion tracking errors, force actuation errors, bandwidth limitations, and latency in the hybrid coupling system. These tolerances can guide the design of hybrid coupling systems to achieve desired levels of accuracy. An example demonstrates how the developed methods can be used to generate performance specifications for a system at 1:50 scale. Results show that sensitivities vary significantly between support structure designs and that coupling at the aero-rotor interface has less stringent requirements than those for coupling at the tower base. As a result, the methods and results presented here can inform design of future hybrid coupling systems and enhance understanding of how test results are affected by hybrid coupling quality.
Automated culture system experiments hardware: developing test results and design solutions.
Freddi, M; Covini, M; Tenconi, C; Ricci, C; Caprioli, M; Cotronei, V
2002-07-01
The experiment proposed by Prof. Ricci of the University of Milan is funded by ASI with Laben as industrial prime contractor. ACS-EH (Automated Culture System-Experiment Hardware) will support the multigenerational experiment on weightlessness with rotifers and nematodes within four Experiment Containers (ECs) located inside the European Modular Cultivation System (EMCS) facility. Phase B is currently in progress and a concept design solution has been defined. The most challenging aspects of the design of such hardware are, from the biological point of view, the provision of an environment that permits the animals' survival and keeps desiccated generations separated, and, from the technical point of view, the miniaturisation of the hardware itself due to the reduced volume provided by the EC (160 mm x 60 mm x 60 mm). The miniaturisation will allow a better use of the available EMCS facility resources (e.g. volume, power, etc.) and fulfilment of the experiment requirements. ACS-EH will be ready to fly on board the ISS in 2005.
Applications Performance Under MPL and MPI on NAS IBM SP2
NASA Technical Reports Server (NTRS)
Saini, Subhash; Simon, Horst D.; Lasinski, T. A. (Technical Monitor)
1994-01-01
On July 5, 1994, an IBM Scalable POWERparallel System (IBM SP2) with 64 nodes was installed at the Numerical Aerodynamic Simulation (NAS) Facility. Each node of the NAS IBM SP2 is a "wide node" consisting of a RISC System/6000 (RS/6000) Model 590 workstation module with a 66.5 MHz clock that can perform four floating point operations per clock, for a peak performance of 266 Mflop/s. By the end of 1994, the 64-node IBM SP2 will be upgraded to 160 nodes with a peak performance of 42.5 Gflop/s. An overview of the IBM SP2 hardware is presented. A basic understanding of the architectural details of the RS/6000-590 will help application scientists in porting, optimizing, and tuning codes from other machines such as the CRAY C90 and the Paragon to the NAS SP2. Optimization techniques such as quad-word loading, effective utilization of the two floating point units, and data cache optimization on the RS/6000-590 are illustrated, with examples giving performance gains at each optimization step. The conversion of codes using Intel's message passing library NX to codes using the native Message Passing Library (MPL) and the Message Passing Interface (MPI) library available on the IBM SP2 is illustrated. In particular, we present the performance of the Fast Fourier Transform (FFT) kernel from the NAS Parallel Benchmarks (NPB) under MPL and MPI. We have also optimized some of the Fortran BLAS 2 and BLAS 3 routines; e.g., the optimized Fortran DAXPY runs at 175 Mflop/s and the optimized Fortran DGEMM runs at 230 Mflop/s per node. The performance of the NPB (Class B) on the IBM SP2 is compared with the CRAY C90, Intel Paragon, TMC CM-5E, and the CRAY T3D.
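The quoted peak rate follows directly from the clock and the issue width, and DAXPY is the simplest of the BLAS routines the report tunes. The sketch below is a plain NumPy rendering of both, offered only as an illustration and not as the tuned Fortran discussed in the abstract.

```python
# Minimal sketch, not the tuned Fortran discussed above: the per-node peak-rate
# arithmetic (66.5 MHz x 4 flops per clock) and a DAXPY-style kernel
# (y <- a*x + y), the BLAS-1 routine quoted at 175 Mflop/s.
import numpy as np

clock_mhz = 66.5
flops_per_clock = 4
print("per-node peak:", clock_mhz * flops_per_clock, "Mflop/s")   # 266.0

def daxpy(a, x, y):
    """y := a*x + y"""
    return a * x + y

x = np.arange(1.0e6)
y = np.ones_like(x)
y = daxpy(2.0, x, y)
```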
NASA Astrophysics Data System (ADS)
Thébault, Cédric; Doyen, Didier; Routhier, Pierre; Borel, Thierry
2013-03-01
To ensure an immersive, yet comfortable experience, significant work is required during post-production to adapt the stereoscopic 3D (S3D) content to the targeted display and its environment. On the one hand, the content needs to be reconverged using horizontal image translation (HIT) so as to harmonize the depth across the shots. On the other hand, to prevent edge violation, specific re-convergence is required and depending on the viewing conditions floating windows need to be positioned. In order to simplify this time-consuming work we propose a depth grading tool that automatically adapts S3D content to digital cinema or home viewing environments. Based on a disparity map, a stereo point of interest in each shot is automatically evaluated. This point of interest is used for depth matching, i.e. to position the objects of interest of consecutive shots in a same plane so as to reduce visual fatigue. The tool adapts the re-convergence to avoid edge-violation, hyper-convergence and hyper-divergence. Floating windows are also automatically positioned. The method has been tested on various types of S3D content, and the results have been validated by a stereographer.
Floating-point system quantization errors in digital control systems
NASA Technical Reports Server (NTRS)
Phillips, C. L.; Vallely, D. P.
1978-01-01
This paper considers digital controllers (filters) operating in floating-point arithmetic in either open-loop or closed-loop systems. A quantization error analysis technique is developed, and is implemented by a digital computer program that is based on a digital simulation of the system. The program can be integrated into existing digital simulations of a system.
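As a rough illustration of the simulation-based style of error estimate described (the paper's program itself is not reproduced here), the sketch below runs the same first-order digital filter in single and double precision and takes the difference as an estimate of floating-point quantization error.

```python
# Hedged sketch, not the paper's program: estimate floating-point quantization
# error in a first-order low-pass filter by running the same digital simulation
# in single and double precision and comparing the outputs.
import numpy as np

def lowpass(u, alpha, dtype):
    y = np.zeros(len(u), dtype=dtype)
    a = dtype(alpha)
    for k in range(1, len(u)):
        y[k] = a * y[k - 1] + (dtype(1) - a) * dtype(u[k])
    return y

u = np.sin(0.01 * np.arange(20000))
y64 = lowpass(u, 0.999, np.float64)   # higher-precision reference
y32 = lowpass(u, 0.999, np.float32)   # quantized arithmetic
print("max quantization error:", np.max(np.abs(y64 - y32.astype(np.float64))))
```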
On the use of programmable hardware and reduced numerical precision in earth-system modeling.
Düben, Peter D; Russell, Francis P; Niu, Xinyu; Luk, Wayne; Palmer, T N
2015-09-01
Programmable hardware, in particular Field Programmable Gate Arrays (FPGAs), promises a significant increase in computational performance for simulations in geophysical fluid dynamics compared with CPUs of similar power consumption. FPGAs allow adjusting the representation of floating-point numbers to specific application needs. We analyze the performance-precision trade-off on FPGA hardware for the two-scale Lorenz '95 model. We scale the size of this toy model to that of a high-performance computing application in order to make meaningful performance tests. We identify the minimal level of precision at which changes in model results are not significant compared with a maximal precision version of the model and find that this level is very similar for cases where the model is integrated for very short or long intervals. It is therefore a useful approach to investigate model errors due to rounding errors for very short simulations (e.g., 50 time steps) to obtain a range for the level of precision that can be used in expensive long-term simulations. We also show that an approach to reduce precision with increasing forecast time, when model errors are already accumulated, is very promising. We show that a speed-up of 1.9 times is possible in comparison to FPGA simulations in single precision if precision is reduced with no strong change in model error. The single-precision FPGA setup shows a speed-up of 2.8 times in comparison to our model implementation on two 6-core CPUs for large model setups.
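The paper's experiments run the two-scale Lorenz '95 model on FPGAs with adjustable number formats; the sketch below only conveys the idea of the precision comparison, integrating the single-scale model in float32 and float64 and measuring the divergence from the higher-precision run. The time step, forcing, and integration scheme here are illustrative choices, not those of the paper.

```python
# Illustrative sketch only: integrate the single-scale Lorenz '95 model in
# float64 and float32 and measure how precision-induced differences grow
# relative to the higher-precision reference.
import numpy as np

def l95_tendency(x, F):
    # dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def integrate(dtype, steps=100, dt=0.01, N=40, F=8.0):
    x = np.full(N, F, dtype=dtype)
    x[0] += dtype(0.01)                       # small perturbation
    for _ in range(steps):                    # forward Euler for brevity
        x = x + dtype(dt) * l95_tendency(x, dtype(F))
    return x

ref = integrate(np.float64)
low = integrate(np.float32).astype(np.float64)
print("RMS difference after 100 steps:", np.sqrt(np.mean((ref - low) ** 2)))
```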
NASA Technical Reports Server (NTRS)
Maluf, David A.; Koga, Dennis (Technical Monitor)
2002-01-01
This presentation discusses NASA's proposed NETMARK knowledge management tool, which aims 'to control and interoperate with every block in a document, email, spreadsheet, power point, database, etc. across the lifecycle'. Topics covered include: system software requirements and hardware requirements, seamless information systems, computer architecture issues, and potential benefits to NETMARK users.
1991-07-31
have floating-point type declarations requiring more digits than SYSTEM.MAX_DIGITS: C24113L..Y (14 tests), C35705L..Y (14 tests), C35706L..Y (14 tests)... -2_147_483_648 .. 2_147_483_647; type FLOAT is digits 6 range -2#1.0#E128 .. 2#0.111111111111111111111#E128; type LONG_FLOAT is digits 15 range -2#1.0#E1024 .. 2#... are instantiated into library packages or subprograms.) F-14 Appendix F of the Ada Reference Manual. F.8.1 Address Clauses for Variables: Address
40 CFR 264.1084 - Standards: Tanks.
Code of Federal Regulations, 2010 CFR
2010-07-01
... internal floating roof in accordance with the requirements specified in paragraph (e) of this section; (2) A tank equipped with an external floating roof in accordance with the requirements specified in... operator who controls air pollutant emissions from a tank using a fixed roof with an internal floating roof...
Verification of Numerical Programs: From Real Numbers to Floating Point Numbers
NASA Technical Reports Server (NTRS)
Goodloe, Alwyn E.; Munoz, Cesar; Kirchner, Florent; Correnson, Loiec
2013-01-01
Numerical algorithms lie at the heart of many safety-critical aerospace systems. The complexity and hybrid nature of these systems often requires the use of interactive theorem provers to verify that these algorithms are logically correct. Usually, proofs involving numerical computations are conducted in the infinitely precise realm of the field of real numbers. However, numerical computations in these algorithms are often implemented using floating point numbers. The use of a finite representation of real numbers introduces uncertainties as to whether the properties verified in the theoretical setting hold in practice. This short paper describes work in progress aimed at addressing these concerns. Given a formally proven algorithm, written in the Prototype Verification System (PVS), the Frama-C suite of tools is used to identify sufficient conditions and verify that under such conditions the rounding errors arising in a C implementation of the algorithm do not affect its correctness. The technique is illustrated using an algorithm for detecting loss of separation among aircraft.
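A hedged numerical sketch of the flavor of sufficient condition involved (plain Python, not the PVS/Frama-C tool chain used in the paper): a branch taken by the floating-point implementation agrees with the real-number semantics whenever the margin of the guarding comparison exceeds a bound on the accumulated rounding error. The separation test, operands, and error bound below are illustrative assumptions.

```python
# Hedged numerical sketch (not PVS or Frama-C): the floating-point comparison
# agrees with the real-number one when its margin exceeds a rounding bound.
import math

def horizontal_separation_ok(dx, dy, D):
    """Loss-of-separation style test: are two aircraft at least D apart?"""
    return dx * dx + dy * dy >= D * D

dx, dy, D = 3000.0, 4000.1, 5000.0
lhs = dx * dx + dy * dy
rhs = D * D
# Crude bound on the rounding error of three multiplications and one addition:
err_bound = 4 * math.ulp(max(lhs, rhs))
margin = abs(lhs - rhs)
print("comparison robust to rounding:", margin > err_bound)
```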
Floating-Point Modules Targeted for Use with RC Compilation Tools
NASA Technical Reports Server (NTRS)
Sahin, Ibrahin; Gloster, Clay S.
2000-01-01
Reconfigurable Computing (RC) has emerged as a viable computing solution for computationally intensive applications. Several applications have been mapped to RC systems, and in most cases they provided the smallest published execution time. Although RC systems offer significant performance advantages over general-purpose processors, they require more application development time than general-purpose processors. This increased development time of RC systems provides the motivation to develop an optimized module library with an assembly language instruction format interface for use with future RC systems that will reduce development time significantly. In this paper, we present area/performance metrics for several different types of floating point (FP) modules that can be utilized to develop complex FP applications. These modules are highly pipelined and optimized for both speed and area. Using these modules, an example application, FP matrix multiplication, is also presented. Our results and experiences show that with these modules, an 8-10X speedup over general-purpose processors can be achieved.
Fortran Program for X-Ray Photoelectron Spectroscopy Data Reformatting
NASA Technical Reports Server (NTRS)
Abel, Phillip B.
1989-01-01
A FORTRAN program has been written for use on an IBM PC/XT or AT or compatible microcomputer (personal computer, PC) that converts a column of ASCII-format numbers into a binary-format file suitable for interactive analysis on a Digital Equipment Corporation (DEC) computer running the VGS-5000 Enhanced Data Processing (EDP) software package. The incompatible floating-point number representations of the two computers were compared, and a subroutine was created to correctly store floating-point numbers on the IBM PC, which can be directly read by the DEC computer. Any file transfer protocol having provision for binary data can be used to transmit the resulting file from the PC to the DEC machine. The data file header required by the EDP programs for an x-ray photoelectron spectrum is also written to the file. The user is prompted for the relevant experimental parameters, which are then properly coded into the format used internally by all of the VGS-5000 series EDP packages.
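The core issue here is that the same real number has different byte-level representations on different machines. The sketch below is a hedged, modern illustration in Python rather than the original Fortran, and it shows IEEE-754 byte ordering only; the DEC VAX F-floating format handled by the actual program is not reproduced.

```python
# Hedged illustration (the VAX F-floating format itself is not reproduced):
# inspect the byte-level layout of a 32-bit float in two endiannesses with
# the struct module, the kind of bit-level detail a format converter must get
# right for the target machine to decode the intended value.
import struct

value = 118.5  # an arbitrary illustrative number

le_bytes = struct.pack("<f", value)   # little-endian IEEE single
be_bytes = struct.pack(">f", value)   # big-endian IEEE single
print(le_bytes.hex(), be_bytes.hex())

# Round-trip check: each byte string decodes back to the same value when read
# with the matching convention.
assert struct.unpack("<f", le_bytes)[0] == struct.unpack(">f", be_bytes)[0]
```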
Rational Arithmetic in Floating-Point.
1986-09-01
Rational Arithmetic in Floating-Point. W. Kahan, Center for Pure and Applied Mathematics, University of California, Berkeley, Report PAM-343, September 1986. ... delicate balance between, on the one hand, the simplicity and aesthetic appeal of the specifications and, on the other hand, the complexity and
14 CFR 23.753 - Main float design.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Main float design. 23.753 Section 23.753... STANDARDS: NORMAL, UTILITY, ACROBATIC, AND COMMUTER CATEGORY AIRPLANES Design and Construction Floats and Hulls § 23.753 Main float design. Each seaplane main float must meet the requirements of § 23.521. [Doc...
33 CFR 165.704 - Safety Zone; Tampa Bay, Florida.
Code of Federal Regulations, 2010 CFR
2010-07-01
..., Florida. (a) A floating safety zone is established consisting of an area 1000 yards fore and aft of a... ending at Gadsden Point Cut Lighted Buoys “3” and “4”. The safety zone starts again at Gadsden Point Cut... the marked channel at Tampa Bay Cut “K” buoy “11K” enroute to Rattlesnake, Tampa, FL, the floating...
Field experience with remote monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Desrosiers, A.E.
1995-03-01
The Remote Monitoring System (RMS) is a combination of Merlin Gerin detection hardware, digital data communications hardware, and computer software from Bartlett Services, Inc. (BSI) that can improve the conduct of reactor plant operations in several areas. Using the RMS can reduce radiation exposures to radiation protection technicians (RPTs), reduce radiation exposures to plant maintenance and operations personnel, and reduce the time required to complete maintenance and inspections during outages. The number of temporary RPTs required during refueling outages can also be reduced. Data from use of the RMS at two power plants are presented to illustrate these points.
Space Generic Open Avionics Architecture (SGOAA) reference model technical guide
NASA Technical Reports Server (NTRS)
Wray, Richard B.; Stovall, John R.
1993-01-01
This report presents a full description of the Space Generic Open Avionics Architecture (SGOAA). The SGOAA consists of a generic system architecture for the entities in spacecraft avionics, a generic processing architecture, and a six class model of interfaces in a hardware/software system. The purpose of the SGOAA is to provide an umbrella set of requirements for applying the generic architecture interface model to the design of specific avionics hardware/software systems. The SGOAA defines a generic set of system interface points to facilitate identification of critical interfaces and establishes the requirements for applying appropriate low level detailed implementation standards to those interface points. The generic core avionics system and processing architecture models provided herein are robustly tailorable to specific system applications and provide a platform upon which the interface model is to be applied.
Zhou, Zhao-Hui; Zhuang, Li-Xing; Chen, Zhen-Hu; Lang, Jian-Ying; Li, Yan-Hui; Jiang, Gang-Hui; Xu, Zhan-Qiong; Liao, Mu-Xi
2014-07-01
To compare the clinical efficacy in the treatment of post-stroke shoulder-hand syndrome between floating-needle therapy and conventional acupuncture on the basis of rehabilitation training. One hundred cases of post-stroke shoulder-hand syndrome were randomized into a floating-needle group and an acupuncture group, 50 cases in each one. The passive and positive rehabilitation training was adopted in the two groups. Additionally, in the floating-needle group, the floating-needle therapy was used. The needle was inserted at the site 5 to 10 cm away from the myofascial trigger point (MTrP), manipulated and scattered subcutaneously, for 2 min continuously. In the acupuncture group, the conventional acupuncture was applied at Jianqian (EX-UE), Jianyu (LI 15), Jianliao (TE 14), etc. The treatment was given once every two days, 3 times a week, and 14 days of treatment were required. The shoulder hand syndrome scale (SHSS), the short form McGill pain scale (SF-MPQ) and the modified Fugl-Meyer motor function scale (FMA) were used to evaluate the damage severity, pain and motor function of the upper limbs before and after treatment in the two groups. The clinical efficacy was compared between the two groups. SHSS score, SF-MPQ score and FMA score were improved significantly after treatment in the two groups (all P < 0.01), and the improvements in the floating-needle group were superior to those in the acupuncture group (all P < 0.05). The total effective rate was 94.0% (47/50) in the floating-needle group, which was better than 90.0% (45/50) in the acupuncture group (P < 0.05). The floating-needle therapy combined with rehabilitation training achieves a satisfactory efficacy on post-stroke shoulder-hand syndrome, which is better than the combined therapy of conventional acupuncture and rehabilitation training.
High-purity silicon crystal growth investigations
NASA Technical Reports Server (NTRS)
Ciszek, T. F.; Hurd, J. L.; Schuyler, T.
1985-01-01
The study of silicon sheet material requirements for high efficiency solar cells is reported. Research continued on obtaining long lifetime single crystal float zone silicon and on understanding and reducing the mechanisms that limit the achievement of long lifetimes. The mechanisms studied are impurities, thermal history, point defects, and surface effect. The lifetime related crystallographic defects are characterized by X-ray topography and electron beam induced current.
1991-09-27
complex floating-point functions in a fraction of the time used by the best supercomputers on the market today. These co-processing boards "piggy-back" ... by the VNIX-based DECLARE program. ... the new version with main programs that now include only the variables required with each
High-Speed Systolic Array Testbed.
1987-10-01
applications since the concept was introduced by H.T. Kung in 1978. This highly parallel architecture of nearest-neighbor data communication and ... must be addressed. For instance, should bit-serial or bit-parallel computation be utilized? Does the dynamic range of the candidate applications or ... numerical stability of the algorithms used require computations in fixed point and integer format or the architecturally more complex and slower floating
Integrated Hardware and Software for No-Loss Computing
NASA Technical Reports Server (NTRS)
James, Mark
2007-01-01
When an algorithm is distributed across multiple threads executing on many distinct processors, a loss of one of those threads or processors can potentially result in the total loss of all the incremental results up to that point. When implementation is massively hardware distributed, then the probability of a hardware failure during the course of a long execution is potentially high. Traditionally, this problem has been addressed by establishing checkpoints where the current state of some or part of the execution is saved. Then, in the event of a failure, this state information can be used to recompute that point in the execution and resume the computation from that point. A serious problem that arises when one distributes a problem across multiple threads and physical processors is that one increases the likelihood of the algorithm failing due to no fault of the scientist, but as a result of hardware faults coupled with operating system problems. With good reason, scientists expect their computing tools to serve them and not the other way around. What is novel here is a unique combination of hardware and software that reformulates an application into a monolithic structure that can be monitored in real time and dynamically reconfigured in the event of a failure. This unique reformulation of hardware and software will provide advanced aeronautical technologies to meet the challenges of next-generation systems in aviation, for civilian and scientific purposes, in our atmosphere and in atmospheres of other worlds. In particular, with respect to NASA's manned flight to Mars, this technology addresses the critical requirements for improving safety and increasing reliability of manned spacecraft.
40 CFR 264.1085 - Standards: Surface impoundments.
Code of Federal Regulations, 2010 CFR
2010-07-01
... the surface impoundment by installing and operating either of the following: (1) A floating membrane... from a surface impoundment using a floating membrane cover shall meet the requirements specified in... floating membrane cover designed to meet the following specifications: (i) The floating membrane cover...
First incremental buy for Increment 2 of the Space Transportation System (STS)
NASA Technical Reports Server (NTRS)
1989-01-01
Thiokol manufactured and delivered 9 flight motors to KSC on schedule. All test flights were successful. All spent SRMs were recovered. Design, development, manufacture, and delivery of required transportation, handling, and checkout equipment to MSFC and to KSC were completed on schedule. All items of data required by DPD 400 were prepared and delivered as directed. In the system requirements and analysis area, the point of departure from Buy 1 to the operational phase was developed in significant detail with a complete set of transition documentation available. The documentation prepared during the Buy 1 program was maintained and updated where required. The following flight support activities should be continued through other production programs: as-built materials usage tracking on all flight hardware; mass properties reporting for all flight hardware until sample size is large enough to verify that the weight limit requirements were met; ballistic predictions and postflight performance assessments for all production flights; and recovered SRM hardware inspection and anomaly identification. In the safety, reliability, and quality assurance area, activities accomplished were assurance oriented in nature and specifically formulated to prevent problems and hardware failures. The flight program to date has adequately demonstrated the success of this assurance approach. The attention focused on details of design, analysis, manufacture, and inspection to assure the production of high-quality hardware has resulted in the absence of flight failures. The few anomalies which did occur were evaluated, design or manufacturing changes incorporated, and corrective actions taken to preclude recurrence.
Drift trajectories of a floating human body simulated in a hydraulic model of Puget Sound.
Ebbesmeyer, C C; Haglund, W D
1994-01-01
After a young man jumped off a 221-foot (67 meters) high bridge, the drift of the body that beached 20 miles (32 km) away at Alki Point in Seattle, Washington was simulated with a hydraulic model. Simulations for the appropriate time period were performed using a small floating bead to represent the body in the hydraulic model at the University of Washington. Bead movements were videotaped and transferred to Computer Aided Drafting (AutoCAD) charts on a personal computer. Because of strong tidal currents in the narrow passage under the bridge (The Narrows near Tacoma, WA), small changes in the time of the jump (+/- 30 minutes) made large differences in the distance the body traveled (30 miles; 48 km). Hydraulic and other types of oceanographic models may be located by contacting technical experts known as physical oceanographers at local universities, and can be utilized to demonstrate trajectories of floating objects and the time required to arrive at selected locations. Potential applications for forensic death investigators include: to be able to set geographic and time limits for searches; determine potential origin of remains found floating or beached; and confirm and correlate information regarding entry into the water and sightings of remains.
2011-02-26
CAPE CANAVERAL, Fla. -- The left spent booster from space shuttle Discovery's final launch is seen floating on the water's surface while pumps on Freedom Star, one of NASA's solid rocket booster retrieval ships, push debris and water out of the booster, replacing it with air to facilitate floating for its return to Port Canaveral in Florida. The shuttle's two solid rocket booster casings and associated flight hardware are recovered in the Atlantic Ocean after every launch by Liberty Star and Freedom Star. The boosters impact the Atlantic about seven minutes after liftoff and the retrieval ships are stationed about 10 miles from the impact area at the time of splashdown. After the spent segments are processed, they will be transported to Utah, where they will be refurbished and stored, if needed. Photo credit: NASA/Ben Smegelsky
40 CFR 265.1086 - Standards: Surface impoundments.
Code of Federal Regulations, 2010 CFR
2010-07-01
... floating membrane cover in accordance with the provisions specified in paragraph (c) of this section; or (2... emissions from a surface impoundment using a floating membrane cover shall meet the requirements specified... with a floating membrane cover designed to meet the following specifications: (i) The floating membrane...
Pérez Suárez, Santiago T.; Travieso González, Carlos M.; Alonso Hernández, Jesús B.
2013-01-01
This article presents a design methodology for designing an artificial neural network as an equalizer for a binary signal. Firstly, the system is modelled in floating point format using Matlab. Afterward, the design is described for a Field Programmable Gate Array (FPGA) using fixed point format. The FPGA design is based on the System Generator from Xilinx, which is a design tool over Simulink of Matlab. System Generator allows one to design in a fast and flexible way. It uses low level details of the circuits and the functionality of the system can be fully tested. System Generator can be used to check the architecture and to analyse the effect of the number of bits on the system performance. Finally the System Generator design is compiled for the Xilinx Integrated System Environment (ISE) and the system is described using a hardware description language. In ISE the circuits are managed with high level details and physical performances are obtained. In the Conclusions section, some modifications are proposed to improve the methodology and to ensure portability across FPGA manufacturers.
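The step from the floating-point Matlab model to the fixed-point FPGA description is essentially a word-length decision. The sketch below is plain Python, not the Xilinx System Generator flow described in the article; it only illustrates quantizing floating-point weights to a fixed-point format and observing the effect on one neuron's output. The word length, fractional bits, and data are illustrative assumptions.

```python
# Plain-Python sketch (not the Xilinx System Generator flow): quantize
# floating-point neuron weights to a signed fixed-point format with a given
# number of fractional bits and compare one neuron's output.
import numpy as np

def to_fixed(x, frac_bits, word_bits=16):
    scale = 2 ** frac_bits
    q = np.round(x * scale)
    lo, hi = -2 ** (word_bits - 1), 2 ** (word_bits - 1) - 1
    return np.clip(q, lo, hi) / scale      # quantized value, back in float

rng = np.random.default_rng(0)
w = rng.normal(size=8)                      # illustrative trained weights
x = rng.choice([-1.0, 1.0], size=8)         # binary input symbols

y_float = np.tanh(w @ x)
y_fixed = np.tanh(to_fixed(w, frac_bits=6) @ to_fixed(x, frac_bits=6))
print("output error due to quantization:", abs(y_float - y_fixed))
```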
Computation Directorate 2008 Annual Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crawford, D L
2009-03-25
Whether a computer is simulating the aging and performance of a nuclear weapon, the folding of a protein, or the probability of rainfall over a particular mountain range, the necessary calculations can be enormous. Our computers help researchers answer these and other complex problems, and each new generation of system hardware and software widens the realm of possibilities. Building on Livermore's historical excellence and leadership in high-performance computing, Computation added more than 331 trillion floating-point operations per second (teraFLOPS) of power to LLNL's computer room floors in 2008. In addition, Livermore's next big supercomputer, Sequoia, advanced ever closer to its 2011-2012 delivery date, as architecture plans and the procurement contract were finalized. Hyperion, an advanced technology cluster test bed that teams Livermore with 10 industry leaders, made a big splash when it was announced during Michael Dell's keynote speech at the 2008 Supercomputing Conference. The Wall Street Journal touted Hyperion as a 'bright spot amid turmoil' in the computer industry. Computation continues to measure and improve the costs of operating LLNL's high-performance computing systems by moving hardware support in-house, by measuring causes of outages to apply resources asymmetrically, and by automating most of the account and access authorization and management processes. These improvements enable more dollars to go toward fielding the best supercomputers for science, while operating them at less cost and greater responsiveness to the customers.
Error Mitigation of Point-to-Point Communication for Fault-Tolerant Computing
NASA Technical Reports Server (NTRS)
Akamine, Robert L.; Hodson, Robert F.; LaMeres, Brock J.; Ray, Robert E.
2011-01-01
Fault tolerant systems require the ability to detect and recover from physical damage caused by the hardware's environment, faulty connectors, and system degradation over time. This ability applies to military, space, and industrial computing applications. The integrity of Point-to-Point (P2P) communication, between two microcontrollers for example, is an essential part of fault tolerant computing systems. In this paper, different methods of fault detection and recovery are presented and analyzed.
Software Techniques for Non-Von Neumann Architectures
1990-01-01
Comm topo: programmable Benes net.; hypercubic lattice for QCD. Control: CENTRALIZED. Assign: STATIC. Memory: SHARED. Synch: UNIVERSAL. Max-cpu: 566. Processor: boards (each = 4 floating point units, 2 multipliers). Cpu-size: 32-bit floating point chips. Perform: 11.4 Gflops. Market: quantum chromodynamics (QCD). ... functions there should exist a capability to define hierarchies and lattices of complex objects. A complex object can be made up of a set of simple objects
Interpretation of IEEE-854 floating-point standard and definition in the HOL system
NASA Technical Reports Server (NTRS)
Carreno, Victor A.
1995-01-01
The ANSI/IEEE Standard 854-1987 for floating-point arithmetic is interpreted by converting the lexical descriptions in the standard into mathematical conditional descriptions organized in tables. The standard is represented in higher-order logic within the framework of the HOL (Higher Order Logic) system. The paper is divided into two parts: the first part presents the interpretation, and the second part the description in HOL.
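Formalizations of this kind are built around the standard's value formula for finite numbers: a number with sign s, exponent E, and p digits in radix b denotes (-1)^s * b^E * (d0 + d1/b + ... + d(p-1)/b^(p-1)). The sketch below is a hedged Python rendering of that formula, not the HOL theory itself.

```python
# Hedged sketch of the IEEE-854 value formula that such formalizations build
# on: the value of a finite number given its sign, exponent, radix, and digit
# string, computed exactly with rationals.
from fractions import Fraction

def value(s, E, digits, b):
    significand = sum(Fraction(d, b ** k) for k, d in enumerate(digits))
    return (-1) ** s * Fraction(b) ** E * significand

# Example: radix 10, p = 3, the number -1.25e2 = -125
print(value(s=1, E=2, digits=[1, 2, 5], b=10))   # Fraction(-125, 1)
```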
Towards High Resolution Numerical Algorithms for Wave Dominated Physical Phenomena
2009-01-30
results are scaled as floating point operations per second, obtained by counting the number of floating point additions and multiplications in the...black horizontal line. Perhaps the most striking feature at first is the fact that the memory bandwidth measured for flux lifting transcends this...theoretical peak performance values. For a suitable CPU-limited workload, this means that a single workstation equipped with multiple GPUs can do work that
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-04
... to the point of origin. The restricted area will be marked by a lighted and signed floating buoy line... a signed floating buoy line without permission from the Supervisor of Shipbuilding, Conversion and...
Ayres, Daniel L; Darling, Aaron; Zwickl, Derrick J; Beerli, Peter; Holder, Mark T; Lewis, Paul O; Huelsenbeck, John P; Ronquist, Fredrik; Swofford, David L; Cummings, Michael P; Rambaut, Andrew; Suchard, Marc A
2012-01-01
Phylogenetic inference is fundamental to our understanding of most aspects of the origin and evolution of life, and in recent years, there has been a concentration of interest in statistical approaches such as Bayesian inference and maximum likelihood estimation. Yet, for large data sets and realistic or interesting models of evolution, these approaches remain computationally demanding. High-throughput sequencing can yield data for thousands of taxa, but scaling to such problems using serial computing often necessitates the use of nonstatistical or approximate approaches. The recent emergence of graphics processing units (GPUs) provides an opportunity to leverage their excellent floating-point computational performance to accelerate statistical phylogenetic inference. A specialized library for phylogenetic calculation would allow existing software packages to make more effective use of available computer hardware, including GPUs. Adoption of a common library would also make it easier for other emerging computing architectures, such as field programmable gate arrays, to be used in the future. We present BEAGLE, an application programming interface (API) and library for high-performance statistical phylogenetic inference. The API provides a uniform interface for performing phylogenetic likelihood calculations on a variety of compute hardware platforms. The library includes a set of efficient implementations and can currently exploit hardware including GPUs using NVIDIA CUDA, central processing units (CPUs) with Streaming SIMD Extensions and related processor supplementary instruction sets, and multicore CPUs via OpenMP. To demonstrate the advantages of a common API, we have incorporated the library into several popular phylogenetic software packages. The BEAGLE library is free open source software licensed under the Lesser GPL and available from http://beagle-lib.googlecode.com. An example client program is available as public domain software.
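The core quantity such a library evaluates is the per-site likelihood of a tree via Felsenstein's pruning recursion, a workload dominated by dense matrix-vector products, which is why GPUs and SIMD units help. The sketch below is a hedged NumPy illustration of that recursion for a two-taxon tree under the Jukes-Cantor model; it is not the BEAGLE API.

```python
# Hedged NumPy sketch (not the BEAGLE API): one site likelihood on a two-taxon
# tree via Felsenstein pruning under the Jukes-Cantor (JC69) model.
import numpy as np

def jc69_transition(t, mu=1.0):
    """Jukes-Cantor transition probability matrix for branch length t."""
    p_same = 0.25 + 0.75 * np.exp(-4.0 * mu * t / 3.0)
    p_diff = 0.25 - 0.25 * np.exp(-4.0 * mu * t / 3.0)
    return np.full((4, 4), p_diff) + np.eye(4) * (p_same - p_diff)

def tip_partial(state):
    v = np.zeros(4)
    v[state] = 1.0
    return v

# Observed states A (0) and C (1), branch lengths 0.1 and 0.2.
left = jc69_transition(0.1) @ tip_partial(0)
right = jc69_transition(0.2) @ tip_partial(1)
root_partial = left * right                  # pruning: elementwise product
site_likelihood = 0.25 * root_partial.sum()  # uniform root frequencies
print(site_likelihood)
```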
Satellite services system program plan
NASA Technical Reports Server (NTRS)
Hoffman, Stephen J.
1985-01-01
The purpose is to determine the potential for servicing from the Space Shuttle Orbiter and to assess NASA's role as the catalyst in bringing about routine on-orbit servicing. Specifically, this study seeks to determine what requirements, in terms of both funds and time, are needed to make the Shuttle Orbiter not only a transporter of spacecraft but a servicing vehicle for those spacecraft as well. The scope of this effort is to focus on the near-term development of a generic servicing capability. To make this capability truly generic and attractive requires that the customer's point of view be taken and transformed into a widely usable set of hardware. And to maintain a near-term advent of this capability requires that a minimal reliance be made on advanced technology. With this background and scope, this study will proceed through three general phases to arrive at the desired program costs and schedule. The first step will be to determine the servicing requirements of the user community. This will provide the basis for the second phase, which is to develop hardware concepts to meet these needs. Finally, a cost estimate will be made for each of the new hardware concepts and a phased hardware development plan will be established for the acquisition of these items based on the inputs obtained from the user community.
Libpsht - algorithms for efficient spherical harmonic transforms
NASA Astrophysics Data System (ADS)
Reinecke, M.
2011-02-01
Libpsht (or "library for performant spherical harmonic transforms") is a collection of algorithms for efficient conversion between spatial-domain and spectral-domain representations of data defined on the sphere. The package supports both transforms of scalars and spin-1 and spin-2 quantities, and can be used for a wide range of pixelisations (including HEALPix, GLESP, and ECP). It will take advantage of hardware features such as multiple processor cores and floating-point vector operations, if available. Even without this additional acceleration, the employed algorithms are among the most efficient (in terms of CPU time, as well as memory consumption) currently being used in the astronomical community. The library is written in strictly standard-conforming C90, ensuring portability to many different hard- and software platforms, and allowing straightforward integration with codes written in various programming languages like C, C++, Fortran, Python etc. Libpsht is distributed under the terms of the GNU General Public License (GPL) version 2 and can be downloaded from .
Libpsht: Algorithms for Efficient Spherical Harmonic Transforms
NASA Astrophysics Data System (ADS)
Reinecke, Martin
2010-10-01
Libpsht (or "library for Performing Spherical Harmonic Transforms") is a collection of algorithms for efficient conversion between spatial-domain and spectral-domain representations of data defined on the sphere. The package supports transforms of scalars as well as spin-1 and spin-2 quantities, and can be used for a wide range of pixelisations (including HEALPix, GLESP and ECP). It will take advantage of hardware features like multiple processor cores and floating-point vector operations, if available. Even without this additional acceleration, the employed algorithms are among the most efficient (in terms of CPU time as well as memory consumption) currently being used in the astronomical community. The library is written in strictly standard-conforming C90, ensuring portability to many different hard- and software platforms, and allowing straightforward integration with codes written in various programming languages like C, C++, Fortran, Python etc. Libpsht is distributed under the terms of the GNU General Public License (GPL) version 2. Development on this project has ended; its successor is libsharp (ascl:1402.033).
A Discussion of Using a Reconfigurable Processor to Implement the Discrete Fourier Transform
NASA Technical Reports Server (NTRS)
White, Michael J.
2004-01-01
This paper presents the design and implementation of the Discrete Fourier Transform (DFT) algorithm on a reconfigurable processor system. While highly applicable to many engineering problems, the DFT is an extremely computationally intensive algorithm. Consequently, the eventual goal of this work is to enhance the execution of a floating-point precision DFT algorithm by offloading the algorithm from the computing system. This computing system, within the context of this research, is a typical high performance desktop computer with an array of field programmable gate arrays (FPGAs). FPGAs are hardware devices that are configured by software to execute an algorithm. If it is desired to change the algorithm, the software is changed to reflect the modification, then downloaded to the FPGA, which is then itself modified. This paper will discuss the methodology for developing the DFT algorithm to be implemented on the FPGA. We will discuss the algorithm, the FPGA code effort, and the results to date.
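The computation being offloaded is the direct DFT, X[k] = sum over n of x[n] * exp(-2j*pi*k*n/N), whose O(N^2) inner products are the floating-point workload. The sketch below is a hedged software reference in Python, checked against NumPy's FFT; the FPGA implementation itself is hardware and is not shown.

```python
# Hedged reference sketch (the FPGA implementation is hardware, not Python):
# the direct DFT definition, checked against NumPy's FFT.
import numpy as np

def dft(x):
    N = len(x)
    n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)   # DFT matrix
    return W @ x

x = np.random.default_rng(1).normal(size=64)
assert np.allclose(dft(x), np.fft.fft(x))
```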
Floating electrode dielectrophoresis.
Golan, Saar; Elata, David; Orenstein, Meir; Dinnar, Uri
2006-12-01
In practice, dielectrophoresis (DEP) devices are based on micropatterned electrodes. When subjected to applied voltages, the electrodes generate nonuniform electric fields that are necessary for the DEP manipulation of particles. In this study, electrically floating electrodes are used in DEP devices. It is demonstrated that effective DEP forces can be achieved by using floating electrodes. Additionally, DEP forces generated by floating electrodes are different from DEP forces generated by excited electrodes. The floating electrodes' capabilities are explained theoretically by calculating the electric field gradients and demonstrated experimentally by using test-devices. The test-devices show that floating electrodes can be used to collect erythrocytes (red blood cells). DEP devices which contain many floating electrodes ought to have fewer connections to external signal sources. Therefore, the use of floating electrodes may considerably facilitate the fabrication and operation of DEP devices. It can also reduce device dimensions. However, the key point is that DEP devices can integrate excited electrodes fabricated by microtechnology processes and floating electrodes fabricated by nanotechnology processes. Such integration is expected to promote the use of DEP devices in the manipulation of nanoparticles.
A novel grounded to floating admittance converter with electronic control
NASA Astrophysics Data System (ADS)
Prasad, Dinesh; Ahmad, Javed; Srivastava, Mayank
2018-01-01
This article suggests a new grounded to floating admittance converter employing only two voltage differencing transconductance amplifiers (VDTAs). The proposed circuit can convert any arbitrary grounded admittance into a floating admittance with an electronically controllable scaling factor. The presented converter enjoys the following benefits: (1) no requirement of any additional passive element; (2) the scaling factor can be tuned electronically through the bias currents of the VDTAs; (3) no matching constraint is required; (4) low values of active/passive sensitivity indexes; and (5) excellent non-ideal behavior, indicating no deviation in circuit behavior even under non-ideal conditions. Application of the proposed configuration in the realization of a floating resistor and a floating capacitor has been presented, and the workability of these floating elements has been confirmed by active filter design examples. SPICE simulations have been performed to demonstrate the performance of the proposed circuits.
High-stability Shuttle pointing system
NASA Technical Reports Server (NTRS)
Van Riper, R.
1981-01-01
It was recognized that precision pointing provided by the Orbiter's attitude control system would not be good enough for Shuttle payload scientific experiments or certain Defense department payloads. The Annular Suspension Pointing System (ASPS) is being developed to satisfy these more exacting pointing requirements. The ASPS is a modular pointing system which consists of two principal parts, including an ASPS Gimbal System (AGS) which provides three conventional ball-bearing gimbals and an ASPS Vernier System (AVS) which magnetically isolates the payload. AGS performance requirements are discussed and an AGS system description is given. The overall AGS system consists of the mechanical hardware, sensors, electronics, and software. Attention is also given to system simulation and performance prediction, and support facilities.
Criticality as a Set-Point for Adaptive Behavior in Neuromorphic Hardware
Srinivasa, Narayan; Stepp, Nigel D.; Cruz-Albrecht, Jose
2015-01-01
Neuromorphic hardware is designed by drawing inspiration from biology to overcome limitations of current computer architectures while forging the development of a new class of autonomous systems that can exhibit adaptive behaviors. Several designs in the recent past are capable of emulating large scale networks but avoid complexity in network dynamics by minimizing the number of dynamic variables that are supported and tunable in hardware. We believe that this is due to the lack of a clear understanding of how to design self-tuning complex systems. It has been widely demonstrated that criticality appears to be the default state of the brain and manifests in the form of spontaneous scale-invariant cascades of neural activity. Experiment, theory and recent models have shown that neuronal networks at criticality demonstrate optimal information transfer, learning and information processing capabilities that affect behavior. In this perspective article, we argue that understanding how large scale neuromorphic electronics can be designed to enable emergent adaptive behavior will require an understanding of how networks emulated by such hardware can self-tune local parameters to maintain criticality as a set-point. We believe that such capability will enable the design of truly scalable intelligent systems using neuromorphic hardware that embrace complexity in network dynamics rather than avoiding it. PMID:26648839
33 CFR 144.01-15 - Alternates for life floats.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false Alternates for life floats. 144... for life floats. (a) Approved lifeboats, approved life rafts or approved inflatable life rafts may be used in lieu of approved life floats for either all or part of the capacity required. When either...
33 CFR 144.01-15 - Alternates for life floats.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 33 Navigation and Navigable Waters 2 2011-07-01 2011-07-01 false Alternates for life floats. 144... for life floats. (a) Approved lifeboats, approved life rafts or approved inflatable life rafts may be used in lieu of approved life floats for either all or part of the capacity required. When either...
Term Cancellations in Computing Floating-Point Gröbner Bases
NASA Astrophysics Data System (ADS)
Sasaki, Tateaki; Kako, Fujio
We discuss the term cancellation which makes the floating-point Gröbner basis computation unstable, and show that error accumulation is never negligible in our previous method. Then, we present a new method, which removes accumulated errors as far as possible by reducing matrices constructed from coefficient vectors by the Gaussian elimination. The method manifests amounts of term cancellations caused by the existence of approximate linearly dependent relations among input polynomials.
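The numerical issue driving this instability is easy to reproduce: a term that cancels exactly in rational arithmetic leaves a tiny spurious coefficient in floating point, and such near-zero "ghost" terms are what the matrix-based error removal targets. The snippet below is only a hedged illustration of that phenomenon, not the authors' Gröbner basis procedure.

```python
# Hedged illustration of the numerical issue behind unstable floating-point
# Groebner basis computation: a coefficient that cancels exactly in rational
# arithmetic survives as a tiny spurious term in floating point.
from fractions import Fraction

c_float = 0.1 * 3 - 0.3                              # should be zero
c_exact = Fraction(1, 10) * 3 - Fraction(3, 10)      # is exactly zero
print(c_float, c_exact)                              # ~5.55e-17 vs 0
```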
Common Pitfalls in F77 Code Conversion
2003-02-01
implementation versus another are the source of these errors rather than typography. It is well to use the practice of commenting-out original source file lines...identifier), every I in the format field must be replaced with f followed by an appropriate floating point format designator. Floating point numeric...helps even more. Finally, libraries are a major source of non-portability, with graphics libraries one of the chief culprits. We in Fusion
NASA Technical Reports Server (NTRS)
Crump, William J.; Janik, Daniel S.; Thomas, L. Dale
1990-01-01
U.S. space missions have to this point used water either made on board or carried from earth and discarded after use. For Space Station Freedom, long duration life support will include air and water recycling using a series of physical-chemical subsystems. The Environmental Control and Life Support System (ECLSS) designed for this application must be tested extensively at all stages of hardware maturity. Human test subjects are required to conduct some of these tests, and the risks associated with the use of development hardware must be addressed. Federal guidelines for protection of human subjects require careful consideration of risks and potential benefits by an Institutional Review Board (IRB) before and during testing. This paper reviews the ethical principles guiding this consideration, details the problems and uncertainties inherent in current hardware testing, and presents an incremental approach to risk assessment for ECLSS testing.
AN ADA LINEAR ALGEBRA PACKAGE MODELED AFTER HAL/S
NASA Technical Reports Server (NTRS)
Klumpp, A. R.
1994-01-01
This package extends the Ada programming language to include linear algebra capabilities similar to those of the HAL/S programming language. The package is designed for avionics applications such as Space Station flight software. In addition to the HAL/S built-in functions, the package incorporates the quaternion functions used in the Shuttle and Galileo projects, and routines from LINPAK that solve systems of equations involving general square matrices. Language conventions in this package follow those of HAL/S to the maximum extent practical and minimize the effort required for writing new avionics software and translating existing software into Ada. Valid numeric types in this package include scalar, vector, matrix, and quaternion declarations. (Quaternions are four-component vectors used in representing motion between two coordinate frames.) Single precision and double precision floating point arithmetic is available in addition to the standard double precision integer manipulation. Infix operators are used instead of function calls to define dot products, cross products, quaternion products, and mixed scalar-vector, scalar-matrix, and vector-matrix products. The package contains two generic programs: one for floating point, and one for integer. The actual component type is passed as a formal parameter to the generic linear algebra package. The procedures for solving systems of linear equations defined by general matrices include GEFA, GECO, GESL, and GIDI. The HAL/S functions include ABVAL, UNIT, TRACE, DET, INVERSE, TRANSPOSE, GET, PUT, FETCH, PLACE, and IDENTITY. This package is written in Ada (Version 1.2) for batch execution and is machine independent. The linear algebra software depends on nothing outside the Ada language except for a call to a square root function for floating point scalars (such as SQRT in the DEC VAX MATHLIB library). This program was developed in 1989, and is a copyrighted work with all copyright vested in NASA.
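The quaternion product mentioned above is the operation used to compose rotations between coordinate frames. The sketch below is a hedged Python rendering (the package itself is Ada) using the common Hamilton convention with the scalar component stored first; the HAL/S ordering may differ.

```python
# Hedged Python sketch (the package itself is Ada): Hamilton quaternion
# product with the scalar part first. Component-ordering conventions vary;
# this is one common choice, not necessarily the HAL/S one.
def quat_mul(p, q):
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

identity = (1.0, 0.0, 0.0, 0.0)
q = (0.7071067811865476, 0.7071067811865476, 0.0, 0.0)  # 90 deg about x
print(quat_mul(identity, q))  # composing with the identity returns q
```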
Physical implication of transition voltage in organic nano-floating-gate nonvolatile memories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Shun; Gao, Xu, E-mail: wangsd@suda.edu.cn, E-mail: gaoxu@suda.edu.cn; Zhong, Ya-Nan
High-performance pentacene-based organic field-effect transistor nonvolatile memories, using polystyrene as a tunneling dielectric and Au nanoparticles as a nano-floating-gate, show parallelogram-like transfer characteristics with a featured transition point. The transition voltage at the transition point corresponds to a threshold electric field in the tunneling dielectric, over which stored electrons in the nano-floating-gate will start to leak out. The transition voltage can be modulated depending on the bias configuration and device structure. For p-type active layers, the optimized transition voltage should be on the negative side of but close to the reading voltage, which can simultaneously achieve a high ON/OFF ratio and good memory retention.
Asynchronous Communication Scheme For Hypercube Computer
NASA Technical Reports Server (NTRS)
Madan, Herb S.
1988-01-01
Scheme devised for asynchronous-message communication system for Mark III hypercube concurrent-processor network. Network consists of up to 1,024 processing elements connected electrically as though they were at corners of 10-dimensional cube. Each node contains two Motorola 68020 processors along with Motorola 68881 floating-point processor utilizing up to 4 megabytes of shared dynamic random-access memory. Scheme intended to support applications requiring passage of both polled or solicited and unsolicited messages.
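The hypercube topology implied here has a simple addressing rule: each node is a 10-bit address, neighbors differ in exactly one bit, and the routing distance between two nodes is the Hamming distance of their addresses. The sketch below illustrates only that addressing rule; it is not the Mark III message-passing code.

```python
# Hedged sketch of 10-dimensional hypercube addressing (not the Mark III
# message-passing scheme): neighbors differ in one bit, and the number of
# routing hops between nodes is the Hamming distance of their addresses.
DIM = 10                                     # up to 2**10 = 1024 nodes

def neighbors(node):
    return [node ^ (1 << d) for d in range(DIM)]

def hops(a, b):
    return bin(a ^ b).count("1")

print(neighbors(0)[:4])       # [1, 2, 4, 8]
print(hops(0, 0b1111111111))  # 10 hops corner to corner
```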
Big Memory Elegance: HyperCard Information Processing and Desktop Publishing.
ERIC Educational Resources Information Center
Bitter, Gary G.; Gerson, Charles W., Jr.
1991-01-01
Discusses hardware requirements, functions, and applications of five information processing and desktop publishing software packages for the Macintosh: HyperCard, PageMaker, Cricket Presents, PowerPoint, and Adobe Illustrator. Benefits of these programs for schools are considered. (MES)
NASA Astrophysics Data System (ADS)
Valdez, T.; Chao, Y.; Davis, R. E.; Jones, J.
2012-12-01
This talk will describe a new self-powered profiling float that can perform fast sampling over the upper ocean for long durations in support of a mesoscale ocean observing system in the Western North Pacific. The current state-of-the-art profiling floats can provide several hundreds profiles for the upper ocean every ten days. To quantify the role of the upper ocean in modulating the development of Typhoons requires at least an order of magnitude reduction for the sampling interval. With today's profiling float and battery technology, a fast sampling of one day or even a few hours will reduce the typical lifetime of profiling floats from years to months. Interactions between the ocean and typhoons often involves mesoscale eddies and fronts, which require a dense array of floats to reveal the 3-dimensional structure. To measure the mesoscale ocean over a large area like the Western North Pacific therefore requires a new technology that enables fast sampling and long duration at the same time. Harvesting the ocean renewable energy associated with the vertical temperature differentials has the potential to power profiling floats with fast sampling over long durations. Results from the development and deployment of a prototype self-powered profiling float (known as SOLO-TREC) will be presented. With eight hours sampling in the upper 500 meters, the upper ocean temperature and salinity reveal pronounced high frequency variations. Plans to use the SOLO-TREC technology in support of a dense array of fast sampling profiling floats in the Western North Pacific will be discussed.
Computationally efficient control allocation
NASA Technical Reports Server (NTRS)
Durham, Wayne (Inventor)
2001-01-01
A computationally efficient method for calculating near-optimal solutions to the three-objective, linear control allocation problem is disclosed. The control allocation problem is that of distributing the effort of redundant control effectors to achieve some desired set of objectives. The problem is deemed linear if control effectiveness is affine with respect to the individual control effectors. The optimal solution is that which exploits the collective maximum capability of the effectors within their individual physical limits. Computational efficiency is measured by the number of floating-point operations required for solution. The method presented returned optimal solutions in more than 90% of the cases examined; non-optimal solutions returned by the method were typically much less than 1% different from optimal and the errors tended to become smaller than 0.01% as the number of controls was increased. The magnitude of the errors returned by the present method was much smaller than those that resulted from either pseudo inverse or cascaded generalized inverse solutions. The computational complexity of the method presented varied linearly with increasing numbers of controls; the number of required floating point operations increased from 5.5 to seven times faster than did the minimum-norm solution (the pseudoinverse), and at about the same rate as did the cascaded generalized inverse solution. The computational requirements of the method presented were much better than that of previously described facet-searching methods, which increase in proportion to the square of the number of controls.
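For context, the minimum-norm baseline referred to above solves B u = m_des for the effector vector u with the pseudoinverse and then clips to the effector limits, which is why it can fail to reach attainable moments near those limits. The sketch below shows only that baseline with an illustrative effectiveness matrix; it is not the patented near-optimal method.

```python
# Hedged sketch of the pseudoinverse (minimum-norm) baseline, not the patented
# facet-based method: solve B @ u = m_des, then clip to effector limits.
import numpy as np

B = np.array([[1.0, 0.5, 0.0, -0.5],        # illustrative effectiveness matrix
              [0.0, 1.0, 1.0,  0.0],        # (3 moments x 4 effectors)
              [0.2, 0.0, 0.3,  1.0]])
m_des = np.array([0.4, 0.6, 0.2])           # desired moment vector

u = np.linalg.pinv(B) @ m_des               # minimum-norm solution
u_clipped = np.clip(u, -1.0, 1.0)           # effector position limits
print(u, B @ u_clipped)
```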
NASA Technical Reports Server (NTRS)
Pan, Jing; Levitt, Karl N.; Cohen, Gerald C.
1991-01-01
Discussed here is work to formally specify and verify a floating point coprocessor based on the MC68881. The HOL verification system developed at Cambridge University was used. The coprocessor consists of two independent units: the bus interface unit used to communicate with the CPU and the arithmetic processing unit used to perform the actual calculation. Reasoning about the interaction and synchronization among processes using higher order logic is demonstrated.
Experimental high-speed network
NASA Astrophysics Data System (ADS)
McNeill, Kevin M.; Klein, William P.; Vercillo, Richard; Alsafadi, Yasser H.; Parra, Miguel V.; Dallas, William J.
1993-09-01
Many existing local area networking protocols currently applied in medical imaging were originally designed for relatively low-speed, low-volume networking. These protocols utilize small packet sizes appropriate for text based communication. Local area networks of this type typically provide raw bandwidth under 125 MHz. These older network technologies are not optimized for the low delay, high data traffic environment of a totally digital radiology department. Some current implementations use point-to-point links when greater bandwidth is required. However, the use of point-to-point communications for a total digital radiology department network presents many disadvantages. This paper describes work on an experimental multi-access local area network called XFT. The work includes the protocol specification, and the design and implementation of network interface hardware and software. The protocol specifies the Physical and Data Link layers (OSI layers 1 & 2) for a fiber-optic based token ring providing a raw bandwidth of 500 MHz. The protocol design and implementation of the XFT interface hardware includes many features to optimize image transfer and provide flexibility for additional future enhancements which include: a modular hardware design supporting easy portability to a variety of host system buses, a versatile message buffer design providing 16 MB of memory, and the capability to extend the raw bandwidth of the network to 3.0 GHz.
Terrain modeling for real-time simulation
NASA Astrophysics Data System (ADS)
Devarajan, Venkat; McArthur, Donald E.
1993-10-01
There are many applications, such as pilot training, mission rehearsal, and hardware-in-the-loop simulation, which require the generation of realistic images of terrain and man-made objects in real-time. One approach to meeting this requirement is to drape photo-texture over a planar polygon model of the terrain. The real time system then computes, for each pixel of the output image, the address in a texture map based on the intersection of the line-of-sight vector with the terrain model. High quality image generation requires that the terrain be modeled with a fine mesh of polygons while hardware costs limit the number of polygons which may be displayed for each scene. The trade-off between these conflicting requirements must be made in real-time because it depends on the changing position and orientation of the pilot's eye point or simulated sensor. The traditional approach is to develop a data base consisting of multiple levels of detail (LOD) and then to select LODs for display as a function of range. This approach could lead to both anomalies in the displayed scene and inefficient use of resources. An approach has been developed in which the terrain is modeled with a set of nested polygons and organized as a tree with each node corresponding to a polygon. This tree is pruned to select the optimum set of nodes for each eye-point position. As the point of view moves, the visibility of some nodes drops below the limit of perception and those nodes may be deleted, while new points must be added in regions near the eye point. An analytical model has been developed to determine the number of polygons required for display. This model leads to quantitative performance measures of the triangulation algorithm which are useful for optimizing system performance with a limited display capability.
Design and Optimization of Floating Drug Delivery System of Acyclovir
Kharia, A. A.; Hiremath, S. N.; Singhai, A. K.; Omray, L. K.; Jain, S. K.
2010-01-01
The purpose of the present work was to design and optimize floating drug delivery systems of acyclovir using psyllium husk and hydroxypropylmethylcellulose K4M as the polymers and sodium bicarbonate as a gas generating agent. The tablets were prepared by wet granulation method. A 3^2 full factorial design was used for optimization of drug release profile. The amounts of psyllium husk (X1) and hydroxypropylmethylcellulose K4M (X2) were selected as independent variables. The times required for 50% (t50%) and 70% (t70%) drug dissolution were selected as dependent variables. All the designed nine batches of formulations were evaluated for hardness, friability, weight variation, drug content uniformity, swelling index, in vitro buoyancy, and in vitro drug release profile. All formulations had floating lag time below 3 min and constantly floated on dissolution medium for more than 24 h. Validity of the developed polynomial equation was verified by designing two check point formulations (C1 and C2). The closeness of predicted and observed values for t50% and t70% indicates validity of derived equations for the dependent variables. These studies indicated that the proper balance between psyllium husk and hydroxypropylmethylcellulose K4M can produce a drug dissolution profile similar to the predicted dissolution profile. The optimized formulations followed Higuchi's kinetics while the drug release mechanism was found to be anomalous type, controlled by diffusion through the swollen matrix. PMID:21694992
Centroid estimation for a Shack-Hartmann wavefront sensor based on stream processing.
Kong, Fanpeng; Polo, Manuel Cegarra; Lambert, Andrew
2017-08-10
When the center of gravity is used to estimate the centroid of the spot in a Shack-Hartmann wavefront sensor, the measurement is corrupted by photon and detector noise. Parameters like window size often require careful optimization to balance the noise error, dynamic range, and linearity of the response coefficient under different photon fluxes. The center of gravity also needs to be substituted by the correlation method for extended sources. We propose a centroid estimator based on stream processing, where the center of gravity calculation window floats with the incoming pixel from the detector. In comparison with conventional methods, we show that the proposed estimator simplifies the choice of optimized parameters, provides a unit linear coefficient response, and reduces the influence of background and noise. It is shown that the stream-based centroid estimator also works well for extended sources of limited size. A hardware implementation of the proposed estimator is discussed.
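For reference, the conventional windowed center-of-gravity estimate that the stream-based method is compared against can be sketched as follows; the window size, threshold handling, and test spot are illustrative assumptions, not the proposed estimator.

    import numpy as np

    def cog_centroid(spot, threshold=0.0):
        """Center-of-gravity centroid of a (windowed) subaperture image.

        spot      : 2-D array of pixel intensities
        threshold : background level subtracted before the weighted sum
        """
        img = np.clip(spot - threshold, 0.0, None)
        total = img.sum()
        if total == 0:
            return np.nan, np.nan
        ys, xs = np.indices(img.shape)
        return (xs * img).sum() / total, (ys * img).sum() / total

    # Illustrative spot: a small Gaussian placed off-center in an 8x8 window
    y0, x0 = 3.4, 4.1
    ys, xs = np.indices((8, 8))
    spot = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / 2.0)
    print(cog_centroid(spot))   # close to (4.1, 3.4)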
On the Floating Point Performance of the i860 Microprocessor
NASA Technical Reports Server (NTRS)
Lee, King; Kutler, Paul (Technical Monitor)
1997-01-01
The i860 microprocessor is a pipelined processor that can deliver two double precision floating point results every clock. It is being used in the Touchstone project to develop a teraflop computer by the year 2000. With such high computational capabilities it was expected that memory bandwidth would limit performance on many kernels. Measured performance of three kernels was less than what memory bandwidth limitations alone would predict. This paper develops a model that explains the discrepancy in terms of memory latencies and points to some problems involved in moving data from memory to the arithmetic pipelines.
Hossain, Md Selim; Saeedi, Ehsan; Kong, Yinan
2017-01-01
In this paper, we propose a novel parallel architecture for fast hardware implementation of elliptic curve point multiplication (ECPM), which is the key operation of an elliptic curve cryptography processor. The point multiplication over binary fields is synthesized on both FPGA and ASIC technology by designing fast elliptic curve group operations in Jacobian projective coordinates. A novel combined point doubling and point addition (PDPA) architecture is proposed for group operations to achieve high speed and low hardware requirements for ECPM. It has been implemented over the binary field which is recommended by the National Institute of Standards and Technology (NIST). The proposed ECPM supports two Koblitz and random curves for the key sizes 233 and 163 bits. For group operations, a finite-field arithmetic operation, e.g. multiplication, is designed on a polynomial basis. The delay of a 233-bit point multiplication is only 3.05 and 3.56 μs, in a Xilinx Virtex-7 FPGA, for Koblitz and random curves, respectively, and 0.81 μs in an ASIC 65-nm technology, which are the fastest hardware implementation results reported in the literature to date. In addition, a 163-bit point multiplication is also implemented in FPGA and ASIC for fair comparison which takes around 0.33 and 0.46 μs, respectively. The area-time product of the proposed point multiplication is very low compared to similar designs. The performance (1/(Area × Time) = 1/AT) and Area × Time × Energy (ATE) product of the proposed design are far better than the most significant studies found in the literature.
GBU-X bounding requirements for highly flexible munitions
NASA Astrophysics Data System (ADS)
Bagby, Patrick T.; Shaver, Jonathan; White, Reed; Cafarelli, Sergio; Hébert, Anthony J.
2017-04-01
This paper will present the results of an investigation into requirements for existing software and hardware solutions for open digital communication architectures that support weapon subsystem integration. The underlying requirements of such a communication architecture would be to achieve the lowest latency possible at a reasonable cost point with respect to the mission objective of the weapon. The determination of the latency requirements of the open architecture software and hardware were derived through the use of control system and stability margins analyses. Studies were performed on the throughput and latency of different existing communication transport methods. The two architectures that were tested in this study include Data Distribution Service (DDS) and Modular Open Network Architecture (MONARCH). This paper defines what levels of latency can be achieved with current technology and how this capability may translate to future weapons. The requirements moving forward within communications solutions are discussed.
Formal verification of mathematical software
NASA Technical Reports Server (NTRS)
Sutherland, D.
1984-01-01
Methods are investigated for formally specifying and verifying the correctness of mathematical software (software which uses floating point numbers and arithmetic). Previous work in the field was reviewed. A new model of floating point arithmetic called the asymptotic paradigm was developed and formalized. Two different conceptual approaches to program verification, the classical Verification Condition approach and the more recently developed Programming Logic approach, were adapted to use the asymptotic paradigm. These approaches were then used to verify several programs; the programs chosen were simplified versions of actual mathematical software.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lloyd, G. Scott
This floating-point arithmetic library contains a software implementation of Universal Numbers (unums) as described by John Gustafson [1]. The unum format is a superset of IEEE 754 floating point with several advantages. Computing with unums provides more accurate answers without rounding errors, underflow or overflow. In contrast to fixed-sized IEEE numbers, a variable number of bits can be used to encode unums. This allows numbers with only a few significant digits or with a small dynamic range to be represented more compactly.
NASA Technical Reports Server (NTRS)
Manos, P.; Turner, L. R.
1972-01-01
Approximations which can be evaluated with precision using floating-point arithmetic are presented. The particular set of approximations thus far developed are for the function TAN and the functions of USASI FORTRAN excepting SQRT and EXPONENTIATION. These approximations are, furthermore, specialized to particular forms which are especially suited to a computer with a small memory, in that all of the approximations can share one general purpose subroutine for the evaluation of a polynomial in the square of the working argument.
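The idea of one shared polynomial-evaluation subroutine applied to the square of the working argument can be sketched as below; the Horner routine and the truncated Taylor coefficients for sine are illustrative assumptions, not the coefficients of the original routines.

    def poly_in_square(coeffs, x):
        """Evaluate c0 + c1*z + c2*z**2 + ... with z = x*x (Horner's rule).

        One such routine can be shared by many elementary-function
        approximations of the form x * P(x*x) or P(x*x)."""
        z = x * x
        acc = 0.0
        for c in reversed(coeffs):
            acc = acc * z + c
        return acc

    # Illustrative: truncated Taylor coefficients for sin(x) ~ x * P(x*x)
    SIN_COEFFS = [1.0, -1.0 / 6, 1.0 / 120, -1.0 / 5040, 1.0 / 362880]

    def sin_approx(x):
        return x * poly_in_square(SIN_COEFFS, x)

    print(sin_approx(0.5))   # ~0.4794, close to math.sin(0.5)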
An integrated circuit floating point accumulator
NASA Technical Reports Server (NTRS)
Goldsmith, T. C.
1977-01-01
Goddard Space Flight Center has developed a large scale integrated circuit (type 623) which can perform pulse counting, storage, floating point compression, and serial transmission, using a single monolithic device. Counts of 27 or 19 bits can be converted to transmitted values of 12 or 8 bits respectively. Use of the 623 has resulted in substantial savings in weight, volume, and dollar resources on at least 11 scientific instruments to be flown on 4 NASA spacecraft. The design, construction, and application of the 623 are described.
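The style of floating point compression described (a wide count reduced to a short exponent/mantissa code) might look roughly like the following sketch; the 5-bit exponent / 7-bit mantissa split and the truncating round are assumptions for illustration, not the documented encoding of the type 623 device.

    def compress_count(count, m_bits=7, e_bits=5):
        """Compress a wide counter into a short (exponent, mantissa) code.

        The mantissa keeps the most significant m_bits of the count and the
        exponent records how many low-order bits were discarded."""
        shift = max(count.bit_length() - m_bits, 0)
        assert shift < (1 << e_bits), "count too wide for this code"
        return (shift << m_bits) | (count >> shift)

    def expand_count(code, m_bits=7):
        """Approximate inverse: the discarded low-order bits are lost."""
        shift = code >> m_bits
        mantissa = code & ((1 << m_bits) - 1)
        return mantissa << shift

    c = 99818538                      # arbitrary 27-bit count
    code = compress_count(c)          # fits in 12 bits
    print(c, code, expand_count(code), code.bit_length())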
Floating-point function generation routines for 16-bit microcomputers
NASA Technical Reports Server (NTRS)
Mackin, M. A.; Soeder, J. F.
1984-01-01
Several computer subroutines have been developed that interpolate three types of nonanalytic functions: univariate, bivariate, and map. The routines use data in floating-point form. However, because they are written for use on a 16-bit Intel 8086 system with an 8087 mathematical coprocessor, they execute as fast as routines using data in scaled integer form. Although all of the routines are written in assembly language, they have been implemented in a modular fashion so as to facilitate their use with high-level languages.
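A minimal sketch of the univariate case (table look-up with linear interpolation between floating-point breakpoints) is given below; the breakpoint table is hypothetical, and the original 8086/8087 assembly routines are of course not reproduced.

    import bisect

    def interp_univariate(xs, ys, x):
        """Linear interpolation of a tabulated function y(x).

        xs : sorted list of breakpoints
        ys : function values at the breakpoints
        Values outside the table are clamped to the end points."""
        if x <= xs[0]:
            return ys[0]
        if x >= xs[-1]:
            return ys[-1]
        i = bisect.bisect_right(xs, x) - 1
        frac = (x - xs[i]) / (xs[i + 1] - xs[i])
        return ys[i] + frac * (ys[i + 1] - ys[i])

    # Hypothetical map of, say, corrected fan speed vs. throttle position
    xs = [0.0, 0.2, 0.5, 0.8, 1.0]
    ys = [0.0, 15.0, 42.0, 78.0, 100.0]
    print(interp_univariate(xs, ys, 0.65))   # 60.0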
Multiple video sequences synchronization during minimally invasive surgery
NASA Astrophysics Data System (ADS)
Belhaoua, Abdelkrim; Moreau, Johan; Krebs, Alexandre; Waechter, Julien; Radoux, Jean-Pierre; Marescaux, Jacques
2016-03-01
Hybrid operating rooms are an important development in the medical ecosystem. They allow integrating, in the same procedure, the advantages of radiological imaging and surgical tools. However, one of the challenges faced by clinical engineers is to support the connectivity and interoperability of medical-electrical point-of-care devices. A system that could enable plug-and-play connectivity and interoperability for medical devices would improve patient safety, save hospitals time and money, and provide data for electronic medical records. In this paper, we propose a hardware platform dedicated to collecting and synchronizing multiple videos captured from medical equipment in real time. The final objective is to integrate augmented reality technology into an operating room (OR) in order to assist the surgeon during a minimally invasive operation. To the best of our knowledge, there is no prior work dealing with hardware-based video synchronization for augmented reality applications in the OR. Whilst hardware synchronization methods can embed a temporal value, a so-called timestamp, into each sequence on-the-fly and require no post-processing, they require specialized hardware. However, the design of our hardware is simple and generic. This approach was adopted and implemented in this work and its performance is evaluated by comparison to state-of-the-art methods.
2012-03-01
Description: A class that handles forming the JAUS header portion of JAUS messages; jaus_hdr_msg is included as a data member in all JAUS messages. Member function scaleToInt16 (float val, float low, float high) [related] scales the value val, which is bounded by low and high, to a signed short. It shifts the center point of low and high to zero, shifts val accordingly, and then scales val up by the ratio of the range of short values to the range of values from high to low.
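Reconstructed from the description above, the scaling performed by scaleToInt16 might look like the following Python sketch; it is an interpretation of the garbled text, not the actual JAUS library code, and the symmetric clamp to +/-32767 is an assumption.

    def scale_to_int16(val, low, high):
        """Map val in [low, high] to a signed 16-bit integer.

        The center of [low, high] is shifted to zero and the value is scaled
        by the ratio of the int16 span to the (high - low) span."""
        center = (low + high) / 2.0
        scale = 65534.0 / (high - low)          # int16 span / value span
        scaled = int(round((val - center) * scale))
        return max(-32767, min(32767, scaled))  # clamp to a symmetric range

    print(scale_to_int16(0.0, -1.0, 1.0))   # 0
    print(scale_to_int16(1.0, -1.0, 1.0))   # 32767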
NASA Astrophysics Data System (ADS)
Hut, Rolf; Bogaard, Thom
2017-04-01
Throwing something in a river and seeing how fast it floats downstream is the first thing that every hydrologist does when encountering a new river. Using a collection of floats allows estimation of gauge surface water velocity and dispersion characteristics. To use floats over long (hundreds of kilometers) stretches of river requires either a crew that keeps an eye on the floats (labor intensive) or high-tech floats that upload their location at regular intervals, such that they can be retrieved at the end of the experiment. GPS floats with communication units have been custom built by scientists before. Connecting GPS units to GSM modems used to require deep knowledge of micro-electronics and network protocols. In this work we present a version that is built using only off-the-shelf electronics and requires no deep knowledge of either micro-electronics or network protocols. The new cellular-enabled Particle Electron development board made it possible to connect a Sparkfun OpenLog (SD-card based logger) to a GPS tracker with no soldering and little programming. Because scientists can program the device themselves, settings like sample time can be adapted to the needs of specific experiments and additional sensors can be easily added. When writing the GPS location every minute to SD and reporting every fifteen minutes online, our logger can run for three days on a single 2200 mAh LiPo battery (provided with the Particle Electron). The cost of components for our logger is less than 150. The durability of our GPS loggers will be tested during a field campaign at the end of January 2017, where 15 floats will float down the Irrawaddy river over a length of more than 200 km during two days.
Space Generic Open Avionics Architecture (SGOAA) standard specification
NASA Technical Reports Server (NTRS)
Wray, Richard B.; Stovall, John R.
1993-01-01
The purpose of this standard is to provide an umbrella set of requirements for applying the generic architecture interface model to the design of a specific avionics hardware/software system. This standard defines a generic set of system interface points to facilitate identification of critical interfaces and establishes the requirements for applying appropriate low level detailed implementation standards to those interface points. The generic core avionics system and processing architecture models provided herein are robustly tailorable to specific system applications and provide a platform upon which the interface model is to be applied.
libvaxdata: VAX data format conversion routines
Baker, Lawrence M.
2005-01-01
libvaxdata provides a collection of routines for converting numeric data (integer and floating-point) to and from the formats used on a Digital Equipment Corporation (DEC) VAX 32-bit minicomputer (Brunner, 1991). Since the VAX numeric data formats are inherited from those used on a DEC PDP-11 16-bit minicomputer, these routines can be used to convert PDP-11 data as well. VAX numeric data formats are also the default data formats used on DEC Alpha 64-bit minicomputers running OpenVMS. The libvaxdata routines are callable from Fortran or C. They require that the caller use two's-complement format for integer data and IEEE 754 format (ANSI/IEEE, 1985) for floating-point data. They also require that the 'natural' size of a C int type (integer) is 32 bits. That is the case for most modern 32-bit and 64-bit computer systems. Nevertheless, you may wish to consult the Fortran or C compiler documentation on your system to be sure. Some Fortran compilers support conversion of VAX numeric data on-the-fly when reading or writing unformatted files, either as a compiler option or a run-time I/O option. This feature may be easier to use than the libvaxdata routines. Consult the Fortran compiler documentation on your system to determine if this alternative is available to you. (Note: DEC later became Compaq Computer Corporation, now Hewlett-Packard Company.)
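To illustrate the kind of conversion libvaxdata performs, here is a hedged sketch of one common way to map a VAX F_floating bit pattern to an IEEE 754 value (swap the two 16-bit words, reinterpret as IEEE, then divide by 4 to absorb the different exponent bias and fraction convention); it ignores reserved operands and the D/G/H formats, and it is not the library's own code.

    import struct

    def vax_f_to_ieee(raw):
        """Convert 4 bytes of VAX F_floating (memory order) to a Python float."""
        word0 = int.from_bytes(raw[0:2], "little")   # sign, exponent, high fraction bits
        word1 = int.from_bytes(raw[2:4], "little")   # low fraction bits
        if (word0 >> 7) & 0xFF == 0:                 # zero exponent field
            return 0.0                               # true zero (reserved operands ignored)
        bits = (word0 << 16) | word1                 # IEEE-style bit layout after word swap
        ieee = struct.unpack(">f", bits.to_bytes(4, "big"))[0]
        return ieee / 4.0                            # adjust for excess-128 bias and 0.1f fraction

    # VAX F_floating representation of 1.0: exponent 129, zero fraction
    print(vax_f_to_ieee(bytes([0x80, 0x40, 0x00, 0x00])))   # 1.0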
NASA Astrophysics Data System (ADS)
Fukushima, Toshio
2012-04-01
By extending the exponent of floating point numbers with an additional integer as the power index of a large radix, we compute fully normalized associated Legendre functions (ALF) by recursion without underflow problems. The new method enables us to evaluate ALFs of extremely high degree such as 2^32 = 4,294,967,296, which corresponds to around 1 cm resolution on the Earth's surface. By limiting the application of exponent extension to a few working variables in the recursion, choosing a suitable large power of 2 as the radix, and embedding the contents of the basic arithmetic procedure of floating point numbers with the exponent extension directly in the program computing the recurrence formulas, we achieve the evaluation of ALFs in the double-precision environment at the cost of around a 10% increase in computational time per single ALF. This formulation realizes meaningful execution of the spherical harmonic synthesis and/or analysis of arbitrary degree and order.
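The exponent-extension idea can be sketched as a pair (x, ix) representing x * BIG**ix for a large power-of-two radix; the radix 2**900 and the normalization thresholds below are illustrative assumptions rather than the values used in the published method.

    # Represent a value as x * BIG**ix to avoid underflow/overflow in long recursions.
    BIG = 2.0 ** 900        # large power-of-two radix (illustrative choice)
    BIGI = 2.0 ** -900
    BIGS = 2.0 ** 450       # upper normalization threshold for the significand
    BIGSI = 2.0 ** -450     # lower normalization threshold

    def xnorm(x, ix):
        """Keep the significand x within [BIGSI, BIGS) in magnitude."""
        ax = abs(x)
        if ax >= BIGS:
            return x * BIGI, ix + 1
        if ax != 0.0 and ax < BIGSI:
            return x * BIG, ix - 1
        return x, ix

    def xmul(x1, i1, x2, i2):
        """Multiply two extended-exponent numbers."""
        return xnorm(x1 * x2, i1 + i2)

    # A product that would underflow a plain double stays representable:
    x, ix = xnorm(1e-200, 0)
    for _ in range(4):
        y, iy = xnorm(1e-200, 0)
        x, ix = xmul(x, ix, y, iy)
    print(x, ix)   # together these represent 1e-1000, far below the double underflow limit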
Control system development for a 1 MW(e) solar thermal power plant
NASA Technical Reports Server (NTRS)
Daubert, E. R.; Bergthold, F. M., Jr.; Fulton, D. G.
1981-01-01
The point-focusing distributed receiver power plant considered consists of a number of power modules delivering power to a central collection point. Each power module contains a parabolic dish concentrator with a closed-cycle receiver/turbine/alternator assembly. Currently, a single-module prototype plant is under construction. The major control system tasks required are related to concentrator pointing control, receiver temperature control, and turbine speed control. Attention is given to operational control details, control hardware and software, and aspects of CRT output display.
Towards Batched Linear Solvers on Accelerated Hardware Platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haidar, Azzam; Dong, Tingzing Tim; Tomov, Stanimire
2015-01-01
As hardware evolves, an increasingly effective approach to develop energy efficient, high-performance solvers is to design them to work on many small and independent problems. Indeed, many applications already need this functionality, especially for GPUs, which are known to be currently about four to five times more energy efficient than multicore CPUs for every floating-point operation. In this paper, we describe the development of the main one-sided factorizations: LU, QR, and Cholesky; that are needed for a set of small dense matrices to work in parallel. We refer to such algorithms as batched factorizations. Our approach is based on representing the algorithms as a sequence of batched BLAS routines for GPU-contained execution. Note that this is similar in functionality to the LAPACK and the hybrid MAGMA algorithms for large-matrix factorizations. But it is different from a straightforward approach, whereby each of the GPU's symmetric multiprocessors factorizes a single problem at a time. We illustrate how our performance analysis, together with the profiling and tracing tools, guided the development of batched factorizations to achieve up to 2-fold speedup and 3-fold better energy efficiency compared to our highly optimized batched CPU implementations based on the MKL library on a two-socket Intel Sandy Bridge server. Compared to a batched LU factorization featured in NVIDIA's CUBLAS library for GPUs, we achieve up to 2.5-fold speedup on the K40 GPU.
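The batching idea (many small, independent factorizations instead of one large one) can be illustrated with a hedged NumPy sketch; the batch and matrix sizes are arbitrary, and the actual work described above uses batched BLAS on the GPU rather than a Python loop. NumPy's cholesky even accepts stacked matrices directly; the explicit loop is only to make the batching visible.

    import numpy as np

    def batched_cholesky(batch):
        """Factor a batch of small SPD matrices independently.

        batch : array of shape (nbatch, n, n)
        Returns the lower-triangular factors, one per matrix."""
        return np.array([np.linalg.cholesky(a) for a in batch])

    rng = np.random.default_rng(0)
    nbatch, n = 1000, 8                              # many small problems
    m = rng.standard_normal((nbatch, n, n))
    spd = m @ m.transpose(0, 2, 1) + n * np.eye(n)   # make each matrix SPD
    factors = batched_cholesky(spd)
    err = np.max(np.abs(factors @ factors.transpose(0, 2, 1) - spd))
    print(factors.shape, err)                        # (1000, 8, 8), small residual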
Differential porosimetry and permeametry for random porous media.
Hilfer, R; Lemmer, A
2015-07-01
Accurate determination of geometrical and physical properties of natural porous materials is notoriously difficult. Continuum multiscale modeling has provided carefully calibrated realistic microstructure models of reservoir rocks with floating point accuracy. Previous measurements using synthetic microcomputed tomography (μ-CT) were based on extrapolation of resolution-dependent properties for discrete digitized approximations of the continuum microstructure. This paper reports continuum measurements of volume and specific surface with full floating point precision. It also corrects an incomplete description of rotations in earlier publications. More importantly, the methods of differential permeametry and differential porosimetry are introduced as precision tools. The continuum microstructure chosen to exemplify the methods is a homogeneous, carefully calibrated and characterized model for Fontainebleau sandstone. The sample has been publicly available since 2010 on the worldwide web as a benchmark for methodical studies of correlated random media. High-precision porosimetry gives the volume and internal surface area of the sample with floating point accuracy. Continuum results with floating point precision are compared to discrete approximations. Differential porosities and differential surface area densities allow geometrical fluctuations to be discriminated from discretization effects and numerical noise. Differential porosimetry and Fourier analysis reveal subtle periodic correlations. The findings uncover small oscillatory correlations with a period of roughly 850μm, thus implying that the sample is not strictly stationary. The correlations are attributed to the deposition algorithm that was used to ensure the grain overlap constraint. Differential permeabilities are introduced and studied. Differential porosities and permeabilities provide scale-dependent information on geometry fluctuations, thereby allowing quantitative error estimates.
An Adaptive Prediction-Based Approach to Lossless Compression of Floating-Point Volume Data.
Fout, N; Ma, Kwan-Liu
2012-12-01
In this work, we address the problem of lossless compression of scientific and medical floating-point volume data. We propose two prediction-based compression methods that share a common framework, which consists of a switched prediction scheme wherein the best predictor out of a preset group of linear predictors is selected. Such a scheme is able to adapt to different datasets as well as to varying statistics within the data. The first method, called APE (Adaptive Polynomial Encoder), uses a family of structured interpolating polynomials for prediction, while the second method, which we refer to as ACE (Adaptive Combined Encoder), combines predictors from previous work with the polynomial predictors to yield a more flexible, powerful encoder that is able to effectively decorrelate a wide range of data. In addition, in order to facilitate efficient visualization of compressed data, our scheme provides an option to partition floating-point values in such a way as to provide a progressive representation. We compare our two compressors to existing state-of-the-art lossless floating-point compressors for scientific data, with our data suite including both computer simulations and observational measurements. The results demonstrate that our polynomial predictor, APE, is comparable to previous approaches in terms of speed but achieves better compression rates on average. ACE, our combined predictor, while somewhat slower, is able to achieve the best compression rate on all datasets, with significantly better rates on most of the datasets.
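A much-reduced sketch of switched, prediction-based decorrelation for floats is given below (not the APE/ACE coders themselves): each value is predicted either by the previous sample or by linear extrapolation, the better predictor is chosen per value, and the residual is the XOR of the IEEE-754 bit patterns, which an entropy coder (omitted here) would then compress.

    import struct

    def f2u(x):
        """Bit pattern of x as a 32-bit unsigned integer (float32)."""
        return struct.unpack("<I", struct.pack("<f", x))[0]

    def switched_residuals(values):
        """Per-value predictor selection and bitwise residuals (no entropy coder)."""
        out = []
        for i, v in enumerate(values):
            p_prev = values[i - 1] if i >= 1 else 0.0                        # predictor 0
            p_lin = 2 * values[i - 1] - values[i - 2] if i >= 2 else p_prev  # predictor 1
            r_prev = f2u(v) ^ f2u(p_prev)
            r_lin = f2u(v) ^ f2u(p_lin)
            if r_lin < r_prev:
                out.append((1, r_lin))     # (selector bit, residual bits)
            else:
                out.append((0, r_prev))
        return out

    data = [1.00, 1.01, 1.02, 1.04, 1.05]   # smoothly varying samples
    for sel, res in switched_residuals(data):
        print(sel, f"{res:032b}")           # residuals carry many leading zeros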
Float processing of high-temperature complex silicate glasses and float baths used for same
NASA Technical Reports Server (NTRS)
Cooper, Reid Franklin (Inventor); Cook, Glen Bennett (Inventor)
2000-01-01
A float glass process for production of high melting temperature glasses utilizes a binary metal alloy bath having the combined properties of a low melting point, low reactivity with oxygen, low vapor pressure, and minimal reactivity with the silicate glasses being formed. The metal alloy of the float medium is exothermic with a solvent metal that does not readily form an oxide. The vapor pressure of both components in the alloy is low enough to prevent deleterious vapor deposition, and there is minimal chemical and interdiffusive interaction of either component with silicate glasses under the float processing conditions. Alloys having the desired combination of properties include compositions in which gold, silver or copper is the solvent metal and silicon, germanium or tin is the solute, preferably in eutectic or near-eutectic compositions.
ICRF-Induced Changes in Floating Potential and Ion Saturation Current in the EAST Divertor
NASA Astrophysics Data System (ADS)
Perkins, Rory; Hosea, Joel; Taylor, Gary; Bertelli, Nicola; Kramer, Gerrit; Qin, Chengming; Wang, Liang; Yang, Jichan; Zhang, Xinjun
2017-10-01
Injection of waves in the ion cyclotron range of frequencies (ICRF) into a tokamak can potentially raise the plasma potential via RF rectification. Probes are affected both by changes in plasma potential and by RF-averaging of the probe characteristic, with the latter tending to drop the floating potential. We present the effect of ICRF heating on divertor Langmuir probes in the EAST experiment. Over a scan of the outer gap, probes connected to the antennas show increases in floating potential with ICRF, but probes in between the outer-vessel strike point and the flux surface tangent to the antenna show decreased floating potential. This behaviour is investigated using field-line mapping. Preliminary results show that midplane gas puffing can suppress the strong influence of ICRF on the probes' floating potential.
Lithium-ion drifting: Application to the study of point defects in floating-zone silicon
NASA Technical Reports Server (NTRS)
Walton, J. T.; Wong, Y. K.; Zulehner, W.
1997-01-01
The use of lithium-ion (Li(+)) drifting to study the properties of point defects in p-type Floating-Zone (FZ) silicon crystals is reported. The Li(+) drift technique is used to detect the presence of vacancy-related defects (D defects) in certain p-type FZ silicon crystals. SUPREM-IV modeling suggests that the silicon point defect diffusivities are considerably higher than those commonly accepted, but are in reasonable agreement with values recently proposed. These results demonstrate the utility of Li(+) drifting in the study of silicon point defect properties in p-type FZ crystals. Finally, a straightforward measurement of the Li(+) compensation depth is shown to yield estimates of the vacancy-related defect concentration in p-type FZ crystals.
Wang, Jun; Cui, Xiao; Ni, Huan-Huan; Huang, Chun-Shui; Zhou, Cui-Xia; Wu, Ji; Shi, Jun-Chao; Wu, Yi
2013-04-01
To compare the efficacy difference in the treatment of shoulder pain in post-stroke shoulder-hand syndrome among floating acupuncture, oral administration of western medicine and local fumigation of Chinese herbs. Ninety cases of post-stroke shoulder-hand syndrome (stage I) were randomized into a floating acupuncture group, a western medicine group and a local Chinese herbs fumigation group, 30 cases in each one. In the floating acupuncture group, two obvious tender points were detected on the shoulder and the site 80-100 mm inferior to each tender point was taken as the inserting point and stimulated with floating needling technique. In the western medicine group, mobic 7.5 mg was prescribed for oral administration. In the local Chinese herbs fumigation group, the formula for activating blood circulation and relaxing tendon was used for local fumigation. All the patients in three groups received rehabilitation training. The floating acupuncture, oral administration of western medicine, local Chinese herbs fumigation and rehabilitation training were given once a day respectively in corresponding group and the cases were observed for 1 month. The visual analogue scale (VAS) and Takagishi shoulder joint function assessment were adopted to evaluate the dynamic change of the patients with shoulder pain before and after treatment in three groups. The modified Barthel index was used to evaluate the dynamic change of daily life activity of the patients in three groups. With floating acupuncture, shoulder pain was relieved and the daily life activity was improved in the patients with post-stroke shoulder-hand syndrome, which was superior to the oral administration of western medicine and local Chinese herbs fumigation (P < 0.01). With local Chinese herbs fumigation, the improvement of shoulder pain was superior to the oral administration of western medicine. The difference in the improvement of daily life activity was not significant statistically between the local Chinese herbs fumigation and oral administration of western medicine, the efficacy was similar between these two therapies (P > 0.05). The floating acupuncture relieves shoulder pain of the patients with post-stroke shoulder-hand syndrome promptly and effectively, and the effects on shoulder pain and the improvements of daily life activity are superior to that of the oral administration of western medicine and local Chinese herbs fumigation.
Future float zone development in industry
NASA Technical Reports Server (NTRS)
Sandfort, R. M.
1980-01-01
The present industrial requirements for float zone silicon are summarized. Developments desired by the industry in the future are reported. The five most significant problems faced today by the float zone crystal growth method in industry are discussed. They are economic, large diameter, resistivity uniformity, control of carbon, and swirl defects.
40 CFR 265.1085 - Standards: Tanks.
Code of Federal Regulations, 2010 CFR
2010-07-01
... controls shall use one of the following tanks: (1) A fixed-roof tank equipped with an internal floating... equipped with an external floating roof in accordance with the requirements specified in paragraph (f) of... controls air pollutant emissions from a tank using a fixed-roof with an internal floating roof shall meet...
Applying Scientific Principles to Resolve Student Misconceptions
ERIC Educational Resources Information Center
Yin, Yue
2012-01-01
Misconceptions about sinking and floating phenomena are some of the most challenging to overcome (Yin 2005), possibly because explaining sinking and floating requires students to understand challenging topics such as density, force, and motion. Two scientific principles are typically used in U.S. science curricula to explain sinking and floating:…
Programming languages and compiler design for realistic quantum hardware.
Chong, Frederic T; Franklin, Diana; Martonosi, Margaret
2017-09-13
Quantum computing sits at an important inflection point. For years, high-level algorithms for quantum computers have shown considerable promise, and recent advances in quantum device fabrication offer hope of utility. A gap still exists, however, between the hardware size and reliability requirements of quantum computing algorithms and the physical machines foreseen within the next ten years. To bridge this gap, quantum computers require appropriate software to translate and optimize applications (toolflows) and abstraction layers. Given the stringent resource constraints in quantum computing, information passed between layers of software and implementations will differ markedly from in classical computing. Quantum toolflows must expose more physical details between layers, so the challenge is to find abstractions that expose key details while hiding enough complexity.
Expert Systems on Multiprocessor Architectures. Volume 4. Technical Reports
1991-06-01
Floated-Current-Time -> The time that this function is called in user time units, expressed as a floating point number. Halt-Poligon arrests the... default a statistics file will be printed out, if it can be. To prevent this make No-Statistics true. Unhalt-Poligon unarrests the process in which the
76 FR 19290 - Safety Zone; Commencement Bay, Tacoma, WA
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-07
... the following points Latitude 47[deg]17'38'' N, Longitude 122[deg]28'43'' W; thence south easterly to... protruding from the shoreline along Ruston Way. Floating markers will be placed by the sponsor of the event... rectangle protruding from the shoreline along Ruston Way. Floating markers will be placed by the sponsor of...
Technical Evaluation Motor 3 (TEM-3)
NASA Technical Reports Server (NTRS)
Garecht, Diane
1989-01-01
A primary objective of the technical evaluation motor program is to recover the case, igniter and nozzle hardware for use on the redesigned solid rocket motor flight program. Two qualification objectives were addressed and met on TEM-3. The Nylok thread locking device of the 1U100269-03 leak check port plug and the 1U52295-04 safe and arm utilizing Krytox grease on the barrier-booster shaft O-rings were both certified. All inspection and instrumentation data indicate that the TEM-3 static test firing conducted 23 May 1989 was successful. The test was conducted at ambient conditions with the exception of the field joints (set point of 121 F, with a minimum of 87 F at the sensors), igniter joint (set point at 122 F with a minimum of 87 F at sensors) and case-to-nozzle joint (set point at 114 F with a minimum of 87 F at sensors). Ballistics performance values were within specification requirements. Nozzle performance was nominal with typical erosion. The nozzle and the case joint temperatures were maintained at the heaters controlling set points while electrical power was supplied. The water and the CO2 quench systems prevented damage to the metal hardware. All other test equipment performed as planned, contributing to a successful motor firing. All indications are that the test was a success, and all expected hardware will be refurbished for the RSRM program.
Oil/gas collector/separator for underwater oil leaks
Henning, Carl D.
1993-01-01
An oil/gas collector/separator for recovery of oil leaking, for example, from an offshore or underwater oil well. The separator is floated over the point of the leak and tethered in place so as to receive oil/gas floating, or forced under pressure, toward the water surface from either a broken or leaking oil well casing, line, or sunken ship. The separator is provided with a downwardly extending skirt to contain the oil/gas which floats or is forced upward into a dome wherein the gas is separated from the oil/water, with the gas being flared (burned) at the top of the dome, and the oil is separated from water and pumped to a point of use. Since the density of oil is less than that of water it can be easily separated from any water entering the dome.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stewart, G.; Lackner, M.; Haid, L.
2013-07-01
With the push towards siting wind turbines farther offshore due to higher wind quality and less visibility, floating offshore wind turbines, which can be located in deep water, are becoming an economically attractive option. The International Electrotechnical Commission's (IEC) 61400-3 design standard covers fixed-bottom offshore wind turbines, but there are a number of new research questions that need to be answered to modify these standards so that they are applicable to floating wind turbines. One issue is the appropriate simulation length needed for floating turbines. This paper will discuss the results from a study assessing the impact of simulation length on the ultimate and fatigue loads of the structure, and will address uncertainties associated with changing the simulation length for the analyzed floating platform. Recommendations of required simulation length based on load uncertainty will be made and compared to current simulation length requirements.
1986-09-01
source of the module/system. Source options are: battery, gas, cartridge, valve, and miscellaneous costs. NAMELIST OPERAT is used to compile the... hardware costs allocated to transportation for packing. TF1 = Initial transportation factor. WEIGHT = Shipping weight of total system. XSUM = System float... CD(6,I)+CD(9,I). AROC(7,I) = Replenishment spares by year. CD(4,I) = Valve replacement cost by year. CD(5,I) = Cartridge replacement cost by year
Y-MP floating point and Cholesky factorization
NASA Technical Reports Server (NTRS)
Carter, Russell
1991-01-01
The floating point arithmetics implemented in the Cray 2 and Cray Y-MP computer systems are nearly identical, but large scale computations performed on the two systems have exhibited significant differences in accuracy. The difference in accuracy is analyzed for the Cholesky factorization algorithm, and it is found that the source of the difference is the subtract magnitude operation of the Cray Y-MP. Results from numerical experiments for a range of problem sizes are presented, and an efficient method for improving the accuracy of the factorization obtained on the Y-MP is presented.
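For reference, the factorization under discussion is the standard Cholesky decomposition A = L L^T; the scalar sketch below shows where subtraction behaviour matters, namely in accumulating a[i][j] minus the partial inner product. The test matrix is arbitrary.

    import math

    def cholesky(a):
        """Return lower-triangular L with A = L * L^T (A symmetric positive definite).

        The accuracy of the accumulation a[i][j] - sum(L[i][k]*L[j][k]) is where
        the subtraction semantics of the hardware show up."""
        n = len(a)
        L = [[0.0] * n for _ in range(n)]
        for i in range(n):
            for j in range(i + 1):
                s = a[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
                L[i][j] = math.sqrt(s) if i == j else s / L[j][j]
        return L

    A = [[4.0, 2.0, 2.0],
         [2.0, 5.0, 3.0],
         [2.0, 3.0, 6.0]]
    print(cholesky(A))   # [[2,0,0],[1,2,0],[1,1,2]]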
Paranoia.Ada: Sample output reports
NASA Technical Reports Server (NTRS)
1986-01-01
Paranoia.Ada is a program to diagnose floating point arithmetic in the context of the Ada programming language. The program evaluates the quality of a floating point arithmetic implementation with respect to the proposed IEEE Standards P754 and P854. Paranoia.Ada is derived from the original BASIC programming language version of Paranoia. It replicates in Ada the test algorithms originally implemented in BASIC and adheres to the evaluation criteria established by W. M. Kahan. Paranoia.Ada incorporates a major structural redesign and employs applicable Ada architectural and stylistic features.
NASA Technical Reports Server (NTRS)
Irvine, R.; Van Alstine, R.
1979-01-01
The paper compares and describes the advantages of dry tuned gyros over floated gyros for space applications. Attention is given to describing the Teledyne SDG-5 gyro and the second-generation NASA Standard Dry Rotor Inertial Reference Unit (DRIRU II). Certain tests which were conducted to evaluate the SDG-5 and DRIRU II for specific mission requirements are outlined, and their results are compared with published test results on other gyro types. Performance advantages are highlighted.
1987-02-01
landmark set, and for computing a plan as an ordered list of recursively executable sub-goals. The key to the search is to use the landmark database... Directed Object Extraction Using a Combined Region and Line Representation, Proc. of the Workshop on Computer Vision: Representation and Con... computational capability as well, such as the floating point calculations required in this application. One such PE design which made an effort to meet these
Brückner, Hans-Peter; Spindeldreier, Christian; Blume, Holger
2013-01-01
A common approach for high accuracy sensor fusion based on 9D inertial measurement unit data is Kalman filtering. State of the art floating-point filter algorithms differ in their computational complexity; nevertheless, real-time operation on a low-power microcontroller at high sampling rates is not possible. This work presents algorithmic modifications to reduce the computational demands of a two-step minimum order Kalman filter. Furthermore, the required bit-width of a fixed-point filter version is explored. For evaluation, real-world data captured using an Xsens MTx inertial sensor are used. Changes in computational latency and orientation estimation accuracy due to the proposed algorithmic modifications and fixed-point number representation are evaluated in detail on a variety of processing platforms enabling on-board processing on wearable sensor platforms.
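The fixed-point exploration can be illustrated by quantizing filter quantities to a Q-format; the 16 fractional bits below are an arbitrary illustration, not the bit-width the study arrives at.

    FRAC_BITS = 16                    # Q*.16 format (illustrative choice)
    SCALE = 1 << FRAC_BITS

    def to_fixed(x):
        """Quantize a real value to the fixed-point representation."""
        return int(round(x * SCALE))

    def to_float(q):
        """Convert a fixed-point value back to a float."""
        return q / SCALE

    def fixed_mul(a, b):
        """Multiply two fixed-point values and rescale the product."""
        return (a * b) >> FRAC_BITS

    # Error introduced by the representation for one gain * measurement product
    gain, meas = 0.123456, 9.81
    exact = gain * meas
    approx = to_float(fixed_mul(to_fixed(gain), to_fixed(meas)))
    print(exact, approx, abs(exact - approx))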
NASA Astrophysics Data System (ADS)
Li, W.; Shigeta, K.; Hasegawa, K.; Li, L.; Yano, K.; Tanaka, S.
2017-09-01
Recently, laser-scanning technology, especially mobile mapping systems (MMSs), has been applied to measure 3D urban scenes. Thus, it has become possible to simulate a traditional cultural event in a virtual space constructed using measured point clouds. In this paper, we take as an example the festival float procession of the Gion Festival, which has a long history in Kyoto City, Japan. The city government plans to revive the original procession route, which is narrow and not used at present. For the revival, it is important to know whether a festival float collides with houses, billboards, electric wires or other objects along the original route. Therefore, in this paper, we propose a method for visualizing the collisions of point cloud objects. The advantageous features of our method are (1) a see-through visualization with a correct depth feel that is helpful to robustly determine the collision areas, (2) the ability to visualize areas of high collision risk as well as real collision areas, and (3) the ability to highlight target visualized areas by increasing the point densities there.
Fan Database and Web-tool for Choosing Quieter Spaceflight Fans
NASA Technical Reports Server (NTRS)
Allen, Christopher S.; Burnside, Nathan J.
2007-01-01
One critical aspect of designing spaceflight hardware is the selection of fans to provide the necessary cooling. With efforts to minimize cost and the tendency to be conservative with the amount of cooling provided, it is easy to choose an overpowered fan. One impact of this is that the fan uses more energy than is necessary. The more significant impact, however, is that the hardware produces much more acoustic noise than if an optimal fan were chosen. Choosing the right fan for a specific hardware application is no simple task. It requires knowledge of cooling requirements and various fan performance characteristics as well as knowledge of the aerodynamic losses of the hardware in which the fan is to be installed. Knowledge of the acoustic emissions of each fan as a function of operating condition is also required in order to choose a quieter fan for a given design point. The purpose of this paper is to describe a database and design-tool that have been developed to aid spaceflight hardware developers in choosing a fan for their application based on aerodynamic performance and reduced acoustic emissions. This web-based tool provides a limited amount of fan data, provides a method for selecting a fan based on its projected operating point, and also provides a method for comparing and contrasting aerodynamic performance and acoustic data from different fans. Drill-down techniques are used to display details of the spectral noise characteristics of the fan at specific operating conditions. The fan aerodynamic and acoustic data were acquired at Ames Research Center in the Experimental Aero-Physics Branch's Anechoic Chamber. Acoustic data were acquired according to ANSI Standard S12.11-1987, "Method for the Measurement of Noise Emitted by Small Air-Moving Devices." One significant improvement made to this technique included automation that allows for a significant increase in flow-rate resolution. The web-tool was developed at Johnson Space Center and is based on the web-development application, SEQUEL, which includes graphics and drill-down capabilities. This paper describes the type and amount of data taken for the fans, gives examples of this data, and describes the data-tool and how it can be used to choose quieter fans for use in spaceflight hardware.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-29
... $100 million public float requirement to those companies, unless there is only very limited trading...-held shares (``public float'') of $40 million at the time of listing. All other companies must have a... value of the company's offering to demonstrate the company's compliance with the applicable public float...
17 CFR 50.4 - Classes of swaps required to be cleared.
Code of Federal Regulations, 2014 CFR
2014-04-01
... Fixed-to-floating swap class Currency U.S. dollar (USD) Euro (EUR) Sterling (GBP) Yen (JPY). Floating.... Conditional Notional Amounts No No No No. Specification Basis swap class Currency U.S. dollar (USD) Euro (EUR... agreement class Currency U.S. dollar (USD) Euro (EUR) Sterling (GBP) Yen (JPY). Floating Rate Indexes LIBOR...
Comparison between multi-constellation ambiguity-fixed PPP and RTK for maritime precise navigation
NASA Astrophysics Data System (ADS)
Tegedor, Javier; Liu, Xianglin; Ørpen, Ole; Treffers, Niels; Goode, Matthew; Øvstedal, Ola
2015-06-01
In order to achieve high-accuracy positioning, either Real-Time Kinematic (RTK) or Precise Point Positioning (PPP) techniques can be used. While RTK normally delivers higher accuracy with shorter convergence times, PPP has been an attractive technology for maritime applications, as it delivers uniform positioning performance without the direct need of a nearby reference station. Traditional PPP has been based on ambiguity-float solutions using the GPS and Glonass constellations. However, the addition of new satellite systems, such as Galileo and BeiDou, and the possibility of fixing integer carrier-phase ambiguities (PPP-AR) make it possible to increase PPP accuracy. In this article, a performance assessment has been carried out between RTK, PPP and PPP-AR, using GNSS data collected from two antennas installed on a ferry navigating in Oslo (Norway). RTK solutions have been generated using short, medium and long baselines (up to 290 km). For the generation of PPP-AR solutions, Uncalibrated Hardware Delays (UHDs) for GPS, Galileo and BeiDou have been estimated using reference stations in Oslo and Onsala. The performance of RTK and multi-constellation PPP and PPP-AR is presented.
Livermore Compiler Analysis Loop Suite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hornung, R. D.
2013-03-01
LCALS is designed to evaluate compiler optimizations and performance of a variety of loop kernels and loop traversal software constructs. Some of the loop kernels are pulled directly from "Livermore Loops Coded in C", developed at LLNL (see item 11 below for details of earlier code versions). The older suites were used to evaluate floating-point performance of hardware platforms prior to porting larger application codes. The LCALS suite is geared toward assessing C++ compiler optimizations and platform performance related to SIMD vectorization, OpenMP threading, and advanced C++ language features. LCALS contains 20 of 24 loop kernels from the older Livermore Loop suites, plus various others representative of loops found in current production application codes at LLNL. The latter loops emphasize more diverse loop constructs and data access patterns than the others, such as multi-dimensional difference stencils. The loops are included in a configurable framework, which allows control of compilation, loop sampling for execution timing, which loops are run, and their lengths. It generates timing statistics for analyzing and comparing variants of individual loops. Also, it is easy to add loops to the suite as desired.
Rath, N; Kato, S; Levesque, J P; Mauel, M E; Navratil, G A; Peng, Q
2014-04-01
Fast, digital signal processing (DSP) has many applications. Typical hardware options for performing DSP are field-programmable gate arrays (FPGAs), application-specific integrated DSP chips, or general purpose personal computer systems. This paper presents a novel DSP platform that has been developed for feedback control on the HBT-EP tokamak device. The system runs all signal processing exclusively on a Graphics Processing Unit (GPU) to achieve real-time performance with latencies below 8 μs. Signals are transferred into and out of the GPU using PCI Express peer-to-peer direct-memory-access transfers without involvement of the central processing unit or host memory. Tests were performed on the feedback control system of the HBT-EP tokamak using forty 16-bit floating point inputs and outputs each and a sampling rate of up to 250 kHz. Signals were digitized by a D-TACQ ACQ196 module, processing done on an NVIDIA GTX 580 GPU programmed in CUDA, and analog output was generated by D-TACQ AO32CPCI modules.
Does size and buoyancy affect the long-distance transport of floating debris?
NASA Astrophysics Data System (ADS)
Ryan, Peter G.
2015-08-01
Floating persistent debris, primarily made from plastic, disperses long distances from source areas and accumulates in oceanic gyres. However, biofouling can increase the density of debris items to the point where they sink. Buoyancy is related to item volume, whereas fouling is related to surface area, so small items (which have high surface area to volume ratios) should start to sink sooner than large items. Empirical observations off South Africa support this prediction: moving offshore from coastal source areas there is an increase in the size of floating debris, an increase in the proportion of highly buoyant items (e.g. sealed bottles, floats and foamed plastics), and a decrease in the proportion of thin items such as plastic bags and flexible packaging which have high surface area to volume ratios. Size-specific sedimentation rates may be one reason for the apparent paucity of small plastic items floating in the world’s oceans.
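The surface-area-to-volume argument can be made concrete with a simple calculation: an item of volume V and density rho_item, fouled by a layer of thickness t and density rho_fouling, starts to sink once its mean density exceeds that of seawater, giving a critical thickness proportional to V/A. The densities and item geometries below are rough assumed values for illustration only.

    import math

    def critical_fouling_thickness(volume, area, rho_item,
                                   rho_fouling=1300.0, rho_sea=1025.0):
        """Fouling-layer thickness (m) at which a floating item starts to sink.

        Solves (rho_item*V + rho_fouling*A*t) / (V + A*t) = rho_sea for t."""
        return volume * (rho_sea - rho_item) / (area * (rho_fouling - rho_sea))

    # Hypothetical items of the same plastic (rho ~ 950 kg/m^3):
    # a roughly 1-litre solid sphere vs. a thin 30 x 30 cm, 50 micron film.
    r = 0.062
    sphere = (4.0 / 3.0 * math.pi * r**3, 4.0 * math.pi * r**2)
    film = (0.3 * 0.3 * 50e-6, 2 * 0.3 * 0.3)
    for name, (V, A) in [("sphere", sphere), ("film", film)]:
        print(name, critical_fouling_thickness(V, A, 950.0))
    # The sphere tolerates millimetres of fouling; the film sinks after microns.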
A hardware-oriented concurrent TZ search algorithm for High-Efficiency Video Coding
NASA Astrophysics Data System (ADS)
Doan, Nghia; Kim, Tae Sung; Rhee, Chae Eun; Lee, Hyuk-Jae
2017-12-01
High-Efficiency Video Coding (HEVC) is the latest video coding standard, in which the compression performance is double that of its predecessor, the H.264/AVC standard, while the video quality remains unchanged. In HEVC, the test zone (TZ) search algorithm is widely used for integer motion estimation because it effectively searches the good-quality motion vector with a relatively small amount of computation. However, the complex computation structure of the TZ search algorithm makes it difficult to implement it in the hardware. This paper proposes a new integer motion estimation algorithm which is designed for hardware execution by modifying the conventional TZ search to allow parallel motion estimations of all prediction unit (PU) partitions. The algorithm consists of the three phases of zonal, raster, and refinement searches. At the beginning of each phase, the algorithm obtains the search points required by the original TZ search for all PU partitions in a coding unit (CU). Then, all redundant search points are removed prior to the estimation of the motion costs, and the best search points are then selected for all PUs. Compared to the conventional TZ search algorithm, experimental results show that the proposed algorithm significantly decreases the Bjøntegaard Delta bitrate (BD-BR) by 0.84%, and it also reduces the computational complexity by 54.54%.
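A highly simplified sketch of the shared-search-point idea (diamond-pattern zonal candidates, de-duplication across PU predictors, best-cost selection) is given below; it is not the HEVC TZ search itself, and the cost function is a stand-in supplied by the caller.

    def diamond_points(center, stride):
        """Four diamond-pattern candidates around a center at a given stride."""
        cx, cy = center
        return [(cx + stride, cy), (cx - stride, cy),
                (cx, cy + stride), (cx, cy - stride)]

    def zonal_search(predictors, cost, max_stride=32):
        """Evaluate a de-duplicated zonal candidate set built from several predictors.

        predictors : list of (x, y) starting motion vectors, e.g. one per PU partition
        cost       : caller-supplied function (x, y) -> distortion cost (e.g. SAD)
        Returns the best candidate and its cost over the merged point set."""
        candidates = set()
        for p in predictors:
            candidates.add(p)
            stride = 1
            while stride <= max_stride:
                candidates.update(diamond_points(p, stride))
                stride *= 2
        costs = {pt: cost(*pt) for pt in candidates}   # each point evaluated only once
        best = min(costs, key=costs.get)
        return best, costs[best]

    # Toy cost: L1 distance to a hypothetical true motion vector (7, -3)
    best, c = zonal_search([(0, 0), (4, -2)], lambda x, y: abs(x - 7) + abs(y + 3))
    print(best, c)   # a candidate two steps from (7, -3), cost 2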
NASA Astrophysics Data System (ADS)
Li, Jun; Qin, Qiming; Xie, Chao; Zhao, Yue
2012-10-01
The update frequency of digital road maps influences the quality of road-dependent services. However, digital road maps surveyed by probe vehicles or extracted from remotely sensed images still have a long updating cycle and their cost remains high. With GPS technology and wireless communication technology maturing and their cost decreasing, floating car technology has been used in traffic monitoring and management, and the dynamic positioning data from floating cars have become a new data source for updating road maps. In this paper, we aim to update digital road maps using the floating car data from China's National Commercial Vehicle Monitoring Platform, and present an incremental road network extraction method suited to the platform's GPS data, which have a low sampling frequency and cover a large area. Based on both the spatial and semantic relationships between a trajectory point and its associated road segment, the method classifies each trajectory point, and then merges every trajectory point into the candidate road network through the adding or modifying process according to its type. The road network is gradually updated until all trajectories have been processed. Finally, this method is applied in the updating process of major roads in North China and the experimental results reveal that it can accurately derive geometric information of roads in various scenes. This paper provides a highly efficient, low-cost approach to updating digital road maps.
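The per-point decision in the incremental update can be sketched as a distance test against the nearest candidate road segment; the 30 m threshold and the segments below are assumptions for illustration, and the semantic (heading and road-class) checks described in the paper are omitted.

    import math

    def point_segment_distance(p, a, b):
        """Shortest distance from point p to segment a-b (planar coordinates)."""
        (px, py), (ax, ay), (bx, by) = p, a, b
        dx, dy = bx - ax, by - ay
        seg_len2 = dx * dx + dy * dy
        if seg_len2 == 0.0:
            return math.hypot(px - ax, py - ay)
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
        return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

    def classify_point(p, segments, threshold=30.0):
        """Label a GPS trajectory point relative to the candidate road network."""
        d = min(point_segment_distance(p, a, b) for a, b in segments)
        return "matched (modify existing road)" if d <= threshold else "unmatched (add new road)"

    roads = [((0.0, 0.0), (1000.0, 0.0)), ((1000.0, 0.0), (1000.0, 800.0))]
    print(classify_point((400.0, 12.0), roads))    # matched
    print(classify_point((400.0, 250.0), roads))   # unmatched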
33 CFR 162.130 - Connecting waters from Lake Huron to Lake Erie; general rules.
Code of Federal Regulations, 2010 CFR
2010-07-01
... vessel astern, alongside, or by pushing ahead; and (iii) Each dredge and floating plant. (4) The traffic... towing another vessel astern, alongside or by pushing ahead; and (iv) Each dredge and floating plant. (c... Captain of the Port of Detroit, Michigan. Detroit River means the connecting waters from Windmill Point...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-10
...]37[min]10.0[sec] W; thence easterly along the Marinette Marine Corporation pier to the point of origin. The restricted area will be marked by a lighted and signed floating boat barrier. (b) The... floating boat barrier without permission from the United States Navy, Supervisor of Shipbuilding Gulf Coast...
Flight performance of Skylab attitude and pointing control system
NASA Technical Reports Server (NTRS)
Chubb, W. B.; Kennel, H. F.; Rupp, C. C.; Seltzer, S. M.
1975-01-01
The Skylab attitude and pointing control system (APCS) requirements are briefly reviewed, and the way in which they were altered during the prelaunch phase of development is noted. The actual flight mission (including mission alterations during flight) is described. The serious hardware failures that occurred, beginning during ascent through the atmosphere, are also described. The APCS's ability to overcome these failures and meet mission changes is presented. The large around-the-clock support effort on the ground is discussed. Salient design points and software flexibility that should afford pertinent experience for future spacecraft attitude and pointing control system designs are included.
Optical communication for space missions
NASA Technical Reports Server (NTRS)
Fitzmaurice, M.
1991-01-01
Activities performed at NASA/GSFC (Goddard Space Flight Center) related to direct detection optical communications for space applications are discussed. The following subject areas are covered: (1) requirements for optical communication systems (data rates and channel quality; spatial acquisition; fine tracking and pointing; and transmit point-ahead correction); (2) component testing and development (laser diodes performance characterization and life testing; and laser diode power combining); (3) system development and simulations (The GSFC pointing, acquisition and tracking system; hardware description; preliminary performance analysis; and high data rate transmitter/receiver systems); and (4) proposed flight demonstration of optical communications.
DFT algorithms for bit-serial GaAs array processor architectures
NASA Technical Reports Server (NTRS)
Mcmillan, Gary B.
1988-01-01
Systems and Processes Engineering Corporation (SPEC) has developed an innovative array processor architecture for computing Fourier transforms and other commonly used signal processing algorithms. This architecture is designed to extract the highest possible array performance from state-of-the-art GaAs technology. SPEC's architectural design includes a high performance RISC processor implemented in GaAs, along with a Floating Point Coprocessor and a unique Array Communications Coprocessor, also implemented in GaAs technology. Together, these data processors represent the latest in technology, both from an architectural and implementation viewpoint. SPEC has examined numerous algorithms and parallel processing architectures to determine the optimum array processor architecture. SPEC has developed an array processor architecture with integral communications ability to provide maximum node connectivity. The Array Communications Coprocessor embeds communications operations directly in the core of the processor architecture. A Floating Point Coprocessor architecture has been defined that utilizes Bit-Serial arithmetic units, operating at very high frequency, to perform floating point operations. These Bit-Serial devices reduce the device integration level and complexity to a level compatible with state-of-the-art GaAs device technology.
A Floating Cylinder on an Unbounded Bath
NASA Astrophysics Data System (ADS)
Chen, Hanzhe; Siegel, David
2018-03-01
In this paper, we reconsider a circular cylinder horizontally floating on an unbounded reservoir in a gravitational field directed downwards, which was studied by Bhatnagar and Finn (Phys Fluids 18(4):047103, 2006). We follow their approach but with some modifications. We establish the relation between the total energy E_T relative to the undisturbed state and the total force F_T, that is, F_T = -dE_T/dh, where h is the height of the center of the cylinder relative to the undisturbed fluid level. There is a monotone relation between h and the wetting angle φ_0. We study the number of equilibria, the floating configurations and their stability for all parameter values. We find that the system admits at most two equilibrium points for arbitrary contact angle γ; the one with smaller φ_0 is stable and the one with larger φ_0 is unstable. Since the one-sided solution can be translated horizontally, the fluid interfaces may intersect. We show that the stable equilibrium point never lies in the intersection region, while the unstable equilibrium point may lie in the intersection region.
40 CFR 63.120 - Storage vessel provisions-procedures to determine compliance.
Code of Federal Regulations, 2010 CFR
2010-07-01
... with § 63.119(b) of this subpart (storage vessel equipped with a fixed roof and internal floating roof) or with § 63.119(d) of this subpart (storage vessel equipped with an external floating roof converted to an internal floating roof), the owner or operator shall comply with the requirements in paragraphs...
40 CFR 60.693-2 - Alternative standards for oil-water separators.
Code of Federal Regulations, 2010 CFR
2010-07-01
...-water separators. (a) An owner or operator may elect to construct and operate a floating roof on an oil... requirements of this subpart which meets the following specifications. (1) Each floating roof shall be equipped... the liquid between the wall of the separator and the floating roof. A mechanical shoe seal means a...
Time assignment system and its performance aboard the Hitomi satellite
NASA Astrophysics Data System (ADS)
Terada, Yukikatsu; Yamaguchi, Sunao; Sugimoto, Shigenobu; Inoue, Taku; Nakaya, Souhei; Murakami, Maika; Yabe, Seiya; Oshimizu, Kenya; Ogawa, Mina; Dotani, Tadayasu; Ishisaki, Yoshitaka; Mizushima, Kazuyo; Kominato, Takashi; Mine, Hiroaki; Hihara, Hiroki; Iwase, Kaori; Kouzu, Tomomi; Tashiro, Makoto S.; Natsukari, Chikara; Ozaki, Masanobu; Kokubun, Motohide; Takahashi, Tadayuki; Kawakami, Satoko; Kasahara, Masaru; Kumagai, Susumu; Angelini, Lorella; Witthoeft, Michael
2018-01-01
Fast timing capability in x-ray observations of astrophysical objects is one of the key properties of the ASTRO-H (Hitomi) mission. Absolute timing accuracies of 350 μs or 35 μs are required to achieve the nominal scientific goals or to study fast variability of specific sources, respectively. The satellite carries a GPS receiver to obtain accurate time information, which is distributed from the central onboard computer through the large and complex SpaceWire network. The details of the hardware and software design of the time system are described. In the distribution of the time information, propagation delays and jitters affect the timing accuracy. Six other items identified within the timing system also contribute to the absolute time error. These error items were measured and checked on the ground to ensure that the time error budgets meet the mission requirements. The overall timing performance, combining hardware performance, the software algorithm, the orbital determination accuracy, and other factors under nominal conditions, satisfies the mission requirement of 35 μs. This work demonstrates key points for space-use instruments in hardware and software design and in calibration measurements for fine timing accuracy on the order of microseconds for mid-sized satellites using the SpaceWire (IEEE 1355) network.
Attitude Control Subsystem for the Advanced Communications Technology Satellite
NASA Technical Reports Server (NTRS)
Hewston, Alan W.; Mitchell, Kent A.; Sawicki, Jerzy T.
1996-01-01
This paper provides an overview of the on-orbit operation of the Attitude Control Subsystem (ACS) for the Advanced Communications Technology Satellite (ACTS). The three ACTS control axes are defined, including the means for sensing attitude and determining the pointing errors. The desired pointing requirements for various modes of control as well as the disturbance torques that oppose the control are identified. Finally, the hardware actuators and control loops utilized to reduce the attitude error are described.
A test data compression scheme based on irrational numbers stored coding.
Wu, Hai-feng; Cheng, Yu-sheng; Zhan, Wen-fa; Cheng, Yi-fei; Wu, Qiong; Zhu, Shi-juan
2014-01-01
The testing problem has already become an important factor restricting the development of the integrated circuit industry. A new test data compression scheme, namely irrational numbers stored (INS), is presented. To compress test data efficiently, the test data are converted into floating-point numbers and stored in the form of irrational numbers. An algorithm for precisely converting floating-point numbers to irrational numbers is given. Experimental results for some ISCAS 89 benchmarks show that the compression effect of the proposed scheme is better than that of coding methods such as FDR, AARLC, INDC, FAVLC, and VRL.
A computer-aided telescope pointing system utilizing a video star tracker
NASA Technical Reports Server (NTRS)
Murphy, J. P.; Lorell, K. R.; Swift, C. D.
1975-01-01
The Video Inertial Pointing (VIP) System, developed to satisfy the acquisition and pointing requirements of astronomical telescopes, is described. A unique feature of the system is the use of a single sensor to provide information both for the generation of three-axis pointing error signals and for a cathode ray tube (CRT) display of the star field. The pointing error signals are used to update the telescope's gyro stabilization, and the CRT display is used by an operator to facilitate target acquisition and to aid in manual positioning of the telescope optical axis. A version of the system using a low-light-level vidicon, built and flown on a balloon-borne infrared telescope, is briefly described, along with an advanced system based on a state-of-the-art charge-coupled device (CCD) sensor. The advanced system hardware is described, and an analysis of the multi-star tracking and three-axis error signal generation, along with an analysis and design of the gyro update filter, is presented. Results of a hybrid simulation are described in which the advanced VIP system hardware is driven by a digital simulation of the star field/CCD sensor and an analog simulation of the telescope and gyro stabilization dynamics.
Fast and Scalable Computation of the Forward and Inverse Discrete Periodic Radon Transform.
Carranza, Cesar; Llamocca, Daniel; Pattichis, Marios
2016-01-01
The discrete periodic Radon transform (DPRT) has been used extensively in applications that involve image reconstruction from projections. Beyond classic applications, the DPRT can also be used to compute fast convolutions while avoiding the floating-point arithmetic associated with the fast Fourier transform. Unfortunately, the use of the DPRT has been limited by the need to compute a large number of additions and the need for a large number of memory accesses. This paper introduces a fast and scalable approach for computing the forward and inverse DPRT that is based on the use of: a parallel array of fixed-point adder trees; circular shift registers to remove the need for accessing external memory components when selecting the input data for the adder trees; an image block-based approach to DPRT computation that can fit the proposed architecture to the available resources; and fast transpositions that are computed in one or a few clock cycles and do not depend on the size of the input image. As a result, for an N × N image (N prime), the proposed approach can compute up to N² additions per clock cycle. Compared with previous approaches, the scalable approach provides the fastest known implementations for different amounts of computational resources; for example, for a 251 × 251 image, using approximately 25% fewer flip-flops than a systolic implementation, the scalable DPRT is computed 36 times faster. For the fastest case, we introduce optimized architectures that can compute the DPRT and its inverse in just 2N + ⌈log₂ N⌉ + 1 and 2N + 3⌈log₂ N⌉ + B + 2 cycles, respectively, where B is the number of bits used to represent each input pixel. On the other hand, the scalable DPRT approach requires more 1-bit additions than the systolic implementation, providing a tradeoff between speed and additional 1-bit additions. All of the proposed DPRT architectures were implemented in VHSIC Hardware Description Language (VHDL) and validated using a Field-Programmable Gate Array (FPGA) implementation.
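For readers unfamiliar with the transform itself, the following is a minimal software sketch of the forward DPRT as it is commonly defined for a prime-sized image; the paper's contribution is the fixed-point adder-tree hardware that evaluates these same sums, not this formula, so treat the sketch as background only.

    def dprt(f):
        # Forward discrete periodic Radon transform of an N x N image, N prime.
        # R[m][d] sums the pixels along the "wrapped" line of slope m and offset d;
        # the extra row R[N] is the column-sum (vertical) projection.
        N = len(f)
        R = [[0] * N for _ in range(N + 1)]
        for m in range(N):
            for d in range(N):
                R[m][d] = sum(f[x][(d + m * x) % N] for x in range(N))
        for d in range(N):
            R[N][d] = sum(f[d][y] for y in range(N))
        return R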
26 CFR 1.1274-2 - Issue price of debt instruments to which section 1274 applies.
Code of Federal Regulations, 2010 CFR
2010-04-01
...- borrower to the seller-lender that is designated as interest or points. See Example 2 of § 1.1273-2(g)(5... ignored. (f) Treatment of variable rate debt instruments—(1) Stated interest at a qualified floating rate... qualified floating rate (or rates) is determined by assuming that the instrument provides for a fixed rate...
Practical End-to-End Performance Testing Tool for High Speed 3G-Based Networks
NASA Astrophysics Data System (ADS)
Shinbo, Hiroyuki; Tagami, Atsushi; Ano, Shigehiro; Hasegawa, Toru; Suzuki, Kenji
High speed IP communication is a killer application for 3rd generation (3G) mobile systems. Thus 3G network operators should perform extensive tests to check whether the expected end-to-end performance is provided to customers under various environments. An important objective of such tests is to check whether network nodes fulfill requirements on packet-processing durations, because long processing durations cause performance degradation. This requires testers (the persons who perform the tests) to know precisely how long a packet is held by various network nodes. Without any tool's help, this task is time-consuming and error prone. Thus we propose a multi-point packet header analysis tool which extracts and records packet headers with synchronized timestamps at multiple observation points. Such recorded packet headers enable testers to calculate these holding durations. The notable feature of this tool is that it is implemented on off-the-shelf hardware platforms, i.e., laptop personal computers. The key challenges of the implementation are precise clock synchronization without any special hardware and a sophisticated header extraction algorithm without any packet drops.
Shahan, M R; Seaman, C E; Beck, T W; Colinet, J F; Mischler, S E
2017-09-01
Float coal dust is produced by various mining methods, carried by ventilating air and deposited on the floor, roof and ribs of mine airways. If deposited, float dust is re-entrained during a methane explosion. Without sufficient inert rock dust quantities, this float coal dust can propagate an explosion throughout mining entries. Consequently, controlling float coal dust is of critical interest to mining operations. Rock dusting, which is the adding of inert material to airway surfaces, is the main control technique currently used by the coal mining industry to reduce the float coal dust explosion hazard. To assist the industry in reducing this hazard, the Pittsburgh Mining Research Division of the U.S. National Institute for Occupational Safety and Health initiated a project to investigate methods and technologies to reduce float coal dust in underground coal mines through prevention, capture and suppression prior to deposition. Field characterization studies were performed to determine quantitatively the sources, types and amounts of dust produced during various coal mining processes. The operations chosen for study were a continuous miner section, a longwall section and a coal-handling facility. For each of these operations, the primary dust sources were confirmed to be the continuous mining machine, longwall shearer and conveyor belt transfer points, respectively. Respirable and total airborne float dust samples were collected and analyzed for each operation, and the ratio of total airborne float coal dust to respirable dust was calculated. During the continuous mining process, the ratio of total airborne float coal dust to respirable dust ranged from 10.3 to 13.8. The ratios measured on the longwall face were between 18.5 and 21.5. The total airborne float coal dust to respirable dust ratio observed during belt transport ranged between 7.5 and 21.8.
Zhai, H; Jones, D S; McCoy, C P; Madi, A M; Tian, Y; Andrews, G P
2014-10-06
The objective of this work was to investigate the feasibility of using a novel granulation technique, namely, fluidized hot melt granulation (FHMG), to prepare gastroretentive extended-release floating granules. In this study we have utilized FHMG, a solvent free process in which granulation is achieved with the aid of low melting point materials, using Compritol 888 ATO and Gelucire 50/13 as meltable binders, in place of conventional liquid binders. The physicochemical properties, morphology, floating properties, and drug release of the manufactured granules were investigated. Granules prepared by this method were spherical in shape and showed good flowability. The floating granules exhibited sustained release exceeding 10 h. Granule buoyancy (floating time and strength) and drug release properties were significantly influenced by formulation variables such as excipient type and concentration, and the physical characteristics (particle size, hydrophilicity) of the excipients. Drug release rate was increased by increasing the concentration of hydroxypropyl cellulose (HPC) and Gelucire 50/13, or by decreasing the particle size of HPC. Floating strength was improved through the incorporation of sodium bicarbonate and citric acid. Furthermore, floating strength was influenced by the concentration of HPC within the formulation. Granules prepared in this way show good physical characteristics, floating ability, and drug release properties when placed in simulated gastric fluid. Moreover, the drug release and floating properties can be controlled by modification of the ratio or physical characteristics of the excipients used in the formulation.
26 CFR 1.483-2 - Unstated interest.
Code of Federal Regulations, 2010 CFR
2010-04-01
... percentage points above the yield on 6-month Treasury bills at the mid-point of the semiannual period immediately preceding each interest payment date. Assume that the interest rate is a qualified floating rate...
Low-complexity object detection with deep convolutional neural network for embedded systems
NASA Astrophysics Data System (ADS)
Tripathi, Subarna; Kang, Byeongkeun; Dane, Gokce; Nguyen, Truong
2017-09-01
We investigate low-complexity convolutional neural networks (CNNs) for object detection for embedded vision applications. It is well known that building an embedded system for CNN-based object detection is more challenging than for problems like image classification because of its computation and memory requirements. To meet these requirements, we design and develop an end-to-end TensorFlow (TF)-based fully convolutional deep neural network for the generic object detection task, inspired by one of the fastest frameworks, YOLO. The proposed network predicts the localization of every object by regressing the coordinates of the corresponding bounding box, as in YOLO. Hence, the network is able to detect objects without any limitation on their size. However, unlike YOLO, all the layers in the proposed network are fully convolutional, so it can take input images of any size. We pick face detection as a use case and evaluate the proposed model on the FDDB and Widerface datasets. As another use case of generic object detection, we evaluate its performance on the PASCAL VOC dataset. The experimental results demonstrate that the proposed network can predict object instances of different sizes and poses in a single frame. Moreover, the results show that the proposed method achieves accuracy comparable to state-of-the-art CNN-based object detection methods while reducing the model size by 3× and memory bandwidth by 3-4× compared with one of the best real-time CNN-based object detectors, YOLO. Our 8-bit fixed-point TF model provides an additional 4× memory reduction while keeping the accuracy nearly as good as that of the floating-point model. Moreover, the fixed-point model is capable of achieving 20× faster inference than the floating-point model. Thus, the proposed method is promising for embedded implementations.
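As a rough illustration of how an 8-bit fixed-point model of this kind trades precision for memory, here is a minimal sketch of symmetric per-tensor weight quantization; the scale choice and function names are illustrative assumptions, not the authors' exact scheme.

    import numpy as np

    def quantize_int8(w):
        # Symmetric linear quantization: map floats in [-max|w|, +max|w|] to int8,
        # giving a 4x size reduction versus 32-bit floats.
        scale = max(float(np.max(np.abs(w))) / 127.0, 1e-12)
        q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize_int8(q, scale):
        # Approximate reconstruction used when checking accuracy against the
        # floating-point model.
        return q.astype(np.float32) * scale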
A preliminary study of molecular dynamics on reconfigurable computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolinski, C.; Trouw, F. R.; Gokhale, M.
2003-01-01
In this paper we investigate the performance of platform FPGAs on a compute-intensive, floating-point-intensive supercomputing application, Molecular Dynamics (MD). MD is a popular simulation technique that tracks interacting particles through time by integrating their equations of motion. One part of the MD algorithm was implemented using the Fabric Generator (FG) [11] and mapped onto several reconfigurable logic arrays. FG is a Java-based toolset that greatly accelerates construction of the fabrics from an abstract, technology-independent representation. Our experiments used technology-independent IEEE 32-bit floating-point operators so that the design could be easily re-targeted. Experiments were performed using both non-pipelined and pipelined floating-point modules. We present results for the Altera Excalibur ARM System on a Programmable Chip (SoPC), the Altera Stratix EP1S80, and the Xilinx Virtex-II Pro 2VP50. The best results obtained were 5.69 GFlops at 80 MHz (Altera Stratix EP1S80) and 4.47 GFlops at 82 MHz (Xilinx Virtex-II Pro 2VP50). Assuming a 10 W power budget, these results compare very favorably to a 4 Gflop/40 W processing/power rate for a modern Pentium, suggesting that reconfigurable logic can achieve high performance at low power on floating-point-intensive applications.
Extending the BEAGLE library to a multi-FPGA platform.
Jin, Zheming; Bakos, Jason D
2013-01-19
Maximum Likelihood (ML)-based phylogenetic inference using Felsenstein's pruning algorithm is a standard method for estimating the evolutionary relationships amongst a set of species based on DNA sequence data, and is used in popular applications such as RAxML, PHYLIP, GARLI, BEAST, and MrBayes. The Phylogenetic Likelihood Function (PLF) and its associated scaling and normalization steps comprise the computational kernel for these tools. These computations are data intensive but contain fine grain parallelism that can be exploited by coprocessor architectures such as FPGAs and GPUs. A general purpose API called BEAGLE has recently been developed that includes optimized implementations of Felsenstein's pruning algorithm for various data parallel architectures. In this paper, we extend the BEAGLE API to a multiple Field Programmable Gate Array (FPGA)-based platform called the Convey HC-1. The core calculation of our implementation, which includes both the phylogenetic likelihood function (PLF) and the tree likelihood calculation, has an arithmetic intensity of 130 floating-point operations per 64 bytes of I/O, or 2.03 ops/byte. Its performance can thus be calculated as a function of the host platform's peak memory bandwidth and the implementation's memory efficiency, as 2.03 × peak bandwidth × memory efficiency. Our FPGA-based platform has a peak bandwidth of 76.8 GB/s and our implementation achieves a memory efficiency of approximately 50%, which gives an average throughput of 78 Gflops. This represents a ~40X speedup when compared with BEAGLE's CPU implementation on a dual Xeon 5520 and 3X speedup versus BEAGLE's GPU implementation on a Tesla T10 GPU for very large data sizes. The power consumption is 92 W, yielding a power efficiency of 1.7 Gflops per Watt. The use of data parallel architectures to achieve high performance for likelihood-based phylogenetic inference requires high memory bandwidth and a design methodology that emphasizes high memory efficiency. To achieve this objective, we integrated 32 pipelined processing elements (PEs) across four FPGAs. For the design of each PE, we developed a specialized synthesis tool to generate a floating-point pipeline with resource and throughput constraints to match the target platform. We have found that using low-latency floating-point operators can significantly reduce FPGA area and still meet timing requirement on the target platform. We found that this design methodology can achieve performance that exceeds that of a GPU-based coprocessor.
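The throughput estimate quoted above is a simple roofline-style product of arithmetic intensity, peak memory bandwidth, and memory efficiency; the short sketch below just reproduces that arithmetic with the numbers given in the abstract.

    # Roofline-style estimate with the figures quoted in the abstract.
    arithmetic_intensity = 130 / 64            # ~2.03 flops per byte of I/O
    peak_bandwidth_gb_per_s = 76.8             # Convey HC-1 peak memory bandwidth
    memory_efficiency = 0.50                   # achieved fraction of peak bandwidth
    gflops = arithmetic_intensity * peak_bandwidth_gb_per_s * memory_efficiency
    print(round(gflops, 1))                    # ~78 Gflops, matching the reported figure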
You, Xiangwei; Xing, Zhuokan; Liu, Fengmao; Zhang, Xu
2015-05-22
A novel air-assisted liquid-liquid microextraction using the solidification of a floating organic droplet method (AALLME-SFO) was developed for the rapid and simple determination of seven fungicide residues in juice samples, using gas chromatography with an electron capture detector (GC-ECD). This method combines the advantages of AALLME and dispersive liquid-liquid microextraction based on the solidification of floating organic droplets (DLLME-SFO) for the first time. In this method, a low-density solvent with a melting point near room temperature was used as the extraction solvent, and the emulsion was rapidly formed by pulling in and pushing out the mixture of aqueous sample solution and extraction solvent ten times with a 10-mL glass syringe. After centrifugation, the extractant droplet could be easily collected from the top of the aqueous sample by solidifying it at a temperature lower than its melting point. Under the optimized conditions, good linearities with correlation coefficients (γ) higher than 0.9959 were obtained and the limits of detection (LOD) varied between 0.02 and 0.25 μg L-1. The proposed method was applied to determine the target fungicides in juice samples, and acceptable recoveries ranging from 72.6% to 114.0%, with relative standard deviations (RSDs) of 2.3-13.0%, were achieved. Compared with the conventional DLLME method, the newly proposed method requires neither a highly toxic chlorinated solvent for extraction nor an organic dispersive solvent in the application process; hence, it is more environmentally friendly. Copyright © 2015 Elsevier B.V. All rights reserved.
Wang, Zhu-lou; Zhang, Wan-jie; Li, Chen-xi; Chen, Wen-liang; Xu, Ke-xin
2015-02-01
There are several challenges in near-infrared non-invasive blood glucose measurement, such as the low signal-to-noise ratio of the instrument, unstable measurement conditions, and the unpredictable and irregular changes of the measured object. It is therefore difficult to extract the information on blood glucose concentration accurately from the complicated signals. A reference measurement is usually considered as a way to eliminate the effect of background changes, but there is no reference substance that changes synchronously with the analyte. After many years of research, our group has proposed the floating reference method, which succeeds in eliminating the spectral effects induced by instrument drift and by variations in the measured object's background. Our studies indicate, however, that the reference point changes with measurement location and wavelength, so the effectiveness of the floating reference method should be verified comprehensively. In this paper, for simplicity, Monte Carlo simulations employing Intralipid solutions with concentrations of 5% and 10% are performed to verify the ability of the floating reference method to eliminate the consequences of light source drift. The light source drift is introduced by varying the number of incident photons. The effectiveness of the floating reference method, with the corresponding reference points at different wavelengths, in eliminating the variations caused by light source drift is estimated. A comparison of the prediction abilities of the calibration models with and without the method shows that the RMSEPs are decreased by about 98.57% (5% Intralipid) and 99.36% (10% Intralipid). The results indicate that the floating reference method has an obvious effect in eliminating background changes.
Pointo - a Low Cost Solution to Point Cloud Processing
NASA Astrophysics Data System (ADS)
Houshiar, H.; Winkler, S.
2017-11-01
With advances in technology, access to data, especially 3D point cloud data, becomes more and more an everyday task. 3D point clouds are usually captured with very expensive tools such as 3D laser scanners or by very time-consuming methods such as photogrammetry. Most of the available software for 3D point cloud processing is designed for experts and specialists in this field and usually comes as very large packages containing a variety of methods and tools. This results in software that is expensive to acquire and difficult to use; the difficulty is caused by the complicated user interfaces required to accommodate a large list of features. The aim of these complex packages is to provide a powerful tool for a specific group of specialists; however, they are not necessarily required by the majority of upcoming average users of point clouds. In addition to their complexity and high cost, they generally rely on expensive, modern hardware and are compatible with only one specific operating system. Many point cloud customers are not point cloud processing experts, nor are they willing to pay the high acquisition costs of this expensive software and hardware. In this paper we introduce a solution for low-cost point cloud processing. Our approach is designed to accommodate the needs of the average point cloud user. To reduce the cost and complexity of the software, our approach focuses on one functionality at a time, in contrast with most available software and tools that aim to solve as many problems as possible at the same time. Our simple, user-oriented design improves the user experience and lets us optimize our methods to create efficient software. In this paper we introduce the Pointo family, a series of connected programs that provide easy-to-use tools with a simple design for different point cloud processing requirements. PointoVIEWER and PointoCAD are introduced as the first components of the Pointo family, providing fast and efficient visualization with the ability to add annotation and documentation to the point clouds.
The control of float zone interfaces by the use of selected boundary conditions
NASA Technical Reports Server (NTRS)
Foster, L. M.; Mcintosh, J.
1983-01-01
The main goal of the float zone crystal growth project of NASA's Materials Processing in Space Program is to thoroughly understand the molten zone/freezing crystal system and all the mechanisms that govern this system. The surface boundary conditions required to give flat float zone solid melt interfaces were studied and computed. The results provide float zone furnace designers with better methods for controlling solid melt interface shapes and for computing thermal profiles and gradients. Documentation and a user's guide were provided for the computer software.
Leaky Integrate and Fire Neuron by Charge-Discharge Dynamics in Floating-Body MOSFET.
Dutta, Sangya; Kumar, Vinay; Shukla, Aditya; Mohapatra, Nihar R; Ganguly, Udayan
2017-08-15
Neurobiology-inspired spiking neural networks (SNNs) enable efficient learning and recognition tasks. To achieve a large-scale network akin to biology, a power- and area-efficient electronic neuron is essential. Earlier, we demonstrated an LIF neuron based on a novel four-terminal impact-ionization-based n+/p/n+ device with an extended gate (gated INPN) through physics simulation, with excellent improvement in area and power compared to conventional analog circuit implementations. In this paper, we propose and experimentally demonstrate a compact, conventional three-terminal partially depleted (PD) SOI MOSFET (100 nm gate length) to replace the four-terminal gated-INPN device. The impact ionization (II) induced floating-body effect in the SOI MOSFET is used to capture LIF neuron behavior and to demonstrate the dependence of spiking frequency on the input. MHz operation enables attractive hardware acceleration compared to biology. Overall, conventional PD-SOI CMOS technology enables the very-large-scale integration (VLSI) that is essential for biology-scale (~10^11 neuron) large neural networks.
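For reference, the leaky integrate-and-fire behavior the device is shown to emulate is the standard model sketched below: the membrane potential charges toward an input-dependent level, leaks back, and is reset when it crosses a threshold. The parameter values in the sketch are illustrative assumptions, not measurements from the device.

    def lif_spike_times(i_in, dt=1e-6, tau=1e-3, r=1e6, v_th=0.5, v_reset=0.0, steps=5000):
        # Standard leaky integrate-and-fire dynamics: tau * dV/dt = -V + R * I.
        # Spiking frequency grows with the input current i_in, which is the
        # dependence the floating-body MOSFET experiment demonstrates.
        v, spikes = v_reset, []
        for k in range(steps):
            v += (dt / tau) * (-v + r * i_in)
            if v >= v_th:               # threshold crossing -> emit a spike
                spikes.append(k * dt)
                v = v_reset             # discharge, analogous to body-charge removal
        return spikes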
Harada, Ichiro; Kim, Sung-Gon; Cho, Chong Su; Kurosawa, Hisashi; Akaike, Toshihiro
2007-01-01
In this study, a simple combined method consisting of floating and anchored collagen gel in a ligament or tendon equivalent culture system was used to produce oriented fibrils in fibroblast-populated collagen matrices (FPCMs) during the remodeling and contraction of the collagen gel. Orientation of the collagen fibrils along a single axis occurred over the whole area of the floating section, and most of the fibroblasts were elongated and aligned along the oriented collagen fibrils, whereas no significant orientation of fibrils was observed in FPCMs contracted normally by the floating method. Higher elasticity and enhanced mechanical strength were obtained using our simple method compared with normally contracted floating FPCMs. The Young's modulus and the breaking point of the FPCMs were dependent on the initial cell densities. This simple method can be applied as a convenient bioreactor to study cellular processes of fibroblasts in tissues with highly oriented fibrils such as ligaments or tendons. (c) 2006 Wiley Periodicals, Inc.
LIBRA: An inexpensive geodetic network densification system
NASA Technical Reports Server (NTRS)
Fliegel, H. F.; Gantsweg, M.; Callahan, P. S.
1975-01-01
A description is given of the Libra (Locations Interposed by Ranging Aircraft) system, by which geodesy and earth strain measurements can be performed rapidly and inexpensively at several hundred auxiliary points with respect to a few fundamental control points established by any other technique, such as radio interferometry or satellite ranging. This low-cost means of extending the accuracy of space-age geodesy to local surveys provides speed and spatial resolution useful, for example, for earthquake hazard estimation. Libra may be combined with an existing system, Aries (Astronomical Radio Interferometric Earth Surveying), to provide a balanced system adequate to meet geophysical needs and applicable to conventional surveying. The basic hardware design is outlined, specifications are defined, and the need for network densification is described. The following activities required to implement the proposed Libra system are also described: hardware development, data reduction, tropospheric calibrations, schedule of development, and estimated costs.
Tarte, Stephen R.; Schmidt, A.R.; Sullivan, Daniel J.
1992-01-01
A floating sample-collection platform is described for stream sites where the vertical or horizontal distance between the stream-sampling point and a safe location for the sampler exceeds the suction head of the sampler. The platform allows continuous water sampling over the entire storm-runoff hydrograph. The platform was developed for a site in southern Illinois.
Floating assembly of diatom Coscinodiscus sp. microshells.
Wang, Yu; Pan, Junfeng; Cai, Jun; Zhang, Deyuan
2012-03-30
Diatoms have silica frustules with transparent and delicate micro/nanoscale structures, two-dimensional pore arrays, and large surface areas. Although the diatom cells of Coscinodiscus sp. live underwater, we found that their valves can float on water and assemble together. Experiments show that the convex shape and the 40 nm sieve pores of the valves allow them to float on water, and that buoyancy and micro-range attractive forces cause the valves to assemble together at the highest point of the water. As measured by AFM-calibrated glass needles fixed in a manipulator, the buoyancy force on a single floating valve may reach up to 10 μN in water. Turning the valves over, enlarging the sieve pores, reducing the surface tension of the water, or vacuum pumping may cause the floating valves to sink. After the water has evaporated, the floating valves remain in their assembled state and form a monolayer film. The bonded diatom monolayer may be valuable in studies on diatom-based optical devices, biosensors, solar cells, and batteries, to make better use of the optical and adsorption properties of frustules. The floating assembly phenomenon can also be used as a self-assembly method for fabricating monolayers of circular plates. Copyright © 2012 Elsevier Inc. All rights reserved.
Wang, Ji-Wei; Cui, Zhi-Ting; Cui, Hong-Wei; Wei, Chang-Nian; Harada, Koichi; Minamoto, Keiko; Ueda, Kimiyo; Ingle, Kapilkumar N; Zhang, Cheng-Gang; Ueda, Atsushi
2010-12-01
The floating population refers to the large and increasing number of migrants without local household registration status and has become a new demographic phenomenon in China. Most of these migrants move from the rural areas of the central and western parts of China to the eastern and coastal metropolitan areas in pursuit of a better life. The floating population of China was composed of 121 million people in 2000, and this number was expected to increase to 300 million by 2010. Quality of life (QOL) studies of the floating population could provide a critical starting point for recognizing the potential of regions, cities and local communities to improve QOL. This study explored the construct of QOL of the floating population in Shanghai, China. We conducted eight focus groups with 58 members of the floating population (24 males and 34 females) and then performed a qualitative thematic analysis of the interviews. The following five QOL domains were identified from the analysis: personal development, jobs and career, family life, social relationships and social security. The results indicated that stigma and discrimination permeate these life domains and influence the framing of life expectations. Proposals were made for reducing stigma and discrimination against the floating population to improve the QOL of this population.
Charles J. Gatchell; Charles J. Gatchell
1991-01-01
Gang-ripping technology that uses a movable (floating) outer blade to eliminate unusable edgings is described, including new terminology for identifying preferred and minimally acceptable strip widths. Because of the large amount of salvage required to achieve total yields, floating blade gang ripping is not recommended for boards with crook. With crook removed by...
Multichannel seismic/oceanographic/biological monitoring of the oceans
NASA Astrophysics Data System (ADS)
Hello, Y.; Leymarie, E.; Ogé, A.; Poteau, A.; Argentino, J.; Sukhovich, A.; Claustre, H.; Nolet, G.
2011-12-01
Delays in seismic P waves are used to make scans or 3D images of the variations in seismic wave speed in the Earth's interior using the techniques of seismic tomography. Observations of such delays are ubiquitous on the continents but rare in oceanic regions. Free-drifting profiling floats that measure the temperature, salinity and currents of the upper 2000 m of the ocean are used by physical oceanographers for continuous monitoring in the Argo program. Recently, seismologists developed the idea of using such floats to compensate for the lack of seismic delay observations, especially in the southern hemisphere. In project Globalseis, financed by a grant from the European Research Council (ERC), we have developed and tested a prototype of such a seismological sensor using an Apex float from Teledyne Webb Research, a Rafos hydrophone, and electronics developed in collaboration with Osean, a small engineering firm in France. 'MERMAID', for 'Mobile Earthquake Recorder in Marine Areas by Independent Divers', is approaching its final design and should become available off the shelf in 2012. In the meantime we initiated a collaboration between Globalseis and another ERC project, remOcean, for the acquisition of radiometric, bio-geochemical and meteorological observations in addition to salinity and temperature (Bio-Argo program). In this collaboration of Geoazur and LOV (Laboratoire d'Océanologie de Villefranche sur mer), two laboratories located at the Observatory of Villefranche, we developed multichannel acquisition hardware called 'PAYLOAD' that allows commercial floats such as the Apex (TWR) and Provor (NKE) to serve multiple observing missions simultaneously. Based on an algorithm using wavelet transforms, PAYLOAD continuously analyzes acoustic signals during the drifting dive phase to detect major seismic events and weather phenomena such as rain, drizzle, open sea and ice. The bio-geochemical and other parameters are recorded and analyzed during ascent. All data are transmitted over the Iridium satellite network in RUDICS mode when the floats surface. Two-way communication with Iridium allows us to send new parameters to the float for its next mission. Dual-project campaigns are envisaged for next year.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, W Michael; Kohlmeyer, Axel; Plimpton, Steven J
The use of accelerators such as graphics processing units (GPUs) has become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high-performance computers, machines with nodes containing more than one type of floating-point processor (e.g. CPU and GPU), are now becoming more prevalent due to these advantages. In this paper, we present a continuation of previous work implementing algorithms for using accelerators in the LAMMPS molecular dynamics software for distributed-memory parallel hybrid machines. In our previous work, we focused on acceleration for short-range models with an approach intended to harness the processing power of both the accelerator and (multi-core) CPUs. To augment the existing implementations, we present an efficient implementation of long-range electrostatic force calculation for molecular dynamics. Specifically, we present an implementation of the particle-particle particle-mesh method based on the work by Harvey and De Fabritiis. We present benchmark results on the Keeneland InfiniBand GPU cluster. We provide a performance comparison of the same kernels compiled with both CUDA and OpenCL. We discuss limitations to parallel efficiency and future directions for improving performance on hybrid or heterogeneous computers.
A superconducting large-angle magnetic suspension
NASA Technical Reports Server (NTRS)
Downer, James R.; Anastas, George V., Jr.; Bushko, Dariusz A.; Flynn, Frederick J.; Goldie, James H.; Gondhalekar, Vijay; Hawkey, Timothy J.; Hockney, Richard L.; Torti, Richard P.
1992-01-01
SatCon Technology Corporation has completed a Small Business Innovation Research (SBIR) Phase 2 program to develop a Superconducting Large-Angle Magnetic Suspension (LAMS) for the NASA Langley Research Center. The Superconducting LAMS was a hardware demonstration of the control technology required to develop an advanced momentum exchange effector. The Phase 2 research was directed toward the demonstration of the key technology required for the advanced-concept CMG: the controller. The Phase 2 hardware consists of a superconducting solenoid (the 'source coil') suspended within an array of nonsuperconducting coils ('control coils'), a five-degree-of-freedom position sensing system, switching power amplifiers, and a digital control system. The results demonstrated the feasibility of suspending the source coil. Gimballing (pointing the axis of the source coil) was demonstrated over a limited range. With further development of the rotation sensing system, enhanced angular freedom should be possible.
Planning a Computer Lab: Considerations To Ensure Success.
ERIC Educational Resources Information Center
IALL Journal of Language Learning Technologies, 1994
1994-01-01
Presents points to consider when organizing a computer laboratory. These include the lab's overall objectives and how best to meet them; what type of students will use the lab; where the lab will be located; and what software and hardware can best meet the lab's overall objectives, population, and location requirements. Other factors include time,…
NASA Technical Reports Server (NTRS)
Abolhassani, Jamshid S.; Everton, Eric L.
1990-01-01
An interactive grid adaption method is developed, discussed and applied to the unsteady flow about an oscillating airfoil. The user is allowed to have direct interaction with the adaption of the grid as well as the solution procedure. Grid points are allowed to adapt simultaneously to several variables. In addition to the theory and results, the hardware and software requirements are discussed.
Determinant Computation on the GPU using the Condensation Method
NASA Astrophysics Data System (ADS)
Anisul Haque, Sardar; Moreno Maza, Marc
2012-02-01
We report on a GPU implementation of the condensation method designed by Abdelmalek Salem and Kouachi Said for computing the determinant of a matrix. We consider two types of coefficients: modular integers and floating-point numbers. We evaluate the performance of our code by measuring its effective bandwidth and argue that it is numerically stable in the floating-point case. In addition, we compare our code with serial implementations of determinant computation from well-known mathematical packages. Our results suggest that a GPU implementation of the condensation method has large potential for improving those packages in terms of running time and numerical stability.
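As background, condensation methods reduce an n × n determinant to an (n-1) × (n-1) one using only 2 × 2 minors, which is what makes them attractive for data-parallel hardware. The sketch below implements the classical Chio condensation, a close relative of the Salem-Kouachi method referenced above; it is a serial illustration under that assumption, not the authors' GPU code. For modular integer coefficients, the final division would be replaced by multiplication with a modular inverse.

    def det_chio(a, eps=1e-12):
        # Classical Chio condensation: each step replaces the matrix by the
        # (n-1) x (n-1) matrix of 2x2 minors against the pivot a[0][0], and the
        # determinant is recovered by dividing out the accumulated pivot powers.
        a = [row[:] for row in a]
        n = len(a)
        sign, scale = 1.0, 1.0
        while n > 1:
            if abs(a[0][0]) < eps:                 # ensure a usable pivot
                for r in range(1, n):
                    if abs(a[r][0]) > eps:
                        a[0], a[r] = a[r], a[0]    # row swap flips the sign
                        sign = -sign
                        break
                else:
                    return 0.0                     # first column is (numerically) zero
            p = a[0][0]
            a = [[p * a[i][j] - a[i][0] * a[0][j] for j in range(1, n)]
                 for i in range(1, n)]
            scale *= p ** (n - 2)
            n -= 1
        return sign * a[0][0] / scale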
Investigation of Springing Responses on the Great Lakes Ore Carrier M/V STEWART J. CORT
1980-12-01
175k tons. Using these values one can write a ballast/loading relation [equation garbled in the source]. ... will have to write a routine to convert the floating-point numbers into the other machine's internal floating-point format. The CCI record is again ... [Fortran comment fragment: the routine computes the results, writes them to the line printer, and also puts the results in a disk file.]
A floating-point/multiple-precision processor for airborne applications
NASA Technical Reports Server (NTRS)
Yee, R.
1982-01-01
A compact input output (I/O) numerical processor capable of performing floating-point, multiple precision and other arithmetic functions at execution times which are at least 100 times faster than comparable software emulation is described. The I/O device is a microcomputer system containing a 16 bit microprocessor, a numerical coprocessor with eight 80 bit registers running at a 5 MHz clock rate, 18K random access memory (RAM) and 16K electrically programmable read only memory (EPROM). The processor acts as an intelligent slave to the host computer and can be programmed in high order languages such as FORTRAN and PL/M-86.
LDPC decoder with a limited-precision FPGA-based floating-point multiplication coprocessor
NASA Astrophysics Data System (ADS)
Moberly, Raymond; O'Sullivan, Michael; Waheed, Khurram
2007-09-01
Implementing the sum-product algorithm in an FPGA with an embedded processor invites us to consider a tradeoff between computational precision and computational speed. The algorithm, known outside of the signal processing community as Pearl's belief propagation, is used for iterative soft-decision decoding of LDPC codes. We determined the feasibility of a coprocessor that will perform the product computations. Our FPGA-based coprocessor design performs computer algebra with significantly less precision than the standard (e.g. integer, floating-point) operations of general-purpose processors. Using synthesis targeting a 3,168-LUT Xilinx FPGA, we show that key components of a decoder are feasible and that the full single-precision decoder could be constructed using a larger part. Soft-decision decoding by the iterative belief propagation algorithm is impacted both positively and negatively by a reduction in the precision of the computation: reducing precision reduces the coding gain, but the limited-precision computation can operate faster. The proposed solution offers custom logic to perform computations with less precision, yet uses the floating-point format to interface with the software. Simulation results show the achievable coding gain. Synthesis results help project the full capacity and performance of an FPGA-based coprocessor.
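For context, the product computation at the heart of the sum-product algorithm is the check-node update below, written here in ordinary floating point; a limited-precision coprocessor would evaluate the same rule with reduced-precision operators. This is the generic textbook rule, not the paper's exact datapath.

    import math

    def check_node_update(llrs):
        # Sum-product (belief propagation) check-node rule:
        #   out_i = 2 * atanh( prod_{j != i} tanh(llr_j / 2) )
        t = [math.tanh(l / 2.0) for l in llrs]
        out = []
        for i in range(len(llrs)):
            prod = 1.0
            for j, tj in enumerate(t):
                if j != i:
                    prod *= tj
            prod = max(min(prod, 1.0 - 1e-12), -1.0 + 1e-12)  # keep atanh finite
            out.append(2.0 * math.atanh(prod))
        return out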
Optimal Compression Methods for Floating-point Format Images
NASA Technical Reports Server (NTRS)
Pence, W. D.; White, R. L.; Seaman, R.
2009-01-01
We report on the results of a comparison study of different techniques for compressing FITS images that have floating-point (real*4) pixel values. Standard file compression methods like GZIP are generally ineffective in this case (with compression ratios only in the range 1.2-1.6), so instead we use a technique of converting the floating-point values into quantized scaled integers which are compressed using the Rice algorithm. The compressed data stream is stored in FITS format using the tiled-image compression convention. This is technically a lossy compression method, since the pixel values are not exactly reproduced; however, all the significant photometric and astrometric information content of the image can be preserved while still achieving file compression ratios in the range of 4 to 8. We also show that introducing dithering, or randomization, when assigning the quantized pixel values can significantly improve the photometric and astrometric precision of the stellar images in the compressed file without adding additional noise. We quantify our results by comparing the stellar magnitudes and positions as measured in the original uncompressed image to those derived from the same image after applying successively greater amounts of compression.
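The following is a minimal sketch of the quantize-and-dither step described above, using subtractive dithering so the dither can be removed on restore. The scale parameter and function names are illustrative assumptions; the actual on-disk layout is fixed by the FITS tiled-image convention rather than by this code.

    import numpy as np

    def quantize_with_dither(pixels, scale, seed=0):
        # Convert real*4 pixel values to scaled integers, adding a reproducible
        # uniform dither before rounding to avoid systematic quantization bias.
        rng = np.random.default_rng(seed)
        dither = rng.random(pixels.shape)                 # uniform in [0, 1)
        q = np.round(pixels / scale + dither).astype(np.int32)
        return q, dither

    def restore(q, dither, scale):
        # Subtract the same dither sequence on decompression; the residual error
        # is bounded by the quantization step 'scale'.
        return (q - dither) * scale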
Mark Tracking: Position/orientation measurements using 4-circle mark and its tracking experiments
NASA Technical Reports Server (NTRS)
Kanda, Shinji; Okabayashi, Keijyu; Maruyama, Tsugito; Uchiyama, Takashi
1994-01-01
Future space robots require position and orientation tracking with visual feedback control to track and capture floating objects and satellites. We developed a four-circle mark that is useful for this purpose. With this mark, four geometric center positions as feature points can be extracted from the mark by simple image processing. We also developed a position and orientation measurement method that uses the four feature points in our mark. The mark gave good enough image measurement accuracy to let space robots approach and contact objects. A visual feedback control system using this mark enabled a robot arm to track a target object accurately. The control system was able to tolerate a time delay of 2 seconds.
Code of Federal Regulations, 2010 CFR
2010-07-01
... and operate each internal and external floating roof gasoline storage tank according to the applicable... (b) Equip each internal floating roof gasoline storage tank according to the requirements in § 60... the requirements in § 60.112b(a)(1)(iv) through (ix) of this chapter; and (c) Equip each external...
Deflection of Resilient Materials for Reduction of Floor Impact Sound
Lee, Jung-Yoon; Kim, Jong-Mun
2014-01-01
Recently, many residents living in apartment buildings in Korea have been bothered by noise coming from the houses above. In order to reduce noise pollution, communities are increasingly imposing bylaws, including the limitation of floor impact sound, minimum thickness of floors, and floor soundproofing solutions. This research effort focused specifically on the deflection of resilient materials in the floor sound insulation systems of apartment houses. The experimental program involved twenty-seven material tests and ten sound-insulating floating concrete floor specimens. Two main parameters were considered in the experimental investigation: the seven types of resilient materials and the location of the loading point. The structural behavior of the sound-insulating floating floor was predicted using the Winkler method. The experimental and analytical results indicated that the cracking strength of the floating concrete floor significantly increased with increasing tangent modulus of the resilient material. The deflection of the floating concrete floor loaded at the side of the specimen was much greater than that of the floating concrete floor loaded at the center of the specimen. The Winkler model considering the effect of the modulus of the resilient materials was able to accurately predict the cracking strength of the floating concrete floor. PMID:25574491
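As background on the Winkler idealization mentioned above, the resilient layer is modeled as a bed of independent springs, so a stiffer layer concentrates the deflection and lowers the peak bending moment under a point load. The sketch below evaluates the classical infinite-beam-on-Winkler-foundation formulas; it is a generic illustration, not the authors' plate model, and the parameter values are left to the caller.

    def winkler_point_load(P, EI, k):
        # Infinite beam on a Winkler foundation (springs of modulus k per unit
        # length) under a central point load P; EI is the bending stiffness.
        lam = (k / (4.0 * EI)) ** 0.25      # characteristic wavenumber (1/m)
        w_max = P * lam / (2.0 * k)         # peak deflection under the load
        m_max = P / (4.0 * lam)             # peak bending moment
        return w_max, m_max

    # A stiffer resilient layer (larger k) increases lam and therefore reduces the
    # peak moment for the same load, consistent with the higher cracking strength
    # reported for resilient materials with a higher tangent modulus.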
Microcontroller uses in Long-Duration Ballooning
NASA Astrophysics Data System (ADS)
Jones, Joseph
This paper discusses how microcontrollers are being utilized to fulfill the demands of long-duration ballooning (LDB) and the advantages of doing so. The Columbia Scientific Balloon Facility (CSBF) offers the service of launching high-altitude balloons (120,000 ft) which provide an over-the-horizon telemetry system and a platform for scientific research payloads to collect data. CSBF has utilized microcontrollers to address multiple tasks and functions which were previously performed by more complex systems. A microcontroller system was recently developed and programmed in house to replace our previous backup navigation system, which is used on all LDB flights. A similar microcontroller system was developed to be launched independently in Antarctica before the actual scientific payload; its function is to transmit its GPS position and a small housekeeping packet so that we can confirm that the upper-level float winds are as predicted by satellite-derived models. Microcontrollers have also been used to create test equipment to functionally check out the flight hardware used in our telemetry systems. One test system can be used to quickly determine whether the communication link we provide for the science payloads is functioning properly. Another was developed to easily determine the status of one of our over-the-horizon communication links through a closed-loop system; this test system has given us the capability to provide more field support to science groups than we were able to in years past. Microcontrollers have been adopted for a number of reasons: they allow us to quickly design and implement systems which meet flight-critical needs as well as perform many of the everyday tasks in LDB, and they have reduced the time required for personnel to perform tasks during the initial fabrication and refurbishment of flight hardware systems. The recent use of microcontrollers in the design of both LDB flight hardware and test equipment demonstrates the adaptability and usefulness they have provided for our workplace.
Orbital operations with the Shuttle Infrared Telescope Facility /SIRTF/
NASA Technical Reports Server (NTRS)
Werner, M. W.; Lorell, K. R.
1981-01-01
The Shuttle Infrared Telescope Facility (SIRTF) is a cryogenically-cooled, 1-m-class telescope that will be operated from the Space Shuttle as an observatory for infrared astronomy. This paper discusses the scientific constraints on and the requirements for pointing and controlling SIRTF as well as several aspects of SIRTF orbital operations. The basic pointing requirement is for an rms stability of 0.25 arcsec, which is necessary to realize the full angular resolution of the 5-micron diffraction-limited SIRTF. Achieving this stability requires the use of hardware and software integral to SIRTF working interactively with the gyrostabilized Shuttle pointing-mount. The higher sensitivity of SIRTF, together with orbital and time constraints, puts a premium on rapid target acquisition and on efficient operational and observational procedures. Several possible acquisition modes are discussed, and the importance of source acquisition by maximizing the output of an infrared detector is emphasized.
33 CFR 110.127b - Flaming Gorge Lake, Wyoming-Utah.
Code of Federal Regulations, 2010 CFR
2010-07-01
... launching ramp to a point beyond the floating breakwater and then westerly, as established by the... following points, excluding a 150-foot-wide fairway, extending southeasterly from the launching ramp, as... inclosed by the shore and a line connecting the following points, excluding a 100-foot-wide fairway...
NASA Technical Reports Server (NTRS)
Weick, Fred E; Harris, Thomas A
1933-01-01
Discussed here are a series of systematic tests being conducted to compare different lateral control devices with particular reference to their effectiveness at high angles of attack. The present tests were made with six different forms of floating tip ailerons of symmetrical section. The tests showed the effect of the various ailerons on the general performance characteristics of the wing, and on the lateral controllability and stability characteristics. In addition, the hinge moments were measured for the most interesting cases. The results are compared with those for a rectangular wing with ordinary ailerons and also with those for a rectangular wing having full-chord floating tip ailerons. Practically all the floating tip ailerons gave satisfactory rolling moments at all angles of attack and at the same time gave no adverse yawing moments of appreciable magnitude. The general performance characteristics with the floating tip ailerons, however, were relatively poor, especially the rate of climb. None of the floating tip ailerons entirely eliminated the auto rotational moments at angles of attack above the stall, but all of them gave lower moments than a plain wing. Some of the floating ailerons fluttered if given sufficiently large deflection, but this could have been eliminated by moving the hinge axis of the ailerons forward. Considering all points including hinge moments, the floating tip ailerons on the wing with 5:1 taper are probably the best of those which were tested.
R Jivani, Rishad; N Patel, Chhagan; M Patel, Dashrath; P Jivani, Nurudin
2010-01-01
The present study deals with the development of a floating in-situ gel of the narrow-absorption-window drug baclofen. Sodium alginate-based in-situ gelling systems were prepared by dissolving various concentrations of sodium alginate in deionized water, to which varying concentrations of drug and calcium bicarbonate were added. Fourier transform infrared spectroscopy (FTIR) and differential scanning calorimetry (DSC) were used to check for any interaction between the drug and the excipients. A 3² full factorial design was used for optimization. The concentrations of sodium alginate (X1) and calcium bicarbonate (X2) were selected as the independent variables. The amount of drug released after 1 h (Q1) and 10 h (Q10) and the viscosity of the solution were selected as the dependent variables. The gels were studied for their viscosity, in-vitro buoyancy and drug release. Contour plots were drawn for each dependent variable, and check-point batches were prepared in order to obtain desirable release profiles. The drug release profiles were fitted to different kinetic models. The floating lag time and floating time were found to be 2 min and 12 h, respectively. A decreasing trend in drug release was observed with increasing concentrations of CaCO3. The computed values of Q1 and Q10 for the check-point batch were 25% and 86%, respectively, compared to the experimental values of 27.1% and 88.34%. The similarity factor (f2) for the check-point batch, 80.25, showed that the two dissolution profiles were similar. The drug release from the in-situ gel follows the Higuchi model, which indicates a diffusion-controlled release. A stomach-specific in-situ gel of baclofen could thus be prepared using a floating mechanism to increase the residence time of the drug in the stomach and thereby increase absorption.
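For reference, the f2 similarity factor quoted above is a simple closed-form comparison of two dissolution profiles. The sketch below uses the standard FDA/EMA formula; the dissolution values are illustrative placeholders, not the study's data.

```python
import math

def similarity_factor_f2(reference, test):
    """Similarity factor f2 between two dissolution profiles
    (percent dissolved at the same n time points). f2 >= 50 is
    conventionally taken to mean the profiles are similar."""
    n = len(reference)
    mean_sq_diff = sum((r - t) ** 2 for r, t in zip(reference, test)) / n
    return 50.0 * math.log10(100.0 / math.sqrt(1.0 + mean_sq_diff))

# Illustrative profiles (% drug released at 1, 2, ..., 10 h) -- not the paper's data.
predicted = [25, 38, 48, 57, 64, 71, 77, 81, 84, 86]
observed  = [27, 40, 49, 59, 66, 72, 78, 83, 86, 88]
print(round(similarity_factor_f2(predicted, observed), 2))
```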
Visualization of Unsteady Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Haimes, Robert
1997-01-01
The current compute environment that most researchers are using for the calculation of 3D unsteady Computational Fluid Dynamics (CFD) results is a supercomputer-class machine. Massively Parallel Processors (MPPs) such as the 160-node IBM SP2 at NAS, and clusters of workstations acting as a single MPP (like NAS's SGI Power-Challenge array and the J90 cluster), provide the required computational bandwidth for CFD calculations of transient problems. If we follow the traditional computational analysis steps for CFD (and we wish to construct an interactive visualizer) we need to be aware of the following: (1) Disk space requirements. A single snapshot must contain at least the values (primitive variables) stored at the appropriate locations within the mesh. For most simple 3D Euler solvers that means 5 floating point words. Navier-Stokes solutions with turbulence models may contain 7 state variables. (2) Disk speed vs. computational speed. The time required to read the complete solution of a saved time frame from disk is now longer than the compute time for a set number of iterations from an explicit solver. Depending on the hardware and solver, an iteration of an implicit code may also take less time than reading the solution from disk. If one examines the performance improvements of the last decade or two, it is easy to see that relying on disk performance (vs. CPU improvement) may not be the best method for enhancing interactivity. (3) Cluster and parallel machine I/O problems. Disk access time is much worse within current parallel machines and clusters of workstations that are acting in concert to solve a single problem. In this case we are not trying to read the volume of data; rather, the solver is running and writing out the solution, and these traditional network interfaces must be used for the file system. (4) Numerics of particle traces. Most visualization tools can work on a single snapshot of the data, but some visualization tools for transient problems require dealing with time.
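A back-of-the-envelope sketch of point (1): snapshot size scales with mesh size times the number of state variables, and the read-back time follows directly from disk bandwidth. All numbers below are illustrative assumptions, not figures from the paper.

```python
# Rough snapshot-size estimate for an unsteady CFD run (illustrative numbers).
mesh_points   = 5_000_000       # nodes in the 3D mesh (assumed)
variables     = 5               # primitive variables for a simple Euler solver
bytes_per_var = 8               # 64-bit floating point

snapshot_bytes = mesh_points * variables * bytes_per_var
time_frames    = 1_000          # saved time steps (assumed)

print(f"one snapshot : {snapshot_bytes / 2**20:.1f} MiB")
print(f"whole run    : {snapshot_bytes * time_frames / 2**30:.1f} GiB")

# Reading a frame back vs. recomputing it: if the disk sustains ~100 MB/s and the
# solver advances a frame in a few seconds, the read is not obviously cheaper.
disk_MBps = 100
print(f"read time    : {snapshot_bytes / (disk_MBps * 1e6):.1f} s per frame")
```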
NASA Technical Reports Server (NTRS)
Gosney, W. M.
1977-01-01
Electrically alterable read-only memories (EAROM's) or reprogrammable read-only memories (RPROM's) can be fabricated using a single-level metal-gate p-channel MOS technology with all conventional processing steps. Given the acronym DIFMOS for dual-injector floating-gate MOS, this technology utilizes the floating-gate technique for nonvolatile storage of data. Avalanche injection of hot electrons through gate oxide from a special injector diode in each bit is used to charge the floating gates. A second injector structure included in each bit permits discharge of the floating gate by avalanche injection of holes through gate oxide. The overall design of the DIFMOS bit is dictated by the physical considerations required for each of the avalanche injector types. The end result is a circuit technology which can provide fully decoded bit-erasable EAROM-type circuits using conventional manufacturing techniques.
GPU Multi-Scale Particle Tracking and Multi-Fluid Simulations of the Radiation Belts
NASA Astrophysics Data System (ADS)
Ziemba, T.; Carscadden, J.; O'Donnell, D.; Winglee, R.; Harnett, E.; Cash, M.
2007-12-01
The properties of the radiation belts can vary dramatically under the influence of magnetic storms and storm-time substorms. The task of understanding and predicting radiation belt properties is made difficult because their properties are determined by global processes as well as small-scale wave-particle interactions. A full solution to the problem will require major innovations in technique and computer hardware. The proposed work demonstrates particle tracking codes linked with new multi-scale/multi-fluid global simulations that provide the first means to include small-scale processes within the global magnetospheric context. A large hurdle to the problem is having sufficient computer hardware able to handle the disparate temporal and spatial scale sizes. A major innovation of the work is that the codes are designed to run on graphics processing units (GPUs). GPUs are intrinsically highly parallelized systems that provide more than an order of magnitude greater computing speed than CPU-based systems, for little more cost than a high-end workstation. Recent advancements in GPU technologies allow for full IEEE float specifications with performance up to several hundred GFLOPs per GPU, and new software architectures have recently become available to ease the transition from graphics-based to scientific applications. This allows for a cheap alternative to standard supercomputing methods and should shorten the time to discovery. A demonstration of the code pushing more than 500,000 particles faster than real time is presented and used to provide new insight into radiation belt dynamics.
No-hardware-signature cybersecurity-crypto-module: a resilient cyber defense agent
NASA Astrophysics Data System (ADS)
Zaghloul, A. R. M.; Zaghloul, Y. A.
2014-06-01
We present an optical cybersecurity-crypto-module as a resilient cyber defense agent. It has no hardware signature since it is bitstream reconfigurable: a single hardware architecture functions as any selected device of all possible ones with the same number of inputs. For a two-input digital device, a 4-digit bitstream of 0s and 1s determines which device, of a total of 16 devices, the hardware performs as. Accordingly, the hardware itself is not physically reconfigured, but its performance is. Such a defense agent allows the attack to take place, rendering it harmless. On the other hand, if the system is already infected with malware sending out information, the defense agent allows the information to go out, rendering it meaningless. The hardware architecture is immune to side attacks since such an attack would reveal information on the attack itself and not on the hardware. This cyber defense agent can be used to secure a point-to-point link, a point-to-multipoint link, a whole network, and/or a single entity in cyberspace, thereby ensuring trust between cyber resources. It can provide secure communication in an insecure network. We provide the hardware design and explain how it works. Scalability of the design is briefly discussed. (Protected by United States Patents No.: US 8,004,734; US 8,325,404; and other National Patents worldwide.)
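The two-input case described above is functionally a 4-entry lookup table selected by the 4-bit bitstream. A minimal software analogue is sketched below (the function names are hypothetical and the optical implementation is of course not modeled).

```python
def make_device(bitstream):
    """A 4-digit bitstream of 0s and 1s selects which of the 16 possible
    two-input Boolean devices the same 'hardware' behaves as."""
    assert len(bitstream) == 4 and set(bitstream) <= {"0", "1"}
    table = [int(b) for b in bitstream]      # truth table for inputs 00, 01, 10, 11
    return lambda a, b: table[(a << 1) | b]

and_gate = make_device("0001")   # behaves as AND
xor_gate = make_device("0110")   # same architecture, reconfigured as XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, and_gate(a, b), xor_gate(a, b))
```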
Triana Safehold: A New Gyroless, Sun-Pointing Attitude Controller
NASA Technical Reports Server (NTRS)
Chen, J.; Morgenstern, Wendy; Garrick, Joseph
2001-01-01
Triana is a single-string spacecraft to be placed in a halo orbit about the sun-earth L1 Lagrangian point. The Attitude Control Subsystem (ACS) hardware includes four reaction wheels, ten thrusters, six coarse sun sensors, a star tracker, and a three-axis Inertial Measuring Unit (IMU). The ACS Safehold design features a gyroless sun-pointing control scheme using only sun sensors and wheels. With this minimum hardware approach, Safehold increases mission reliability in the event of a gyroscope anomaly. In place of the gyroscope rate measurements, Triana Safehold uses wheel tachometers to help provide a scaled estimation of the spacecraft body rate about the sun vector. Since Triana nominally performs momentum management every three months, its accumulated system momentum can reach a significant fraction of the wheel capacity. It is therefore a requirement for Safehold to maintain a sun-pointing attitude even when the spacecraft system momentum is reasonably large. The tachometer sun-line rate estimation enables the controller to bring the spacecraft close to its desired sun-pointing attitude even with reasonably high system momentum and wheel drags. This paper presents the design rationale behind this gyroless controller, stability analysis, and some time-domain simulation results showing performances with various initial conditions. Finally, suggestions for future improvements are briefly discussed.
Shahan, M.R.; Seaman, C.E.; Beck, T.W.; Colinet, J.F.; Mischler, S.E.
2017-01-01
Float coal dust is produced by various mining methods, carried by ventilating air and deposited on the floor, roof and ribs of mine airways. If deposited, float dust is re-entrained during a methane explosion. Without sufficient inert rock dust quantities, this float coal dust can propagate an explosion throughout mining entries. Consequently, controlling float coal dust is of critical interest to mining operations. Rock dusting, which is the adding of inert material to airway surfaces, is the main control technique currently used by the coal mining industry to reduce the float coal dust explosion hazard. To assist the industry in reducing this hazard, the Pittsburgh Mining Research Division of the U.S. National Institute for Occupational Safety and Health initiated a project to investigate methods and technologies to reduce float coal dust in underground coal mines through prevention, capture and suppression prior to deposition. Field characterization studies were performed to determine quantitatively the sources, types and amounts of dust produced during various coal mining processes. The operations chosen for study were a continuous miner section, a longwall section and a coal-handling facility. For each of these operations, the primary dust sources were confirmed to be the continuous mining machine, longwall shearer and conveyor belt transfer points, respectively. Respirable and total airborne float dust samples were collected and analyzed for each operation, and the ratio of total airborne float coal dust to respirable dust was calculated. During the continuous mining process, the ratio of total airborne float coal dust to respirable dust ranged from 10.3 to 13.8. The ratios measured on the longwall face were between 18.5 and 21.5. The total airborne float coal dust to respirable dust ratio observed during belt transport ranged between 7.5 and 21.8. PMID:28936001
Hardware Simulations of Spacecraft Attitude Synchronization Using Lyapunov-Based Controllers
NASA Astrophysics Data System (ADS)
Jung, Juno; Park, Sang-Young; Eun, Youngho; Kim, Sung-Woo; Park, Chandeok
2018-04-01
In the near future, space missions with multiple spacecraft are expected to replace traditional missions with a single large spacecraft. These spacecraft formation flying missions generally require precise knowledge of relative position and attitude between neighboring agents. In this study, among the several challenging issues, we focus on the technique to control spacecraft attitude synchronization in formation. We develop a number of nonlinear control schemes based on the Lyapunov stability theorem and considering special situations: full-state feedback control, full-state feedback control with unknown inertia parameters, and output feedback control without angular velocity measurements. All the proposed controllers offer absolute and relative control using reaction wheel assembly for both regulator and tracking problems. In addition to the numerical simulations, an air-bearing-based hardware-in-the-loop (HIL) system is used to verify the proposed control laws in real-time hardware environments. The pointing errors converge to 0.5° with numerical simulations and to 2° using the HIL system. Consequently, both numerical and hardware simulations confirm the performance of the spacecraft attitude synchronization algorithms developed in this study.
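For context, the sketch below shows a generic Lyapunov-style quaternion-feedback regulator of the kind this family of controllers builds on (absolute pointing only; the paper's synchronization controllers add relative-attitude terms and handle unknown inertia). The gains, convention, and states are placeholders, not the authors' design.

```python
import numpy as np

def quat_error(q_des, q):
    """Error quaternion q_e = conj(q_des) * q (scalar-last convention)."""
    xd, yd, zd, wd = q_des
    x, y, z, w = q
    return np.array([
        wd*x - xd*w - yd*z + zd*y,
        wd*y + xd*z - yd*w - zd*x,
        wd*z - xd*y + yd*x - zd*w,
        wd*w + xd*x + yd*y + zd*z,
    ])

def attitude_control_torque(q_des, q, omega, kp=0.1, kd=0.5):
    """Standard Lyapunov-based quaternion-feedback regulator:
    u = -kp * vec(q_e) - kd * omega  (reaction-wheel torque command)."""
    qe = quat_error(q_des, q)
    return -kp * qe[:3] - kd * omega

# Example: small attitude error, small body rate.
q_des = np.array([0.0, 0.0, 0.0, 1.0])
q     = np.array([0.05, 0.0, 0.0, 0.99875])
omega = np.array([0.001, -0.002, 0.0])      # rad/s
print(attitude_control_torque(q_des, q, omega))
```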
Correlation of ISS Electric Potential Variations with Mission Operations
NASA Technical Reports Server (NTRS)
Willis, Emily M.; Minow, Joseph I.; Parker, Linda Neergaard
2014-01-01
Orbiting approximately 400 km above the Earth, the International Space Station (ISS) is a unique research laboratory used to conduct ground-breaking science experiments in space. The ISS has eight Solar Array Wings (SAW), and each wing is 11.7 meters wide and 35.1 meters long. The SAWs are controlled individually to maximize power output, minimize stress to the ISS structure, and minimize interference with other ISS operations such as vehicle dockings and Extra-Vehicular Activities (EVA). The Solar Arrays are designed to operate at 160 Volts. These large, high power solar arrays are negatively grounded to the ISS and collect charged particles (predominately electrons) as they travel through the space plasma in the Earth's ionosphere. If not controlled, this collected charge causes floating potential variations which can result in arcing, causing injury to the crew during an EVA or damage to hardware [1]. The environmental catalysts for ISS floating potential variations include plasma density and temperature fluctuations and magnetic induction from the Earth's magnetic field. These alone are not enough to cause concern for ISS, but when they are coupled with the large positive potential on the solar arrays, floating potentials up to negative 95 Volts have been observed. Our goal is to differentiate the operationally induced fluctuations in floating potentials from the environmental causes. Differentiating will help to determine what charging can be controlled, and we can then design the proper operations controls for charge collection mitigation. Additionally, the knowledge of how high power solar arrays interact with the environment and what regulations or design techniques can be employed to minimize charging impacts can be applied to future programs.
Ran, Bin; Song, Li; Cheng, Yang; Tan, Huachun
2016-01-01
Traffic state estimation from the floating car system is a challenging problem. The low penetration rate and random distribution mean that the available floating car samples usually cover only part of the space and time points of the road network. To obtain a wide range of traffic states from the floating car system, many methods have been proposed to estimate the traffic state for the uncovered links. However, these methods cannot provide the traffic state of the entire road network. In this paper, traffic state estimation is transformed into a missing data imputation problem, and a tensor completion framework is proposed to estimate the missing traffic states. A tensor is constructed to model the traffic state, in which observed entries are directly derived from the floating car system and unobserved traffic states are modeled as missing entries of the constructed tensor. The constructed traffic state tensor can represent spatial and temporal correlations of traffic data and encode the multi-way properties of the traffic state. The advantage of the proposed approach is that it can fully mine and utilize the multi-dimensional inherent correlations of the traffic state. We tested the proposed approach on a well calibrated simulation network. Experimental results demonstrated that the proposed approach yields reliable traffic state estimation from very sparse floating car data, particularly when the floating car penetration rate is below 1%. PMID:27448326
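A much simplified stand-in for the idea is sketched below: build a (link × day × time-of-day) traffic-state tensor with mostly missing entries, then impute the gaps by iterative low-rank (SVD-based) completion on a mode unfolding. This is a generic soft-impute sketch on synthetic data, not the authors' tensor completion algorithm, and all sizes and thresholds are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "ground truth": speeds on 30 links x 7 days x 48 time slots,
# built from a low-rank structure plus noise (illustrative only).
links, days, slots = 30, 7, 48
base = rng.random((links, 1, 1)) * 40 + 20                       # per-link mean speed
daily = 1.0 + 0.2 * np.sin(np.linspace(0, 2 * np.pi, slots))[None, None, :]
truth = base * daily + rng.normal(0, 1, (links, days, slots))

# Floating cars observe only ~5% of the entries.
mask = rng.random(truth.shape) < 0.05
tensor = np.where(mask, truth, np.nan)

# Simple soft-impute on the mode-1 unfolding (links x (days*slots)).
X = np.nan_to_num(tensor.reshape(links, -1), nan=np.nanmean(tensor))
obs = mask.reshape(links, -1)
for _ in range(50):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s = np.maximum(s - 5.0, 0.0)                                 # singular-value shrinkage
    low_rank = (U * s) @ Vt
    X = np.where(obs, np.nan_to_num(tensor.reshape(links, -1)), low_rank)

rmse = np.sqrt(np.mean((low_rank - truth.reshape(links, -1))[~obs] ** 2))
print(f"RMSE on unobserved entries: {rmse:.2f} km/h")
```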
General purpose free floating platform for KC-135 flight experimentation
NASA Technical Reports Server (NTRS)
Borchers, Bruce A.; Yendler, Boris S.; Kliss, Mark H.; Gonzales, Andrew A.; Edwards, Mark T.
1994-01-01
The Controlled Ecological Life Support Systems (CELSS) program is evaluating higher plants as a means of providing life support functions aboard spacecraft. These plant systems will be capable of regenerating air and water while meeting some of the food requirements of the crew. In order to grow plants in space, a series of systems are required to provide the necessary plant support functions. Some of the systems required for CELSS experiments are such that it is likely that existing technologies will require refinement, or novel technologies will need to be developed. To evaluate and test these technologies, a series of KC-135 precursor flights are being proposed. A general purpose free floating experiment platform is being developed to allow the KC-135 flights to be used to their fullest. This paper outlines the basic design for the CELSS Free Floating Test Bed (FFTB) and the requirements for the individual subsystems. Several preliminary experiments suitable for the free floater are also discussed.
NASA Astrophysics Data System (ADS)
Barnett, Barry S.; Bovik, Alan C.
1995-04-01
This paper presents a real-time full-motion video conferencing system based on the Visual Pattern Image Sequence Coding (VPISC) software codec. The prototype system hardware comprises two personal computers, two camcorders, two frame grabbers, and an Ethernet connection. The prototype system software has a simple structure. It runs under the Disk Operating System, and includes a user interface, a video I/O interface, an event-driven network interface, and a free-running or frame-synchronous video codec that also acts as the controller for the video and network interfaces. Two video coders have been tested in this system. Simple implementations of Visual Pattern Image Coding and VPISC have both proven to support full motion video conferencing with good visual quality. Future work will concentrate on expanding this prototype to support the motion compensated version of VPISC, as well as encompassing point-to-point modem I/O and multiple network protocols. The application will be ported to multiple hardware platforms and operating systems. The motivation for developing this prototype system is to demonstrate the practicality of software-based real-time video codecs. Furthermore, software video codecs are not only cheaper, but are more flexible system solutions because they enable different computer platforms to exchange encoded video information without requiring on-board protocol-compatible video codec hardware. Software-based solutions enable true low-cost video conferencing that fits the `open systems' model of interoperability that is so important for building portable hardware and software applications.
Miller, J G; Wolf, F M
1996-01-01
Strategies for implementing instructional technology are based on recent experiences at the University of Michigan Medical Center. The issues covered include 1) addressing facilities, hardware, and staffing needs, 2) determining learners' skill requirements and appropriate training activities, and 3) selecting and customizing educational software. Many examples are provided, and nine key points for success are emphasized. PMID:8653447
Effective correlator for RadioAstron project
NASA Astrophysics Data System (ADS)
Sergeev, Sergey
This paper presents the implementation of a software FX correlator for Very Long Baseline Interferometry, adapted for the "RadioAstron" project. The software correlator is implemented for heterogeneous computing systems using graphics accelerators. It is shown that the interferometry task maps onto graphics hardware with high efficiency. The host processor of the heterogeneous computing system performs the function of forming the data flow for the graphics accelerators, the number of which corresponds to the number of frequency channels. For the RadioAstron project, there are seven such channels. Each accelerator computes the correlation matrix for all baselines for a single frequency channel. The initial data are converted to floating-point format, corrected with the corresponding delay function, and the entire correlation matrix is computed simultaneously. Calculation of the correlation matrix is performed using the sliding Fourier transform. Thanks to the good match between the problem and the graphics accelerator architecture, a single Kepler-platform processor achieved performance on this task corresponding to that of a four-node Intel computing cluster. The task scales successfully not only to a large number of graphics accelerators, but also to a large number of nodes with multiple accelerators.
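A minimal numpy sketch of the FX idea for one frequency channel is given below: Fourier-transform each station's (delay-corrected) stream, then cross-multiply and accumulate the full correlation matrix for all baselines. For simplicity this uses blockwise FFTs rather than the sliding Fourier transform mentioned above, and all names and sizes are illustrative.

```python
import numpy as np

def fx_correlate(streams, fft_len=1024):
    """streams: complex baseband samples, shape (n_stations, n_samples),
    assumed already corrected for the per-station delay model.
    Returns the accumulated cross-correlation matrix per spectral bin,
    shape (fft_len, n_stations, n_stations)."""
    n_st, n_samp = streams.shape
    n_blocks = n_samp // fft_len
    acc = np.zeros((fft_len, n_st, n_st), dtype=complex)
    for b in range(n_blocks):
        block = streams[:, b * fft_len:(b + 1) * fft_len]
        spec = np.fft.fft(block, axis=1)                 # "F" step
        # "X" step: outer product over stations for every spectral bin.
        acc += np.einsum('if,jf->fij', spec, np.conj(spec))
    return acc / n_blocks

# Two stations observing the same noise with a small relative phase offset.
rng = np.random.default_rng(1)
sig = rng.normal(size=8192) + 1j * rng.normal(size=8192)
streams = np.vstack([sig, sig * np.exp(1j * 0.3)])
print(fx_correlate(streams).shape)
```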
Pani, Danilo; Barabino, Gianluca; Citi, Luca; Meloni, Paolo; Raspopovic, Stanisa; Micera, Silvestro; Raffo, Luigi
2016-09-01
The control of upper limb neuroprostheses through the peripheral nervous system (PNS) can allow restoring motor functions in amputees. At present, the important aspect of the real-time implementation of neural decoding algorithms on embedded systems has often been overlooked, notwithstanding the impact that limited hardware resources have on the efficiency/effectiveness of any given algorithm. The present study addresses the optimization of a template-matching-based algorithm for PNS signal decoding, a milestone toward its full real-time implementation on a floating-point digital signal processor (DSP). The proposed optimized real-time algorithm achieves up to 96% correct classification on real PNS signals acquired through LIFE electrodes in animals, and can correctly sort spikes of a synthetic cortical dataset with sufficiently uncorrelated spike morphologies (93% average correct classification), comparably to the results obtained with a top spike sorter (94% on average on the same dataset). The power consumption enables more than 24 h of processing at the maximum load, and a latency model has been derived to enable a fair performance assessment. The final embodiment demonstrates real-time performance on a low-power off-the-shelf DSP, opening the way to experiments exploiting the efferent signals to control a motor neuroprosthesis.
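As a reference for the general technique named above (not the authors' optimized DSP implementation), the sketch below classifies a detected spike snippet by normalized correlation against a set of templates; templates and the test snippet are synthetic.

```python
import numpy as np

def classify_spike(snippet, templates, threshold=0.8):
    """Assign a detected spike snippet to the template with the highest
    normalized correlation, or to -1 (unsorted) if no template matches
    well enough. snippet: (n,), templates: (n_templates, n)."""
    s = (snippet - snippet.mean()) / (snippet.std() + 1e-12)
    best, best_r = -1, threshold
    for k, t in enumerate(templates):
        tt = (t - t.mean()) / (t.std() + 1e-12)
        r = float(np.dot(s, tt)) / len(s)            # Pearson-style correlation
        if r > best_r:
            best, best_r = k, r
    return best

# Two synthetic templates and a noisy instance of the second one.
x = np.linspace(0, 1, 48)
templates = np.vstack([np.exp(-((x - 0.3) / 0.05) ** 2),
                       np.sin(2 * np.pi * 3 * x) * np.exp(-3 * x)])
snippet = templates[1] + 0.1 * np.random.default_rng(2).normal(size=48)
print(classify_spike(snippet, templates))   # expected: 1
```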
Graphics processing unit (GPU)-based computation of heat conduction in thermally anisotropic solids
NASA Astrophysics Data System (ADS)
Nahas, C. A.; Balasubramaniam, Krishnan; Rajagopal, Prabhu
2013-01-01
Numerical modeling of anisotropic media is a computationally intensive task, since it brings additional complexity to the field problem in that the physical properties are different in different directions. Largely used in the aerospace industry because of their lightweight nature, composite materials are a very good example of thermally anisotropic media. With advancements in video gaming technology, parallel processors are much cheaper today, and accessibility to higher-end graphical processing devices has increased dramatically over the past couple of years. Since these massively parallel GPUs are very good at handling floating point arithmetic, they provide a new platform for engineers and scientists to accelerate their numerical models using commodity hardware. In this paper we implement a parallel finite difference model of thermal diffusion through anisotropic media using NVIDIA CUDA (Compute Unified Device Architecture). We use the NVIDIA GeForce GTX 560 Ti as our primary computing device, which consists of 384 CUDA cores clocked at 1645 MHz, with a standard desktop PC as the host platform. We compare the results with a standard CPU implementation for accuracy and speed and draw implications for simulation using the GPU paradigm.
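For orientation, here is a CPU (numpy) reference of one explicit finite-difference step for 2-D heat conduction with direction-dependent conductivity, i.e., the kind of stencil computation such a CUDA implementation parallelizes. The discretization and all material values are illustrative assumptions, not the paper's model.

```python
import numpy as np

def step_anisotropic_heat(T, kx, ky, rho_c, dx, dy, dt):
    """One explicit FTCS update of dT/dt = (kx*T_xx + ky*T_yy) / (rho*c)
    with different conductivities along x and y (thermal anisotropy).
    Boundary rows/columns are held fixed (Dirichlet)."""
    Tn = T.copy()
    Tn[1:-1, 1:-1] = T[1:-1, 1:-1] + dt / rho_c * (
        kx * (T[1:-1, 2:] - 2 * T[1:-1, 1:-1] + T[1:-1, :-2]) / dx**2 +
        ky * (T[2:, 1:-1] - 2 * T[1:-1, 1:-1] + T[:-2, 1:-1]) / dy**2)
    return Tn

# Illustrative composite-plate-like case: conduction 10x faster along x than y.
T = np.zeros((128, 128)); T[64, 64] = 100.0            # hot spot
kx, ky, rho_c, dx, dy = 10.0, 1.0, 1.0e3, 1e-3, 1e-3
dt = 0.2 * rho_c * min(dx, dy)**2 / (2 * (kx + ky))     # conservative stability margin
for _ in range(500):
    T = step_anisotropic_heat(T, kx, ky, rho_c, dx, dy, dt)
print(round(float(T.max()), 3))
```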
Smart Antenna UKM Testbed for Digital Beamforming System
NASA Astrophysics Data System (ADS)
Islam, Mohammad Tariqul; Misran, Norbahiah; Yatim, Baharudin
2009-12-01
A new design of smart antenna testbed developed at UKM for digital beamforming purposes is proposed. The smart antenna UKM testbed is developed based on a modular design employing two novel designs: an L-probe fed inverted hybrid E-H (LIEH) array antenna and a software-reconfigurable digital beamforming system (DBS). The antenna is developed using the novel LIEH microstrip patch element design arranged into a uniform linear array antenna. The modular concept of the system provides the capability to test the antenna hardware, beamforming unit, and beamforming algorithm in an independent manner, thus allowing the smart antenna system to be developed and tested in parallel, hence reducing the design time. The DBS was developed using a high-performance floating-point DSP board and a 4-channel RF front-end receiver developed in-house. An interface board is designed to interface the ADC board with the RF front-end receiver. A four-element receiving array testbed at 1.88-2.22 GHz frequency is constructed, and digital beamforming on this testbed is successfully demonstrated.
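A minimal digital-beamforming sketch for a 4-element uniform linear array of the kind used in this testbed is shown below: form steering-vector weights and combine the digitized channel samples. The element spacing, signal model, and data are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def steering_vector(theta_deg, n_elements=4, d_over_lambda=0.5):
    """Array response of a uniform linear array for a plane wave from theta."""
    n = np.arange(n_elements)
    phase = 2j * np.pi * d_over_lambda * n * np.sin(np.radians(theta_deg))
    return np.exp(phase)

def beamform(samples, theta_deg):
    """Conventional (delay-and-sum) digital beamformer: y = w^H x.
    samples: (n_elements, n_snapshots) complex baseband from the ADC channels."""
    w = steering_vector(theta_deg, samples.shape[0])
    return np.conj(w) @ samples / samples.shape[0]

# Simulated wavefront arriving from 20 degrees, plus receiver noise.
rng = np.random.default_rng(3)
sig = np.exp(1j * 2 * np.pi * 0.01 * np.arange(2000))
x = np.outer(steering_vector(20.0), sig) + 0.1 * rng.normal(size=(4, 2000))
for look in (0.0, 20.0, 40.0):
    print(look, round(float(np.mean(np.abs(beamform(x, look)) ** 2)), 3))
```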
Lasercom system architecture with reduced complexity
NASA Technical Reports Server (NTRS)
Lesh, James R. (Inventor); Chen, Chien-Chung (Inventor); Ansari, Homayoon (Inventor)
1994-01-01
Spatial acquisition and precision beam pointing functions are critical to spaceborne laser communication systems. In the present invention, a single high bandwidth CCD detector is used to perform both spatial acquisition and tracking functions. Compared to previous lasercom hardware design, the array tracking concept offers reduced system complexity by reducing the number of optical elements in the design. Specifically, the design requires only one detector and one beam steering mechanism. It also provides the means to optically close the point-ahead control loop. The technology required for high bandwidth array tracking was examined and shown to be consistent with current state of the art. The single detector design can lead to a significantly reduced system complexity and a lower system cost.
LaserCom System Architecture With Reduced Complexity
NASA Technical Reports Server (NTRS)
Lesh, James R. (Inventor); Chen, Chien-Chung (Inventor); Ansari, Homa-Yoon (Inventor)
1996-01-01
Spatial acquisition and precision beam pointing functions are critical to spaceborne laser communication systems. In the present invention a single high bandwidth CCD detector is used to perform both spatial acquisition and tracking functions. Compared to previous lasercom hardware design, the array tracking concept offers reduced system complexity by reducing the number of optical elements in the design. Specifically, the design requires only one detector and one beam steering mechanism. It also provides means to optically close the point-ahead control loop. The technology required for high bandwidth array tracking was examined and shown to be consistent with current state of the art. The single detector design can lead to a significantly reduced system complexity and a lower system cost.
What is the size of a floating sheath? An answer
NASA Astrophysics Data System (ADS)
Voigt, Farina; Naggary, Schabnam; Brinkmann, Ralf Peter
2016-09-01
The formation of a non-neutral boundary sheath in front of material surfaces is a universal plasma phenomenon. Despite several decades of research, however, not all related issues are fully clarified. In a recent paper, Chabert pointed out that this lack of clarity applies even to the seemingly innocuous question ``What is the size of a floating sheath?'' This contribution attempts to provide an answer that is not arbitrary: the size of a floating sheath is defined as the plate separation of an equivalent parallel plate capacitor. The consequences of the definition are explored with the help of a self-consistent sheath model, and a comparison is made with other sheath size definitions. Supported by the Deutsche Forschungsgemeinschaft within SFB TR 87.
Stratospheric Balloon Platforms for Near Space Access
NASA Astrophysics Data System (ADS)
Dewey, R. G.
2012-12-01
For over five decades, high altitude aerospace balloon platforms have provided a unique vantage point for space and geophysical research by exposing scientific instrument packages and experiments to space-like conditions above 99% of Earth's atmosphere. Reaching altitudes in excess of 30 km for durations ranging from hours to weeks, high altitude balloons offer longer flight durations than both traditional sounding rockets and emerging suborbital reusable launch vehicles. For instruments and experiments requiring access to high altitudes, engineered balloon systems provide a timely, responsive, flexible, and cost-effective vehicle for reaching near space conditions. Moreover, high altitude balloon platforms serve as an early means of testing and validating hardware bound for suborbital or orbital space without imposing space vehicle qualifications and certification requirements on hardware in development. From float altitudes above 30 km visible obscuration of the sky is greatly reduced and telescopes and other sensors function in an orbit-like environment, but in 1g. Down-facing sensors can take long-exposure atmospheric measurements and images of Earth's surface from oblique and nadir perspectives. Payload support subsystems such as telemetry equipment and command, control, and communication (C3) interfaces can also be tested and operationally verified in this space-analog environment. For scientific payloads requiring over-flight of specific areas of interests, such as an active volcano or forest region, advanced mission planning software allows flight trajectories to be accurately modeled. Using both line-of-sight and satellite-based communication systems, payloads can be tracked and controlled throughout the entire mission duration. Under NASA's Flight Opportunities Program, NSC can provide a range of high altitude flight options to support space and geophysical research: High Altitude Shuttle System (HASS) - A balloon-borne semi-autonomous glider carries payloads to high altitude and returns them safely to pre-selected landing sites, supporting quick recovery, refurbishment, and re-flight. Small Balloon System (SBS) - Controls payload interfaces via a standardized avionics system. Using a parachute for recovery, the SBS is well suited for small satellite and spacecraft subsystem developers wanting to raise their Technology Readiness Level (TRL) in an operationally relevant environment. Provides flexibility for scientific payloads requiring externally mounted equipment, such as telescopes and antennas. Nano Balloon System (NBS) - For smaller payloads (~CubeSats) with minimal C3 requirements, the Nano Balloon System (NBS) operates under less restrictive flight regulations with increased operational flexibility. The NBS is well suited for payload providers seeking a quick, simple, and cost effective solution for operating small ~passive payloads in near space. High altitude balloon systems offer the payload provider and experimenter a unique and flexible platform for geophysical and space research. Though new launch vehicles continue to expand access to suborbital and orbital space, recent improvements in high altitude balloon technology and operations provide a cost effective alternative to access space-like conditions.
Sparse matrix-vector multiplication on network-on-chip
NASA Astrophysics Data System (ADS)
Sun, C.-C.; Götze, J.; Jheng, H.-Y.; Ruan, S.-J.
2010-12-01
In this paper, we present an idea for performing matrix-vector multiplication using a Network-on-Chip (NoC) architecture. In traditional IC design, on-chip communications have been designed with dedicated point-to-point interconnections. Therefore, regular local data transfer is the major concept of many parallel implementations. However, when dealing with the parallel implementation of sparse matrix-vector multiplication (SMVM), which is the main step of all iterative algorithms for solving systems of linear equations, the required data transfers depend on the sparsity structure of the matrix and can be extremely irregular. Using the NoC architecture makes it possible to deal with an arbitrary structure of the data transfers, i.e. with the irregular structure of sparse matrices. So far, we have implemented the proposed SMVM-NoC architecture in sizes 4×4 and 5×5 in IEEE 754 single-precision floating-point on an FPGA.
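For reference, the irregular access pattern the NoC has to serve comes from the sparse storage format itself. Below is a plain compressed-sparse-row matrix-vector product, the software counterpart of the hardware SMVM units (matrix and sizes are illustrative).

```python
import numpy as np

def csr_spmv(values, col_idx, row_ptr, x):
    """y = A @ x with A stored in compressed sparse row (CSR) format.
    The per-row gather x[col_idx[...]] is the irregular, matrix-dependent
    communication that a NoC-based implementation has to route."""
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(y)):
        start, end = row_ptr[i], row_ptr[i + 1]
        y[i] = np.dot(values[start:end], x[col_idx[start:end]])
    return y

# Small example matrix:
# [[4, 0, 1],
#  [0, 3, 0],
#  [2, 0, 5]]
values  = np.array([4.0, 1.0, 3.0, 2.0, 5.0])
col_idx = np.array([0, 2, 1, 0, 2])
row_ptr = np.array([0, 2, 3, 5])
print(csr_spmv(values, col_idx, row_ptr, np.array([1.0, 2.0, 3.0])))  # [ 7.  6. 17.]
```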
FloPSy - Search-Based Floating Point Constraint Solving for Symbolic Execution
NASA Astrophysics Data System (ADS)
Lakhotia, Kiran; Tillmann, Nikolai; Harman, Mark; de Halleux, Jonathan
Recently there has been an upsurge of interest in both Search-Based Software Testing (SBST) and Dynamic Symbolic Execution (DSE). Each of these two approaches has complementary strengths and weaknesses, making it a natural choice to explore the degree to which the strengths of one can be exploited to offset the weaknesses of the other. This paper introduces an augmented version of DSE that uses an SBST-based approach to handling floating point computations, which are known to be problematic for vanilla DSE. The approach has been implemented as a plug-in for the Microsoft Pex DSE testing tool. The paper presents results from both standard evaluation benchmarks and two open source programs.
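A toy illustration of the search-based idea follows: minimize a branch-distance-style fitness over floating-point inputs with a simple accelerating hill climb, instead of handing the constraint to the symbolic solver. This is only a sketch of the general technique, not the FloPSy/Pex implementation, and the constraint is an invented example.

```python
import math

def branch_distance(x):
    """Distance to satisfying the branch condition sqrt(x) * 10.0 == 7.5
    (the sort of floating-point constraint vanilla DSE struggles with)."""
    if x < 0.0:
        return float("inf")
    return abs(math.sqrt(x) * 10.0 - 7.5)

def hill_climb(fitness, x0=1.0, step=1.0, tol=1e-12, max_iter=100_000):
    x, best = x0, fitness(x0)
    for _ in range(max_iter):
        if best <= tol:
            break
        moved = False
        for cand in (x + step, x - step):
            f = fitness(cand)
            if f < best:
                x, best, moved = cand, f, True
                step *= 2.0                 # accelerate while improving
                break
        if not moved:
            step /= 2.0                     # refine when stuck
    return x, best

x, d = hill_climb(branch_distance)
print(x, d)          # x close to 0.5625, distance ~0
```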
From 16-bit to high-accuracy IDCT approximation: fruits of single architecture affiliation
NASA Astrophysics Data System (ADS)
Liu, Lijie; Tran, Trac D.; Topiwala, Pankaj
2007-09-01
In this paper, we demonstrate an effective unified framework for high-accuracy approximation of the irrational-coefficient floating-point IDCT by a single integer-coefficient fixed-point architecture. Our framework is based on a modified version of Loeffler's sparse DCT factorization, and the IDCT architecture is constructed via a cascade of dyadic lifting steps and butterflies. We illustrate that simply varying the accuracy of the approximating parameters yields a large family of standard-compliant IDCTs, from rare 16-bit approximations catering to portable computing to ultra-high-accuracy 32-bit versions that virtually eliminate any drifting effect when paired with the 64-bit floating-point IDCT at the encoder. Drifting performance of the proposed IDCTs, along with existing popular IDCT algorithms in H.263+, MPEG-2 and MPEG-4, is also demonstrated.
NASA Technical Reports Server (NTRS)
1999-01-01
The full complement of EDOMP investigations called for a broad spectrum of flight hardware ranging from commercial items, modified for spaceflight, to custom designed hardware made to meet the unique requirements of testing in the space environment. In addition, baseline data collection before and after spaceflight required numerous items of ground-based hardware. Two basic categories of ground-based hardware were used in EDOMP testing before and after flight: (1) hardware used for medical baseline testing and analysis, and (2) flight-like hardware used both for astronaut training and medical testing. To ensure post-landing data collection, hardware was required at both the Kennedy Space Center (KSC) and the Dryden Flight Research Center (DFRC) landing sites. Items that were very large or sensitive to the rigors of shipping were housed permanently at the landing site test facilities. Therefore, multiple sets of hardware were required to adequately support the prime and backup landing sites plus the Johnson Space Center (JSC) laboratories. Development of flight hardware was a major element of the EDOMP. The challenges included obtaining or developing equipment that met the following criteria: (1) compact (small size and light weight), (2) battery-operated or requiring minimal spacecraft power, (3) sturdy enough to survive the rigors of spaceflight, (4) quiet enough to pass acoustics limitations, (5) shielded and filtered adequately to assure electromagnetic compatibility with spacecraft systems, (6) user-friendly in a microgravity environment, and (7) accurate and efficient operation to meet medical investigative requirements.
On-patient see-through augmented reality based on visual SLAM.
Mahmoud, Nader; Grasa, Óscar G; Nicolau, Stéphane A; Doignon, Christophe; Soler, Luc; Marescaux, Jacques; Montiel, J M M
2017-01-01
An augmented reality system to visualize a 3D preoperative anatomical model on the intra-operative patient is proposed. The hardware requirement is a commercial tablet-PC equipped with a camera. Thus, no external tracking device or artificial landmarks on the patient are required. We resort to visual SLAM to provide markerless real-time tablet-PC camera location with respect to the patient. The preoperative model is registered with respect to the patient through 4-6 anchor points. The anchors correspond to anatomical references selected on the tablet-PC screen at the beginning of the procedure. Accurate and real-time preoperative model alignment (approximately 5-mm mean FRE and TRE) was achieved, even when anchors were not visible in the current field of view. The system has been experimentally validated on human volunteers, in vivo pigs and a phantom. The proposed system can be smoothly integrated into the surgical workflow because it: (1) operates in real time, (2) requires minimal additional hardware (only a tablet-PC with a camera), (3) is robust to occlusion, (4) requires minimal interaction from the medical staff.
PHYSIOLOGICAL DYNAMICS OF SIPHONOPHORES FROM DEEP SCATTERING LAYERS.
Effects of siphonophores on sound propagation in the sea were studied by determining the size of gas bubbles they contain and produce, and the times...volumes, and rates involved in these processes. Major findings were: (1) gases contained in fresh siphonophore floats are generally close to...constants for siphonophore floats are close to those for chitin; (4) calculated energy requirements for countering hydrostatic pressures indicate that float refilling times are probably no more than a few hours. (Author)
Integer cosine transform for image compression
NASA Technical Reports Server (NTRS)
Cheung, K.-M.; Pollara, F.; Shahshahani, M.
1991-01-01
This article describes a recently introduced transform algorithm called the integer cosine transform (ICT), which is used in transform-based data compression schemes. The ICT algorithm requires only integer operations on small integers and at the same time gives a rate-distortion performance comparable to that offered by the floating-point discrete cosine transform (DCT). The article addresses the issue of implementation complexity, which is of prime concern for source coding applications of interest in deep-space communications. Complexity reduction in the transform stage of the compression scheme is particularly relevant, since this stage accounts for most (typically over 80 percent) of the computational load.
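An illustrative comparison in the spirit of the ICT is sketched below: replace the order-8 floating-point DCT-II basis with a matrix of small integers (per-row scaling folded into the decoder side), then check how closely the integer transform tracks the floating-point one. The integer set here is a simple rounded approximation chosen for illustration, not the specific ICT of the article.

```python
import numpy as np

N = 8
k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
dct = np.cos((2 * n + 1) * k * np.pi / (2 * N))      # DCT-II basis
dct[0] *= 1 / np.sqrt(2)
dct *= np.sqrt(2 / N)                                # orthonormal float DCT

T_int = np.round(8 * dct).astype(int)                # small-integer approximation

x = np.array([52, 55, 61, 66, 70, 61, 64, 73], dtype=float)   # a sample image row
y_float = dct @ x
y_int   = T_int @ x                                  # transform matrix holds only small integers

# The (float) inverse of the integer matrix plays the role of the decoder's scaling.
x_rec = np.linalg.solve(T_int.astype(float), y_int)
print("max round-trip error     :", np.abs(x - x_rec).max())
print("max coefficient deviation:", np.abs(y_int / 8 - y_float).max())
```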
[Exploring nurse, usage effectiveness of mobile nursing station].
Chang, Fang-Mei; Lee, Ting-Ting
2013-04-01
A mobile nursing station is an innovative cart that integrates a wireless network, information technology devices, and online charts. In addition to improving clinical work and workflow efficiencies, data is integrated among different information systems and hardware devices to promote patient safety. This study investigated the effectiveness of mobile nursing cart use. We compared different distributions of nursing activity working samples to evaluate the nursing information systems in terms of interface usability and usage outcomes. There were two parts of this study. Part one used work sampling to collect nursing activity data necessary to compare a unit that used a mobile nursing cart (mobile group, n = 18) with another that did not (traditional group, n = 14). Part two applied a nursing information system interface usability questionnaire to survey the mobile unit with nurses who had used a mobile nursing station (including those who had worked in this unit as floating nurses) (n = 30) in order to explore interface usability and effectiveness. We found that using the mobile nursing station information system increased time spent on direct patient care and decreased time spent on indirect patient care and documentation. Results further indicated that participants rated interface usability as high and evaluated usage effectiveness positively. Comments made in the open-ended question section raised several points of concern, including problems / inadequacies related to hardware devices, Internet speed, and printing. This study indicates that using mobile nursing station can improve nursing activity distributions and that nurses hold generally positive attitudes toward mobile nursing station interface usability and usage effectiveness. The authors thus encourage the continued implementation of mobile nursing stations and related studies to further enhance clinical nursing care.
Competition for light and nutrients in layered communities of aquatic plants.
van Gerven, Luuk P A; de Klein, Jeroen J M; Gerla, Daan J; Kooi, Bob W; Kuiper, Jan J; Mooij, Wolf M
2015-07-01
Dominance of free-floating plants poses a threat to biodiversity in many freshwater ecosystems. Here we propose a theoretical framework to understand this dominance, by modeling the competition for light and nutrients in a layered community of floating and submerged plants. The model shows that at high supply of light and nutrients, floating plants always dominate due to their primacy for light, even when submerged plants have lower minimal resource requirements. The model also shows that floating-plant dominance cannot be an alternative stable state in light-limited environments but only in nutrient-limited environments, depending on the plants' resource consumption traits. Compared to unlayered communities, the asymmetry in competition for light, coincident with symmetry in competition for nutrients, leads to fundamentally different results: competition outcomes can no longer be predicted from species traits such as minimal resource requirements (the R* rule) and resource consumption. Also, the same two species can, depending on the environment, coexist or be alternative stable states. When applied to two common plant species in temperate regions, both the model and field data suggest that floating-plant dominance is unlikely to be an alternative stable state.
NASA Astrophysics Data System (ADS)
Armono, H. D.; Mahaputra, B. G.; Zikra, M.
2018-03-01
Floating cages are one method of fish farming (aquaculture) that can be developed in rivers, lakes or seas. To determine a proper location for floating cages, some requirements need to be fulfilled to maintain the sustainability of the cages; these requirements concern the quality of the environment. This paper discusses the selection of the best location for aquaculture activities using the Weighted Overlay method in a Geographical Information System, based on the concentration of chlorophyll-a and the sea surface temperature from Aqua MODIS Level 1b satellite images. The satellite data were associated with field data measured in March and October 2016. The study took place in Prigi Bay, Trenggalek Regency, East Java. Based on the spatial analysis in the Geographical Information System, Prigi Bay is generally suitable for aquaculture activities using floating net cages. The result of the Weighted Overlay combinations in both periods showed a mean score of 2.18 out of 3, where 8.33 km2 (23.13% of the water area) was considered "very suitable" and 27.67 km2 (76.87% of the water area) was considered "suitable".
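A minimal sketch of the weighted-overlay scoring step is given below: reclassify each raster criterion onto a common suitability scale, then combine the classes with weights. The class breaks, weights, and raster values are assumptions for illustration, not those used in the study.

```python
import numpy as np

def reclassify(raster, breaks, scores):
    """Map raw values to suitability classes (e.g., 1 = not suitable ... 3 = very suitable)."""
    out = np.full(raster.shape, scores[0])
    for b, s in zip(breaks, scores[1:]):
        out[raster >= b] = s
    return out

# Tiny illustrative rasters (chlorophyll-a in mg/m^3, SST in deg C).
chl = np.array([[0.3, 1.2, 2.5], [0.8, 3.1, 1.9]])
sst = np.array([[26.0, 28.5, 29.5], [27.2, 30.5, 28.9]])

chl_class = reclassify(chl, breaks=[1.0, 2.0], scores=[1, 2, 3])
sst_class = reclassify(sst, breaks=[27.0, 29.0], scores=[1, 2, 3])

weights = {"chl": 0.6, "sst": 0.4}                      # assumed weighting
overlay = weights["chl"] * chl_class + weights["sst"] * sst_class
print(overlay.round(2))      # cells with high combined scores map to "very suitable"
```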
47 CFR 80.1001 - Applicability.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES STATIONS IN THE... the deck excluding sheer, while navigating; and (d) Every dredge and floating plant engaged, in or... unmanned or intermittently manned floating plant under the control of a dredge shall not be required to...
NASA Technical Reports Server (NTRS)
Kramer, H. G.
1981-01-01
The power needed to zone silicon crystals by radio frequency heating was analyzed. The heat loss mechanisms are examined. Curves are presented for power as a function of crystal diameter for commercial silicon zoning.
47 CFR 80.1001 - Applicability.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES STATIONS IN THE... the deck excluding sheer, while navigating; and (d) Every dredge and floating plant engaged, in or... unmanned or intermittently manned floating plant under the control of a dredge shall not be required to...
NASA Astrophysics Data System (ADS)
Wang, Mingming; Luo, Jianjun; Fang, Jing; Yuan, Jianping
2018-03-01
The existence of path-dependent dynamic singularities limits the volume of available workspace of a free-floating space robot and induces enormous joint velocities when such singularities are met. In order to overcome this demerit, this paper presents an optimal joint trajectory planning method using the forward kinematics equations of the free-floating space robot, while the joint motion laws are delineated with the application of the concept of reaction null-space. Bézier curves, in conjunction with the null-space column vectors, are applied to describe the joint trajectories. Considering the forward kinematics equations of the free-floating space robot, the trajectory planning issue is consequently transformed into an optimization issue in which the control points that construct the Bézier curve are the design variables. A constrained differential evolution (DE) scheme with a premature-handling strategy is implemented to find the optimal solution of the design variables while specific objectives and imposed constraints are satisfied. Differing from traditional methods, we synthesize the null space and a specialized curve to provide a novel viewpoint for trajectory planning of free-floating space robots. Simulation results are presented for trajectory planning of a 7 degree-of-freedom (DOF) kinematically redundant manipulator mounted on a free-floating spacecraft and demonstrate the feasibility and effectiveness of the proposed method.
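For reference, the sketch below evaluates a Bézier joint trajectory from its control points using the Bernstein form (de Casteljau would work equally well). The control points, which the paper obtains from the differential-evolution search, are arbitrary placeholders here, and no dynamics or constraints are modeled.

```python
import numpy as np
from math import comb

def bezier(control_points, n_samples=100):
    """Evaluate a Bezier joint trajectory.
    control_points: (n_ctrl, n_joints) -- rows play the role of the DE design variables.
    Returns (n_samples, n_joints) joint angles over normalized time tau in [0, 1]."""
    P = np.asarray(control_points, dtype=float)
    n = P.shape[0] - 1                                  # curve degree
    tau = np.linspace(0.0, 1.0, n_samples)[:, None]
    # Bernstein basis: B_{i,n}(tau) = C(n, i) * tau^i * (1 - tau)^(n - i)
    B = np.hstack([comb(n, i) * tau**i * (1 - tau)**(n - i) for i in range(n + 1)])
    return B @ P

# Placeholder control points for a 2-joint example (rad); endpoints fix start and goal.
ctrl = [[0.0, 0.0], [0.2, -0.1], [0.6, 0.4], [1.0, 0.5]]
traj = bezier(ctrl, n_samples=5)
print(traj.round(3))
```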
Horiuchi, Tsutomu; Tobita, Tatsuya; Miura, Toru; Iwasaki, Yuzuru; Seyama, Michiko; Inoue, Suzuyo; Takahashi, Jun-ichi; Haga, Tsuneyuki; Tamechika, Emi
2012-01-01
We have developed a measurement chip installation/removal mechanism for a surface plasmon resonance (SPR) immunoassay analysis instrument designed for frequent testing, which requires a rapid and easy technique for changing chips. The key components of the mechanism are refractive index matching gel coated on the rear of the SPR chip and a float that presses the chip down. The refractive index matching gel made it possible to optically couple the chip and the prism of the SPR instrument easily via elastic deformation with no air bubbles. The float has an autonomous attitude control function that keeps the chip parallel in relation to the SPR instrument by employing the repulsive force of permanent magnets between the float and a float guide located in the SPR instrument. This function is realized by balancing the upward elastic force of the gel and the downward force of the float, which experiences a leveling force from the float guide. This system makes it possible to start an SPR measurement immediately after chip installation and to remove the chip immediately after the measurement with a simple and easy method that does not require any fine adjustment. Our sensor chip, which we installed using this mounting system, successfully performed an immunoassay measurement on a model antigen (spiked human-IgG) in a model real sample (non-homogenized milk) that included many kinds of interfering foreign substances without any sample pre-treatment. The ease of the chip installation/removal operation and simple measurement procedure are suitable for frequent on-site agricultural, environmental and medical testing. PMID:23202030
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nash, T.; Atac, R.; Cook, A.
1989-03-06
The ACPMAPS multipocessor is a highly cost effective, local memory parallel computer with a hypercube or compound hypercube architecture. Communication requires the attention of only the two communicating nodes. The design is aimed at floating point intensive, grid like problems, particularly those with extreme computing requirements. The processing nodes of the system are single board array processors, each with a peak power of 20 Mflops, supported by 8 Mbytes of data and 2 Mbytes of instruction memory. The system currently being assembled has a peak power of 5 Gflops. The nodes are based on the Weitek XL Chip set. Themore » system delivers performance at approximately $300/Mflop. 8 refs., 4 figs.« less
Yang, Jiangxia; Xiao, Hong
2015-08-01
To explore the improvement in hand motion function, spasm and self-care ability in daily life for stroke patients treated with floating-needle therapy combined with rehabilitation training. Eighty post-stroke patients with hand spasm within one year after stroke were randomly divided into an observation group and a control group, 40 cases in each one. In the two groups, rehabilitation was adopted for eight weeks, once a day, 40 min each time. In the observation group, based on the above treatment and according to the muscle fascia trigger points, 2 to 3 points on both the internal and external sides of the forearm were treated with floating-needle therapy, combined with active or passive flexion and extension of the wrist and knuckles until relief of the hand spasm. The floating-needle therapy was given for eight weeks, once a day for the first three days and later once every other day. The Modified Ashworth Scale (MAS), activity of daily living (ADL, Barthel index) scores and Fugl-Meyer assessment (FMA) scores were used to assess the degree of hand spasm, activity of daily living and hand motion function before and after 7-day, 14-day and 8-week treatment. After 7-day, 14-day and 8-week treatment, MAS scores were apparently lower than those before treatment in the two groups (all P<0.05), and Barthel scores and FMA scores were obviously higher than those before treatment (all P<0.05). After 14-day and 8-week treatment, FMA scores in the observation group were markedly higher than those in the control group (both P<0.05). Floating-needle therapy combined with rehabilitation training and simple rehabilitation training could both improve the degree of hand spasm, hand function and activity of daily living of post-stroke patients, but floating-needle therapy combined with rehabilitation training is superior to simple rehabilitation training for the improvement of hand function.
Floating shoulders: Clinical and radiographic analysis at a mean follow-up of 11 years
Pailhes, Régis; Bonnevialle, Nicolas; Laffosse, Jean-Michel; Tricoire, Jean-Louis; Cavaignac, Etienne; Chiron, Philippe
2013-01-01
Context: The floating shoulder (FS) is an uncommon injury, which can be managed conservatively or surgically. The therapeutic option remains controversial. Aims: The goal of our study was to evaluate the long-term results and to identify predictive factors of functional outcomes. Settings and Design: Retrospective monocentric study. Materials and Methods: Forty consecutive FS were included (24 nonoperated and 16 operated) from 1984 to 2009. Clinical results were assessed with Simple Shoulder Test (SST), Oxford Shoulder Score (OSS), Single Assessment Numeric Evaluation (SANE), Short Form-12 (SF12), Disabilities of the Arm Shoulder and Hand score (DASH), and Constant score (CST). Plain radiographs were reviewed to evaluate secondary displacement, fracture healing, and modification of the lateral offset of the gleno-humeral joint (chest X-rays). New radiographs were made to evaluate osteoarthritis during follow-up. Statistical Analysis Used: T-test, Mann-Whitney test, and the Pearson's correlation coefficient were used. The significance level was set at 0.05. Results: At mean follow-up of 135 months (range 12-312), clinical results were satisfactory regarding different mean scores: SST 10.5 points, OSS 14 points, SANE 81%, SF12 (50 points and 60 points), DASH 14.5 points and CST 84 points. There were no significant differences between operative and non-operative groups. However, the loss of lateral offset influenced the results negatively. Osteoarthritis was diagnosed in five patients (12.5%) without correlation to fracture patterns and type of treatment. Conclusions: This study advocates that floating shoulder may be treated conservatively and surgically with satisfactory clinical long-term outcomes. However, the loss of gleno-humeral lateral offset should be evaluated carefully before taking a therapeutic option. PMID:23960364
Implementing Molecular Dynamics for Hybrid High Performance Computers - 1. Short Range Forces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, W Michael; Wang, Peng; Plimpton, Steven J
The use of accelerators such as general-purpose graphics processing units (GPGPUs) has become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high performance computers, machines with more than one type of floating-point processor, are now becoming more prevalent due to these advantages. In this work, we discuss several important issues in porting a large molecular dynamics code for use on parallel hybrid machines - 1) choosing a hybrid parallel decomposition that works on central processing units (CPUs) with distributed memory and accelerator cores with shared memory, 2) minimizing the amount of code that must be ported for efficient acceleration, 3) utilizing the available processing power from both many-core CPUs and accelerators, and 4) choosing a programming model for acceleration. We present our solution to each of these issues for short-range force calculation in the molecular dynamics package LAMMPS. We describe algorithms for efficient short range force calculation on hybrid high performance machines. We describe a new approach for dynamic load balancing of work between CPU and accelerator cores. We describe the Geryon library that allows a single code to compile with both CUDA and OpenCL for use on a variety of accelerators. Finally, we present results on a parallel test cluster containing 32 Fermi GPGPUs and 180 CPU cores.
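The dynamic CPU/accelerator load balancing mentioned above can be pictured as adjusting the fraction of particles offloaded to the accelerator until the measured CPU and GPU times for the force kernel match. The sketch below is only a hypothetical illustration of that idea (the function name, relaxation factor, and clamping bounds are invented), not the scheme implemented in LAMMPS.

# Hypothetical sketch of dynamic CPU/accelerator load balancing: adjust the
# fraction of particles offloaded to the GPU so that the measured CPU and GPU
# times for the short-range force kernel converge.

def update_split(split, t_cpu, t_gpu, relax=0.5, lo=0.05, hi=0.95):
    """Return a new GPU work fraction given the last measured kernel times.

    split : current fraction of particles handled by the GPU (0..1)
    t_cpu : wall time the CPU spent on its (1 - split) share
    t_gpu : wall time the GPU spent on its split share
    """
    # Per-unit-work costs implied by the last step.
    cost_cpu = t_cpu / max(1.0 - split, 1e-9)
    cost_gpu = t_gpu / max(split, 1e-9)
    # Fraction that would equalize the two times if costs stayed constant.
    target = cost_cpu / (cost_cpu + cost_gpu)
    # Relax toward the target to avoid oscillation, and clamp to sane bounds.
    new_split = (1.0 - relax) * split + relax * target
    return min(max(new_split, lo), hi)

# Example: the GPU finished earlier than the CPU, so more work moves to it.
print(update_split(0.5, t_cpu=2.0, t_gpu=1.0))  # -> ~0.58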
Code of Federal Regulations, 2010 CFR
2010-07-01
... collection point for stormwater runoff received directly from refinery surfaces and for refinery wastewater... chamber in a stationary manner and which does not move with fluctuations in wastewater levels. Floating... separator. Junction box means a manhole or access point to a wastewater sewer system line. No detectable...
NASA Technical Reports Server (NTRS)
Boriakoff, Valentin
1994-01-01
The goal of this project was the feasibility study of a particular architecture of a digital signal processing machine operating in real time which could compute, in a pipeline fashion, the fast Fourier transform (FFT) of a time-domain sampled complex digital data stream. The particular architecture makes use of simple identical processors (called inner product processors) in a linear organization called a systolic array. Through computer simulation, the new architecture to compute the FFT with systolic arrays was proved to be viable, and it computed the FFT correctly and with the predicted particulars of operation. Integrated circuits to perform the operations expected of the vital node of the systolic architecture were proven feasible, and even with 2-micron VLSI technology they can execute the required operations in the required time. Actual construction of the integrated circuits was successful in one variant (fixed point) and unsuccessful in the other (floating point).
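The "inner product processor" named above is essentially a multiply-accumulate cell that consumes one operand pair per clock cycle. The toy Python model below illustrates that building block by streaming a single DFT bin through one cell; the class name and usage are illustrative only and do not reproduce the report's systolic FFT dataflow.

import cmath

# Toy model of an inner-product (multiply-accumulate) processor: one operand
# pair per clock cycle, with a running accumulator.
class InnerProductCell:
    def __init__(self):
        self.acc = 0

    def clock(self, a, b):
        """One clock cycle: multiply the incoming operands and accumulate."""
        self.acc += a * b
        return self.acc

# Stream the k = 1 bin of a 4-point DFT through a single cell:
# X[k] = sum_n x[n] * exp(-2*pi*i*n*k/N)
x = [1, 2, 3, 4]
k, N = 1, 4
cell = InnerProductCell()
for n in range(N):
    twiddle = cmath.exp(-2j * cmath.pi * n * k / N)
    result = cell.clock(x[n], twiddle)
print(result)  # approximately (-2+2j), the k=1 DFT bin of [1, 2, 3, 4]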
Extending the BEAGLE library to a multi-FPGA platform
2013-01-01
Background Maximum Likelihood (ML)-based phylogenetic inference using Felsenstein’s pruning algorithm is a standard method for estimating the evolutionary relationships amongst a set of species based on DNA sequence data, and is used in popular applications such as RAxML, PHYLIP, GARLI, BEAST, and MrBayes. The Phylogenetic Likelihood Function (PLF) and its associated scaling and normalization steps comprise the computational kernel for these tools. These computations are data intensive but contain fine grain parallelism that can be exploited by coprocessor architectures such as FPGAs and GPUs. A general purpose API called BEAGLE has recently been developed that includes optimized implementations of Felsenstein’s pruning algorithm for various data parallel architectures. In this paper, we extend the BEAGLE API to a multiple Field Programmable Gate Array (FPGA)-based platform called the Convey HC-1. Results The core calculation of our implementation, which includes both the phylogenetic likelihood function (PLF) and the tree likelihood calculation, has an arithmetic intensity of 130 floating-point operations per 64 bytes of I/O, or 2.03 ops/byte. Its performance can thus be calculated as a function of the host platform’s peak memory bandwidth and the implementation’s memory efficiency, as 2.03 × peak bandwidth × memory efficiency. Our FPGA-based platform has a peak bandwidth of 76.8 GB/s and our implementation achieves a memory efficiency of approximately 50%, which gives an average throughput of 78 Gflops. This represents a ~40X speedup when compared with BEAGLE’s CPU implementation on a dual Xeon 5520 and 3X speedup versus BEAGLE’s GPU implementation on a Tesla T10 GPU for very large data sizes. The power consumption is 92 W, yielding a power efficiency of 1.7 Gflops per Watt. Conclusions The use of data parallel architectures to achieve high performance for likelihood-based phylogenetic inference requires high memory bandwidth and a design methodology that emphasizes high memory efficiency. To achieve this objective, we integrated 32 pipelined processing elements (PEs) across four FPGAs. For the design of each PE, we developed a specialized synthesis tool to generate a floating-point pipeline with resource and throughput constraints to match the target platform. We have found that using low-latency floating-point operators can significantly reduce FPGA area and still meet timing requirement on the target platform. We found that this design methodology can achieve performance that exceeds that of a GPU-based coprocessor. PMID:23331707
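The throughput figure quoted above follows directly from the stated arithmetic intensity and a roofline-style estimate; the short sketch below simply re-derives it from the numbers given in the abstract (the variable names are ours).

# Roofline-style throughput estimate using the figures quoted in the abstract.
flops_per_update = 130        # floating-point operations per unit of work
bytes_per_update = 64         # bytes of I/O per unit of work
peak_bandwidth_gb_s = 76.8    # peak memory bandwidth of the platform (GB/s)
memory_efficiency = 0.5       # achieved fraction of peak bandwidth

arithmetic_intensity = flops_per_update / bytes_per_update   # ~2.03 ops/byte
throughput_gflops = arithmetic_intensity * peak_bandwidth_gb_s * memory_efficiency
print(round(arithmetic_intensity, 2), "ops/byte")            # 2.03
print(round(throughput_gflops, 1), "Gflops")                 # 78.0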
Stabilizing canonical-ensemble calculations in the auxiliary-field Monte Carlo method
NASA Astrophysics Data System (ADS)
Gilbreth, C. N.; Alhassid, Y.
2015-03-01
Quantum Monte Carlo methods are powerful techniques for studying strongly interacting Fermi systems. However, implementing these methods on computers with finite-precision arithmetic requires careful attention to numerical stability. In the auxiliary-field Monte Carlo (AFMC) method, low-temperature or large-model-space calculations require numerically stabilized matrix multiplication. When adapting methods used in the grand-canonical ensemble to the canonical ensemble of fixed particle number, the numerical stabilization increases the number of required floating-point operations for computing observables by a factor of the size of the single-particle model space, and thus can greatly limit the systems that can be studied. We describe an improved method for stabilizing canonical-ensemble calculations in AFMC that exhibits better scaling, and present numerical tests that demonstrate the accuracy and improved performance of the method.
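The abstract does not spell out the stabilization scheme, but a common way to stabilize long products of matrices in auxiliary-field/determinantal Monte Carlo codes is to re-factorize the running product (for example with a QR decomposition) so that exponentially separated scales are kept in a diagonal factor. The NumPy sketch below illustrates only that generic idea; it is not the specific algorithm of the paper.

import numpy as np

# Generic illustration: accumulate a product B_L ... B_2 B_1 of matrices with
# widely separated scales, periodically re-factorizing as Q * diag(d) * T so
# that the large/small scales live in d and never mix destructively.
def stabilized_product(b_list):
    n = b_list[0].shape[0]
    q, d, t = np.eye(n), np.ones(n), np.eye(n)
    for b in b_list:
        m = (b @ q) * d            # apply the next factor; scale columns by d
        q, r = np.linalg.qr(m)     # re-orthogonalize
        d = np.abs(np.diag(r))     # pull out the new column scales
        t = (r / d[:, None]) @ t   # well-conditioned triangular remainder
    return q, d, t                 # product = q @ np.diag(d) @ t

rng = np.random.default_rng(0)
bs = [np.exp(3.0) * rng.standard_normal((4, 4)) for _ in range(20)]
q, d, t = stabilized_product(bs)
naive = np.linalg.multi_dot(bs[::-1])          # B_20 @ ... @ B_1, unstabilized
rel_err = np.linalg.norm(q @ np.diag(d) @ t - naive) / np.linalg.norm(naive)
print(rel_err)  # small relative difference while the scales stay separated in d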
On Using Commercial Off-the-Shelf (COTS) Electronic Products in Space
NASA Technical Reports Server (NTRS)
Culpepper, William X.
2002-01-01
NASA's Johnson Space Center (JSC) has utilized COTS products in its programs since the early 1990's. Recently it has become evident that, of all possible failure modes, radiation will probably dominate, sometimes to the point of driving system architecture. It is now imperative that radiation susceptibility be addressed when writing the system requirements. Susceptibility assessment, e.g. testing, must begin early in the design phase to establish performance and continue through the hardware qualification program to prove satisfaction of the original requirement(s). Examples of requirements, testing, and architecture versus failure rate will be given.
Combined GPS/GLONASS Precise Point Positioning with Fixed GPS Ambiguities
Pan, Lin; Cai, Changsheng; Santerre, Rock; Zhu, Jianjun
2014-01-01
Precise point positioning (PPP) technology is mostly implemented with an ambiguity-float solution. Its performance may be further improved by performing ambiguity-fixed resolution. Currently, the PPP integer ambiguity resolutions (IARs) are mainly based on GPS-only measurements. The integration of GPS and GLONASS can speed up the convergence and increase the accuracy of float ambiguity estimates, which contributes to enhancing the success rate and reliability of fixing ambiguities. This paper presents an approach of combined GPS/GLONASS PPP with fixed GPS ambiguities (GGPPP-FGA) in which GPS ambiguities are fixed into integers, while all GLONASS ambiguities are kept as float values. An improved minimum constellation method (MCM) is proposed to enhance the efficiency of GPS ambiguity fixing. Datasets from 20 globally distributed stations on two consecutive days are employed to investigate the performance of the GGPPP-FGA, including the positioning accuracy, convergence time and the time to first fix (TTFF). All datasets are processed for a time span of three hours in three scenarios, i.e., the GPS ambiguity-float solution, the GPS ambiguity-fixed resolution and the GGPPP-FGA resolution. The results indicate that the performance of the GPS ambiguity-fixed resolutions is significantly better than that of the GPS ambiguity-float solutions. In addition, the GGPPP-FGA improves the positioning accuracy by 38%, 25% and 44% and reduces the convergence time by 36%, 36% and 29% in the east, north and up coordinate components over the GPS-only ambiguity-fixed resolutions, respectively. Moreover, the TTFF is reduced by 27% after adding GLONASS observations. Wilcoxon rank sum tests and chi-square two-sample tests are made to examine the significance of the improvement on the positioning accuracy, convergence time and TTFF. PMID:25237901
Investigation of field induced trapping on floating gates
NASA Technical Reports Server (NTRS)
Gosney, W. M.
1975-01-01
The development of a technology for building electrically alterable read only memories (EAROMs) or reprogrammable read only memories (RPROMs) using a single level metal gate p channel MOS process with all conventional processing steps is outlined. Nonvolatile storage of data is achieved by the use of charged floating gate electrodes. The floating gates are charged by avalanche injection of hot electrons through the gate oxide, and discharged by avalanche injection of hot holes through the gate oxide. Three extra diffusion and patterning steps are all that is required to convert a standard p channel MOS process into a nonvolatile memory process. For identification, this nonvolatile memory technology was given the descriptive acronym DIFMOS, which stands for Dual Injector, Floating gate MOS.
Parametric study of two-body floating-point wave absorber
NASA Astrophysics Data System (ADS)
Amiri, Atena; Panahi, Roozbeh; Radfar, Soheil
2016-03-01
In this paper, we present a comprehensive numerical simulation of a point wave absorber in deep water. Analyses are performed in both the frequency and time domains. The converter is a two-body floating-point absorber (FPA) with one degree of freedom in the heave direction. Its two parts are connected by a linear mass-spring-damper system. The commercial ANSYS-AQWA software used in this study performed well against the validation cases considered. The velocity potential is obtained by assuming incompressible and irrotational flow. On this basis, we investigated the effects of wave characteristics, namely wave height and wave period, as well as the device diameter, draft, geometry, and damping coefficient, on energy conversion and device efficiency. To validate the model, we compared our numerical results with those from similar experiments. Our results can help maximize the converter's efficiency under specific operating conditions.
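The heave dynamics of a two-body absorber coupled by a linear mass-spring-damper, as described above, can be sketched as a pair of coupled second-order ODEs. The toy time-domain model below uses made-up parameter values and a simplified sinusoidal excitation force purely to illustrate the structure; it is not the ANSYS-AQWA model used in the paper.

import numpy as np
from scipy.integrate import solve_ivp

# Toy two-body heave model: float (body 1) and submerged body (body 2) coupled
# by a linear power-take-off spring k and damper c; body 1 also feels a
# hydrostatic stiffness k_h and a sinusoidal wave force. All values made up.
m1, m2 = 2.0e5, 4.0e5          # masses including added mass (kg)
k, c = 5.0e4, 8.0e4            # PTO stiffness (N/m) and damping (N s/m)
k_h = 3.0e5                    # hydrostatic stiffness of the float (N/m)
F0, omega = 1.0e5, 0.7         # wave force amplitude (N) and frequency (rad/s)

def rhs(t, y):
    z1, v1, z2, v2 = y
    f_pto = k * (z1 - z2) + c * (v1 - v2)        # force in the connecting PTO
    a1 = (F0 * np.sin(omega * t) - k_h * z1 - f_pto) / m1
    a2 = f_pto / m2
    return [v1, a1, v2, a2]

sol = solve_ivp(rhs, (0, 300), [0, 0, 0, 0], max_step=0.1)
v_rel = sol.y[1] - sol.y[3]
power = c * v_rel**2                              # instantaneous absorbed power
print("mean absorbed power ~", np.mean(power[len(power)//2:]), "W")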
50 CFR 660.15 - Equipment requirements.
Code of Federal Regulations, 2010 CFR
2010-10-01
... receivers, computer hardware for electronic fish ticket software and computer hardware for electronic logbook software. (b) Performance and technical requirements for scales used to weigh catch at sea... ticket software provided by Pacific States Marine Fish Commission are required to meet the hardware and...
NASA Technical Reports Server (NTRS)
Reaves, Will F.; Hoberecht, Mark A.
2003-01-01
The Fuel Cell has been used for manned space flight since the Gemini program. Its power output and water production capability over long durations, for the mass and volume involved, are critical for manned space-flight requirements. The alkaline fuel cell used on the Shuttle, while very reliable and capable for its application, has operational sensitivities, limited life, and an expensive recycle cost. The PEM fuel cell offers many potential improvements in those areas. NASA Glenn Research Center is currently leading a PEM fuel cell development and test program intended to move the technology closer to the point required for manned space-flight consideration. This paper will address the advantages of PEM fuel cell technology and its potential for future space flight as compared to existing alkaline fuel cells. It will also cover the technical hurdles that must be overcome. In addition, a description of the NASA PEM fuel cell development program will be presented, and the current status of this effort discussed. The effort is a combination of stack and ancillary component hardware development, culminating in breadboard and engineering model unit assembly and test. Finally, a detailed roadmap for proceeding from engineering model hardware to qualification and flight hardware will be proposed. Innovative test engineering and potential payload manifesting may be required to actually validate/certify a PEM fuel cell for manned space flight.
NASA Technical Reports Server (NTRS)
Willis, Emily M.; Minow, Joseph I.; Parker, Linda N.; Pour, Maria Z. A.; Swenson, Charles; Nishikawa, Ken-ichi; Krause, Linda Habash
2016-01-01
The International Space Station (ISS) continues to be a world-class space research laboratory after over 15 years of operations, and it has proven to be a fantastic resource for observing spacecraft floating potential variations related to high voltage solar array operations in Low Earth Orbit (LEO). Measurements of the ionospheric electron density and temperature along the ISS orbit and variations in the ISS floating potential are obtained from the Floating Potential Measurement Unit (FPMU). In particular, rapid variations in ISS floating potential during solar array operations on time scales of tens of milliseconds can be recorded due to the 128 Hz sample rate of the Floating Potential Probe (FPP), providing interesting insight into high voltage solar array interaction with the space plasma environment. Comparing the FPMU data with the ISS operations timeline and solar array data provides a means for correlating some of the more complex and interesting transient floating potential variations with mission operations. These complex variations are not reproduced by current models and require further study to understand the underlying physical processes. In this paper we present some of the floating potential transients observed over the past few years along with the relevant space environment parameters and solar array operations data.
An array processing system for lunar geochemical and geophysical data
NASA Technical Reports Server (NTRS)
Eliason, E. M.; Soderblom, L. A.
1977-01-01
A computerized array processing system has been developed to reduce, analyze, display, and correlate a large number of orbital and earth-based geochemical, geophysical, and geological measurements of the moon on a global scale. The system supports the activities of a consortium of about 30 lunar scientists involved in data synthesis studies. The system was modeled after standard digital image-processing techniques but differs in that processing is performed with floating point precision rather than integer precision. Because of flexibility in floating-point image processing, a series of techniques that are impossible or cumbersome in conventional integer processing were developed to perform optimum interpolation and smoothing of data. Recently color maps of about 25 lunar geophysical and geochemical variables have been generated.
Kraus, Wayne A; Wagner, Albert F
1986-04-01
A triatomic classical trajectory code has been modified by extensive vectorization of the algorithms to achieve much improved performance on an FPS 164 attached processor. Extensive timings on both the FPS 164 and a VAX 11/780 with floating point accelerator are presented as a function of the number of trajectories simultaneously run. The timing tests involve a potential energy surface of the LEPS variety and trajectories with 1000 time steps. The results indicate that vectorization results in timing improvements on both the VAX and the FPS. For larger numbers of trajectories run simultaneously, up to a factor of 25 improvement in speed occurs between VAX and FPS vectorized code. Copyright © 1986 John Wiley & Sons, Inc.
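The speedups reported above come from running many trajectories simultaneously so that the inner loops operate on whole arrays. As a rough illustration only (not the LEPS-surface code of the paper), the sketch below advances a batch of independent trajectories in lockstep with NumPy array operations.

import numpy as np

# Advance N independent 1-D trajectories in lockstep (velocity Verlet) on a
# simple harmonic potential; each time step is a handful of whole-array
# operations instead of a loop over trajectories.
def run_batch(n_traj=1000, n_steps=1000, dt=0.01, k=1.0, m=1.0):
    rng = np.random.default_rng(1)
    x = rng.normal(size=n_traj)          # one position per trajectory
    v = rng.normal(size=n_traj)
    a = -k * x / m
    for _ in range(n_steps):
        x += v * dt + 0.5 * a * dt**2    # all trajectories updated at once
        a_new = -k * x / m
        v += 0.5 * (a + a_new) * dt
        a = a_new
    return x, v

x, v = run_batch()
print(x.shape, float(np.mean(0.5 * v**2 + 0.5 * x**2)))  # mean energy ~1.0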
49 CFR 238.105 - Train electronic hardware and software safety.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 4 2011-10-01 2011-10-01 false Train electronic hardware and software safety. 238... and General Requirements § 238.105 Train electronic hardware and software safety. The requirements of this section apply to electronic hardware and software used to control or monitor safety functions in...
49 CFR 238.105 - Train electronic hardware and software safety.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 49 Transportation 4 2014-10-01 2014-10-01 false Train electronic hardware and software safety. 238... and General Requirements § 238.105 Train electronic hardware and software safety. The requirements of this section apply to electronic hardware and software used to control or monitor safety functions in...
49 CFR 238.105 - Train electronic hardware and software safety.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 49 Transportation 4 2012-10-01 2012-10-01 false Train electronic hardware and software safety. 238... and General Requirements § 238.105 Train electronic hardware and software safety. The requirements of this section apply to electronic hardware and software used to control or monitor safety functions in...
49 CFR 238.105 - Train electronic hardware and software safety.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 49 Transportation 4 2013-10-01 2013-10-01 false Train electronic hardware and software safety. 238... and General Requirements § 238.105 Train electronic hardware and software safety. The requirements of this section apply to electronic hardware and software used to control or monitor safety functions in...
Hardware Testing for the Optical PAyload for Lasercomm Science (OPALS)
NASA Technical Reports Server (NTRS)
Slagle, Amanda
2011-01-01
Hardware for several subsystems of the proposed Optical PAyload for Lasercomm Science (OPALS), including the gimbal and avionics, was tested. Microswitches installed on the gimbal were evaluated to verify that their point of actuation would remain within the acceptable range even if the switches themselves move slightly during launch. An inspection of the power board was conducted to ensure that all power and ground signals were isolated, that polarized components were correctly oriented, and that all components were intact and securely soldered. Initial testing on the power board revealed several minor problems, but once they were fixed the power board was shown to function correctly. All tests and inspections were documented for future use in verifying launch requirements.
The anatomy of floating shock fitting. [shock waves computation for flow field
NASA Technical Reports Server (NTRS)
Salas, M. D.
1975-01-01
The floating shock fitting technique is examined. Second-order difference formulas are developed for the computation of discontinuities. A procedure is developed to compute mesh points that are crossed by discontinuities. The technique is applied to the calculation of internal two-dimensional flows with an arbitrary number of shock waves and contact surfaces. A new procedure, based on the coalescence of characteristics, is developed to detect the formation of shock waves. Results are presented to validate and demonstrate the versatility of the technique.
Space Generic Open Avionics Architecture (SGOAA) standard specification
NASA Technical Reports Server (NTRS)
Wray, Richard B.; Stovall, John R.
1994-01-01
This standard establishes the Space Generic Open Avionics Architecture (SGOAA). The SGOAA includes a generic functional model, a processing structural model, and an architecture interface model. This standard defines the requirements for applying these models to the development of spacecraft core avionics systems. The purpose of this standard is to provide an umbrella set of requirements for applying the generic architecture models to the design of a specific avionics hardware/software processing system. This standard defines a generic set of system interface points to facilitate identification of critical services and interfaces. It establishes the requirement for applying appropriate low-level detailed implementation standards to those interface points. The generic core avionics functions and processing structural models provided herein are robustly tailorable to specific system applications and provide a platform upon which the interface model is to be applied.
Engineering the Ideal Gigapixel Image Viewer
NASA Astrophysics Data System (ADS)
Perpeet, D.; Wassenberg, J.
2011-09-01
Despite improvements in automatic processing, analysts are still faced with the task of evaluating gigapixel-scale mosaics or images acquired by telescopes such as Pan-STARRS. Displaying such images in ‘ideal’ form is a major challenge even today, and the amount of data will only increase as sensor resolutions improve. In our opinion, the ideal viewer has several key characteristics. Lossless display - down to individual pixels - ensures all information can be extracted from the image. Support for all relevant pixel formats (integer or floating point) allows displaying data from different sensors. Smooth zooming and panning in the high-resolution data enables rapid screening and navigation in the image. High responsiveness to input commands avoids frustrating delays. Instantaneous image enhancement, e.g. contrast adjustment and image channel selection, helps with analysis tasks. Modest system requirements allow viewing on regular workstation computers or even laptops. To the best of our knowledge, no such software product is currently available. Meeting these goals requires addressing certain realities of current computer architectures. GPU hardware accelerates rendering and allows smooth zooming without high CPU load. Programmable GPU shaders enable instant channel selection and contrast adjustment without any perceptible slowdown or changes to the input data. Relatively low disk transfer speeds suggest the use of compression to decrease the amount of data to transfer. Asynchronous I/O allows decompressing while waiting for previous I/O operations to complete. The slow seek times of magnetic disks motivate optimizing the order of the data on disk. Vectorization and parallelization allow significant increases in computational capacity. Limited memory requires streaming and caching of image regions. We develop a viewer that takes the above issues into account. Its awareness of the computer architecture enables previously unattainable features such as smooth zooming and image enhancement within high-resolution data. We describe our implementation, disclosing its novel file format and lossless image codec whose decompression is faster than copying the raw data in memory. Both provide crucial performance boosts compared to conventional approaches. Usability tests demonstrate the suitability of our viewer for rapid analysis of large SAR datasets, multispectral satellite imagery and mosaics.
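Streaming and caching of image regions, mentioned above as a consequence of limited memory, is commonly handled with a fixed-size tile cache. The minimal LRU sketch below uses hypothetical names and a placeholder loader; it illustrates the general idea, not the authors' implementation.

from collections import OrderedDict

# Minimal least-recently-used cache for decoded image tiles, keyed by
# (level, tile_x, tile_y). `load_tile` stands in for decompressing a tile
# from disk and is a placeholder, not part of any real viewer's API.
class TileCache:
    def __init__(self, capacity, load_tile):
        self.capacity = capacity
        self.load_tile = load_tile
        self.tiles = OrderedDict()

    def get(self, key):
        if key in self.tiles:
            self.tiles.move_to_end(key)      # mark as most recently used
            return self.tiles[key]
        tile = self.load_tile(key)           # decode/decompress on a miss
        self.tiles[key] = tile
        if len(self.tiles) > self.capacity:
            self.tiles.popitem(last=False)   # evict the least recently used
        return tile

cache = TileCache(capacity=256, load_tile=lambda key: ("decoded", key))
print(cache.get((0, 3, 7)))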
NASA Astrophysics Data System (ADS)
Kadum, Hawwa; Rockel, Stanislav; Holling, Michael; Peinke, Joachim; Cal, Raul Bayon
2017-11-01
The wake behind a floating model horizontal-axis wind turbine undergoing pitch motion is investigated and compared to the wake of a fixed wind turbine. An experiment is conducted in an acoustic wind tunnel where hot-wire data are acquired at five downstream locations. At each downstream location, a rake of 16 hot-wires is used, with the probes placed at increasing radial distances along the vertical, horizontal, and 45-degree diagonal directions. In addition, the effect of turbulence intensity on the floating wake is examined by subjecting the wind turbine to different inflow conditions controlled through three settings of the wind tunnel grid, one passive and two active protocols, thus varying in intensity. The wakes are inspected through statistics of the point measurements, where the various length/time scales are considered. The wake characteristics of the floating wind turbine are compared to those of the fixed turbine to uncover its distinguishing features; this is relevant as the demand for exploiting deep waters in wind energy is increasing.
36 CFR 327.30 - Shoreline Management on Civil Works Projects.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Development Areas for ski jumps, floats, boat moorage facilities, duck blinds, and other private floating recreation facilities when they will not create a safety hazard and inhibit public use or enjoyment of project waters or shoreline. A Corps permit is not required for temporary ice fishing shelters or duck...
36 CFR 327.30 - Shoreline Management on Civil Works Projects.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Development Areas for ski jumps, floats, boat moorage facilities, duck blinds, and other private floating recreation facilities when they will not create a safety hazard and inhibit public use or enjoyment of project waters or shoreline. A Corps permit is not required for temporary ice fishing shelters or duck...
NASA Astrophysics Data System (ADS)
Liu, Chunsen; Yan, Xiao; Song, Xiongfei; Ding, Shijin; Zhang, David Wei; Zhou, Peng
2018-05-01
As conventional circuits based on field-effect transistors are approaching their physical limits due to quantum phenomena, semi-floating gate transistors have emerged as an alternative ultrafast and silicon-compatible technology. Here, we show a quasi-non-volatile memory featuring a semi-floating gate architecture with band-engineered van der Waals heterostructures. This two-dimensional semi-floating gate memory demonstrates 156 times longer refresh time with respect to that of dynamic random access memory and ultrahigh-speed writing operations on nanosecond timescales. The semi-floating gate architecture greatly enhances the writing operation performance and is approximately 10^6 times faster than other memories based on two-dimensional materials. The demonstrated characteristics suggest that the quasi-non-volatile memory has the potential to bridge the gap between volatile and non-volatile memory technologies and decrease the power consumption required for frequent refresh operations, enabling a high-speed and low-power random access memory.
Distributed Simulation Testing for Weapons System Performance of the F/A-18 and AIM-120 AMRAAM
1998-01-01
Support Facility (WSSF) at China Lake, CA and the AIM-120 Hardware in the Loop (HWIL) laboratory at Point Mugu, CA. The link was established in response to... [figure label residue listing AIM-120 components: rocket motor, target detection (fuze), seeker assembly, antenna, transmitter, actuator electronics, data link parameters] ...test series. 3.2 Hardware in the Loop: The AMRAAM Hardware-In-the-Loop (HWIL) lab located at the Naval Air Warfare Center in Point Mugu, CA provides
Hardware-assisted software clock synchronization for homogeneous distributed systems
NASA Technical Reports Server (NTRS)
Ramanathan, P.; Kandlur, Dilip D.; Shin, Kang G.
1990-01-01
A clock synchronization scheme that strikes a balance between hardware and software solutions is proposed. The proposed scheme is a software algorithm that uses minimal additional hardware to achieve reasonably tight synchronization. Unlike other software solutions, the guaranteed worst-case skews can be made insensitive to the maximum variation of message transit delay in the system. The scheme is particularly suitable for large partially connected distributed systems with topologies that support simple point-to-point broadcast algorithms. Examples of such topologies include the hypercube and the mesh interconnection structures.
NASA Astrophysics Data System (ADS)
Farahmand, Farnaz; Ghasemzadeh, Bahar; Naseri, Abdolhossein
2018-01-01
An air-assisted liquid-liquid microextraction applying the solidification of a floating organic droplet method (AALLME-SFOD), coupled with a multivariate calibration method, namely partial least squares (PLS), was introduced for the fast and easy determination of Atenolol (ATE), Propranolol (PRO) and Carvedilol (CAR) in biological samples via a spectrophotometric approach. The analytes were extracted from neutral aqueous solution into 1-dodecanol as an organic solvent, using AALLME. In this approach a low-density solvent with a melting point close to room temperature was applied as the extraction solvent. The emulsion was formed by repeatedly pulling in and pushing out the aqueous sample solution and extraction solvent mixture via a 10-mL glass syringe ten times. After centrifugation, the extractant droplet could be simply collected from the aqueous samples by solidifying the emulsion at a temperature lower than the melting point. In the next step, analytes were back-extracted simultaneously into an acidic aqueous solution. Derringer and Suich multi-response optimization was utilized for simultaneously optimizing the parameters for the three analytes. This method incorporates the benefits of AALLME and dispersive liquid-liquid microextraction based on the solidification of floating organic droplets (DLLME-SFOD). Calibration graphs under optimized conditions were linear in the ranges of 0.30-6.00, 0.32-2.00 and 0.30-1.40 μg mL-1 for ATE, CAR and PRO, respectively. Other analytical parameters were obtained as follows: enrichment factors (EFs) were found to be 11.24, 16.55 and 14.90, and limits of detection (LODs) were determined to be 0.09, 0.10 and 0.08 μg mL-1 for ATE, CAR and PRO, respectively. The proposed method requires neither a highly toxic chlorinated extraction solvent nor an organic dispersive solvent; hence, it is more environmentally friendly.
First measurements with Argo floats in the Southern Baltic Sea
NASA Astrophysics Data System (ADS)
Walczowski, Waldemar; Goszczko, Ilona; Wieczorek, Piotr; Merchel, Malgorzata; Rak, Daniel
2017-04-01
The Argo programme is one of the most important elements of the ocean observing system. Currently almost 4000 Argo floats profile the global oceans and deliver real-time data. Originally, Argo floats were developed for open-ocean observations; a standard float can therefore dive to 2000 m, and deep Argo floats are under development. In recent years, however, the shallow shelf seas have also become interesting for Argo users. The Institute of Oceanology Polish Academy of Sciences (IOPAN) participates in the Euro-Argo research infrastructure, the European contribution to the Argo system. A legal and governance framework (Euro-Argo ERIC) was set up in May 2014. For a few years IOPAN has deployed floats mostly in the Nordic Seas and the European Arctic region. At the end of 2016 the first Polish Argo float was deployed in the Southern Baltic Sea. Building on the successful experience with Argo floats deployed by Finnish oceanographers in the Bothnian Sea and the Gotland Basin, the IOPAN float was launched in the Bornholm Deep during the fall cruise of the IOPAN research vessel Oceania. A standard APEX float equipped with two-way Iridium communication was used, and different modes of operation, required for the specific conditions of the shallow and low-salinity Baltic Sea, were tested. Settings for the Baltic float differ from the oceanic mode and were frequently changed during the mission to find the optimum solution. Changing the float parking depth during the mission allows limited control of the float drift direction. Results of a high-resolution numerical forecast model for the Baltic Sea proved to be a valuable tool for determining the parking depth of the float in the different flow regimes. The trajectory and drift velocity of the Argo float deployed in the Southern Baltic depended strongly on the atmospheric forcing (in particular wind speed and direction), which was clearly manifested during the 'Axel' storm passing over the deployment area in January 2017. The first deployment showed clearly that Argo floats can be a useful tool for Baltic Sea monitoring as an important element of a more complex, multidisciplinary observing system.
Roey, Steve
2006-01-01
In July 2003, the Accreditation Council for Graduate Medical Education (ACGME) instituted new resident work hour mandates, which are being shown to improve resident well-being and patient safety. However, there are limited data on the impact these new mandates may have on educational activities. To assess the impact on educational activities of a day float system created to meet ACGME work hour mandates. The inpatient ward coverage was changed by adding a day float team responsible for new patient admissions in the morning, with the on-call teams starting later and being responsible for new patient admissions thereafter. I surveyed the residents to assess the impact of this new system on educational activities: resident autonomy, attending teaching, conference attendance, resident teaching, self-directed learning, and ability to complete patient care responsibilities. There was no adverse effect of the day float system on educational activities. House staff reported increased autonomy, enhanced teaching from attending physicians, and improved ability to complete patient care responsibilities. Additionally, house staff demonstrated improved compliance with the ACGME mandates. The implementation of a novel day float system for the inpatient medicine ward service improved compliance with ACGME work duty requirements and did not adversely impact educational activities of the residency training program.
Taheri, Salman; Jalali, Fahimeh; Fattahi, Nazir; Jalili, Ronak; Bahrami, Gholamreza
2015-10-01
Dispersive liquid-liquid microextraction based on solidification of a floating organic droplet was developed for the extraction of methadone and its determination by high-performance liquid chromatography with UV detection. In this method, no microsyringe or fiber is required to support the organic microdrop, owing to the use of an organic solvent with a low density and an appropriate melting point. Furthermore, the extractant droplet can be collected easily by solidifying it at low temperature. 1-Undecanol and methanol were chosen as the extraction and disperser solvents, respectively. Parameters that influence extraction efficiency, i.e. volumes of extracting and dispersing solvents, pH, and salt effect, were optimized by using response surface methodology. Under optimal conditions, the enrichment factor for methadone was 134 in serum and 160 in urine samples. The limit of detection was 3.34 ng/mL in serum and 1.67 ng/mL in urine samples. Compared with traditional dispersive liquid-liquid microextraction, the proposed method achieved a lower limit of detection. Moreover, the solidification of the floating organic solvent facilitated the phase transfer. Most importantly, it avoided the use of the high-density, toxic solvents of the traditional dispersive liquid-liquid microextraction method. The proposed method was successfully applied to the determination of methadone in serum and urine samples of an addicted individual under methadone therapy. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
An Evaluation of Spacecraft Pointing Requirements for Optically Linked Satellite Systems
NASA Astrophysics Data System (ADS)
Gunter, B. C.; Dahl, T.
2017-12-01
Free-space optical (laser) communications can offer certain advantages for many remote sensing applications, due primarily to the high data rates (Gb/s) and energy efficiencies possible from such systems. An orbiting network of crosslinked satellites could potentially relay imagery and other high-volume data at near real-time intervals. Achieving this would require satellites actively tracking one or more other satellites, as well as ground terminals. The narrow laser beam width utilized by the transmitting satellites poses technical challenges due to the higher pointing accuracy required for effective signal transmission, in particular if small satellites are involved. To better understand what it would take to realize such a small-satellite laser communication network, this study investigates the pointing requirements needed to support optical data links. A general method for characterizing pointing tolerance, angle rates and accelerations for line-of-sight vectors is devised and applied to various case studies. Comparisons with state-of-the-art small satellite attitude control systems are also made to assess what is possible using current technology. The results help refine the trade space for designs of optically linked networks, from the hardware aboard each satellite to the design of the satellite constellation itself.
NDAS Hardware Translation Layer Development
NASA Technical Reports Server (NTRS)
Nazaretian, Ryan N.; Holladay, Wendy T.
2011-01-01
The NASA Data Acquisition System (NDAS) project is aimed at replacing all DAS software for NASA's Rocket Testing Facilities. There must be a software-hardware translation layer so the software can properly talk to the hardware. Since the hardware at each test stand varies, drivers for each stand have to be made. These drivers act more like plugins for the software. If the software is being used at E3, then the software should point to the E3 driver package. If the software is being used at B2, then the software should point to the B2 driver package. The driver packages should also include hardware drivers that are universal to the DAS system. For example, since A1, A2, and B2 all use the Preston 8300AU signal conditioners, the driver for those three stands should be the same and updated collectively.
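The per-stand driver packaging described above amounts to selecting a plugin by test stand while sharing drivers for hardware common to several stands. The sketch below uses invented stand and driver names purely to illustrate that mapping; it is not the NDAS driver interface.

# Illustrative plugin lookup: the module names and the shared Preston 8300AU
# driver name below are placeholders, not actual NDAS identifiers.
SHARED_DRIVERS = {"preston_8300au"}            # signal conditioner used at A1, A2, B2

STAND_DRIVER_PACKAGES = {
    "A1": SHARED_DRIVERS | {"a1_specific_daq"},
    "A2": SHARED_DRIVERS | {"a2_specific_daq"},
    "B2": SHARED_DRIVERS | {"b2_specific_daq"},
    "E3": {"e3_specific_daq"},
}

def drivers_for_stand(stand):
    """Return the set of drivers the software should load for a given test stand."""
    try:
        return STAND_DRIVER_PACKAGES[stand]
    except KeyError:
        raise ValueError("no driver package defined for stand %r" % stand)

print(drivers_for_stand("B2"))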
NASA Astrophysics Data System (ADS)
Ryu, Seong-Wan; Han, Jin-Woo; Kim, Chung-Jin; Kim, Sungho; Choi, Yang-Kyu
2009-03-01
This paper describes a unified memory (URAM) that utilizes a nanocrystal SOI MOSFET for multi-functional applications of both nonvolatile memory (NVM) and capacitorless 1T-DRAM. By using a discrete storage node (Ag nanocrystal) as the floating gate of the NVM, high defect immunity and 2-bit/cell operation were achieved. The embedded nanocrystal NVM also showed 1T-DRAM operation (program/erase time = 100 ns) characteristics, which were realized by storing holes in the floating body of the SOI MOSFET, without requiring an external capacitor. Three-bit/cell operation was accomplished for different applications - 2-bits for nonvolatility and 1-bit for fast operation.
NASA Astrophysics Data System (ADS)
Faigon, A.; Martinez Vazquez, I.; Carbonetto, S.; García Inza, M.; G
2017-01-01
A floating gate dosimeter was designed and fabricated in a standard CMOS technology. The design guidelines and characterization are presented. The characterization included the controlled charging of the floating gate by tunneling, and its discharging under irradiation while measuring the transistor drain current, whose change is the measure of the absorbed dose. The resolution of the obtained device is close to 1 cGy, satisfying the requirements of most radiation therapy dosimetry. Pending statistical proofs, the dosimeter is a potential candidate for widespread in-vivo control of radiotherapy treatments.
Study on Huizhou architecture of point cloud registration based on optimized ICP algorithm
NASA Astrophysics Data System (ADS)
Zhang, Runmei; Wu, Yulu; Zhang, Guangbin; Zhou, Wei; Tao, Yuqian
2018-03-01
In view of the fact that current point cloud registration software has high hardware requirements, involves a heavy workload and multiple interactive definitions, and that the source code of software with better processing results is not open, a two-step registration method based on normal vector distribution features and a coarse-feature-based iterative closest point (ICP) algorithm is proposed in this paper. The method combines the fast point feature histogram (FPFH) algorithm with the definition of the adjacency region of the point cloud and a calculation model of the normal vector distribution, sets up a local coordinate system for each key point, and obtains the transformation matrix to complete the coarse registration; the coarse registration results of the two stations are then accurately registered using the ICP algorithm. Experimental results show that, compared with the traditional ICP algorithm, the method used in this paper has obvious time and precision advantages for large point clouds.
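The fine-registration stage described above is the classic point-to-point ICP loop: find closest-point correspondences, estimate the best rigid transform by SVD, apply it, and iterate. The NumPy/SciPy sketch below shows that generic loop for 3-D points; it is not the authors' optimized two-step pipeline, and the test data at the end are synthetic.

import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    h = (src - mu_s).T @ (dst - mu_d)
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))          # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return r, mu_d - r @ mu_s

def icp(source, target, n_iter=50, tol=1e-6):
    """Point-to-point ICP refining the alignment of `source` onto `target`."""
    tree = cKDTree(target)
    src = source.copy()
    prev_err = np.inf
    for _ in range(n_iter):
        dist, idx = tree.query(src)                 # closest-point correspondences
        r, t = best_rigid_transform(src, target[idx])
        src = src @ r.T + t                         # apply the incremental transform
        err = dist.mean()                           # correspondence error at iteration start
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return src, err

# Synthetic check: a mildly rotated and shifted copy of a random cloud.
rng = np.random.default_rng(0)
target = rng.random((500, 3))
theta = 0.05
rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta), np.cos(theta), 0],
               [0, 0, 1]])
source = target @ rz.T + np.array([0.02, -0.01, 0.01])
aligned, err = icp(source, target)
print(err)   # typically converges to a small residual for this mild misalignment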
Parallel processor for real-time structural control
NASA Astrophysics Data System (ADS)
Tise, Bert L.
1993-07-01
A parallel processor that is optimized for real-time linear control has been developed. This modular system consists of A/D modules, D/A modules, and floating-point processor modules. The scalable processor uses up to 1,000 Motorola DSP96002 floating-point processors for a peak computational rate of 60 GFLOPS. Sampling rates up to 625 kHz are supported by this analog-in to analog-out controller. The high processing rate and parallel architecture make this processor suitable for computing state-space equations and other multiply/accumulate-intensive digital filters. Processor features include 14-bit conversion devices, low input-to-output latency, 240 Mbyte/s synchronous backplane bus, low-skew clock distribution circuit, VME connection to host computer, parallelizing code generator, and look-up tables for actuator linearization. This processor was designed primarily for experiments in structural control. The A/D modules sample sensors mounted on the structure and the floating-point processor modules compute the outputs using the programmed control equations. The outputs are sent through the D/A module to the power amps used to drive the structure's actuators. The host computer is a Sun workstation. An OpenWindows-based control panel is provided to facilitate data transfer to and from the processor, as well as to control the operating mode of the processor. A diagnostic mode is provided to allow stimulation of the structure and acquisition of the structural response via sensor inputs.
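The controller's core computation, evaluating discrete state-space equations at the sampling rate, reduces to a few matrix-vector multiply/accumulate operations per sample. The snippet below is a generic illustration of that update with arbitrary example matrices, not the processor's actual firmware.

import numpy as np

# One sampling period of a discrete-time state-space controller:
#   x[k+1] = A x[k] + B u[k]
#   y[k]   = C x[k] + D u[k]
# where u is the vector of A/D sensor samples and y drives the D/A outputs.
def controller_step(x, u, A, B, C, D):
    y = C @ x + D @ u        # outputs sent to the actuator amplifiers
    x_next = A @ x + B @ u   # state update for the next sample
    return y, x_next

# Tiny example with arbitrary stable matrices (2 states, 1 input, 1 output).
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
x = np.zeros(2)
u = np.array([1.0])
for _ in range(3):
    y, x = controller_step(x, u, A, B, C, D)
print(y)   # controller output after three samples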
Unsteady aerodynamic analysis for offshore floating wind turbines under different wind conditions.
Xu, B F; Wang, T G; Yuan, Y; Cao, J F
2015-02-28
A free-vortex wake (FVW) model is developed in this paper to analyse the unsteady aerodynamic performance of offshore floating wind turbines. A time-marching algorithm of third-order accuracy is applied in the FVW model. Owing to the complex floating platform motions, the blade inflow conditions and the positions of initial points of vortex filaments, which are different from the fixed wind turbine, are modified in the implemented model. A three-dimensional rotational effect model and a dynamic stall model are coupled into the FVW model to improve the aerodynamic performance prediction in the unsteady conditions. The effects of floating platform motions in the simulation model are validated by comparison between calculation and experiment for a small-scale rigid test wind turbine coupled with a floating tension leg platform (TLP). The dynamic inflow effect carried by the FVW method itself is confirmed and the results agree well with the experimental data of a pitching transient on another test turbine. Also, the flapping moment at the blade root in yaw on the same test turbine is calculated and compares well with the experimental data. Then, the aerodynamic performance is simulated in a yawed condition of steady wind and in an unyawed condition of turbulent wind, respectively, for a large-scale wind turbine coupled with the floating TLP motions, demonstrating obvious differences in rotor performance and blade loading from the fixed wind turbine. The non-dimensional magnitudes of loading changes due to the floating platform motions decrease from the blade root to the blade tip. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
Floating liquid phase in sedimenting colloid-polymer mixtures.
Schmidt, Matthias; Dijkstra, Marjolein; Hansen, Jean-Pierre
2004-08-20
Density functional theory and computer simulation are used to investigate sedimentation equilibria of colloid-polymer mixtures within the Asakura-Oosawa-Vrij model of hard sphere colloids and ideal polymers. When the ratio of buoyant masses of the two species is comparable to the ratio of differences in density of the coexisting bulk (colloid) gas and liquid phases, a stable "floating liquid" phase is found, i.e., a thin layer of liquid sandwiched between upper and lower gas phases. The full phase diagram of the mixture under gravity shows coexistence of this floating liquid phase with a single gas phase or a phase involving liquid-gas equilibrium; the phase coexistence lines meet at a triple point. This scenario remains valid for general asymmetric binary mixtures undergoing bulk phase separation.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-07
... for all Mobile Offshore Drilling Units and Floating Outer Continental Shelf Facilities (as defined in... Commander. Vessels requiring Coast Guard inspection include Mobile Offshore Drilling Units (MODUs), Floating... engage directly in oil and gas exploration or production in the offshore waters of the Eighth Coast Guard...
36 CFR § 327.30 - Shoreline Management on Civil Works Projects.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Development Areas for ski jumps, floats, boat moorage facilities, duck blinds, and other private floating recreation facilities when they will not create a safety hazard and inhibit public use or enjoyment of project waters or shoreline. A Corps permit is not required for temporary ice fishing shelters or duck...
17 CFR 50.4 - Classes of swaps required to be cleared.
Code of Federal Regulations, 2013 CFR
2013-04-01
... class Currency U.S. dollar (USD) Euro (EUR) Sterling (GBP) Yen (JPY). Floating Rate Indexes LIBOR... Amounts No No No No. Specification Basis swap class Currency U.S. dollar (USD) Euro (EUR) Sterling (GBP... Currency U.S. dollar (USD) Euro (EUR) Sterling (GBP) Yen (JPY). Floating Rate Indexes LIBOR EURIBOR LIBOR...
Electronic processing and control system with programmable hardware
NASA Technical Reports Server (NTRS)
Alkalaj, Leon (Inventor); Fang, Wai-Chi (Inventor); Newell, Michael A. (Inventor)
1998-01-01
A computer system with reprogrammable hardware allowing dynamically allocating hardware resources for different functions and adaptability for different processors and different operating platforms. All hardware resources are physically partitioned into system-user hardware and application-user hardware depending on the specific operation requirements. A reprogrammable interface preferably interconnects the system-user hardware and application-user hardware.
A hardware-in-the-loop simulation program for ground-based radar
NASA Astrophysics Data System (ADS)
Lam, Eric P.; Black, Dennis W.; Ebisu, Jason S.; Magallon, Julianna
2011-06-01
A radar system created using an embedded computer system needs testing. The way to test an embedded computer system is different from the debugging approaches used on desktop computers. One way to test a radar system is to feed it artificial inputs and analyze the outputs of the radar. Often, not all of the building blocks of the radar system are available for testing. This requires the engineer to test parts of the radar system using a "black box" approach. A common way to test software code in a desktop simulation is to use breakpoints so that it pauses after each cycle through its calculations. The outputs are compared against the values that are expected. This requires the engineer to use valid test scenarios. We present a hardware-in-the-loop simulator that allows the embedded system to think it is operating with real-world inputs and outputs. From the embedded system's point of view, it is operating in real time. The hardware-in-the-loop simulation is based on our Desktop PC Simulation (PCS) testbed. In the past, PCS was used for ground-based radars. This embedded simulation, called Embedded PCS, allows a rapid simulated evaluation of ground-based radar performance in a laboratory environment.
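A hardware-in-the-loop harness of the kind described above replaces the radar's real sensors and actuators with simulated ones and checks the embedded system's outputs against expected values for each scenario. The skeleton below is a generic, hypothetical illustration of that loop (all names and the loopback stand-in are invented), not the PCS/Embedded PCS interface.

# Generic hardware-in-the-loop test skeleton (all names hypothetical).
# Each scenario feeds simulated sensor frames to the unit under test and
# compares its outputs against expected values.
def run_scenario(uut, scenario, tolerance=1.0):
    """Return (passed, failures) for one test scenario against the unit under test."""
    failures = []
    for step, frame in enumerate(scenario["input_frames"]):
        uut.write_inputs(frame)              # looks like real sensor I/O to the UUT
        output = uut.read_outputs()          # e.g. a detected range or velocity
        expected = scenario["expected"][step]
        if abs(output - expected) > tolerance:
            failures.append((step, output, expected))
    return not failures, failures

class LoopbackUUT:
    """Stand-in unit under test that just echoes a scaled input (demo only)."""
    def __init__(self):
        self._last = 0.0
    def write_inputs(self, frame):
        self._last = 2.0 * frame
    def read_outputs(self):
        return self._last

scenario = {"input_frames": [1.0, 2.0, 3.0], "expected": [2.0, 4.0, 6.0]}
ok, failures = run_scenario(LoopbackUUT(), scenario)
print(ok)   # True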
Zeeberg, Barry R; Riss, Joseph; Kane, David W; Bussey, Kimberly J; Uchio, Edward; Linehan, W Marston; Barrett, J Carl; Weinstein, John N
2004-01-01
Background When processing microarray data sets, we recently noticed that some gene names were being changed inadvertently to non-gene names. Results A little detective work traced the problem to default date format conversions and floating-point format conversions in the very useful Excel program package. The date conversions affect at least 30 gene names; the floating-point conversions affect at least 2,000 if Riken identifiers are included. These conversions are irreversible; the original gene names cannot be recovered. Conclusions Users of Excel for analyses involving gene names should be aware of this problem, which can cause genes, including medically important ones, to be lost from view and which has contaminated even carefully curated public databases. We provide work-arounds and scripts for circumventing the problem. PMID:15214961
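The floating-point conversions mentioned above happen because Riken clone identifiers such as "2310009E13" are syntactically valid scientific notation, so any tool that auto-detects numbers silently destroys them. The snippet below reproduces that effect outside Excel; the gene symbols listed for the date problem are examples commonly cited in this context, not an exhaustive list from the paper.

# A Riken clone identifier like "2310009E13" parses as scientific notation,
# so a spreadsheet that auto-detects numbers silently turns it into a float.
riken_id = "2310009E13"
print(float(riken_id))        # 2.310009e+19 -- the original identifier is gone

# Symbols such as SEPT2 or MARCH1 are similarly reinterpreted as dates
# ("2-Sep", "1-Mar") unless the column is explicitly formatted as text.
for symbol in ["SEPT2", "MARCH1", "DEC1"]:
    print(symbol, "-> risk of date conversion on spreadsheet import")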
Renormalization group procedure for potential -g/r^2
NASA Astrophysics Data System (ADS)
Dawid, S. M.; Gonsior, R.; Kwapisz, J.; Serafin, K.; Tobolski, M.; Głazek, S. D.
2018-02-01
The Schrödinger equation with the potential -g/r^2 exhibits a limit cycle, described in the literature in a broad range of contexts using various regularizations of the singularity at r = 0. Instead, we use the renormalization group transformation based on Gaussian elimination, from the Hamiltonian eigenvalue problem, of high-momentum modes above a finite, floating cutoff scale. The procedure identifies a richer structure than the one we found in the literature. Namely, it directly yields an equation that determines the renormalized Hamiltonians as functions of the floating cutoff: solutions to this equation exhibit, in addition to the limit-cycle, also the asymptotic-freedom, triviality, and fixed-point behaviors, the latter in the vicinity of infinitely many separate pairs of fixed points in different partial waves for different values of g.
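For reference, the radial eigenvalue problem behind this discussion can be written, in units with \(\hbar = 2m = 1\) (a convention assumed here, which may differ from the paper's), as

\[
-\,u_\ell''(r) + \left[ \frac{\ell(\ell+1)}{r^2} - \frac{g}{r^2} \right] u_\ell(r) = E\, u_\ell(r),
\]

and the singular, limit-cycle regime corresponds to a supercritical net inverse-square coupling, \(g - \ell(\ell+1) > \tfrac{1}{4}\), for which the Hamiltonian requires regularization or renormalization to be well defined.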
Automated Activation and Deactivation of a System Under Test
NASA Technical Reports Server (NTRS)
Poff, Mark A.
2006-01-01
The MPLM Automated Activation/Deactivation application (MPLM means Multi-Purpose Logistic Module) was created with a three-fold purpose in mind: 1. To reduce the possibility of human error in issuing commands to, or interpreting telemetry from, the MPLM power, computer, and environmental control systems; 2. To reduce the amount of test time required for the repetitive activation/deactivation processes; and 3. To reduce the number of on-console personnel required for activation/ deactivation. All of these have been demonstrated with the release of the software. While some degree of automated end-item commanding had previously been performed for space-station hardware in the test environment, none approached the functionality and flexibility of this application. For MPLM activation, it provides mouse-click selection of the hardware complement to be activated, activates the desired hardware and verifies proper feedbacks, and alerts the user when telemetry indicates an error condition or manual intervention is required. For MPLM deactivation, the product senses which end items are active and deactivates them in the proper sequence. For historical purposes, an on-line log is maintained of commands issued and telemetry points monitored. The benefits of the MPLM Automated Activation/ Deactivation application were demonstrated with its first use in December 2002, when it flawlessly performed MPLM activation in 8 minutes (versus as much as 2.4 hours for previous manual activations), and performed MPLM deactivation in 3 minutes (versus 66 minutes for previous manual deactivations). The number of test team members required has dropped from eight to four, and in actuality the software can be operated by a sole (knowledgeable) system engineer.
Abort Options for Human Lunar Missions between Earth Orbit and Lunar Vicinity
NASA Technical Reports Server (NTRS)
Condon, Gerald L.; Senent, Juan S.; Llama, Eduardo Garcia
2005-01-01
Apollo mission design emphasized operational flexibility that supported premature return to Earth. However, that design was tailored to use expendable hardware for short expeditions to low-latitude sites and cannot be applied directly to an evolutionary program requiring long stay times at arbitrary sites. This work establishes abort performance requirements for representative on-orbit phases of missions involving rendezvous in lunar orbit, on the lunar surface, and at the Earth-Moon libration point. This study submits reference abort delta-V requirements and other Earth return data (e.g., entry speed, flight path angle) and also examines the effect of abort performance requirements on propulsive capability for selected vehicle configurations.
System overview on electromagnetic compensation for reflector antenna surface distortion
NASA Technical Reports Server (NTRS)
Acosta, R. J.; Zaman, A. J.; Terry, J. D.
1993-01-01
The system requirements and hardware implementation for electromagnetic compensation of antenna performance degradations due to thermal effects were investigated. Future commercial space communication antenna systems will utilize the 20/30 GHz frequency spectrum and support very narrow multiple beams (0.3 deg) over a wide-angle field of view (15-20 beamwidths). On the ground, portable and inexpensive very small aperture terminals (VSAT) for transmitting and receiving video, facsimile, and data will be employed. These types of communication systems put very stringent requirements on spacecraft antenna beam pointing stability (less than 0.01 deg), high gain (greater than 50 dB), and very low side lobes (less than -25 dB). Thermal analysis performed on the Advanced Communications Technology Satellite (ACTS) has shown that the reflector surfaces, the mechanical supporting structures, and metallic surfaces on the spacecraft body will distort due to thermal effects from a varying solar flux. The antenna performance characteristics (e.g., pointing stability, gain, side lobe, etc.) will degrade due to thermal distortion in the reflector surface and supporting structures. Specifically, antenna RF radiation analysis has shown that pointing error is the antenna performance parameter most sensitive to thermal distortions. Other antenna parameters such as peak gain, cross-polarization level (beam isolation), and side lobe level will also degrade with thermal distortions. In order to restore pointing stability and, more generally, antenna performance, several compensation methods were proposed. In general, these compensation methods can be classified as being either of the mechanical or the electromagnetic type. This paper addresses only the latter. In this approach, an adaptive phased-array antenna feed is used to compensate for the antenna performance degradation. Extensive work has been devoted to demonstrating the feasibility of adaptive feed compensation in space communication antenna systems. This paper addresses the system requirements for such a system and identifies candidate technologies (analog and digital) for possible hardware implementation.
Techniques of EMG signal analysis: detection, processing, classification and applications
Hussain, M.S.; Mohd-Yasin, F.
2006-01-01
Electromyography (EMG) signals can be used for clinical/biomedical applications, Evolvable Hardware Chip (EHW) development, and modern human-computer interaction. EMG signals acquired from muscles require advanced methods for detection, decomposition, processing, and classification. The purpose of this paper is to illustrate the various methodologies and algorithms for EMG signal analysis to provide efficient and effective ways of understanding the signal and its nature. We further point out some of the hardware implementations using EMG, focusing on applications related to prosthetic hand control, grasp recognition, and human-computer interaction. A comparison study is also given to show the performance of various EMG signal analysis methods. This paper provides researchers a good understanding of the EMG signal and its analysis procedures. This knowledge will help them develop more powerful, flexible, and efficient applications. PMID:16799694
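As a concrete illustration of the simplest time-domain quantities that appear throughout this literature (mean absolute value, root-mean-square amplitude, and thresholded zero crossings), here is a generic sketch in Python; it is illustrative only and not taken from the paper:

    import numpy as np

    def emg_time_domain_features(x, deadband=0.01):
        """Compute three classic time-domain EMG features for one analysis window."""
        x = np.asarray(x, dtype=float)
        mav = np.mean(np.abs(x))                      # mean absolute value
        rms = np.sqrt(np.mean(x ** 2))                # root-mean-square amplitude
        # zero crossings: sign changes whose amplitude step exceeds a noise deadband
        zc = np.sum((x[:-1] * x[1:] < 0) & (np.abs(x[:-1] - x[1:]) > deadband))
        return mav, rms, zc

    # Example on a synthetic burst of activity
    rng = np.random.default_rng(0)
    window = rng.normal(scale=0.2, size=400) * np.hanning(400)
    print(emg_time_domain_features(window))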
Flexible Power Distribution Based on Point of Load Converters
NASA Astrophysics Data System (ADS)
Dhallewin, G.; Galiana, D.; Mollard, J. M.; Schaper, W.; Strixner, E.; Tonicello, F.; Triggianese, M.
2014-08-01
Present digital electronic loads require low voltages and draw high currents. In addition, they need several different voltage levels to supply the different parts of digital devices like the core, the input/output I/F, etc. Distributed Power Architectures (DPA) with point-of-load (POL) converters (synchronous buck type) offer excellent performance in terms of efficiency and load-step behaviour. They occupy little PCB area and are well suited for very low voltage (VLV) DC conversion (1V to 3.3V). The paper presents approaches to architectural design of POL-based supplies including redundancy and protection as well as the requirements on a European hardware implementation. The main driver of the analysis is the flexibility of each element (DC/DC converter, protection, POL core) to cover a wide range of space applications.
Code of Federal Regulations, 2010 CFR
2010-07-01
..., below a height of 4 inches measured from the lowest point in the boat where liquid can collect when the boat is in its static floating position, except engine rooms. Connected means allowing a flow of water... the engine room or a connected compartment below a height of 12 inches measured from the lowest point...
33 CFR 110.60 - Captain of the Port, New York.
Code of Federal Regulations, 2011 CFR
2011-07-01
... yachts and other recreational craft. A mooring buoy is permitted. (4) Manhattan, Fort Washington Point... special anchorage area is principally for use by yachts and other recreational craft. A temporary float or... shoreline to the point of origin. Note to paragraph (d)(5): The area will be principally for use by yachts...
NASA-STD-(I)-6016, Standard Materials and Processes Requirements for Spacecraft
NASA Technical Reports Server (NTRS)
Pedley, Michael; Griffin, Dennis
2006-01-01
This document is directed toward Materials and Processes (M&P) used in the design, fabrication, and testing of flight components for all NASA manned, unmanned, robotic, launch vehicle, lander, in-space and surface systems, and spacecraft program/project hardware elements. All flight hardware is covered by the M&P requirements of this document, including vendor designed, off-the-shelf, and vendor furnished items. Materials and processes used in interfacing ground support equipment (GSE); test equipment; hardware processing equipment; hardware packaging; and hardware shipment shall be controlled to prevent damage to or contamination of flight hardware.
Hardware development process for Human Research facility applications
NASA Astrophysics Data System (ADS)
Bauer, Liz
2000-01-01
The simple goal of the Human Research Facility (HRF) is to conduct human research experiments on the International Space Station (ISS) astronauts during long-duration missions. This is accomplished by providing integration and operation of the necessary hardware and software capabilities. A typical hardware development flow consists of five stages: functional inputs and requirements definition, market research, design life cycle through hardware delivery, crew training, and mission support. The purpose of this presentation is to guide the audience through the early hardware development process: requirement definition through selecting a development path. Specific HRF equipment is used to illustrate the hardware development paths.
Embedded algorithms within an FPGA-based system to process nonlinear time series data
NASA Astrophysics Data System (ADS)
Jones, Jonathan D.; Pei, Jin-Song; Tull, Monte P.
2008-03-01
This paper presents some preliminary results of an ongoing project. A pattern classification algorithm is being developed and embedded into a Field-Programmable Gate Array (FPGA) and microprocessor-based data processing core in this project. The goal is to enable and optimize the functionality of onboard data processing of nonlinear, nonstationary data for smart wireless sensing in structural health monitoring. Compared with traditional microprocessor-based systems, fast growing FPGA technology offers a more powerful, efficient, and flexible hardware platform including on-site (field-programmable) reconfiguration capability of hardware. An existing nonlinear identification algorithm is used as the baseline in this study. The implementation within a hardware-based system is presented in this paper, detailing the design requirements, validation, tradeoffs, optimization, and challenges in embedding this algorithm. An off-the-shelf high-level abstraction tool along with the Matlab/Simulink environment is utilized to program the FPGA, rather than coding the hardware description language (HDL) manually. The implementation is validated by comparing the simulation results with those from Matlab. In particular, the Hilbert Transform is embedded into the FPGA hardware and applied to the baseline algorithm as the centerpiece in processing nonlinear time histories and extracting instantaneous features of nonstationary dynamic data. The selection of proper numerical methods for the hardware execution of the selected identification algorithm and consideration of the fixed-point representation are elaborated. Other challenges include the issues of the timing in the hardware execution cycle of the design, resource consumption, approximation accuracy, and user flexibility of input data types limited by the simplicity of this preliminary design. Future work includes making an FPGA and microprocessor operate together to embed a further developed algorithm that yields better computational and power efficiency.
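The Hilbert-transform feature extraction described above is straightforward to prototype in floating point before committing to a fixed-point FPGA design. The sketch below (NumPy/SciPy, purely illustrative; it is not the authors' Simulink/HDL implementation) recovers the instantaneous amplitude and frequency of a nonstationary test signal:

    import numpy as np
    from scipy.signal import hilbert

    fs = 1000.0                                    # sampling rate, Hz (assumed)
    t = np.arange(0, 2.0, 1.0 / fs)
    x = np.sin(2 * np.pi * (5 + 3 * t) * t)        # a simple chirp as a stand-in for sensor data

    analytic = hilbert(x)                          # analytic signal via the Hilbert transform
    amplitude = np.abs(analytic)                   # instantaneous amplitude (envelope)
    phase = np.unwrap(np.angle(analytic))          # unwrapped instantaneous phase
    inst_freq = np.diff(phase) * fs / (2 * np.pi)  # instantaneous frequency, Hz

    print(amplitude[:5], inst_freq[:5])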
Wavelet library for constrained devices
NASA Astrophysics Data System (ADS)
Ehlers, Johan Hendrik; Jassim, Sabah A.
2007-04-01
The wavelet transform is a powerful tool for image and video processing, useful in a range of applications. This paper is concerned with the efficiency of a certain fast-wavelet-transform (FWT) implementation and several wavelet filters, more suitable for constrained devices. Such constraints are typically found on mobile (cell) phones or personal digital assistants (PDAs). These constraints can be a combination of limited memory, slow floating point operations (compared to integer operations, most often as a result of no hardware support), and limited local storage. Yet these devices are burdened with demanding tasks such as processing a live video or audio signal through on-board capturing sensors. In this paper we present a new wavelet software library, HeatWave, that can be used efficiently for image/video processing/analysis tasks on mobile phones and PDAs. We will demonstrate that HeatWave is suitable for real-time applications with fine control and range to suit transform demands. We shall present experimental results to substantiate these claims. Finally, this library is intended to be of real use and applied, hence we considered several well-known and common embedded operating system platform differences, such as a lack of common routines or functions, stack limitations, etc. This makes HeatWave suitable for a range of applications and research projects.
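On processors without hardware floating point, wavelet filters are commonly recast as integer "lifting" steps built from adds and shifts. The following reversible Haar-style (S-transform) lifting stage is a generic illustration of that idea, not code from the HeatWave library:

    def haar_lift_forward(x):
        """One level of a reversible integer Haar (S-transform) lifting step.

        x must have even length; returns (approximation, detail) integer lists.
        """
        detail = [x[2 * i] - x[2 * i + 1] for i in range(len(x) // 2)]          # predict
        approx = [x[2 * i + 1] + (detail[i] >> 1) for i in range(len(x) // 2)]  # update
        return approx, detail

    def haar_lift_inverse(approx, detail):
        """Exactly invert haar_lift_forward (perfect reconstruction with integers)."""
        x = []
        for s, d in zip(approx, detail):
            b = s - (d >> 1)
            a = d + b
            x.extend([a, b])
        return x

    samples = [12, 10, 9, 14, 200, 202, 5, 7]
    a, d = haar_lift_forward(samples)
    assert haar_lift_inverse(a, d) == samples     # lossless round trip, integers only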
Kulkarni, Shruti R; Rajendran, Bipin
2018-07-01
We demonstrate supervised learning in Spiking Neural Networks (SNNs) for the problem of handwritten digit recognition using the spike triggered Normalized Approximate Descent (NormAD) algorithm. Our network that employs neurons operating at sparse biological spike rates below 300Hz achieves a classification accuracy of 98.17% on the MNIST test database with four times fewer parameters compared to the state-of-the-art. We present several insights from extensive numerical experiments regarding optimization of learning parameters and network configuration to improve its accuracy. We also describe a number of strategies to optimize the SNN for implementation in memory and energy constrained hardware, including approximations in computing the neuronal dynamics and reduced precision in storing the synaptic weights. Experiments reveal that even with 3-bit synaptic weights, the classification accuracy of the designed SNN does not degrade beyond 1% as compared to the floating-point baseline. Further, the proposed SNN, which is trained based on the precise spike timing information outperforms an equivalent non-spiking artificial neural network (ANN) trained using back propagation, especially at low bit precision. Thus, our study shows the potential for realizing efficient neuromorphic systems that use spike based information encoding and learning for real-world applications. Copyright © 2018 Elsevier Ltd. All rights reserved.
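The reduced-precision result quoted above can be mimicked generically: quantize trained floating-point weights to a handful of uniformly spaced levels and re-evaluate the network. The sketch below shows plain 3-bit uniform quantization (8 levels); it illustrates the idea only and is not the authors' NormAD training code:

    import numpy as np

    def quantize_uniform(w, bits=3):
        """Quantize weights to 2**bits uniformly spaced levels spanning [w.min(), w.max()]."""
        levels = 2 ** bits
        w = np.asarray(w, dtype=float)
        step = (w.max() - w.min()) / (levels - 1)
        return w.min() + np.round((w - w.min()) / step) * step

    rng = np.random.default_rng(1)
    weights = rng.normal(scale=0.5, size=1000)          # stand-in for trained synaptic weights
    w3 = quantize_uniform(weights, bits=3)
    print("max abs quantization error:", np.max(np.abs(weights - w3)))
    print("distinct levels used:", np.unique(w3).size)  # at most 8 for 3 bits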
Geographic Resources Analysis Support System (GRASS) Version 4.0 User’s Reference Manual
1992-06-01
input-image need not be square; before processing, the X and Y dimensions of the input-image are padded with zeroes to the next highest power of two in...structures an input knowledge/control script with an appropriate combination of map layer category values (GRASS raster map layers that contain data on...F cos(x) cosine of x (x is in degrees) F exp(x) exponential function of x F exp(x,y) x to the power y F float(x) convert x to floating point F if
Basic mathematical function libraries for scientific computation
NASA Technical Reports Server (NTRS)
Galant, David C.
1989-01-01
Ada packages implementing selected mathematical functions for the support of scientific and engineering applications were written. The packages provide the Ada programmer with the mathematical function support found in the languages Pascal and FORTRAN as well as an extended precision arithmetic and a complete complex arithmetic. The algorithms used are fully described and analyzed. Implementation assumes that the Ada type FLOAT objects fully conform to the IEEE 754-1985 standard for single binary floating-point arithmetic, and that INTEGER objects are 32-bit entities. Codes for the Ada packages are included as appendixes.
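The single-precision assumptions stated above (IEEE 754 binary32 with 8 exponent bits, 23 stored mantissa bits, machine epsilon 2^-23, plus 32-bit integers) are easy to check numerically; here is a small NumPy sketch offered as an illustration, not a port of the Ada packages:

    import numpy as np

    f32 = np.finfo(np.float32)
    print("stored mantissa bits:", f32.nmant)            # 23 for IEEE 754 single precision
    print("exponent bits:", f32.nexp)                    # 8
    print("machine epsilon:", f32.eps, "== 2**-23:", f32.eps == np.float32(2.0) ** -23)
    print("largest finite value:", f32.max)

    # 32-bit integers, the other assumption stated in the abstract
    print("int32 range:", np.iinfo(np.int32).min, "to", np.iinfo(np.int32).max)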
Profiles of inconsistent knowledge in children's pathways of conceptual change.
Schneider, Michael; Hardy, Ilonca
2013-09-01
Conceptual change requires learners to restructure parts of their conceptual knowledge base. Prior research has identified the fragmentation and the integration of knowledge as 2 important component processes of knowledge restructuring but remains unclear as to their relative importance and the time of their occurrence during development. Previous studies mostly were based on the categorization of answers in interview studies and led to mixed empirical results, suggesting that methodological improvements might be helpful. We assessed 161 third-graders' knowledge about floating and sinking of objects in liquids at 3 measurement points by means of multiple-choice tests. The tests assessed how strongly the children agreed with commonly found but mutually incompatible statements about floating and sinking. A latent profile transition analysis of the test scores revealed 5 profiles, some of which indicated the coexistence of inconsistent pieces of knowledge in learners. The majority of students (63%) were on 1 of 7 developmental pathways between these profiles. Thus, a child's knowledge profile at a point in time can be used to predict further development. The degree of knowledge integration decreased on some individual developmental paths, increased on others, and remained stable on still others. The study demonstrates the usefulness of explicit quantitative models of conceptual change. The results support a constructivist perspective on conceptual development, in which developmental changes of a learner's knowledge base result from idiosyncratic, yet systematic knowledge-construction processes. PsycINFO Database Record (c) 2013 APA, all rights reserved.
NASA Astrophysics Data System (ADS)
Pazmino, A.; Bonnieux, S.; Joubert, C.; Gonzales, N.; Hello, Y.; Nolet, G.
2014-12-01
Mermaids have been developed to improve seismic data coverage in the oceanic domain for imaging of the Earth's interior. Though housed in conventional Argo-type floats, hardware and software were developed to analyze acoustic signals and determine whether an earthquake has been recorded, and whether the Mermaid should come up to the surface and transmit to the satellite. In contrast to the passive Argo floats, Mermaids are essentially floating computers that decide for themselves what to do. After testing in the Mediterranean and Indian Ocean and improving the concept for more than a year, we recently started two fully scientific experiments using Mermaids. In cooperation with Inocar, we deployed a fleet of 10 Mermaids in May 2014 around the Galapagos islands from the LAE Sirius to study the suspected mantle plume beneath these islands. We are interested in plumes because we do not understand very well how the mantle has retained an almost constant temperature for three or four billion years, an essential condition for life to develop. The depth of mantle plumes is an important unknown, because it may tell us how well the lower mantle is able to transmit heat into the upper mantle. A second experiment is taking place in the Ligurian Sea. This basin opened with a rifting phase in the late Oligocene. The rifting phase of the Ligurian basin was followed by the counterclockwise rotation of the Corsica-Sardinia block, but the deeper causes of this are still poorly understood. Three Mermaids are deployed, and re-deployed after drifting too far west, to augment the P arrivals observed for 6 months with 5 OBSs during the 2008 Grosmarin campaign. The experience obtained with this first generation of Mermaids has led to the development of a new multidisciplinary float (Multimermaid), which is programmable, able to carry up to 8 sensors to a depth of 3000 m, and with a duration of at least five years.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crawford, J M; Ehinger, M H; Joseph, C
1978-10-01
Development work on a computerized system for nuclear materials control and accounting in a nuclear fuel reprocessing plant is described and evaluated. Hardware and software were installed and tested to demonstrate key measurement, measurement control, and accounting requirements at accountability input/output points using natural uranium. The demonstration included a remote data acquisition system which interfaces process and special instrumentation to a central processing unit.
30 CFR 250.1002 - Design requirements for DOI pipelines.
Code of Federal Regulations, 2012 CFR
2012-07-01
... of 0.72 for the submerged component and 0.60 for the riser component. E = Longitudinal joint factor... incorporated by reference in 30 CFR 250.198. (5) You must design pipeline risers for tension leg platforms and other floating platforms according to the design standards of API RP 2RD, Design of Risers for Floating...
30 CFR 250.1002 - Design requirements for DOI pipelines.
Code of Federal Regulations, 2014 CFR
2014-07-01
... of 0.72 for the submerged component and 0.60 for the riser component. E = Longitudinal joint factor... incorporated by reference in 30 CFR 250.198. (5) You must design pipeline risers for tension leg platforms and other floating platforms according to the design standards of API RP 2RD, Design of Risers for Floating...
30 CFR 250.1002 - Design requirements for DOI pipelines.
Code of Federal Regulations, 2013 CFR
2013-07-01
... of 0.72 for the submerged component and 0.60 for the riser component. E = Longitudinal joint factor... incorporated by reference in 30 CFR 250.198. (5) You must design pipeline risers for tension leg platforms and other floating platforms according to the design standards of API RP 2RD, Design of Risers for Floating...
NASA Astrophysics Data System (ADS)
Niwase, Hiroaki; Takada, Naoki; Araki, Hiromitsu; Maeda, Yuki; Fujiwara, Masato; Nakayama, Hirotaka; Kakue, Takashi; Shimobaba, Tomoyoshi; Ito, Tomoyoshi
2016-09-01
Parallel calculations of large-pixel-count computer-generated holograms (CGHs) are suitable for multiple-graphics processing unit (multi-GPU) cluster systems. However, it is not easy for a multi-GPU cluster system to accomplish fast CGH calculations when CGH transfers between PCs are required. In these cases, the CGH transfer between the PCs becomes a bottleneck. Usually, this problem occurs only in multi-GPU cluster systems with a single spatial light modulator. To overcome this problem, we propose a simple method using the InfiniBand network. The computational speed of the proposed method using 13 GPUs (NVIDIA GeForce GTX TITAN X) was more than 3000 times faster than that of a CPU (Intel Core i7 4770) when the number of three-dimensional (3-D) object points exceeded 20,480. In practice, we achieved ˜40 tera floating point operations per second (TFLOPS) when the number of 3-D object points exceeded 40,960. Our proposed method was able to reconstruct a real-time movie of a 3-D object comprising 95,949 points.
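The workload being parallelised is, at heart, a sum over all object points for every hologram pixel, which is why the cost grows as pixels times points. The sketch below uses a common Fresnel-approximation point-source kernel with made-up wavelength, pixel pitch, and point cloud; it is a schematic NumPy illustration, not the authors' GPU kernel:

    import numpy as np

    wavelength = 532e-9   # assumed green laser wavelength, m
    pitch = 8e-6          # assumed hologram pixel pitch, m
    H, W = 256, 256       # hologram kept deliberately tiny for the sketch

    ys, xs = np.meshgrid((np.arange(H) - H / 2) * pitch,
                         (np.arange(W) - W / 2) * pitch, indexing="ij")

    rng = np.random.default_rng(0)
    # 200 synthetic object points: x, y in a +/-1 mm patch, z between 5 and 10 cm
    pts = rng.uniform([-1e-3, -1e-3, 0.05], [1e-3, 1e-3, 0.10], size=(200, 3))

    hologram = np.zeros((H, W))
    for x0, y0, z0 in pts:   # one pass per object point: cost ~ H * W * N_points
        hologram += np.cos(np.pi / (wavelength * z0) * ((xs - x0) ** 2 + (ys - y0) ** 2))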
Motion performance and mooring system of a floating offshore wind turbine
NASA Astrophysics Data System (ADS)
Zhao, Jing; Zhang, Liang; Wu, Haitao
2012-09-01
The development of offshore wind farms was originally carried out in shallow water areas with fixed (seabed mounted) structures. However, countries with limited shallow water areas require innovative floating platforms to deploy wind turbines offshore in order to harness wind energy to generate electricity in deep seas. The motion performance and mooring system dynamics are vital to designing a cost-effective and durable floating platform. This paper describes a numerical model to simulate dynamic behavior of a new semi-submersible type floating offshore wind turbine (FOWT) system. The wind turbine was modeled as a wind block with a certain thrust coefficient, and the hydrodynamics and mooring system dynamics of the platform were calculated by SESAM software. The effect of changes in environmental conditions on the dynamic response of the system under wave and wind loading was examined. The results indicate that the semi-submersible concept has excellent performance and SESAM could be an effective tool for floating wind turbine design and analysis.
Time and Energy, Exploring Trajectory Options Between Nodes in Earth-Moon Space
NASA Technical Reports Server (NTRS)
Martinez, Roland; Condon, Gerald; Williams, Jacob
2012-01-01
The Global Exploration Roadmap (GER) was released by the International Space Exploration Coordination Group (ISECG) in September of 2011. It describes mission scenarios that begin with the International Space Station and utilize it to demonstrate necessary technologies and capabilities prior to deployment of systems into Earth-Moon space. Deployment of these systems is an intermediate step in preparation for more complex deep space missions to near-Earth asteroids and eventually Mars. In one of the scenarios described in the GER, "Asteroid Next", there are activities that occur in Earth-Moon space at one of the Earth-Moon Lagrange (libration) points. In this regard, the authors examine the possible role of an intermediate staging point in an effort to illuminate potential trajectory options for conducting missions in Earth-Moon space of increasing duration, ultimately leading to deep space missions. This paper will describe several options for transits between Low Earth Orbit (LEO) and the libration points, transits between libration points, and transits between the libration points and interplanetary trajectories. The solution space provided will be constrained by selected orbital mechanics design techniques and physical characteristics of hardware to be used in both crewed missions and uncrewed missions. The relationships between time and energy required to transfer hardware between these locations will provide a better understanding of the potential trade-offs mission planners could consider in the development of capabilities, individual missions, and mission series in the context of the ISECG GER.
AmeriFlux US-WPT Winous Point North Marsh
Chen, Jiquan [University of Toledo / Michigan State University]
2016-01-01
This is the AmeriFlux version of the carbon flux data for the site US-WPT Winous Point North Marsh. Site Description - The marsh site has been owned by the Winous Point Shooting Club since 1856 and has been managed by wildlife biologists since 1946. The hydrology of the marsh is relatively isolated by the surrounding dikes and drainages and only receives drainage from nearby croplands through three connecting ditches. Since 2001, the marsh has been managed to maintain year-round inundation with the lowest water levels in September. Within the 0–250 m fetch of the tower, the marsh comprises 42.9% of floating-leaved vegetation, 52.7% of emergent vegetation, and 4.4% of dike and upland during the growing season. Dominant emergent plants include narrow-leaved cattail (Typha angustifolia), rose mallow (Hibiscus moscheutos), and bur reed (Sparganium americanum). Common floating-leaved species are water lily (Nymphaea odorata) and American lotus (Nelumbo lutea) with foliage usually covering the water surface from late May to early October.
Kator, H; Rhodes, M
2001-06-01
Declining oyster (Crassostrea virginica) production in the Chesapeake Bay has stimulated aquaculture based on floats for off-bottom culture. While advantages of off-bottom culture are significant, the increased use of floating containers raises public health and microbiological concerns, because oysters in floats may be more susceptible to fecal contamination from storm runoff compared to those cultured on-bottom. We conducted four commercial-scale studies with market-size oysters naturally contaminated with fecal coliforms (FC) and a candidate viral indicator, F-specific RNA (FRNA) coliphage. To facilitate sampling and to test for location effects, 12 replicate subsamples, each consisting of 15 to 20 randomly selected oysters in plastic mesh bags, were placed at four characteristic locations within a 0.6- by 3.0-m "Taylor" float, and the remaining oysters were added to a depth not exceeding 15.2 cm. The float containing approximately 3,000 oysters was relaid in the York River, Virginia, for 14 days. During relay, increases in shellfish FC densities followed rain events such that final mean levels exceeded initial levels or did not meet an arbitrary product end point of 50 FC/100 ml. FRNA coliphage densities decreased to undetectable levels within 14 days (16 to 28 degrees C) in all but the last experiment, when temperatures fell between 12 and 16 degrees C. Friedman (nonparametric analysis of variance) tests performed on FC/Escherichia coli and FRNA densities indicated no differences in counts as a function of location within the float. The public health consequences of these observations are discussed, and future research and educational needs are identified.
Characteristics of a Single Float Seaplane During Take-off
NASA Technical Reports Server (NTRS)
Crowley, J W , Jr; Ronan, K M
1925-01-01
At the request of the Bureau of Aeronautics, Navy Department, the National Advisory Committee for Aeronautics at Langley Field is investigating the get-away characteristics of an N-9H, a DT-2, and an F-5L, as representing, respectively, a single float, a double float, and a boat type of seaplane. This report covers the investigation conducted on the N-9H. The results show that a single float seaplane trims aft in taking off. Until a planing condition is reached the angle of attack is about 15 degrees and is only slightly affected by controls. When planing it seeks a lower angle, but is controllable through a widening range, until at the take-off it is possible to obtain angles of 8 degrees to 15 degrees with corresponding speeds of 53 to 41 M. P. H. or about 40 per cent of the speed range. The point of greatest resistance occurs at about the highest angle of a pontoon planing angle of 9 1/2 degrees and at a water speed of 24 M. P. H.
NASA Astrophysics Data System (ADS)
Chen, Xin; Sánchez-Arriaga, Gonzalo
2018-02-01
To model the sheath structure around an emissive probe with cylindrical geometry, the Orbital-Motion theory takes advantage of three conserved quantities (distribution function, transverse energy, and angular momentum) to transform the stationary Vlasov-Poisson system into a single integro-differential equation. For a stationary collisionless unmagnetized plasma, this equation describes self-consistently the probe characteristics. By solving such an equation numerically, parametric analyses for the current-voltage (IV) and floating-potential (FP) characteristics can be performed, which show that: (a) for strong emission, the space-charge effects increase with probe radius; (b) the probe can float at a positive potential relative to the plasma; (c) a smaller probe radius is preferred for the FP method to determine the plasma potential; (d) the work function of the emitting material and the plasma-ion properties do not influence the reliability of the floating-potential method. Analytical analysis demonstrates that the inflection point of an IV curve for non-emitting probes occurs at the plasma potential. The flat potential is not a self-consistent solution for emissive probes.
Gulp: An Imaginatively Different Approach to Learning about Water.
ERIC Educational Resources Information Center
Baird, Colette
1997-01-01
Provides details of performances by the Floating Point Science Theater working with elementary school children about the characteristics of water. Discusses student reactions to various parts of the performances. (DDR)
Code of Federal Regulations, 2010 CFR
2010-07-01
..., community or corporate docks, or at any fixed or permanent mooring point, may only be used for overnight... floating or stationary mooring facilities on, adjacent to, or interfering with a buoy, channel marker or...
Software For Tie-Point Registration Of SAR Data
NASA Technical Reports Server (NTRS)
Rignot, Eric; Dubois, Pascale; Okonek, Sharon; Van Zyl, Jacob; Burnette, Fred; Borgeaud, Maurice
1995-01-01
The SAR-REG software package registers synthetic-aperture-radar (SAR) image data to a common reference frame based on manual tie-pointing. Image data can be in binary, integer, floating-point, or AIRSAR compressed format. For example, data can be registered with a map of soil characteristics, a vegetation map, a digital elevation map, or a SPOT multispectral image, as long as the user can generate a binary image to be used by the tie-pointing routine and the data are available in one of the previously mentioned formats. Written in FORTRAN 77.
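Tie-point registration of this kind usually reduces to fitting a low-order transform to manually picked point pairs. A minimal least-squares affine fit is sketched below in NumPy for illustration; SAR-REG itself is FORTRAN 77 and its exact transform model is not stated in the abstract:

    import numpy as np

    def fit_affine(src, dst):
        """Least-squares affine transform mapping src (N,2) tie points onto dst (N,2)."""
        src = np.asarray(src, float)
        dst = np.asarray(dst, float)
        A = np.hstack([src, np.ones((len(src), 1))])      # design matrix [x, y, 1]
        coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)  # (3,2) affine coefficients
        return coeffs

    def apply_affine(coeffs, pts):
        pts = np.asarray(pts, float)
        return np.hstack([pts, np.ones((len(pts), 1))]) @ coeffs

    # Four hand-picked tie points (slave image -> master image), purely illustrative
    src = [(10, 12), (200, 15), (198, 240), (12, 238)]
    dst = [(13, 20), (205, 18), (202, 247), (16, 244)]
    T = fit_affine(src, dst)
    print(apply_affine(T, src) - np.asarray(dst, float))  # residuals should be small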
Lee, Lydia C; Charlton, Timothy P; Thordarson, David B
2013-12-01
A floating toe deformity occurs in many patients who undergo Weil osteotomies. It is likely caused by the failure of the windlass mechanism in shortening the metatarsal. For patients who require a proximal interphalangeal (PIP) joint arthroplasty or fusion in addition to a Weil osteotomy, the transfer of the flexor digitorum brevis (FDB) tendon to the PIP joint might restore the windlass mechanism and decrease the incidence of floating toes. Fourteen cadaveric foot specimens were examined to determine the effects of changing metatarsal length as well as tensioning the FDB tendon on the angle of the metatarsophalangeal (MTP) joint as a measure of a floating toe. Shortening and lengthening the second metatarsal resulted in a significant change in MTP angle (P = .03 and .02, respectively), though there was no clear relationship found between the amount of change in metatarsal length and the change in MTP angle. Transferring the FDB to a PIP arthroplasty site plantarflexed the MTP joint and corrected floating toes; the change in angle was significant compared with the control and shortening groups (P = .0001 and .002, respectively). This study supports the theory that change in length of the metatarsal, possibly via the windlass mechanism, plays a role in the pathophysiology of the floating toe deformity. Tensioning and transferring the FDB tendon into the PIP joint helped prevent the floating toe deformity in this cadaveric model. Continued research in this subject will help to refine methods of prevention and correction of the floating toe deformity.
Gunjal, P. T.; Shinde, M. B.; Gharge, V. S.; Pimple, S. V.; Gurjar, M. K.; Shah, M. N.
2015-01-01
The objective of this present investigation was to develop and formulate floating sustained release matrix tablets of S(-)-atenolol, by using different polymer combinations and filler, to optimize by using surface response methodology for different drug release variables and to evaluate the drug release pattern of the optimized product. Floating sustained release matrix tablets of various combinations were prepared with cellulose-based polymers: Hydroxypropyl methylcellulose, sodium bicarbonate as a gas generating agent, polyvinyl pyrrolidone as a binder and lactose monohydrate as filler. The 3² full factorial design was employed to investigate the effect of formulation variables on different properties of tablets applicable to floating lag time, buoyancy time, % drug release in 1 and 6 h (D1 h, D6 h) and time required to 90% drug release (t90%). Significance of results was analyzed using analysis of variance, and P < 0.05 was considered statistically significant. S(-)-atenolol floating sustained release matrix tablets followed the Higuchi drug release kinetics that indicates the release of drug follows anomalous (non-Fickian) diffusion mechanism. The developed floating sustained release matrix tablet of improved efficacy can perform therapeutically better than a conventional tablet. PMID:26798171
Pre-Hardware Optimization and Implementation Of Fast Optics Closed Control Loop Algorithms
NASA Technical Reports Server (NTRS)
Kizhner, Semion; Lyon, Richard G.; Herman, Jay R.; Abuhassan, Nader
2004-01-01
One of the main heritage tools used in scientific and engineering data spectrum analysis is the Fourier Integral Transform and its high performance digital equivalent - the Fast Fourier Transform (FFT). The FFT is particularly useful in two-dimensional (2-D) image processing (FFT2) within optical systems control. However, timing constraints of a fast optics closed control loop would require a supercomputer to run the software implementation of the FFT2 and its inverse, as well as other representative image-processing algorithms, such as numerical image folding and fringe feature extraction. A laboratory supercomputer is not always available even for ground operations and is not feasible for a flight project. However, the computationally intensive algorithms still warrant alternative implementation using reconfigurable computing technologies (RC) such as Digital Signal Processors (DSP) and Field Programmable Gate Arrays (FPGA), which provide low cost compact super-computing capabilities. We present a new RC hardware implementation and utilization architecture that significantly reduces the computational complexity of a few basic image-processing algorithms, such as FFT2, image folding, and phase diversity, for the NASA Solar Viewing Interferometer Prototype (SVIP) using a cluster of DSPs and FPGAs. The DSP cluster utilization architecture also assures avoidance of a single point of failure, while using commercially available hardware. This, combined with pre-hardware optimization of the control algorithms, for the first time allows construction of image-based 800 Hertz (Hz) optics closed control loops on-board a spacecraft, based on the SVIP ground instrument. That spacecraft is the proposed Earth Atmosphere Solar Occultation Imager (EASI) to study greenhouse gases CO2, C2H, H2O, O3, O2, N2O from the Lagrange-2 point in space. This paper provides an advanced insight into a new type of science capabilities for future space exploration missions based on on-board image processing for control and for robotics missions using vision sensors. It presents a top-level description of technologies required for the design and construction of SVIP and EASI and for advancing the spatial-spectral imaging and large-scale space interferometry science and engineering.
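As a floating-point reference for the FFT2 kernel discussed above (illustrative only; the SVIP implementation targets DSP/FPGA hardware rather than NumPy), a forward/inverse 2-D FFT round trip looks like this:

    import numpy as np

    image = np.random.default_rng(0).random((256, 256))  # stand-in for an interferogram frame

    spectrum = np.fft.fft2(image)                         # forward 2-D FFT
    recovered = np.fft.ifft2(spectrum).real               # inverse 2-D FFT

    # The round trip reproduces the input to floating-point accuracy (~1e-15 for float64),
    # which is the reference against which a fixed-point DSP/FPGA version would be judged.
    print(np.max(np.abs(recovered - image)))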
Investigating the potential of floating mires as record of palaeoenvironmental changes
NASA Astrophysics Data System (ADS)
Zaccone, C.; Adamo, P.; Giordano, S.; Miano, T. M.
2012-04-01
Peat-forming floating mires could provide an exceptional resource for palaeoenvironmental and environmental monitoring studies, as much of their own history, as well as the history of their surrounds, is recorded in their peat deposits. In his Naturalis historia (AD 77-79), Pliny the Elder described floating islands on Lake Vadimonis (now Posta Fibreno Lake, Italy). Actually, a small floating island (ca. 35 m in diameter and 3 m of submerged thickness) still occurs on this calcareous lake fed by karstic springs at the base of the Apennine Mountains. Here the southernmost Italian populations of Sphagnum palustre occur on the small surface of this floating mire known as "La Rota", i.e., a cup-formed core of Sphagnum peat and rhizomes of Helophytes, erratically floating on the water-body of a submerged doline, annexed to the easternmost edge of the lake, characterised by the extension of a large reed bed. Geological evidence points to the existence of a large lacustrine basin in the area since the Late Pleistocene. The progressive filling of the lake, caused by changing climatic conditions and neotectonic events, brought about the formation of peat deposits in the area, following different depositional cycles in a swampy environment. Then a round-shaped portion of fen, which had originated around the lake margins in waterlogged areas, was somehow isolated from the bank and started to float. Coupling data about concentrations and fluxes of several major and trace elements of different origin (i.e., dust particles, volcanic emissions, cosmogenic dusts and marine aerosols), with climate records (plant micro- and macrofossils, pollens, isotopic ratios), biomolecular records (e.g., lipids), detailed age-depth modelling (i.e., 210Pb, 137Cs, 14C), and humification indexes, the present work aims to identify and better understand the reliability of this particular "archive", and thus possible relationships between biogeochemical processes occurring in this floating bog and environmental changes.
Galoian, V R
1988-01-01
It is well known that the eye is a phylogenetically stabilized body with rotation properties. The eye has an elastic cover and is filled with uniform fluid. According to the theory of covers and other concepts on the configuration of turning fluid mass we concluded that the eyeball has an elliptic configuration. Classification of the eyeball is here presented with simultaneous studies of the principles of the eye situation. The parallelism between the state and different types of heterophory and orthophory was studied. To determine normal configuration it is necessary to have in mind some principles of achieving advisable correct situation of the eye in orbit. We determined the centre of the eye rotation and showed that it is impossible to situate it out of the geometrical centre of the eyeball. It was pointed out that for adequate perception the rotation centre must be situated on the visual axis. Using the well known theory of floating we experimentally determined that the centre of the eye rotation lies on the level of the floating eye, just on the point of cross of the visual line with the optical axis. It was shown experimentally on the basis of recording the eye movements in the process of eyelid closing that weakening of the eye movements is of gravitational pattern and proceeds under the action of stability forces, which directly indicates the floating state of the eye. For the first time using the model of the floating eye it was possible to show the formation of extraeye vacuum by straining the back wall. This effect can be obtained without any difficulty, if the face is turned down. The role of negative pressure in the formation of the eye ametropy, as well as new conclusions and prognostications about this new model are discussed.
Functional outcomes of "floating elbow" injuries in adult patients.
Yokoyama, K; Itoman, M; Kobayashi, A; Shindo, M; Futami, T
1998-05-01
To assess elbow function, complications, and problems of floating elbow fractures in adults receiving surgical treatment. Retrospective clinical review. Level I trauma center in Kanagawa, Japan. Fourteen patients with fifteen floating elbow injuries, excluding one immediate amputation, seen at the Kitasato University Hospital from January 1, 1984, to April 30, 1995. All fractures were managed surgically by various methods. In ten cases, the humeral and forearm fractures were treated simultaneously with immediate fixation. In three cases, both the humeral and forearm fractures were treated with delayed fixation on Day 1, 4, or 7. In the remaining two cases, the open forearm fracture was managed with immediate fixation and the humerus fracture with delayed fixation on Day 10 or 25. All subjects underwent standardized elbow evaluations, and results were compared with an elbow score based on a 100-point scale. The parameters evaluated were pain, motion, elbow and grip strength, and function during daily activities. Complications such as infections, nonunions, malunions, and refractures were investigated. Mean follow-up was forty-three months (range 13 to 112 months). At final follow-up, the mean elbow function score was 79 points, with 67 percent (ten of fifteen) of the subjects having good or excellent results. The functional outcome did not correlate with the Injury Severity Score of the individual patients, the existence of open injuries or neurovascular injuries, or the timing of surgery. There were one deep infection, two nonunions of the humerus, two nonunions of the forearm, one varus deformity of the humerus, and one forearm refracture. Based on the present data, we could not clarify the factors influencing the final functional outcome after floating elbow injury. These injuries, however, potentially have many complications, such as infection or nonunion, especially when there is associated brachial plexus injury. We consider that floating elbow injuries are severe injuries and that surgical stabilization is needed; beyond that, there are no specific forms of surgical treatment to reliably guarantee excellent results.
NASA Astrophysics Data System (ADS)
Lüpke, Felix; Cuma, David; Korte, Stefan; Cherepanov, Vasily; Voigtländer, Bert
2018-02-01
We present a four-point probe resistance measurement technique which uses four equivalent current measuring units, resulting in minimal hardware requirements and corresponding sources of noise. Local sample potentials are measured by a software feedback loop which adjusts the corresponding tip voltage such that no current flows to the sample. The resulting tip voltage is then equivalent to the sample potential at the tip position. We implement this measurement method into a multi-tip scanning tunneling microscope setup such that potentials can also be measured in tunneling contact, allowing in principle truly non-invasive four-probe measurements. The resulting measurement capabilities are demonstrated for ...
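The software feedback described above amounts to a root find: adjust the tip voltage until the measured tip current vanishes, then report that voltage as the local potential. A schematic loop is sketched below; measure_current() is a hypothetical stand-in for the instrument call, and the simple proportional update is only one reasonable choice, not the authors' controller:

    def find_local_potential(measure_current, v_start=0.0, gain=1e6, tol=1e-12, max_iter=200):
        """Adjust the tip voltage until the tip current is (numerically) zero.

        measure_current(v) -> current in A at tip voltage v; hypothetical instrument call.
        gain is a proportional feedback gain in V/A (a tuning assumption, not from the paper).
        """
        v = v_start
        for _ in range(max_iter):
            i = measure_current(v)
            if abs(i) < tol:
                return v           # no net current: v equals the local sample potential
            v -= gain * i          # push the voltage against the sign of the current
        raise RuntimeError("feedback did not settle")

    # Toy model: a linear contact with 1 MOhm resistance to a sample sitting at 0.35 V
    print(find_local_potential(lambda v: (v - 0.35) / 1e6))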
30 CFR 250.907 - Where must I locate foundation boreholes?
Code of Federal Regulations, 2014 CFR
2014-07-01
... soil boring must not exceed 500 feet. (b) For deepwater floating platforms which utilize catenary or..., other points throughout the anchor pattern to establish the soil profile suitable for foundation design...
30 CFR 250.907 - Where must I locate foundation boreholes?
Code of Federal Regulations, 2013 CFR
2013-07-01
... soil boring must not exceed 500 feet. (b) For deepwater floating platforms which utilize catenary or..., other points throughout the anchor pattern to establish the soil profile suitable for foundation design...
... weight normally for the first month. After that point, the baby will lose weight and become irritable, and will have worsening jaundice. Other symptoms may include: Dark urine Enlarged spleen Floating stools Foul-smelling stools Pale or clay-colored ...
30 CFR 250.907 - Where must I locate foundation boreholes?
Code of Federal Regulations, 2012 CFR
2012-07-01
... soil boring must not exceed 500 feet. (b) For deepwater floating platforms which utilize catenary or..., other points throughout the anchor pattern to establish the soil profile suitable for foundation design...
NASA Technical Reports Server (NTRS)
Kelly, G. L.; Berthold, G.; Abbott, L.
1982-01-01
A 5 MHz single-board microprocessor system which incorporates an 8086 CPU and an 8087 Numeric Data Processor is used to implement the control laws for the NASA Drones for Aerodynamic and Structural Testing, Aeroelastic Research Wing II. The control laws program was executed in 7.02 msec, with initialization consuming 2.65 msec and the control law loop 4.38 msec. The software emulator execution times for these two tasks were 36.67 and 61.18 msec, respectively, for a total of 97.68 msec. The space, weight and cost reductions achieved in the present aircraft control application of this combination of a 16-bit microprocessor with an 80-bit floating point coprocessor may be obtainable in other real-time control applications.
Implementation of kernels on the Maestro processor
NASA Astrophysics Data System (ADS)
Suh, Jinwoo; Kang, D. I. D.; Crago, S. P.
Currently, most microprocessors use multiple cores to increase performance while limiting power usage. Some processors use not just a few cores, but tens of cores or even 100 cores. One such many-core microprocessor is the Maestro processor, which is based on Tilera's TILE64 processor. The Maestro chip is a 49-core, general-purpose, radiation-hardened processor designed for space applications. The Maestro processor, unlike the TILE64, has a floating point unit (FPU) in each core for improved floating point performance. The Maestro processor runs at 342 MHz clock frequency. On the Maestro processor, we implemented several widely used kernels: matrix multiplication, vector add, FIR filter, and FFT. We measured and analyzed the performance of these kernels. The achieved performance was up to 5.7 GFLOPS, and the speedup compared to single tile was up to 49 using 49 tiles.
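A back-of-the-envelope way to reproduce this style of kernel benchmark on any machine (purely illustrative; it says nothing about the Maestro's tiled implementation) is to time a dense matrix multiply and count 2·n³ floating-point operations:

    import time
    import numpy as np

    n = 1024
    a = np.random.default_rng(0).random((n, n))
    b = np.random.default_rng(1).random((n, n))

    t0 = time.perf_counter()
    c = a @ b                       # dense matrix multiply: ~2 * n**3 floating-point ops
    elapsed = time.perf_counter() - t0

    print("GFLOPS:", 2 * n ** 3 / elapsed / 1e9)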
NASA Astrophysics Data System (ADS)
Drabik, Timothy J.; Lee, Sing H.
1986-11-01
The intrinsic parallelism characteristics of easily realizable optical SIMD arrays prompt their present consideration in the implementation of highly structured algorithms for the numerical solution of multidimensional partial differential equations and the computation of fast numerical transforms. Attention is given to a system, comprising several spatial light modulators (SLMs), an optical read/write memory, and a functional block, which performs simple, space-invariant shifts on images with sufficient flexibility to implement the fastest known methods for partial differential equations as well as a wide variety of numerical transforms in two or more dimensions. Either fixed or floating-point arithmetic may be used. A performance projection of more than 1 billion floating point operations/sec using SLMs with 1000 x 1000-resolution and operating at 1-MHz frame rates is made.
Parallel processor for real-time structural control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tise, B.L.
1992-01-01
A parallel processor that is optimized for real-time linear control has been developed. This modular system consists of A/D modules, D/A modules, and floating-point processor modules. The scalable processor uses up to 1,000 Motorola DSP96002 floating-point processors for a peak computational rate of 60 GFLOPS. Sampling rates up to 625 kHz are supported by this analog-in to analog-out controller. The high processing rate and parallel architecture make this processor suitable for computing state-space equations and other multiply/accumulate-intensive digital filters. Processor features include 14-bit conversion devices, low input-output latency, 240 Mbyte/s synchronous backplane bus, low-skew clock distribution circuit, VME connection to host computer, parallelizing code generator, and look-up-tables for actuator linearization. This processor was designed primarily for experiments in structural control. The A/D modules sample sensors mounted on the structure and the floating-point processor modules compute the outputs using the programmed control equations. The outputs are sent through the D/A module to the power amps used to drive the structure's actuators. The host computer is a Sun workstation. An Open Windows-based control panel is provided to facilitate data transfer to and from the processor, as well as to control the operating mode of the processor. A diagnostic mode is provided to allow stimulation of the structure and acquisition of the structural response via sensor inputs.
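The multiply/accumulate workload described above is essentially a discrete state-space update evaluated once per sample. A generic sketch of that computation (not the Sandia control code) is:

    import numpy as np

    def state_space_step(A, B, C, D, x, u):
        """One sample of a discrete-time linear controller: x' = Ax + Bu, y = Cx + Du."""
        x_next = A @ x + B @ u
        y = C @ x + D @ u
        return x_next, y

    # Tiny example: 4 states, 2 sensor inputs, 2 actuator outputs (sizes chosen arbitrarily)
    rng = np.random.default_rng(0)
    A, B = 0.95 * np.eye(4), rng.random((4, 2))
    C, D = rng.random((2, 4)), np.zeros((2, 2))

    x = np.zeros(4)
    for _ in range(5):                              # stand-in for the high-rate sampling loop
        u = rng.random(2)                           # would come from the A/D modules
        x, y = state_space_step(A, B, C, D, x, u)   # y would go to the D/A modules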
Xu, Hui; Ding, Zongqing; Lv, Lili; Song, Dandan; Feng, Yu-Qi
2009-03-16
A new dispersive liquid-liquid microextraction based on solidification of floating organic droplet method (DLLME-SFO) was developed for the determination of five kinds of polycyclic aromatic hydrocarbons (PAHs) in environmental water samples. In this method, no specific holder, such as the needle tip of a microsyringe or a hollow fiber, is required for supporting the organic microdrop, due to the use of an organic solvent with low density and a suitable melting point. Furthermore, the extractant droplet can be collected easily by solidifying it at a lower temperature. 1-Dodecanol was chosen as the extraction solvent in this work. A series of parameters that influence extraction were investigated systematically. Under optimal conditions, enrichment factors (EFs) for PAHs were in the range of 88-118. The limits of detection (LODs) for naphthalene, diphenyl, acenaphthene, anthracene and fluoranthene were 0.045, 0.86, 0.071, 1.1 and 0.66 ng mL(-1), respectively. Good reproducibility and recovery of the method were also obtained. Compared with the traditional liquid-phase microextraction (LPME) and dispersive liquid-liquid microextraction (DLLME) methods, the proposed method achieved enrichment factors about 2 times higher than those of LPME. Moreover, the solidification of the floating organic solvent facilitated the phase transfer. And most importantly, it avoided using the high-density and toxic solvent of the traditional DLLME method. The proposed method was successfully applied to determine PAHs in environmental water samples. This simple and low-cost method provides an alternative for the analysis of non-polar compounds in complex environmental water.
A Comparative Study of Point Cloud Data Collection and Processing
NASA Astrophysics Data System (ADS)
Pippin, J. E.; Matheney, M.; Gentle, J. N., Jr.; Pierce, S. A.; Fuentes-Pineda, G.
2016-12-01
Over the past decade, there has been dramatic growth in the acquisition of publicly funded high-resolution topographic data for scientific, environmental, engineering and planning purposes. These data sets are valuable for applications of interest across a large and varied user community. However, because of the large volumes of data produced by high-resolution mapping technologies and the expense of aerial data collection, it is often difficult to collect and distribute these datasets. Furthermore, the data can be technically challenging to process, requiring software and computing resources not readily available to many users. This study presents a comparison of advanced computing hardware and software that is used to collect and process point cloud datasets, such as LIDAR scans. Activities included implementation and testing of open source libraries and applications for point cloud data processing, such as Meshlab, Blender, PDAL, and PCL. Additionally, a suite of commercial-scale applications, Skanect and CloudCompare, was applied to raw datasets. Handheld hardware solutions, a Structure Scanner and Xbox 360 Kinect V1, were tested for their ability to scan at three field locations. The resulting projects successfully scanned and processed subsurface karst features ranging from small stalactites to large rooms, as well as a surface waterfall feature. Outcomes support the feasibility of rapid sensing in 3D at field scales.
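Much of the point-cloud processing compared in studies like this reduces to simple spatial binning, for example voxel downsampling. A library-free NumPy/Python version is sketched below for illustration; it is not the Meshlab/PDAL/PCL code actually benchmarked:

    import numpy as np

    def voxel_downsample(points, voxel_size):
        """Keep one representative point (the centroid) per occupied voxel."""
        points = np.asarray(points, dtype=float)              # (N, 3) x, y, z
        keys = np.floor(points / voxel_size).astype(np.int64)
        bins = {}
        for key, p in zip(map(tuple, keys), points):          # group points by voxel index
            bins.setdefault(key, []).append(p)
        return np.array([np.mean(ps, axis=0) for ps in bins.values()])

    cloud = np.random.default_rng(0).random((100_000, 3)) * 10.0   # synthetic 10 m cube scan
    print(voxel_downsample(cloud, voxel_size=0.5).shape)           # far fewer points than input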
Steering and positioning targets for HWIL IR testing at cryogenic conditions
NASA Astrophysics Data System (ADS)
Perkes, D. W.; Jensen, G. L.; Higham, D. L.; Lowry, H. S.; Simpson, W. R.
2006-05-01
In order to increase the fidelity of hardware-in-the-loop ground-truth testing, it is desirable to create a dynamic scene of multiple, independently controlled IR point sources. ATK-Mission Research has developed and supplied the steering mirror systems for the 7V and 10V Space Simulation Test Chambers at the Arnold Engineering Development Center (AEDC), Air Force Materiel Command (AFMC). A portion of the 10V system incorporates multiple target sources beam-combined at the focal point of a 20K cryogenic collimator. Each IR source consists of a precision blackbody with cryogenic aperture and filter wheels mounted on a cryogenic two-axis translation stage. This point source target scene is steered by a high-speed steering mirror to produce further complex motion. The scene changes dynamically in order to simulate an actual operational scene as viewed by the System Under Test (SUT) as it executes various dynamic look-direction changes during its flight to a target. Synchronization and real-time hardware-in-the-loop control is accomplished using reflective memory for each subsystem control and feedback loop. This paper focuses on the steering mirror system and the required tradeoffs of optical performance, precision, repeatability and high-speed motion as well as the complications of encoder feedback calibration and operation at 20K.
VIEW OF FACILITY NO. S 20 NEAR THE POINT WHERE ...
VIEW OF FACILITY NO. S 20 NEAR THE POINT WHERE IT JOINS FACILITY NO. S 21. NOTE THE ASPHALT-FILLED NARROW-GAUGE TRACKWAY WITH SOME AREAS OF STEEL TRACK SHOWING. VIEW FACING NORTHEAST - U.S. Naval Base, Pearl Harbor, Floating Dry Dock Quay, Hurt Avenue at northwest side of Magazine Loch, Pearl City, Honolulu County, HI
Code of Federal Regulations, 2010 CFR
2010-07-01
... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS OIL AND GAS EXTRACTION POINT SOURCE CATEGORY Offshore... 40 CFR 125.30-32, any existing point source subject to this subpart must achieve the following... Minimum of 1 mg/l and maintained as close to this concentration as possible. Sanitary M91M Floating solids...
33 CFR 183.558 - Hoses and connections.
Code of Federal Regulations, 2010 CFR
2010-07-01
...: (A) The hose is severed at the point where maximum drainage of fuel would occur, (B) The boat is in its static floating position, and (C) The fuel system is filled to the capacity marked on the tank... minutes when: (A) The hose is severed at the point where maximum drainage of fuel would occur, (B) The...
33 CFR 146.104 - Safety and Security notice of arrival for foreign floating facilities.
Code of Federal Regulations, 2011 CFR
2011-07-01
... the NVMC's Web site at http://www.nvmc.uscg.gov/. (c) Updates to a submitted NOA. Unless otherwise... owner or operator of the foreign floating facility must revise and re-submit the NOA within the times required in paragraph (e) of this section. An owner or operator does not need to revise or re-submit an NOA...
33 CFR 146.104 - Safety and Security notice of arrival for foreign floating facilities.
Code of Federal Regulations, 2012 CFR
2012-07-01
... the NVMC's Web site at http://www.nvmc.uscg.gov/. (c) Updates to a submitted NOA. Unless otherwise... owner or operator of the foreign floating facility must revise and re-submit the NOA within the times required in paragraph (e) of this section. An owner or operator does not need to revise or re-submit an NOA...
33 CFR 146.104 - Safety and Security notice of arrival for foreign floating facilities.
Code of Federal Regulations, 2014 CFR
2014-07-01
... the NVMC's Web site at http://www.nvmc.uscg.gov/. (c) Updates to a submitted NOA. Unless otherwise... owner or operator of the foreign floating facility must revise and re-submit the NOA within the times required in paragraph (e) of this section. An owner or operator does not need to revise or re-submit an NOA...
33 CFR 146.104 - Safety and Security notice of arrival for foreign floating facilities.
Code of Federal Regulations, 2013 CFR
2013-07-01
... the NVMC's Web site at http://www.nvmc.uscg.gov/. (c) Updates to a submitted NOA. Unless otherwise... owner or operator of the foreign floating facility must revise and re-submit the NOA within the times required in paragraph (e) of this section. An owner or operator does not need to revise or re-submit an NOA...
Paraskevopoulou, Sivylla E; Barsakcioglu, Deren Y; Saberi, Mohammed R; Eftekhar, Amir; Constandinou, Timothy G
2013-04-30
Next-generation neural interfaces aspire to achieve real-time multi-channel systems by integrating spike sorting on chip to overcome limitations in communication channel capacity. The feasibility of this approach relies on developing highly efficient algorithms for feature extraction and clustering with the potential for low-power hardware implementation. We propose a feature extraction method, requiring no calibration, based on the first and second derivatives of the spike waveform. The accuracy and computational complexity of the proposed method are quantified and compared against commonly used feature extraction methods through simulation across four datasets (with different single units) at multiple noise levels (ranging from 5 to 20% of the signal amplitude). The average classification error is shown to be below 7% with a computational complexity of 2N-3, where N is the number of sample points in each spike. Overall, this method presents a good trade-off between accuracy and computational complexity and is thus particularly well suited for hardware-efficient implementation. Copyright © 2013 Elsevier B.V. All rights reserved.
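Because the abstract quantifies cost in difference operations, the following hedged Python sketch may help make the 2N-3 figure concrete: the first difference of an N-sample spike costs N-1 subtractions and the second difference a further N-2. The summary statistics taken from each derivative (their extrema) are an assumption for illustration, not necessarily the paper's exact feature set.

import numpy as np

def derivative_features(spike):
    """Small feature vector from a 1-D spike waveform of N samples."""
    d1 = np.diff(spike)   # first derivative:  N-1 subtractions
    d2 = np.diff(d1)      # second derivative: N-2 subtractions (2N-3 total)
    return np.array([d1.max(), d1.min(), d2.max(), d2.min()])

# Example on a synthetic biphasic spike of N = 48 samples.
t = np.linspace(-1.0, 1.0, 48)
spike = np.exp(-((t + 0.2) / 0.15) ** 2) - 0.6 * np.exp(-((t - 0.25) / 0.3) ** 2)
print(derivative_features(spike))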
NASA Technical Reports Server (NTRS)
Korn, A. O.
1975-01-01
In the late 1960s, several governmental agencies sponsored efforts to develop unmanned, powered balloon systems for scientific experimentation and military operations. Some of the programs resulted in hardware and limited flight tests; others, to date, have not progressed beyond the paper-study stage. Balloon system designs, materials, propulsion units, and capabilities are briefly described, and critical problem areas are pointed out that require further study in order to achieve operational powered balloon systems capable of long-duration flight at high altitudes.
NASA Astrophysics Data System (ADS)
Irving, D. H.; Rasheed, M.; O'Doherty, N.
2010-12-01
The efficient storage, retrieval and interactive use of subsurface data present great challenges in geodata management. Data volumes are typically massive, complex and poorly indexed, with inadequate metadata. Derived geomodels and interpretations are often tightly bound in application-centric and proprietary formats; open standards for long-term stewardship are poorly developed. Consequently, current data storage is a combination of: complex Logical Data Models (LDMs) based on file storage formats; 2D GIS tree-based indexing of spatial data; and translations of serialised memory-based storage techniques into disk-based storage. Whilst adequate for working at the mesoscale over short timeframes, these approaches all possess technical and operational shortcomings: data model complexity; anisotropy of access; limited scalability to large and complex datasets; and weak implementation and integration of metadata. High-performance hardware such as parallelised storage and Relational Database Management Systems (RDBMSs) has long been exploited in many solutions, but the underlying data structure must provide commensurate efficiencies to allow multi-user, multi-application and near-real-time data interaction. We present an open Spatially-Registered Data Structure (SRDS) built on a Massively Parallel Processing (MPP) database architecture implemented by an ANSI SQL 2008-compliant RDBMS. We propose an LDM comprising a 3D Earth model that is decomposed such that each increasing Level of Detail (LoD) is achieved by recursively halving the bin size until it is less than the error in each spatial dimension for that data point. The value of an attribute at that point is stored as a property of that point and at that LoD. It is key to the numerical efficiency of the SRDS that it is underpinned by a power-of-two relationship, thus precluding the need for computationally intensive floating point arithmetic. Our approach employed a tightly clustered MPP array with small clusters of storage, processors and memory communicating over a high-speed network interconnect. This is a shared-nothing architecture in which, unlike most other RDBMSs, resources are managed within each cluster. Data are accessed on this architecture by their primary index values, which utilise a hashing algorithm for point-to-point access. The hashing algorithm's main role is the efficient distribution of data across the clusters based on the primary index. In this study, we used 3D seismic volumes, 2D seismic profiles and borehole logs to demonstrate application in both (x,y,TWT) and (x,y,z)-space. In the SRDS the primary index is a composite column index of (x,y), avoiding the time-consuming full table scans required in tree-based systems. This means that data access is isotropic. A query for data in a specified spatial range permits retrieval recursively by point-to-point queries within each nested LoD, yielding true linear performance up to the petabyte scale, with hardware scaling as the primary limiting factor. Our architecture and LDM promote: real-time interaction with massive data volumes; streaming of result sets and server-rendered 2D/3D imagery; rigorous workflow control and auditing; and in-database algorithms run directly against data as an HPC cloud service.
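To make the power-of-two claim concrete, here is a minimal sketch in integer arithmetic only; the function names, fixed-point units, and example values are assumptions rather than the authors' implementation. The Level of Detail is found by halving the bin size (a right shift) until it drops below the per-point spatial error, and the bin index then follows from shifts and integer division, with no floating-point operations.

def lod_for_error(extent_units, error_units):
    """First LoD whose bin size (extent halved lod times) is within the
    per-point spatial error. Both arguments are positive integers in the
    same fixed-point unit (e.g. millimetres), so no floating point is needed."""
    lod = 0
    bin_size = extent_units
    while bin_size > error_units:
        bin_size >>= 1   # halve the bin: one shift per level
        lod += 1
    return lod

def bin_index(coord_units, extent_units, lod):
    """Bin index of a coordinate at a given LoD, again using shifts only."""
    bin_size = max(extent_units >> lod, 1)   # guard against over-deep LoDs
    return coord_units // bin_size

# Example: a 100 km extent with a 250 mm positional error, all in millimetres.
extent, error = 100_000_000, 250
level = lod_for_error(extent, error)
print(level, bin_index(37_123_456, extent, level))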