Sample records for decimation filter design

  1. On the application of under-decimated filter banks

    NASA Technical Reports Server (NTRS)

    Lin, Y.-P.; Vaidyanathan, P. P.

    1994-01-01

    Maximally decimated filter banks have been extensively studied in the past. A filter bank is said to be under-decimated if the number of channels is more than the decimation ratio in the subbands. A maximally decimated filter bank is well known for its application in subband coding. Another application of maximally decimated filter banks is in block filtering. Convolution through block filtering has the advantages that parallelism is increased and data are processed at a lower rate. However, the computational complexity is comparable to that of direct convolution. More recently, another type of filter bank convolver has been developed. In this scheme, the convolution is performed in the subbands. Quantization and bit allocation of subband signals are based on signal variance, as in subband coding. Consequently, for a fixed rate, the result of convolution is more accurate than is direct convolution. This type of filter bank convolver also enjoys the advantages of block filtering, parallelism, and a lower working rate. Nevertheless, like block filtering, there is no computational saving. In this article, under-decimated systems are introduced to solve the problem. The new system is decimated only by half the number of channels. Two types of filter banks can be used in the under-decimated system: the discrete Fourier transform (DFT) filter banks and the cosine modulated filter banks. They are well known for their low complexity. In both cases, the system is approximately alias free, and the overall response is equivalent to a tunable multilevel filter. Properties of the DFT filter banks and the cosine modulated filter banks can be exploited to simultaneously achieve parallelism, computational saving, and a lower working rate. Furthermore, for both systems, the implementation cost of the analysis or synthesis bank is comparable to that of one prototype filter plus some low-complexity modulation matrices. The individual analysis and synthesis filters have complex coefficients in the DFT filter banks but have real coefficients in the cosine modulated filter banks.
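
    Both filter bank families named above admit very cheap implementations: one real prototype lowpass filter plus a modulation (DFT) stage. The sketch below is a minimal, generic M-channel DFT analysis bank decimated by M/2 (echoing the "decimated by half the number of channels" idea), written directly rather than in the efficient polyphase-plus-FFT form; the prototype length and channel count are arbitrary illustrative choices, not the authors' design.

      import numpy as np
      from scipy.signal import firwin, lfilter

      def dft_analysis_bank(x, M=8, D=4, taps=64):
          # M-channel uniform DFT analysis bank, each channel decimated by D.
          # D < M (here D = M/2) makes the system "under-decimated".
          h = firwin(taps, 1.0 / M)                    # real lowpass prototype, cutoff ~ pi/M
          n = np.arange(taps)
          subbands = []
          for k in range(M):
              hk = h * np.exp(2j * np.pi * k * n / M)  # k-th analysis filter: modulated prototype
              subbands.append(lfilter(hk, 1.0, x)[::D])
          return np.array(subbands)

      # toy usage: two tones fall into different subbands
      fs = 8000.0
      t = np.arange(4096) / fs
      x = np.cos(2 * np.pi * 300 * t) + 0.5 * np.cos(2 * np.pi * 2500 * t)
      y = dft_analysis_bank(x, M=8, D=4)
      print(y.shape)                                   # (8, 1024): 8 channels at 1/4 the input rate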

  2. On the application of under-decimated filter banks

    NASA Astrophysics Data System (ADS)

    Lin, Y.-P.; Vaidyanathan, P. P.

    1994-11-01

    Maximally decimated filter banks have been extensively studied in the past. A filter bank is said to be under-decimated if the number of channels is more than the decimation ratio in the subbands. A maximally decimated filter bank is well known for its application in subband coding. Another application of maximally decimated filter banks is in block filtering. Convolution through block filtering has the advantages that parallelism is increased and data are processed at a lower rate. However, the computational complexity is comparable to that of direct convolution. More recently, another type of filter bank convolver has been developed. In this scheme, the convolution is performed in the subbands. Quantization and bit allocation of subband signals are based on signal variance, as in subband coding. Consequently, for a fixed rate, the result of convolution is more accurate than is direct convolution. This type of filter bank convolver also enjoys the advantages of block filtering, parallelism, and a lower working rate. Nevertheless, like block filtering, there is no computational saving. In this article, under-decimated systems are introduced to solve the problem. The new system is decimated only by half the number of channels. Two types of filter banks can be used in the under-decimated system: the discrete Fourier transform (DFT) filter banks and the cosine modulated filter banks. They are well known for their low complexity. In both cases, the system is approximately alias free, and the overall response is equivalent to a tunable multilevel filter. Properties of the DFT filter banks and the cosine modulated filter banks can be exploited to simultaneously achieve parallelism, computational saving, and a lower working rate.

  3. Optimal Sharpening of Compensated Comb Decimation Filters: Analysis and Design

    PubMed Central

    Troncoso Romero, David Ernesto

    2014-01-01

    Comb filters are a class of low-complexity filters especially useful for multistage decimation processes. However, the magnitude response of comb filters presents a droop in the passband region and low stopband attenuation, which is undesirable in many applications. In this work, it is shown that, for stringent magnitude specifications, sharpening compensated comb filters requires a lower-degree sharpening polynomial compared to sharpening comb filters without compensation, resulting in a solution with lower computational complexity. Using a simple three-addition compensator and an optimization-based derivation of sharpening polynomials, we introduce an effective low-complexity filtering scheme. Design examples are presented in order to show the performance improvement in terms of passband distortion and selectivity compared to other methods based on the traditional Kaiser-Hamming sharpening and the Chebyshev sharpening techniques recently introduced in the literature. PMID:24578674
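
    The droop and stopband behaviour discussed above are easy to reproduce numerically. The following sketch (a generic illustration, not the optimized design of the paper) evaluates the magnitude response of a K-stage comb filter, |H(f)| = |sin(pi M f)/(M sin(pi f))|^K, and the classic Kaiser-Hamming sharpened version 3H^2 - 2H^3, showing how sharpening reduces the passband droop.

      import numpy as np

      def comb_mag(f, M=16, K=3):
          # |H(f)| = |sin(pi*M*f) / (M*sin(pi*f))|**K, written with sinc to avoid 0/0 at f = 0
          return np.abs(np.sinc(M * f) / np.sinc(f)) ** K

      def sharpened_mag(f, M=16, K=3):
          # Kaiser-Hamming sharpening polynomial applied to the comb response: 3H^2 - 2H^3
          H = comb_mag(f, M, K)
          return np.abs(3 * H**2 - 2 * H**3)

      M = 16
      f_pb = 1.0 / (4 * M)          # an example passband edge for decimation by M
      droop_comb = 20 * np.log10(comb_mag(f_pb, M))
      droop_sharp = 20 * np.log10(sharpened_mag(f_pb, M))
      print(f"droop at f = {f_pb:.4f}: plain comb {droop_comb:.2f} dB, sharpened {droop_sharp:.2f} dB")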

  4. Optimal sharpening of compensated comb decimation filters: analysis and design.

    PubMed

    Troncoso Romero, David Ernesto; Laddomada, Massimiliano; Jovanovic Dolecek, Gordana

    2014-01-01

    Comb filters are a class of low-complexity filters especially useful for multistage decimation processes. However, the magnitude response of comb filters presents a droop in the passband region and low stopband attenuation, which is undesirable in many applications. In this work, it is shown that, for stringent magnitude specifications, sharpening compensated comb filters requires a lower-degree sharpening polynomial compared to sharpening comb filters without compensation, resulting in a solution with lower computational complexity. Using a simple three-addition compensator and an optimization-based derivation of sharpening polynomials, we introduce an effective low-complexity filtering scheme. Design examples are presented in order to show the performance improvement in terms of passband distortion and selectivity compared to other methods based on the traditional Kaiser-Hamming sharpening and the Chebyshev sharpening techniques recently introduced in the literature.

  5. Novel Digital Signal Processing and Detection Techniques.

    DTIC Science & Technology

    1980-09-01

    decimation and interpolation [11, 12]. Submitted by: Bede Liu, Department of Electrical Engineering and Computer Science, Princeton University ...on the use of recursive filters for decimation and interpolation. ...filter structure for realizing low-pass filters is developed [6, 7]. By employing decimation and interpolation, the filter uses only coefficients 0, +1, and

  6. Digital Filter ASIC for NASA Deep Space Radio Science

    NASA Technical Reports Server (NTRS)

    Kowalski, James E.

    1995-01-01

    This paper is about the implementation of an 80 MHz, 16-bit, multi-stage digital filter to decimate by 1600, providing a 50 kHz output with bandpass ripple of less than +/-0.1 dB. The chip uses two decimation-by-five units and six decimations by two, executed by a single decimation-by-two unit. The six decimations by two consist of six halfband filters, five having 30 taps and one having 51 taps. Use of a 16x16 register file for the digital delay lines enables implementation in the Vitesse 350K gate array.
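
    As a rough software analogue of the structure described above (two decimations by five followed by six decimations by two, 5 x 5 x 2^6 = 1600), the sketch below cascades generic FIR decimators from scipy; the stage ordering, filter orders, and halfband details of the ASIC are not reproduced.

      import numpy as np
      from scipy.signal import decimate

      fs_in = 80e6                                  # 80 MHz input rate, as in the abstract
      x = np.random.randn(1 << 20)                  # placeholder wideband input

      stages = [5, 5] + [2] * 6                     # 5 * 5 * 2**6 = 1600
      y = x
      for q in stages:
          y = decimate(y, q, ftype='fir', zero_phase=True)   # anti-alias FIR + downsample by q

      print(fs_in / np.prod(stages))                # 50000.0 -> 50 kHz output rate
      print(len(x) // np.prod(stages), len(y))      # output length is ~ input length / 1600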

  7. Optimized FPGA Implementation of Multi-Rate FIR Filters Through Thread Decomposition

    NASA Technical Reports Server (NTRS)

    Kobayashi, Kayla N.; He, Yutao; Zheng, Jason X.

    2011-01-01

    Multi-rate finite impulse response (MRFIR) filters are among the essential signal-processing components in spaceborne instruments where finite impulse response filters are often used to minimize nonlinear group delay and finite precision effects. Cascaded (multistage) designs of MRFIR filters are further used for large rate change ratio in order to lower the required throughput, while simultaneously achieving comparable or better performance than single-stage designs. Traditional representation and implementation of MRFIR employ polyphase decomposition of the original filter structure, whose main purpose is to compute only the needed output at the lowest possible sampling rate. In this innovation, an alternative representation and implementation technique called TD-MRFIR (Thread Decomposition MRFIR) is presented. The basic idea is to decompose MRFIR into output computational threads, in contrast to a structural decomposition of the original filter as done in the polyphase decomposition. A naive implementation of a decimation filter consisting of a full FIR followed by a downsampling stage is very inefficient, as most of the computations performed by the FIR stage are discarded through downsampling. In fact, only 1/M of the total computations are useful (M being the decimation factor). Polyphase decomposition provides an alternative view of decimation filters, where the downsampling occurs before the FIR stage, and the outputs are viewed as the sum of M sub-filters with length of N/M taps. Although this approach leads to more efficient filter designs, in general the implementation is not straightforward if the number of multipliers needs to be minimized. In TD-MRFIR, each thread represents an instance of the finite convolution required to produce a single output of the MRFIR. The filter is thus viewed as a finite collection of concurrent threads. Each of the threads completes when a convolution result (filter output value) is computed, and is activated when the first input of the convolution becomes available. Thus, the new threads get spawned at exactly the rate of N/M, where N is the total number of taps, and M is the decimation factor. Existing threads retire at the same rate of N/M. The implementation of an MRFIR is thus transformed into a problem of statically scheduling the minimum number of multipliers such that all threads can be completed on time. Solving the static scheduling problem is rather straightforward if one examines the Thread Decomposition Diagram, which is a table-like diagram that has rows representing computation threads and columns representing time. The control logic of the MRFIR can be implemented using simple counters. Instead of decomposing MRFIRs into subfilters as suggested by polyphase decomposition, the thread decomposition diagrams transform the problem into a familiar one of static scheduling, which can be easily solved as the input rate is constant.
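
    The efficiency argument above (only one output is kept per M inputs, so the FIR can be split into M subfilters of length N/M that run at the low rate) is the essence of polyphase decimation. The sketch below is a generic polyphase FIR decimator in NumPy, shown only to make that argument concrete; it is not the TD-MRFIR scheduler itself, and the sizes are illustrative.

      import numpy as np

      def polyphase_decimate(x, h, M):
          # y[m] = sum_k sum_j e_k[j] * x[(m - j)*M - k], where e_k[j] = h[j*M + k]
          N = len(h)
          h = np.concatenate([h, np.zeros((-N) % M)])     # pad so the tap count is a multiple of M
          E = h.reshape(-1, M).T                          # E[k] = k-th polyphase component of h
          xpad = np.concatenate([np.zeros(M - 1), x])     # zeros so the k-delayed branches exist
          L = len(x) // M
          y = np.zeros(L)
          for k in range(M):
              branch = xpad[M - 1 - k:][::M][:L]          # x delayed by k, then downsampled by M
              y += np.convolve(branch, E[k])[:L]          # each branch filtered at the low rate
          return y

      # sanity check against "filter at the full rate, then discard M-1 of every M outputs"
      rng = np.random.default_rng(0)
      x, h, M = rng.standard_normal(1000), rng.standard_normal(48), 4
      naive = np.convolve(x, h)[:len(x)][::M][:len(x) // M]
      print(np.allclose(naive, polyphase_decimate(x, h, M)))   # True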

  8. Application of DFT Filter Banks and Cosine Modulated Filter Banks in Filtering

    NASA Technical Reports Server (NTRS)

    Lin, Yuan-Pei; Vaidyanathan, P. P.

    1994-01-01

    None given. This is a proposal for a paper to be presented at APCCAS '94 in Taipei, Taiwan. (From outline): This work is organized as follows: Sec. II is devoted to the construction of the new 2m channel under-decimated DFT filter bank. Implementation and complexity of this DFT filter bank are discussed therein. In a similar manner, the new 2m channel cosine modulated filter bank is discussed in Sec. III. Design examples are given in Sec. IV.

  9. Surface smoothing, decimation, and their effects on 3D biological specimens.

    PubMed

    Veneziano, Alessio; Landi, Federica; Profico, Antonio

    2018-06-01

    Smoothing and decimation filters are commonly used to restore the realistic appearance of virtual biological specimens, but they can cause a loss of topological information of unknown extent. In this study, we analyzed the effect of smoothing and decimation on a 3D mesh to highlight the consequences of an inappropriate use of these filters. Topological noise was simulated on four anatomical regions of the virtual reconstruction of an orangutan cranium. Sequential levels of smoothing and decimation were applied, and their effects were analyzed on the overall topology of the 3D mesh and on linear and volumetric measurements. Different smoothing algorithms affected mesh topology and measurements differently, although the influence on the latter was generally low. Decimation always produced detrimental effects on both topology and measurements. The application of smoothing and decimation, both separate and combined, is capable of recovering topological information. Based on the results, objective guidelines are provided to minimize information loss when using smoothing and decimation on 3D meshes. © 2018 Wiley Periodicals, Inc.

  10. The analysis of decimation and interpolation in the linear canonical transform domain.

    PubMed

    Xu, Shuiqing; Chai, Yi; Hu, Youqiang; Huang, Lei; Feng, Li

    2016-01-01

    Decimation and interpolation are the two basic building blocks of multirate digital signal processing systems. As the linear canonical transform (LCT) has been shown to be a powerful tool for optics and signal processing, it is worthwhile and interesting to analyze decimation and interpolation in the LCT domain. In this paper, the definition of the equivalent filter in the LCT domain is given first. Then, by applying this definition, the direct implementation structure and polyphase networks for the decimator and interpolator in the LCT domain are proposed. Finally, perfect reconstruction expressions for differential filters in the LCT domain are presented as an application. The theorems proposed in this study form the basis for generalizing multirate signal processing to the LCT domain and can advance filter bank theory in the LCT domain.

  11. Multiresolution image gathering and restoration

    NASA Technical Reports Server (NTRS)

    Fales, Carl L.; Huck, Friedrich O.; Alter-Gartenberg, Rachel; Rahman, Zia-Ur

    1992-01-01

    In this paper we integrate multiresolution decomposition with image gathering and restoration. This integration leads to a Wiener-matrix filter that accounts for the aliasing, blurring, and noise in image gathering, together with the digital filtering and decimation in signal decomposition. Moreover, as implemented here, the Wiener-matrix filter completely suppresses the blurring and raster effects of the image-display device. We demonstrate that this filter can significantly improve the fidelity and visual quality produced by conventional image reconstruction. The extent of this improvement, in turn, depends on the design of the image-gathering device.

  12. Wireless sensor platform for harsh environments

    NASA Technical Reports Server (NTRS)

    Garverick, Steven L. (Inventor); Yu, Xinyu (Inventor); Toygur, Lemi (Inventor); He, Yunli (Inventor)

    2009-01-01

    Reliable and efficient sensing becomes increasingly difficult in harsher environments. A sensing module for high-temperature conditions utilizes a digital, rather than analog, implementation on a wireless platform to achieve good quality data transmission. The module comprises a sensor, integrated circuit, and antenna. The integrated circuit includes an amplifier, A/D converter, decimation filter, and digital transmitter. To operate, an analog signal is received by the sensor, amplified by the amplifier, converted into a digital signal by the A/D converter, filtered by the decimation filter to address the quantization error, and output in digital format by the digital transmitter and antenna.

  13. CMOS Bit-Stream Band-Pass Beamforming

    DTIC Science & Technology

    2016-03-31

    with direct IF sampling, most of the signal processing, including digital down-conversion (DDC), is carried out in the digital domain, and I/Q...level digitized signals are directly processed without decimation filtering for I/Q DDC and phase shifting. This novel BSP approach replaces bulky...positive feedback. The resonator center frequency of fs/4 (260 MHz) simplifies the design of the DDC. 4-bit tunable capacitors adjust the center frequency

  14. Visual function improvement using photochromic and selective blue-violet light filtering spectacle lenses in patients affected by retinal diseases.

    PubMed

    Colombo, L; Melardi, E; Ferri, P; Montesano, G; Samir Attaalla, S; Patelli, F; De Cillà, S; Savaresi, G; Rossetti, L

    2017-08-22

    To evaluate functional visual parameters using photochromic and selective blue-violet light filtering spectacle lenses in patients affected by central or peripheral scotoma due to retinal diseases. Sixty patients were enrolled in this study: 30 patients affected by central scotoma, group 1, and 30 affected by peripheral scotoma, group 2. Black on White Best Corrected Visual Acuity (BW-BCVA), White on Black Best Corrected Visual Acuity (WB-BCVA), Mars Contrast Sensitivity (CS) and a Glare Test (GT) were administered to all patients. Test results with the blue-violet filter, a short-pass yellow filter, and no filter were compared. All scores from test results increased significantly with blue-violet filters for all patients. The mean BW-BCVA increased from 0.30 ± 0.20 to 0.36 ± 0.21 decimals in group 1 and from 0.44 ± 0.22 to 0.51 ± 0.23 decimals in group 2 (Mean ± SD, p < 0.0001 in both cases). The mean WB-BCVA increased from 0.31 ± 0.19 to 0.38 ± 0.23 decimals in group 1 and from 0.46 ± 0.20 to 0.56 ± 0.22 decimals in group 2 (Mean ± SD, p < 0.0001 in both cases). The letter count for the CS test increased from 26.7 ± 7.9 to 30.06 ± 7.8 in group 1 (Mean ± SD, p = 0.0005) and from 31.5 ± 7.6 to 33.72 ± 7.3 in group 2 (Mean ± SD, p = 0.031). GT was significantly reduced: the letter count increased from 20.93 ± 5.42 to 22.82 ± 4.93 in group 1 (Mean ± SD, p < 0.0001) and from 24.15 ± 5.5 to 25.97 ± 4.7 in group 2 (Mean ± SD, p < 0.0001). Higher scores were recorded with the Blue filter compared to the Yellow filter in all tests (p < 0.05). No significant differences in any test results could be detected between the Yellow filter and the No filter condition. The use of a combination of a photochromic lens with a selective blue-violet light filter showed functional benefit in all evaluated patients.

  15. A decimal carry-free adder

    NASA Astrophysics Data System (ADS)

    Nikmehr, Hooman; Phillips, Braden; Lim, Cheng-Chew

    2005-02-01

    Recently, decimal arithmetic has become attractive in the financial and commercial world including banking, tax calculation, currency conversion, insurance and accounting. Although computers are still carrying out decimal calculation using software libraries and binary floating-point numbers, it is likely that in the near future, all processors will be equipped with units performing decimal operations directly on decimal operands. One critical building block for some complex decimal operations is the decimal carry-free adder. This paper discusses the mathematical framework of the addition, introduces a new signed-digit format for representing decimal numbers and presents an efficient architectural implementation. Delay estimation analysis shows that the adder offers improved performance over earlier designs.
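
    The essence of carry-free decimal addition is a signed-digit representation in which each position can absorb a bounded transfer from its neighbour, so no carry ever ripples. The sketch below uses a generic digit set {-6, ..., 6} and a two-step (interim sum plus transfer) rule purely for illustration; it is not the signed-digit format or the architecture proposed in this paper.

      def carry_free_add(x, y):
          # x, y: signed-digit decimal numbers, digits in -6..6, least-significant digit first.
          # Each position picks a transfer t in {-1, 0, 1} from local information only,
          # so the carry chain never exceeds one position.
          n = max(len(x), len(y))
          x = x + [0] * (n - len(x))
          y = y + [0] * (n - len(y))
          w = [0] * n                        # interim digit sums, kept in -5..5
          t = [0] * (n + 1)                  # transfer into each position
          for i in range(n):
              p = x[i] + y[i]                # position sum, -12..12
              if p >= 5:
                  t[i + 1] = 1
              elif p <= -5:
                  t[i + 1] = -1
              w[i] = p - 10 * t[i + 1]
          s = [w[i] + t[i] for i in range(n)]   # final digits stay within -6..6
          s.append(t[n])
          return s

      def encode(v):
          # hypothetical encoding into the -6..6 digit set: digits 7..9 become d-10 with +1 upward
          digits = []
          while v or not digits:
              v, d = divmod(v, 10)
              if d > 6:
                  d, v = d - 10, v + 1
              digits.append(d)
          return digits

      def to_int(digits):
          return sum(d * 10**i for i, d in enumerate(digits))

      a, b = 4768, 3957
      print(to_int(carry_free_add(encode(a), encode(b))) == a + b)   # True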

  16. Optimized FPGA Implementation of Multi-Rate FIR Filters Through Thread Decomposition

    NASA Technical Reports Server (NTRS)

    Zheng, Jason Xin; Nguyen, Kayla; He, Yutao

    2010-01-01

    Multirate (decimation/interpolation) filters are among the essential signal processing components in spaceborne instruments where Finite Impulse Response (FIR) filters are often used to minimize nonlinear group delay and finite-precision effects. Cascaded (multi-stage) designs of Multi-Rate FIR (MRFIR) filters are further used for large rate change ratio, in order to lower the required throughput while simultaneously achieving comparable or better performance than single-stage designs. Traditional representation and implementation of MRFIR employ polyphase decomposition of the original filter structure, whose main purpose is to compute only the needed output at the lowest possible sampling rate. In this paper, an alternative representation and implementation technique, called TD-MRFIR (Thread Decomposition MRFIR), is presented. The basic idea is to decompose MRFIR into output computational threads, in contrast to a structural decomposition of the original filter as done in the polyphase decomposition. Each thread represents an instance of the finite convolution required to produce a single output of the MRFIR. The filter is thus viewed as a finite collection of concurrent threads. The technical details of TD-MRFIR will be explained, first showing its applicability to the implementation of downsampling, upsampling, and resampling FIR filters, and then describing a general strategy to optimally allocate the number of filter taps. A particular FPGA design of multi-stage TD-MRFIR for the L-band radar of NASA's SMAP (Soil Moisture Active Passive) instrument is demonstrated; and its implementation results in several targeted FPGA devices are summarized in terms of the functional (bit width, fixed-point error) and performance (time closure, resource usage, and power estimation) parameters.

  17. Fast Poisson noise removal by biorthogonal Haar domain hypothesis testing

    NASA Astrophysics Data System (ADS)

    Zhang, B.; Fadili, M. J.; Starck, J.-L.; Digel, S. W.

    2008-07-01

    Methods based on hypothesis tests (HTs) in the Haar domain are widely used to denoise Poisson count data. Facing large datasets or real-time applications, Haar-based denoisers have to use the decimated transform to meet limited-memory or computation-time constraints. Unfortunately, for regular underlying intensities, decimation yields discontinuous estimates and strong “staircase” artifacts. In this paper, we propose to combine the HT framework with the decimated biorthogonal Haar (Bi-Haar) transform instead of the classical Haar. The Bi-Haar filter bank is normalized such that the p-values of Bi-Haar coefficients (p) provide good approximation to those of Haar (pH) for high-intensity settings or large scales; for low-intensity settings and small scales, we show that p are essentially upper-bounded by pH. Thus, we may apply the Haar-based HTs to Bi-Haar coefficients to control a prefixed false positive rate. By doing so, we benefit from the regular Bi-Haar filter bank to gain a smooth estimate while always maintaining a low computational complexity. A Fisher-approximation-based threshold implementing the HTs is also established. The efficiency of this method is illustrated on an example of hyperspectral-source-flux estimation.

  18. High-Speed Rapid-Single-Flux-Quantum Multiplexer and Demultiplexer Design and Testing

    DTIC Science & Technology

    2007-08-22

    Herr, N. Vukovic, C. A. Mancini, M. F. Bocko, and M. J. Feldman, "High speed testing of a four-bit RSFQ decimation digital filter," IEEE Trans. Appl... [61] A. M. Herr, C. A. Mancini, N. Vukovic, M. F. Bocko, and M. J. Feldman, "High-speed operation of a 64-bit circular shift register," IEEE Trans... A rich library of basic cells such as flip-flops, buffers, adders, multipliers, clock generator circuits, and phase-locking circuits has been

  19. A deterministic compressive sensing model for bat biosonar.

    PubMed

    Hague, David A; Buck, John R; Bilik, Igal

    2012-12-01

    The big brown bat (Eptesicus fuscus) uses frequency modulated (FM) echolocation calls to accurately estimate range and resolve closely spaced objects in clutter and noise. They resolve glints spaced down to 2 μs in time delay, which surpasses what traditional signal processing techniques can achieve using the same echolocation call. The Matched Filter (MF) attains 10-12 μs resolution while the Inverse Filter (IF) achieves higher resolution at the cost of significantly degraded detection performance. Recent work by Fontaine and Peremans [J. Acoust. Soc. Am. 125, 3052-3059 (2009)] demonstrated that a sparse representation of bat echolocation calls coupled with a decimating sensing method facilitates distinguishing closely spaced objects over realistic SNRs. Their work raises the intriguing question of whether sensing approaches structured more like a mammalian auditory system contain the necessary information for the hyper-resolution observed in behavioral tests. This research estimates sparse echo signatures using a gammatone filterbank decimation sensing method which loosely models the processing of the bat's auditory system. The decimated filterbank outputs are processed with ℓ1 minimization. Simulations demonstrate that this model maintains higher resolution than the MF and significantly better detection performance than the IF for SNRs of 5-45 dB while undersampling the return signal by a factor of six.
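
    The last step mentioned above, recovering a sparse echo signature from decimated filterbank outputs by ℓ1 minimization, can be mimicked with any elementary sparse solver. The sketch below runs ISTA (iterative soft thresholding) on a random undersampling matrix purely as an illustration; it does not model the gammatone filterbank, the bat call, or the paper's detection statistics.

      import numpy as np

      def ista(A, y, lam=0.05, iters=500):
          # minimize 0.5*||Ax - y||^2 + lam*||x||_1 by iterative soft thresholding
          L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the smooth term
          x = np.zeros(A.shape[1])
          for _ in range(iters):
              z = x - A.T @ (A @ x - y) / L          # gradient step
              x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
          return x

      rng = np.random.default_rng(1)
      n, m, k = 200, 40, 3                           # 200-sample scene undersampled to 40 measurements
      x_true = np.zeros(n)
      x_true[rng.choice(n, k, replace=False)] = 2.0 + rng.standard_normal(k)   # sparse "glints"
      A = rng.standard_normal((m, n)) / np.sqrt(m)   # stand-in for the decimated sensing operator
      y = A @ x_true + 0.01 * rng.standard_normal(m)
      x_hat = ista(A, y)
      print(np.sort(np.flatnonzero(x_true)), np.sort(np.argsort(np.abs(x_hat))[-k:]))   # true vs recovered support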

  20. Teaching Students with Cognitive Impairment Chained Mathematical Task of Decimal Subtraction Using Simultaneous Prompting

    ERIC Educational Resources Information Center

    Rao, Shaila; Kane, Martha T.

    2009-01-01

    This study assessed effectiveness of simultaneous prompting procedure in teaching two middle school students with cognitive impairment decimal subtraction using regrouping. A multiple baseline, multiple probe design replicated across subjects successfully taught two students with cognitive impairment at middle school level decimal subtraction…

  1. Understanding decimal numbers: a foundation for correct calculations.

    PubMed

    Pierce, Robyn U; Steinle, Vicki A; Stacey, Kaye C; Widjaja, Wanty

    2008-01-01

    This paper reports on the effectiveness of an intervention designed to improve nursing students' conceptual understanding of decimal numbers. Results of recent intervention studies have indicated some success at improving nursing students' numeracy through practice in applying procedural rules for calculation and working in real or simulated practical contexts. However, in this study we identified a fundamental problem: a significant minority of students had an inadequate understanding of decimal numbers. The intervention aimed to improve nursing students' basic understanding of the size of decimal numbers, so that, firstly, calculation rules are more meaningful, and secondly, students can interpret decimal numbers (whether digital output or results of calculations) sensibly. A well-researched, time-efficient diagnostic instrument was used to identify individuals with an inadequate understanding of decimal numbers. We describe a remedial intervention that resulted in significant improvement on a delayed post-intervention test. We conclude that nurse educators should consider diagnosing and, as necessary, planning for remediation of students' foundational understanding of decimal numbers before teaching procedural rules.

  2. Event Compression Using Recursive Least Squares Signal Processing.

    DTIC Science & Technology

    1980-07-01

    decimation of the Burstl signal with and without all-pole prefiltering to reduce aliasing. Figures 3.32a-c and 3.33a-c show the same examples but with 4/1...to reduce aliasing, we found that it did not improve the quality of the event-compressed signals. If filtering must be performed, all-pole filtering...

  3. Digital Signal Processing Techniques for the GIFTS SM EDU

    NASA Technical Reports Server (NTRS)

    Tian, Jialin; Reisse, Robert A.; Gazarik, Michael J.

    2007-01-01

    The Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) Sensor Module (SM) Engineering Demonstration Unit (EDU) is a high resolution spectral imager designed to measure infrared (IR) radiance using a Fourier transform spectrometer (FTS). The GIFTS instrument employs three Focal Plane Arrays (FPAs), which gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The raw interferogram measurements are radiometrically and spectrally calibrated to produce radiance spectra, which are further processed to obtain atmospheric profiles via retrieval algorithms. This paper describes several digital signal processing (DSP) techniques involved in the development of the calibration model. In the first stage, the measured raw interferograms must undergo a series of processing steps that include filtering, decimation, and detector nonlinearity correction. The digital filtering is achieved by employing a linear-phase even-length FIR complex filter that is designed based on the optimum equiripple criterion. Next, the detector nonlinearity effect is compensated for using a set of pre-determined detector response characteristics. In the next stage, a phase correction algorithm is applied to the decimated interferograms. This is accomplished by first estimating the phase function from the spectral phase response of the windowed interferogram, and then correcting the entire interferogram based on the estimated phase function. In the calibration stage, we first compute the spectral responsivity based on the previous results and the ideal Planck blackbody spectra at the given temperatures, from which the calibrated ambient blackbody (ABB), hot blackbody (HBB), and scene spectra can be obtained. In the post-calibration stage, we estimate the Noise Equivalent Spectral Radiance (NESR) from the calibrated ABB and HBB spectra. The NESR is generally considered a measure of the instrument noise performance, and can be estimated as the standard deviation of calibrated radiance spectra from multiple scans. To obtain an estimate of the FPA performance, we developed an efficient method of generating pixel performance assessments. In addition, a random pixel selection scheme is developed based on the pixel performance evaluation. This would allow us to perform the calibration procedures on a random pixel population that is a good statistical representation of the entire FPA. The design and implementation of each individual component will be discussed in detail.
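
    The filtering-and-decimation step described above can be prototyped with a standard equiripple (Parks-McClellan) FIR design. The sketch below is a generic stand-in rather than the GIFTS complex filter: it designs a linear-phase, even-length equiripple lowpass with scipy.signal.remez, filters a simulated interferogram, and decimates the result; the decimation factor, band edges, and tap count are illustrative assumptions.

      import numpy as np
      from scipy.signal import remez, lfilter

      D = 8                                        # illustrative decimation factor
      numtaps = 96                                 # even-length, linear-phase FIR
      # passband up to 80% of the post-decimation Nyquist, stopband from 120% of it (fs = 1)
      bands = [0.0, 0.8 * 0.5 / D, 1.2 * 0.5 / D, 0.5]
      h = remez(numtaps, bands, desired=[1.0, 0.0], fs=1.0)

      n = np.arange(1 << 14)
      # simulated interferogram: a few cosines plus noise, a placeholder for real FTS data
      ifg = (np.cos(2 * np.pi * 0.01 * n) + 0.3 * np.cos(2 * np.pi * 0.03 * n)
             + 0.05 * np.random.randn(n.size))

      decimated = lfilter(h, 1.0, ifg)[::D]        # filter, then keep every D-th sample
      print(h.size, ifg.size, decimated.size)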

  4. Design of Cancelable Palmprint Templates Based on Look Up Table

    NASA Astrophysics Data System (ADS)

    Qiu, Jian; Li, Hengjian; Dong, Jiwen

    2018-03-01

    A novel cancelable palmprint template generation scheme is proposed in this paper. Firstly, the Gabor filter and a chaotic matrix are used to extract palmprint features. These are then arranged into a row vector and divided into equal-size blocks. The blocks are converted to corresponding decimals and mapped to look-up tables, forming the final cancelable palmprint features based on the selected check bits. Finally, collaborative representation based classification with regularized least squares is used for classification. Experimental results on the Hong Kong PolyU Palmprint Database verify that the proposed cancelable templates can achieve very high performance and security levels. Meanwhile, the scheme can also satisfy the needs of real-time applications.

  5. GIFTS SM EDU Level 1B Algorithms

    NASA Technical Reports Server (NTRS)

    Tian, Jialin; Gazarik, Michael J.; Reisse, Robert A.; Johnson, David G.

    2007-01-01

    The Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) Sensor Module (SM) Engineering Demonstration Unit (EDU) is a high resolution spectral imager designed to measure infrared (IR) radiances using a Fourier transform spectrometer (FTS). The GIFTS instrument employs three focal plane arrays (FPAs), which gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The raw interferogram measurements are radiometrically and spectrally calibrated to produce radiance spectra, which are further processed to obtain atmospheric profiles via retrieval algorithms. This paper describes the GIFTS SM EDU Level 1B algorithms involved in the calibration. The GIFTS Level 1B calibration procedures can be subdivided into four blocks. In the first block, the measured raw interferograms are first corrected for the detector nonlinearity distortion, followed by the complex filtering and decimation procedure. In the second block, a phase correction algorithm is applied to the filtered and decimated complex interferograms. The resulting imaginary part of the spectrum contains only the noise component of the uncorrected spectrum. Additional random noise reduction can be accomplished by applying a spectral smoothing routine to the phase-corrected spectrum. The phase correction and spectral smoothing operations are performed on a set of interferogram scans for both ambient and hot blackbody references. To continue with the calibration, we compute the spectral responsivity based on the previous results, from which the calibrated ambient blackbody (ABB), hot blackbody (HBB), and scene spectra can be obtained. We can now estimate the noise equivalent spectral radiance (NESR) from the calibrated ABB and HBB spectra. The correction schemes that compensate for the fore-optics offsets and off-axis effects are also implemented. In the third block, we developed an efficient method of generating pixel performance assessments. In addition, a random pixel selection scheme is designed based on the pixel performance evaluation. Finally, in the fourth block, the single pixel algorithms are applied to the entire FPA.

  6. Color Your Classroom V: A Math Guide on the Secondary Level.

    ERIC Educational Resources Information Center

    Mississippi Materials & Resource Center, Gulfport.

    This curriculum guide, designed for use with secondary migrant students, presents mathematics activities in the areas of whole numbers, fractions, decimals, percent, measurement, geometry, probability and statistics, and sets. Within the categories of whole numbers, fractions, and decimals are activities using addition, subtraction,…

  7. Psychology and Didactics of Mathematics in France--An Overview.

    ERIC Educational Resources Information Center

    Vergnaud, Gerard

    1983-01-01

    Examples are given of the variety of mathematical concepts and problems being studied by psychologically oriented researchers in France. Work on decimals, circles, natural numbers, decimal and real numbers, and didactic transposition are included. Comments on designing research on mathematics concept formation conclude the article. (MNS)

  8. Library Classification 2020

    ERIC Educational Resources Information Center

    Harris, Christopher

    2013-01-01

    In this article the author explores how a new library classification system might be designed using some aspects of the Dewey Decimal Classification (DDC) and ideas from other systems to create something that works for school libraries in the year 2020. By examining what works well with the Dewey Decimal System, what features should be carried…

  9. A Digital Radio Receiver for Ionospheric Research

    DTIC Science & Technology

    2006-06-01

    amplification, the signals are digitized and then processed by a digital down-converter (DDC) and decimating low-pass filter. The resultant digital...images. ...the University of Calgary under a Contributions Agreement contract awarded by the Canadian Space Agency. The present paper follows an earlier article

  10. Analog/digital pH meter system I.C.

    NASA Technical Reports Server (NTRS)

    Vincent, Paul; Park, Jea

    1992-01-01

    The project utilizes design automation software tools to design, simulate, and fabricate a pH meter integrated circuit (IC) system, including a successive-approximation-type seven-bit analog-to-digital converter circuit, using a 1.25 micron N-Well CMOS MOSIS process. The input voltage ranges from 0.5 to 1.0 V, derived from a special type of pH sensor, and the output is a three-digit decimal number display of pH with one decimal point.

  11. Exploring the Acoustic Nonlinearity for Monitoring Complex Aerospace Structures

    DTIC Science & Technology

    2008-02-27

    nonlinear elastic waves, embedded ultrasonics, nonlinear diagnostics, aerospace structures, structural joints. ...sampling, 100 MHz bandwidth with noise and anti-aliasing filters, general-purpose alias-protected decimation for all sample rates and quad digital down...conversion (DDC) with up to 40 MHz IF bandwidth. Specified resolution of the NI PXI 5142 is 14 bits with the noise floor approaching -85 dB. Such a

  12. Math Academy: Dining Out! Explorations in Fractions, Decimals, & Percents. Book 4: Supplemental Math Materials for Grades 3-8

    ERIC Educational Resources Information Center

    Rimbey, Kimberly

    2007-01-01

    Created by teachers for teachers, the Math Academy tools and activities included in this booklet were designed to create hands-on activities and a fun learning environment for the teaching of mathematics to the students. This booklet contains the "Math Academy--Dining Out! Explorations in Fractions, Decimals, and Percents," which teachers can use…

  13. Single-Chip FPGA Azimuth Pre-Filter for SAR

    NASA Technical Reports Server (NTRS)

    Gudim, Mimi; Cheng, Tsan-Huei; Madsen, Soren; Johnson, Robert; Le, Charles T-C; Moghaddam, Mahta; Marina, Miguel

    2005-01-01

    A field-programmable gate array (FPGA) on a single lightweight, low-power integrated-circuit chip has been developed to implement an azimuth pre-filter (AzPF) for a synthetic-aperture radar (SAR) system. The AzPF is needed to enable more efficient use of data-transmission and data-processing resources: In broad terms, the AzPF reduces the volume of SAR data by effectively reducing the azimuth resolution, without loss of range resolution, during times when end users are willing to accept lower azimuth resolution as the price of rapid access to SAR imagery. The data-reduction factor is selectable at a decimation factor, M, of 2, 4, 8, 16, or 32 so that users can trade resolution against processing and transmission delays. In principle, azimuth filtering could be performed in the frequency domain by use of fast-Fourier-transform processors. However, in the AzPF, azimuth filtering is performed in the time domain by use of finite-impulse-response filters. The reason for choosing the time-domain approach over the frequency-domain approach is that the time-domain approach demands less memory and a lower memory-access rate. The AzPF operates on the raw digitized SAR data. The AzPF includes a digital in-phase/quadrature (I/Q) demodulator. In general, an I/Q demodulator effects a complex down-conversion of its input signal followed by low-pass filtering, which eliminates undesired sidebands. In the AzPF case, the I/Q demodulator takes offset video range echo data to the complex baseband domain, ensuring preservation of signal phase through the azimuth pre-filtering process. In general, in an SAR I/Q demodulator, the intermediate frequency (fI) is chosen to be a quarter of the range-sampling frequency and the pulse-repetition frequency (fPR) is chosen to be a multiple of fI. The AzPF also includes a polyphase spatial-domain pre-filter comprising four weighted integrate-and-dump filters with programmable decimation factors and overlapping phases. To prevent aliasing of signals, the bandwidth of the AzPF is made 80 percent of fPR/M. The choice of four as the number of overlapping phases is justified by prior research in which it was shown that a filter of length 4M can effect an acceptable transfer function. The figure depicts prototype hardware comprising the AzPF and ancillary electronic circuits. The hardware was found to satisfy performance requirements in real-time tests at a sampling rate of 100 MHz.
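
    Two of the operations described above, complex down-conversion at fI = fs/4 and an integrate-and-dump pre-filter with a selectable decimation factor M, have simple software analogues. The sketch below is a deliberately simplified, single-branch illustration (the real AzPF uses four overlapping weighted phases and an 80%-of-fPR/M bandwidth); the signal and rates are placeholders.

      import numpy as np

      def iq_demodulate(x):
          # down-convert by fs/4: multiply by exp(-j*2*pi*(fs/4)*n/fs) = (-1j)**n
          return x * (-1j) ** np.arange(x.size)

      def integrate_and_dump(x, M):
          # boxcar pre-filter: average non-overlapping blocks of M samples (decimation by M)
          L = (x.size // M) * M
          return x[:L].reshape(-1, M).mean(axis=1)

      n = np.arange(1 << 16)                        # placeholder for offset-video range samples
      x = np.cos(2 * np.pi * 0.25 * n + 0.3) + 0.01 * np.random.randn(n.size)

      baseband = iq_demodulate(x)                   # a tone at fs/4 lands at complex baseband (DC)
      for M in (2, 4, 8, 16, 32):                   # the AzPF's selectable decimation factors
          y = integrate_and_dump(baseband, M)
          print(M, y.size)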

  14. FPGA-based fused smart sensor for dynamic and vibration parameter extraction in industrial robot links.

    PubMed

    Rodriguez-Donate, Carlos; Morales-Velazquez, Luis; Osornio-Rios, Roque Alfredo; Herrera-Ruiz, Gilberto; de Jesus Romero-Troncoso, Rene

    2010-01-01

    Intelligent robotics demands the integration of smart sensors that allow the controller to efficiently measure physical quantities. Industrial manipulator robots require a constant monitoring of several parameters such as motion dynamics, inclination, and vibration. This work presents a novel smart sensor to estimate motion dynamics, inclination, and vibration parameters on industrial manipulator robot links based on two primary sensors: an encoder and a triaxial accelerometer. The proposed smart sensor implements a new methodology based on an oversampling technique, averaging decimation filters, FIR filters, finite differences and linear interpolation to estimate the interest parameters, which are computed online utilizing digital hardware signal processing based on field programmable gate arrays (FPGA).

  15. FPGA-Based Fused Smart Sensor for Dynamic and Vibration Parameter Extraction in Industrial Robot Links

    PubMed Central

    Rodriguez-Donate, Carlos; Morales-Velazquez, Luis; Osornio-Rios, Roque Alfredo; Herrera-Ruiz, Gilberto; de Jesus Romero-Troncoso, Rene

    2010-01-01

    Intelligent robotics demands the integration of smart sensors that allow the controller to efficiently measure physical quantities. Industrial manipulator robots require a constant monitoring of several parameters such as motion dynamics, inclination, and vibration. This work presents a novel smart sensor to estimate motion dynamics, inclination, and vibration parameters on industrial manipulator robot links based on two primary sensors: an encoder and a triaxial accelerometer. The proposed smart sensor implements a new methodology based on an oversampling technique, averaging decimation filters, FIR filters, finite differences and linear interpolation to estimate the interest parameters, which are computed online utilizing digital hardware signal processing based on field programmable gate arrays (FPGA). PMID:22319345

  16. Active feedforward noise control and signal tracking of headsets: Electroacoustic analysis and system implementation.

    PubMed

    Bai, Mingsian R; Pan, Weichi; Chen, Hungyu

    2018-03-01

    Active noise control (ANC) of headsets is revisited in this paper. An in-depth electroacoustic analysis of the combined loudspeaker-cavity headset system is conducted on the basis of electro-mechano-acoustical analogous circuits. Model matching of the primary path and the secondary path leads to a feedforward control architecture. The ideal controller sheds some light on the key parameters that affect the noise reduction performance. Filtered-X least-mean-squares algorithm is employed to implement the feedforward controller on a digital signal processor. Since the relative delay of the primary path and the secondary path is crucial to the noise reduction performance, multirate signal processing with polyphase implementation is utilized to minimize the effective analog-digital conversion delay in the secondary path. Ad hoc decimation and interpolation filters are designed in order not to introduce excessive phase delays at the cutoff. Real-time experiments are undertaken to validate the implemented ANC system. Listening tests are also conducted to compare the fixed controller and the adaptive controller in terms of noise reduction and signal tracking performance for three noise types. The results have demonstrated that the fixed feedforward controller achieved satisfactory noise reduction performance and signal tracking quality.
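
    The adaptive half of the controller described above is the filtered-x LMS update, in which the reference signal is first passed through a model of the secondary path before driving the coefficient adaptation. The sketch below is a minimal single-rate FxLMS loop with made-up primary and secondary paths and a perfect path estimate; it omits the multirate/polyphase machinery, the fixed controller, and all real-time aspects discussed in the paper.

      import numpy as np

      rng = np.random.default_rng(0)
      P = np.array([0.0, 0.0, 0.9, 0.4, 0.2])   # hypothetical primary path (noise source -> ear)
      S = np.array([0.0, 0.8, 0.3])             # hypothetical secondary path (speaker -> ear)
      S_hat = S.copy()                          # assume a perfect secondary-path estimate

      L, mu, N = 32, 0.01, 20000                # controller length, step size, number of samples
      w = np.zeros(L)
      x = rng.standard_normal(N)                # reference picked up by the feedforward microphone
      d = np.convolve(x, P)[:N]                 # disturbance reaching the error microphone

      xbuf, fxbuf, ybuf = np.zeros(L), np.zeros(L), np.zeros(S.size)
      err = np.zeros(N)
      for n in range(N):
          xbuf = np.roll(xbuf, 1); xbuf[0] = x[n]
          y = w @ xbuf                                      # anti-noise output sample
          ybuf = np.roll(ybuf, 1); ybuf[0] = y
          e = d[n] + S @ ybuf                               # residual heard at the ear
          fxbuf = np.roll(fxbuf, 1); fxbuf[0] = S_hat @ xbuf[:S_hat.size]   # filtered reference
          w -= mu * e * fxbuf                               # FxLMS coefficient update
          err[n] = e

      print(np.mean(err[:2000] ** 2), np.mean(err[-2000:] ** 2))   # residual power drops after adaptation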

  17. Design, Simulation and Characteristics Research of the Interface Circuit based on nano-polysilicon thin films pressure sensor

    NASA Astrophysics Data System (ADS)

    Zhao, Xiaosong; Zhao, Xiaofeng; Yin, Liang

    2018-03-01

    This paper presents an interface circuit for a nano-polysilicon thin-film pressure sensor. The interface circuit consists of an instrumentation amplifier and an analog-to-digital converter (ADC). The instrumentation amplifier, which has a high common-mode rejection ratio (CMRR), is implemented as a three-stage current-feedback structure. To satisfy the high-precision requirements of the pressure-sensor measurement system, a 1/f noise corner of 26.5 mHz is achieved through chopping, at a noise density of 38.2 nV/sqrt(Hz). The ripple introduced by chopping is suppressed by a continuous-time ripple-reduction loop (RRL), which keeps the output ripple below the noise level. The ADC achieves 16 bits of resolution by combining a fourth-order single-bit sigma-delta modulator with a digital decimation filter, yielding a high-precision integrated pressure-sensor interface circuit.
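
    The back end described above, a single-bit sigma-delta modulator followed by a digital decimation filter, can be emulated in a few lines. The sketch below uses a first-order modulator and a plain averaging decimator purely for illustration; the paper's modulator is fourth order and its decimation filter is correspondingly more elaborate.

      import numpy as np

      def first_order_sdm(x):
          # first-order single-bit sigma-delta modulator: integrator, 1-bit quantizer, feedback
          bits, acc, fb = np.empty(x.size), 0.0, 0.0
          for i, xi in enumerate(x):
              acc += xi - fb
              fb = 1.0 if acc >= 0 else -1.0
              bits[i] = fb
          return bits

      osr = 64                                        # oversampling ratio = decimation factor
      t = np.arange(1 << 14)
      x = 0.5 * np.sin(2 * np.pi * t / (osr * 32))    # slow input, well inside the decimated band

      bits = first_order_sdm(x)
      n_full = (bits.size // osr) * osr
      dec = bits[:n_full].reshape(-1, osr).mean(axis=1)     # boxcar decimation of the bitstream
      ref = x[:n_full].reshape(-1, osr).mean(axis=1)        # ideal block averages of the input
      print(np.sqrt(np.mean((dec - ref) ** 2)))             # leftover quantization noise is small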

  18. Development of a Real-Time Hardware-in-the-Loop Power Systems Simulation Platform to Evaluate Commercial Microgrid Controllers

    DTIC Science & Technology

    2016-02-19

    power converter, a solar photovoltaic (PV) system with inverter, and eighteen breakers. (Future work will require either validation of these models...custom control software. (For this project, this was done for the energy storage, solar PV, and breakers.) Implement several relay protection functions...for the PV array is given in Section A.3. This profile was generated by applying a decimation/interpolation filter to the signal from a solar flux

  19. TR-1203: Development of a Real-Time Hardware-in-the-Loop Power Systems Simulation Platform to Evaluate Commercial Microgrid Controllers

    DTIC Science & Technology

    2016-02-23

    power converter, a solar photovoltaic (PV) system with inverter, and eighteen breakers. (Future work will require either validation of these models or...control software. (For this project, this was done for the energy storage, solar PV, and breakers.) Implement several relay protection functions to...the PV array is given in Section A.3. This profile was generated by applying a decimation/interpolation filter to the signal from a solar flux point

  20. Independent Research and Independent Exploratory Development Programs: FY92 Annual Report

    DTIC Science & Technology

    1993-04-01

    transform of an ERP provides a record of ERP energy at different times and scales. It does this by producing a set of filtered time series at different...that the coefficients at any level are a series that measures energy within the bandwidth of that level as a function of time. For this reason it is...1 to 25 Hz, and decimated to a final sampling rate of 50 Hz. The prestimulus baseline (200 ms) was adjusted to zero to remove any DC offset

  1. Information Management System for Electronic Voting In Support of the Schieffelin Award for Excellence in Teaching

    DTIC Science & Technology

    2001-09-01

    oldz3 decimal(5,3), @sel1 decimal(5,3), @sel2 decimal(5,3), @sel3 decimal(5,3), @sel4 decimal(5,3), @sel5 decimal(5,3), @sel6 decimal(5,3...tnpSchieffelinHistory WHERE EmployeeID = @s4 and CalendarYear = @year SET @sel4 = @z4formula + @oldsel4 UPDATE tnpSchieffelinHistory SET...SelectedOnBallotScore = @sel4 WHERE EmployeeID = @s4 and CalendarYear = @year END

  2. Flexible and unique representations of two-digit decimals.

    PubMed

    Zhang, Li; Chen, Min; Lin, Chongde; Szűcs, Denes

    2014-09-01

    We examined the representation of two-digit decimals through studying distance and compatibility effects in magnitude comparison tasks in four experiments. Using number pairs with different leftmost digits, we found both the second digit distance effect and compatibility effect with two-digit integers but only the second digit distance effect with two-digit pure decimals. This suggests that both integers and pure decimals are processed in a compositional manner. In contrast, neither the second digit distance effect nor the compatibility effect was observed in two-digit mixed decimals, thereby showing no evidence for compositional processing of two-digit mixed decimals. However, when the relevance of the rightmost digit processing was increased by adding some decimals pairs with the same leftmost digits, both pure and mixed decimals produced the compatibility effect. Overall, results suggest that the processing of decimals is flexible and depends on the relevance of unique digit positions. This processing mode is different from integer analysis in that two-digit mixed decimals demonstrate parallel compositional processing only when the rightmost digit is relevant. Findings suggest that people probably do not represent decimals by simply ignoring the decimal point and converting them to natural numbers. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. Multi-Rate Acquisition for Dead Time Reduction in Magnetic Resonance Receivers: Application to Imaging With Zero Echo Time.

    PubMed

    Marjanovic, Josip; Weiger, Markus; Reber, Jonas; Brunner, David O; Dietrich, Benjamin E; Wilm, Bertram J; Froidevaux, Romain; Pruessmann, Klaas P

    2018-02-01

    For magnetic resonance imaging of tissues with very short transverse relaxation times, radio-frequency excitation must be immediately followed by data acquisition with fast spatial encoding. In zero-echo-time (ZTE) imaging, excitation is performed while the readout gradient is already on, causing data loss due to an initial dead time. One major dead time contribution is the settling time of the filters involved in signal down-conversion. In this paper, a multi-rate acquisition scheme is proposed to minimize dead time due to filtering. Short filters and high output bandwidth are used initially to minimize settling time. With increasing time since the signal onset, longer filters with better frequency selectivity enable stronger signal decimation. In this way, significant dead time reduction is accomplished at only a slight increase in the overall amount of output data. Multi-rate acquisition was implemented with a two-stage filter cascade in a digital receiver based on a field-programmable gate array. In ZTE imaging in a phantom and in vivo, dead time reduction by multi-rate acquisition is shown to improve image quality and expand the feasible bandwidth while increasing the amount of data collected by only a few percent.
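
    The trade-off exploited above, that a short filter settles quickly but decimates weakly while a long, selective filter settles slowly, can be quantified from the group delay of a linear-phase FIR stage. The sketch below compares the dead time implied by one long decimating filter with that of a short first-stage filter taking the earliest samples; all rates and tap counts are hypothetical, not the values used in the receiver.

      import numpy as np
      from scipy.signal import firwin

      fs_adc = 10e6                                   # hypothetical ADC rate (10 MHz)

      def settle_us(numtaps, fs):
          # group delay of a linear-phase FIR = (numtaps - 1) / 2 samples, here in microseconds
          return (numtaps - 1) / 2 / fs * 1e6

      # one long, selective filter decimating by 64 in a single step
      h_long = firwin(1023, 1.0 / 64)
      # a short first-stage filter with high output bandwidth (decimation by 4 only)
      h_short = firwin(31, 1.0 / 4)

      print(f"dead time ~{settle_us(h_long.size, fs_adc):.1f} us if all data wait for the long filter, "
            f"~{settle_us(h_short.size, fs_adc):.1f} us when the first samples come from the short filter")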

  4. Digital Intermediate Frequency Receiver Module For Use In Airborne Sar Applications

    DOEpatents

    Tise, Bertice L.; Dubbert, Dale F.

    2005-03-08

    A digital IF receiver (DRX) module directly compatible with advanced radar systems such as synthetic aperture radar (SAR) systems. The DRX can combine a 1 G-Sample/sec 8-bit ADC with a high-speed digital signal processor, such as high-gate-count FPGA technology or ASICs, to realize a wideband IF receiver. DSP operations implemented in the DRX can include quadrature demodulation and multi-rate, variable-bandwidth IF filtering. Pulse-to-pulse (Doppler domain) filtering can also be implemented in the form of a presummer (accumulator) and an azimuth prefilter. An out-of-band noise source can be employed to provide a dither signal to the ADC, which is later removed by digital signal processing. Both the range and Doppler domain filtering operations can be implemented using a unique pane architecture which allows on-the-fly selection of the filter decimation factor and, hence, the filter bandwidth. The DRX module can include a standard VME-64 interface for control, status, and programming. An interface can provide phase history data to the real-time image formation processors. A third front-panel data port (FPDP) interface can send wide-bandwidth, raw phase histories to a real-time phase history recorder for ground processing.

  5. A Classification Methodology and Retrieval Model to Support Software Reuse

    DTIC Science & Technology

    1988-01-01

    Dewey Decimal Classification (DDC 18), an enumerative scheme, occupies 40 pages [Buchanan 1979]. Langridge [1973] states that the facets listed in the...sense of historical importance or widespread use. The schemes are: Dewey Decimal Classification (DDC), Universal Decimal Classification (UDC...Classification Systems ... 2.3.3 Library Classification ... 2.3.3.1 Dewey Decimal Classification ... 2.3.3.2 Universal Decimal Classification ...

  6. 5 Indicators of Decimal Understandings

    ERIC Educational Resources Information Center

    Cramer, Kathleen; Monson, Debra; Ahrendt, Sue; Colum, Karen; Wiley, Bethann; Wyberg, Terry

    2015-01-01

    The authors of this article collaborated with fourth-grade teachers from two schools to support implementation of a research-based fraction and decimal curriculum (Rational Number Project: Fraction Operations and Initial Decimal Ideas). Through this study, they identified five indicators of rich conceptual understanding of decimals, which are…

  7. Common magnitude representation of fractions and decimals is task dependent.

    PubMed

    Zhang, Li; Fang, Qiaochu; Gabriel, Florence C; Szűcs, Denes

    2016-01-01

    Although several studies have compared the representation of fractions and decimals, no study has investigated whether fractions and decimals, as two types of rational numbers, share a common representation of magnitude. The current study aimed to answer the question of whether fractions and decimals share a common representation of magnitude and whether the answer is influenced by task paradigms. We included two different number pairs, which were presented sequentially: fraction-decimal mixed pairs and decimal-fraction mixed pairs in all four experiments. Results showed that when the mixed pairs were very close numerically with the distance 0.1 or 0.3, there was a significant distance effect in the comparison task but not in the matching task. However, when the mixed pairs were further apart numerically with the distance 0.3 or 1.3, the distance effect appeared in the matching task regardless of the specific stimuli. We conclude that magnitudes of fractions and decimals can be represented in a common manner, but how they are represented is dependent on the given task. Fractions and decimals could be translated into a common representation of magnitude in the numerical comparison task. In the numerical matching task, fractions and decimals also shared a common representation. However, both of them were represented coarsely, leading to a weak distance effect. Specifically, fractions and decimals produced a significant distance effect only when the numerical distance was larger.

  8. Longer is Larger--Or is It?

    ERIC Educational Resources Information Center

    Roche, Anne

    2005-01-01

    The author cites research from students' misconceptions of decimal notation that indicates that many students treat decimals as another whole number to the right of the decimal point. This "whole number thinking" leads some students to believe, in the context of comparing decimals, that "longer is larger" (for example, 0.45 is larger than 0.8…

  9. National Elevation Dataset

    USGS Publications Warehouse

    ,

    2002-01-01

    The National Elevation Dataset (NED) is a new raster product assembled by the U.S. Geological Survey. NED is designed to provide National elevation data in a seamless form with a consistent datum, elevation unit, and projection. Data corrections were made in the NED assembly process to minimize artifacts, perform edge matching, and fill sliver areas of missing data. NED has a resolution of one arc-second (approximately 30 meters) for the conterminous United States, Hawaii, Puerto Rico and the island territories and a resolution of two arc-seconds for Alaska. NED data sources have a variety of elevation units, horizontal datums, and map projections. In the NED assembly process the elevation values are converted to decimal meters as a consistent unit of measure, NAD83 is consistently used as horizontal datum, and all the data are recast in a geographic projection. Older DEM's produced by methods that are now obsolete have been filtered during the NED assembly process to minimize artifacts that are commonly found in data produced by these methods. Artifact removal greatly improves the quality of the slope, shaded-relief, and synthetic drainage information that can be derived from the elevation data. Figure 2 illustrates the results of this artifact removal filtering. NED processing also includes steps to adjust values where adjacent DEM's do not match well, and to fill sliver areas of missing data between DEM's. These processing steps ensure that NED has no void areas and artificial discontinuities have been minimized. The artifact removal filtering process does not eliminate all of the artifacts. In areas where the only available DEM is produced by older methods, then "striping" may still occur.

  10. The Role of Domain-General Cognitive Abilities and Decimal Labels in At-Risk Fourth-Grade Students' Decimal Magnitude Understanding

    ERIC Educational Resources Information Center

    Malone, Amelia Schneider; Loehr, Abbey M.; Fuchs, Lynn S.

    2017-01-01

    The purpose of the study was to determine whether individual differences in at-risk 4th graders' language comprehension, nonverbal reasoning, concept formation, working memory, and use of decimal labels (i.e., place value, point, incorrect place value, incorrect fraction, or whole number) are related to their decimal magnitude understanding.…

  11. Estimation of color filter array data from JPEG images for improved demosaicking

    NASA Astrophysics Data System (ADS)

    Feng, Wei; Reeves, Stanley J.

    2006-02-01

    On-camera demosaicking algorithms are necessarily simple and therefore do not yield the best possible images. However, off-camera demosaicking algorithms face the additional challenge that the data has been compressed and therefore corrupted by quantization noise. We propose a method to estimate the original color filter array (CFA) data from JPEG-compressed images so that more sophisticated (and better) demosaicking schemes can be applied to get higher-quality images. The JPEG image formation process, including simple demosaicking, color space transformation, chrominance channel decimation and DCT, is modeled as a series of matrix operations followed by quantization on the CFA data, which is estimated by least squares. An iterative method is used to conserve memory and speed computation. Our experiments show that the mean square error (MSE) with respect to the original CFA data is reduced significantly using our algorithm, compared to that of unprocessed JPEG and deblocked JPEG data.
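
    The least-squares estimation step can be illustrated, in a hedged way, by a simple iterative solver: the Python sketch below uses a Landweber iteration to recover x from y = A x corrupted by noise, with a small random matrix standing in for the linearized JPEG pipeline acting on CFA data. The function name, step size, and toy operator are illustrative assumptions, not the authors' algorithm.

        import numpy as np

        def landweber_least_squares(A, y, n_iter=200, step=None):
            # Iterative least-squares estimate of x from y ~= A x, useful when A is
            # too large or ill-conditioned to invert directly.
            if step is None:
                step = 1.0 / np.linalg.norm(A, 2) ** 2   # spectral norm keeps the iteration stable
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                x += step * A.T @ (y - A @ x)            # gradient step on ||Ax - y||^2 / 2
            return x

        # Toy stand-in for the linearized JPEG pipeline acting on CFA samples.
        rng = np.random.default_rng(3)
        A = rng.standard_normal((80, 60))
        x_true = rng.standard_normal(60)
        y = A @ x_true + 0.01 * rng.standard_normal(80)  # stand-in for quantization noise
        x_hat = landweber_least_squares(A, y)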

  12. Parts and 'holes': gaps in rational number sense among children with vs. without mathematical learning disabilities.

    PubMed

    Mazzocco, Michèle M M; Devlin, Kathleen T

    2008-09-01

    Many middle-school students struggle with decimals and fractions, even if they do not have a mathematical learning disability (MLD). In the present longitudinal study, we examined whether children with MLD have weaker rational number knowledge than children whose difficulty with rational numbers occurs in the absence of MLD. We found that children with MLD failed to accurately name decimals, to correctly rank order decimals and/or fractions, and to identify equivalent ratios (e.g. 0.5 = 1/2); they also 'identified' incorrect equivalents (e.g. 0.05 = 0.50). Children with low math achievement but no MLD accurately named decimals and identified equivalent pairs, but failed to correctly rank order decimals and fractions. Thus failure to accurately name decimals was an indicator of MLD; but accurate naming was no guarantee of rational number knowledge - most children who failed to correctly rank order fractions and decimals tests passed the naming task. Most children who failed the ranking tests at 6th grade also failed at 8th grade. Our findings suggest that a simple task involving naming and rank ordering fractions and decimals may be a useful addition to in-class assessments used to determine children's learning of rational numbers.

  13. From rational numbers to algebra: separable contributions of decimal magnitude and relational understanding of fractions.

    PubMed

    DeWolf, Melissa; Bassok, Miriam; Holyoak, Keith J

    2015-05-01

    To understand the development of mathematical cognition and to improve instructional practices, it is critical to identify early predictors of difficulty in learning complex mathematical topics such as algebra. Recent work has shown that performance with fractions on a number line estimation task predicts algebra performance, whereas performance with whole numbers on similar estimation tasks does not. We sought to distinguish more specific precursors to algebra by measuring multiple aspects of knowledge about rational numbers. Because fractions are the first numbers that are relational expressions to which students are exposed, we investigated how understanding the relational bipartite format (a/b) of fractions might connect to later algebra performance. We presented middle school students with a battery of tests designed to measure relational understanding of fractions, procedural knowledge of fractions, and placement of fractions, decimals, and whole numbers onto number lines as well as algebra performance. Multiple regression analyses revealed that the best predictors of algebra performance were measures of relational fraction knowledge and ability to place decimals (not fractions or whole numbers) onto number lines. These findings suggest that at least two specific components of knowledge about rational numbers--relational understanding (best captured by fractions) and grasp of unidimensional magnitude (best captured by decimals)--can be linked to early success with algebraic expressions. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. Coherent UDWDM PON with joint subcarrier reception at OLT.

    PubMed

    Kottke, Christoph; Fischer, Johannes Karl; Elschner, Robert; Frey, Felix; Hilt, Jonas; Schubert, Colja; Schmidt, Daniel; Wu, Zifeng; Lankl, Berthold

    2014-07-14

    In this contribution, we report on the experimental investigation of an ultra-dense wavelength-division multiplexing (UDWDM) upstream link with up to 700 × 2.488 Gb/s polarization-division multiplexing differential quadrature phase-shift keying parallel upstream user channels transmitted over 80 km of standard single-mode fiber. We discuss challenges of the digital signal processing in the optical line terminal arising from the joint reception of several upstream user channels. We present solutions for resource and cost-efficient realization of the required channel separation, matched filtering, down-conversion and decimation as well as realization of the clock recovery and polarization demultiplexing for each individual channel.

  15. An algorithm to compute the sequency ordered Walsh transform

    NASA Technical Reports Server (NTRS)

    Larsen, H.

    1976-01-01

A fast sequency-ordered Walsh transform algorithm is presented; it is complementary to the sequency-ordered fast Walsh transform introduced by Manz (1972), which eliminates Gray-code reordering through a modification of the basic fast Hadamard transform structure. The new algorithm retains the advantages of its complement (it is in place and is its own inverse), while differing in having a decimation-in-time structure, accepting data in normal order, and returning the coefficients in bit-reversed sequency order. Applications include estimation of Walsh power spectra for a random process, sequency filtering, computing logical autocorrelations, and selective bit reversing.
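
    The decimation-in-time butterfly structure at the heart of such algorithms can be sketched in a few lines. The Python function below implements a plain in-place fast Walsh-Hadamard transform in natural (Hadamard) order; the bit-reversed sequency ordering of Larsen's algorithm is not reproduced here, and the function name is illustrative.

        import numpy as np

        def fwht(x):
            # In-place fast Walsh-Hadamard transform (natural/Hadamard order).
            # Input length must be a power of two; O(N log N) butterfly operations.
            a = np.asarray(x, dtype=float).copy()
            h = 1
            while h < len(a):
                for i in range(0, len(a), 2 * h):
                    for j in range(i, i + h):
                        u, v = a[j], a[j + h]
                        a[j], a[j + h] = u + v, u - v
                h *= 2
            return a

        # The transform is its own inverse up to a factor of N.
        x = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0])
        assert np.allclose(fwht(fwht(x)) / len(x), x)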

  16. Effect of the image resolution on the statistical descriptors of heterogeneous media.

    PubMed

    Ledesma-Alonso, René; Barbosa, Romeli; Ortegón, Jaime

    2018-02-01

    The characterization and reconstruction of heterogeneous materials, such as porous media and electrode materials, involve the application of image processing methods to data acquired by scanning electron microscopy or other microscopy techniques. Among them, binarization and decimation are critical in order to compute the correlation functions that characterize the microstructure of the above-mentioned materials. In this study, we present a theoretical analysis of the effects of the image-size reduction, due to the progressive and sequential decimation of the original image. Three different decimation procedures (random, bilinear, and bicubic) were implemented and their consequences on the discrete correlation functions (two-point, line-path, and pore-size distribution) and the coarseness (derived from the local volume fraction) are reported and analyzed. The chosen statistical descriptors (correlation functions and coarseness) are typically employed to characterize and reconstruct heterogeneous materials. A normalization for each of the correlation functions has been performed. When the loss of statistical information has not been significant for a decimated image, its normalized correlation function is forecast by the trend of the original image (reference function). In contrast, when the decimated image does not hold statistical evidence of the original one, the normalized correlation function diverts from the reference function. Moreover, the equally weighted sum of the average of the squared difference, between the discrete correlation functions of the decimated images and the reference functions, leads to a definition of an overall error. During the first stages of the gradual decimation, the error remains relatively small and independent of the decimation procedure. Above a threshold defined by the correlation length of the reference function, the error becomes a function of the number of decimation steps. At this stage, some statistical information is lost and the error becomes dependent on the decimation procedure. These results may help us to restrict the amount of information that one can afford to lose during a decimation process, in order to reduce the computational and memory cost, when one aims to diminish the time consumed by a characterization or reconstruction technique, yet maintaining the statistical quality of the digitized sample.
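
    As a rough illustration of the quantities involved, the Python sketch below binarizes an image, decimates it by a factor of two using 2x2 block averaging (a crude stand-in for the bilinear procedure), and evaluates a discrete two-point correlation function along one axis; the array names, threshold, and random test image are assumptions for illustration only.

        import numpy as np

        def binarize(img, threshold=0.5):
            # Phase map: 1 for the phase of interest, 0 otherwise.
            return (img >= threshold).astype(np.uint8)

        def decimate_by_two(phase):
            # Reduce each non-overlapping 2x2 block to its mean, then re-binarize.
            h, w = phase.shape
            blocks = phase[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
            return (blocks.mean(axis=(1, 3)) >= 0.5).astype(np.uint8)

        def two_point_correlation(phase, max_lag):
            # S2(r): probability that two pixels separated by r pixels along a row
            # both belong to the phase of interest.
            return np.array([np.mean(phase[:, :phase.shape[1] - r] * phase[:, r:])
                             for r in range(max_lag + 1)])

        rng = np.random.default_rng(0)
        phase = binarize(rng.random((256, 256)))
        print(two_point_correlation(phase, 8))                    # reference image
        print(two_point_correlation(decimate_by_two(phase), 8))   # decimated image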

  17. Effect of the image resolution on the statistical descriptors of heterogeneous media

    NASA Astrophysics Data System (ADS)

    Ledesma-Alonso, René; Barbosa, Romeli; Ortegón, Jaime

    2018-02-01

    The characterization and reconstruction of heterogeneous materials, such as porous media and electrode materials, involve the application of image processing methods to data acquired by scanning electron microscopy or other microscopy techniques. Among them, binarization and decimation are critical in order to compute the correlation functions that characterize the microstructure of the above-mentioned materials. In this study, we present a theoretical analysis of the effects of the image-size reduction, due to the progressive and sequential decimation of the original image. Three different decimation procedures (random, bilinear, and bicubic) were implemented and their consequences on the discrete correlation functions (two-point, line-path, and pore-size distribution) and the coarseness (derived from the local volume fraction) are reported and analyzed. The chosen statistical descriptors (correlation functions and coarseness) are typically employed to characterize and reconstruct heterogeneous materials. A normalization for each of the correlation functions has been performed. When the loss of statistical information has not been significant for a decimated image, its normalized correlation function is forecast by the trend of the original image (reference function). In contrast, when the decimated image does not hold statistical evidence of the original one, the normalized correlation function diverts from the reference function. Moreover, the equally weighted sum of the average of the squared difference, between the discrete correlation functions of the decimated images and the reference functions, leads to a definition of an overall error. During the first stages of the gradual decimation, the error remains relatively small and independent of the decimation procedure. Above a threshold defined by the correlation length of the reference function, the error becomes a function of the number of decimation steps. At this stage, some statistical information is lost and the error becomes dependent on the decimation procedure. These results may help us to restrict the amount of information that one can afford to lose during a decimation process, in order to reduce the computational and memory cost, when one aims to diminish the time consumed by a characterization or reconstruction technique, yet maintaining the statistical quality of the digitized sample.

  18. Binary Number System Training for Graduate Foreign Students at New York Institute of Technology.

    ERIC Educational Resources Information Center

    Sudsataya, Nuntawun

    This thesis describes the design, development, implementation, and evaluation of a training module to instruct graduate foreign students to learn the representation of the binary system and the method of decimal-binary conversion. The designer selected programmed instruction as the method of instruction and used the "lean" approach to…

  19. The Story of PI

    NASA Technical Reports Server (NTRS)

    Apostol, Tom M. (Editor)

    1989-01-01

The early history and the uses of the mathematical notation - pi - are presented through both film footage and computer animation in this 'Project Mathematics' series video. Pi comes from the first letter in the Greek word for perimeter. Archimedes, an early Greek mathematician, formulated the equations for the computation of a circle's area using pi and was the first person to seriously approximate pi numerically, although only to a few decimal places. By 1985, pi had been approximated to over one billion decimal places and was found to have no repeating pattern. One use of pi is as an analytical tool: the calculation of its approximation serves to check the accuracy of supercomputers and software designs.

  20. Double-S Decimals, Mathematics: 5211.20.

    ERIC Educational Resources Information Center

    Dade County Public Schools, Miami, FL.

    The last of four guidebooks in the sequence, this booklet uses UICSM's "stretcher and shrinker" approach in developing place value, and four operations with decimals, conversion between fractions and decimals, and applications to measurement and rate problems. Overall goals, performance objectives for the course, teaching suggestions,…

  1. INSPECTION MEANS FOR INDUCTION MOTORS

    DOEpatents

    Williams, A.W.

    1959-03-10

An apparatus is described for inspecting electric motors and, more especially, for detecting faulty end rings in squirrel-cage induction motors while the motor is running. An electronic circuit for conversion of excess-3 binary coded serial decimal numbers to straight binary coded serial decimal numbers is reported. The converter of the invention in its basic form accepts excess-3 coded pulse words of a type having an x algebraic sign digit followed serially by a plurality of x decimal digits in order of decreasing significance, preceding a y algebraic sign digit followed serially by a plurality of y decimal digits in order of decreasing significance. A switching matrix is coupled to the input circuit and is internally connected to produce serial straight binary coded pulse groups indicative of the excess-3 coded input. A stepping circuit is coupled to the switching matrix and to a synchronous counter having a plurality of x decimal digit and a plurality of y decimal digit indicator terminals. The stepping circuit steps the counter in synchronism with the serial binary pulse group output from the switching matrix to successively produce pulses at corresponding ones of the x and y decimal digit indicator terminals. The combinations of straight binary coded pulse groups and corresponding decimal digit indicator signals so produced comprise a basic output suitable for application to a variety of output apparatus.
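
    For readers unfamiliar with the code, excess-3 represents each decimal digit d as the 4-bit binary value of d + 3, so recovering the straight binary value of a digit string amounts to subtracting 3 from each group and accumulating by powers of ten. The Python sketch below is a software analogue of that mapping only; it ignores the sign digit, the switching matrix, and the counter hardware, and the function name is illustrative.

        def excess3_to_int(groups):
            # groups: 4-bit excess-3 digit strings, most significant decimal digit first.
            value = 0
            for g in groups:
                d = int(g, 2) - 3                  # undo the excess-3 offset
                if not 0 <= d <= 9:
                    raise ValueError("invalid excess-3 digit: " + g)
                value = value * 10 + d             # accumulate decimal digits
            return value

        # Example: the excess-3 word 0101 1100 0110 encodes the decimal number 293.
        print(excess3_to_int(["0101", "1100", "0110"]))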

  2. Anti-aliasing filters for deriving high-accuracy DEMs from TLS data: A case study from Freeport, Texas

    NASA Astrophysics Data System (ADS)

    Xiong, L.; Wang, G.; Wessel, P.

    2017-12-01

Terrestrial laser scanning (TLS), also known as ground-based Light Detection and Ranging (LiDAR), has been frequently applied to build bare-earth digital elevation models (DEMs) for high-accuracy geomorphology studies. The point clouds acquired from TLS often achieve a spatial resolution at fingerprint (e.g., 3 cm × 3 cm) to handprint (e.g., 10 cm × 10 cm) level. A downsampling process has to be applied to decimate the massive point clouds and obtain portable DEMs. It is well known that downsampling can result in aliasing that causes different signal components to become indistinguishable when the signal is reconstructed from the datasets with a lower sampling rate. Conventional DEMs are mainly the results of upsampling of sparse elevation measurements from land surveying, satellite remote sensing, and aerial photography. As a consequence, the effects of aliasing have not been fully investigated in the open literature of DEMs. This study aims to investigate the spatial aliasing problem and implement an anti-aliasing procedure for regridding dense TLS data. The TLS data collected in the beach and dune area near Freeport, Texas in the summer of 2015 are used for this study. The core idea of the anti-aliasing procedure is to apply a low-pass spatial filter prior to conducting downsampling. This article describes the successful use of a fourth-order Butterworth low-pass spatial filter employed in the Generic Mapping Tools (GMT) software package as an anti-aliasing filter. The filter can be applied as an isotropic filter with a single cutoff wavelength or as an anisotropic filter with different cutoff wavelengths in the X and Y directions. The cutoff wavelength for the isotropic filter is recommended to be three times the grid size of the target DEM.
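
    The essence of the procedure (low-pass filter the gridded surface, then subsample) can be sketched in Python as below. The fourth-order Butterworth response is applied in the spatial-frequency domain with the cutoff set to three times the target grid size, as the abstract recommends; the grid spacings, random test surface, and function name are illustrative assumptions, and this is not the GMT implementation.

        import numpy as np

        def butterworth_lowpass_2d(z, dx, cutoff_wavelength, order=4):
            # Isotropic Butterworth low-pass applied in the spatial-frequency domain.
            # cutoff_wavelength is in the same units as the grid spacing dx.
            ny, nx = z.shape
            fx = np.fft.fftfreq(nx, d=dx)
            fy = np.fft.fftfreq(ny, d=dx)
            fr = np.sqrt(fx[np.newaxis, :] ** 2 + fy[:, np.newaxis] ** 2)
            fc = 1.0 / cutoff_wavelength
            h = 1.0 / np.sqrt(1.0 + (fr / fc) ** (2 * order))
            return np.real(np.fft.ifft2(np.fft.fft2(z) * h))

        # Toy example: a 3 cm point-cloud grid decimated to a 30 cm DEM.
        dx_fine, dx_coarse = 0.03, 0.30
        z = np.random.default_rng(1).random((300, 300))
        z_smooth = butterworth_lowpass_2d(z, dx_fine, cutoff_wavelength=3 * dx_coarse)
        step = int(round(dx_coarse / dx_fine))
        dem = z_smooth[::step, ::step]            # downsample only after anti-alias filtering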

  3. Anti-aliasing filters for deriving high-accuracy DEMs from TLS data: A case study from Freeport, Texas

    NASA Astrophysics Data System (ADS)

    Xiong, Lin.; Wang, Guoquan; Wessel, Paul

    2017-03-01

    Terrestrial laser scanning (TLS), also known as ground-based Light Detection and Ranging (LiDAR), has been frequently applied to build bare-earth digital elevation models (DEMs) for high-accuracy geomorphology studies. The point clouds acquired from TLS often achieve a spatial resolution at fingerprint (e.g., 3 cm×3 cm) to handprint (e.g., 10 cm×10 cm) level. A downsampling process has to be applied to decimate the massive point clouds and obtain manageable DEMs. It is well known that downsampling can result in aliasing that causes different signal components to become indistinguishable when the signal is reconstructed from the datasets with a lower sampling rate. Conventional DEMs are mainly the results of upsampling of sparse elevation measurements from land surveying, satellite remote sensing, and aerial photography. As a consequence, the effects of aliasing caused by downsampling have not been fully investigated in the open literature of DEMs. This study aims to investigate the spatial aliasing problem of regridding dense TLS data. The TLS data collected from the beach and dune area near Freeport, Texas in the summer of 2015 are used for this study. The core idea of the anti-aliasing procedure is to apply a low-pass spatial filter prior to conducting downsampling. This article describes the successful use of a fourth-order Butterworth low-pass spatial filter employed in the Generic Mapping Tools (GMT) software package as an anti-aliasing filter. The filter can be applied as an isotropic filter with a single cutoff wavelength or as an anisotropic filter with two different cutoff wavelengths in the X and Y directions. The cutoff wavelength for the isotropic filter is recommended to be three times the grid size of the target DEM.

  4. Nifty Nines and Repeating Decimals

    ERIC Educational Resources Information Center

    Brown, Scott A.

    2016-01-01

    The traditional technique for converting repeating decimals to common fractions can be found in nearly every algebra textbook that has been published, as well as in many precalculus texts. However, students generally encounter repeating decimal numerals earlier than high school when they study rational numbers in prealgebra classes. Therefore, how…

  5. Repeating Decimals: An Alternative Teaching Approach

    ERIC Educational Resources Information Center

    Appova, Aina K.

    2017-01-01

    To help middle school students make better sense of decimals and fraction, the author and an eighth-grade math teacher worked on a 90-minute lesson that focused on representing repeating decimals as fractions. They embedded experimentations and explorations using technology and calculators to help promote students' intuitive and conceptual…

  6. The Universal Decimal Classification: Some Factors Concerning Its Origins, Development, and Influence.

    ERIC Educational Resources Information Center

    McIlwaine, I. C.

    1997-01-01

    Discusses the history and development of the Universal Decimal Classification (UDC). Topics include the relationship with Dewey Decimal Classification; revision process; structure; facet analysis; lack of standard rules for application; application in automated systems; influence of UDC on classification development; links with thesauri; and use…

  7. Inference of the sparse kinetic Ising model using the decimation method

    NASA Astrophysics Data System (ADS)

    Decelle, Aurélien; Zhang, Pan

    2015-05-01

In this paper we study the inference of the kinetic Ising model on sparse graphs by the decimation method. The decimation method, which was first proposed in Decelle and Ricci-Tersenghi [Phys. Rev. Lett. 112, 070603 (2014), 10.1103/PhysRevLett.112.070603] for the static inverse Ising problem, tries to recover the topology of the inferred system by setting the weakest couplings to zero iteratively. During the decimation process the likelihood function is maximized over the remaining couplings. Unlike the ℓ1-optimization-based methods, the decimation method does not use the Laplace distribution as a heuristic choice of prior to select a sparse solution. In our case, the whole process can be done automatically without fixing any parameters by hand. We show that in the dynamical inference problem, where the task is to reconstruct the couplings of an Ising model given the data, the decimation process can be incorporated naturally into a maximum-likelihood optimization algorithm, as opposed to the static case, where a pseudolikelihood method needs to be adopted. We also use extensive numerical studies to validate the accuracy of our methods in dynamical inference problems. Our results illustrate that, on various topologies and with different distributions of couplings, the decimation method outperforms the widely used ℓ1-optimization-based methods.
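
    For the dynamical (kinetic) case, the per-spin likelihood factorizes, so each row of couplings can be fitted by a logistic-regression-style gradient ascent while decimated couplings are held at zero. The Python sketch below is a strongly simplified illustration of that idea: it drops a fixed fraction of the weakest surviving couplings per round rather than using the stopping criterion of the actual method, and all names, rates, and the random test data are assumptions.

        import numpy as np

        def fit_row(S_prev, s_next, mask, lr=0.1, n_iter=500):
            # Maximum-likelihood couplings for one spin of a kinetic Ising model:
            # P(s_next = +1 | S_prev) = sigmoid(2 * S_prev @ J), with decimated
            # (masked-out) couplings held at zero.
            J = np.zeros(S_prev.shape[1])
            y = (s_next + 1) / 2
            for _ in range(n_iter):
                p = 1.0 / (1.0 + np.exp(-2.0 * S_prev @ J))
                grad = 2.0 * S_prev.T @ (y - p) / len(y)
                J += lr * grad * mask
            return J

        def decimate_fit(S, n_keep, n_rounds=5):
            # S: (T, N) array of +/-1 spins over time. Refit all rows, then zero the
            # weakest surviving couplings, and repeat.
            S_prev, S_next = S[:-1], S[1:]
            N = S.shape[1]
            masks = np.ones((N, N), dtype=bool)
            J = np.zeros((N, N))
            for _ in range(n_rounds):
                for i in range(N):
                    J[i] = fit_row(S_prev, S_next[:, i], masks[i])
                for i in range(N):
                    alive = np.flatnonzero(masks[i])
                    n_drop = min(max(1, len(alive) // 4), len(alive) - n_keep)
                    if n_drop <= 0:
                        continue
                    weakest = alive[np.argsort(np.abs(J[i, alive]))[:n_drop]]
                    masks[i, weakest] = False
                    J[i, weakest] = 0.0
            return J

        rng = np.random.default_rng(0)
        S = np.sign(rng.standard_normal((500, 10)))   # placeholder spin history
        J_est = decimate_fit(S, n_keep=3)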

  8. Rapid Decimation for Direct Volume Rendering

    NASA Technical Reports Server (NTRS)

    Gibbs, Jonathan; VanGelder, Allen; Verma, Vivek; Wilhelms, Jane

    1997-01-01

    An approach for eliminating unnecessary portions of a volume when producing a direct volume rendering is described. This reduction in volume size sacrifices some image quality in the interest of rendering speed. Since volume visualization is often used as an exploratory visualization technique, it is important to reduce rendering times, so the user can effectively explore the volume. The methods presented can speed up rendering by factors of 2 to 3 with minor image degradation. A family of decimation algorithms to reduce the number of primitives in the volume without altering the volume's grid in any way is introduced. This allows the decimation to be computed rapidly, making it easier to change decimation levels on the fly. Further, because very little extra space is required, this method is suitable for the very large volumes that are becoming common. The method is also grid-independent, so it is suitable for multiple overlapping curvilinear and unstructured, as well as regular, grids. The decimation process can proceed automatically, or can be guided by the user so that important regions of the volume are decimated less than unimportant regions. A formal error measure is described based on a three-dimensional analog of the Radon transform. Decimation methods are evaluated based on this metric and on direct comparison with reference images.

  9. Reconfigurable radio receiver with fractional sample rate converter and multi-rate ADC based on LO-derived sampling clock

    NASA Astrophysics Data System (ADS)

    Park, Sungkyung; Park, Chester Sungchung

    2018-03-01

A composite radio receiver back-end and digital front-end, made up of a delta-sigma analogue-to-digital converter (ADC) with a high-speed low-noise sampling clock generator, and a fractional sample rate converter (FSRC), is proposed and designed for a multi-mode reconfigurable radio. The proposed radio receiver architecture contributes to saving chip area and thus lowering the design cost. To enable inter-radio access technology handover and ultimately software-defined radio reception, a reconfigurable radio receiver consisting of a multi-rate ADC with its sampling clock derived from a local oscillator, followed by a rate-adjustable FSRC for decimation, is designed. Clock phase noise and timing jitter are examined to support the effectiveness of the proposed radio receiver. An FSRC is modelled and simulated with a cubic polynomial interpolator based on the Lagrange method, and its spectral-domain view is examined in order to verify its effect on aliasing, nonlinearity and signal-to-noise ratio, giving insight into the design of the decimation chain. The sampling clock path and the radio receiver back-end data path are designed in a 90-nm CMOS process technology with a 1.2 V supply.
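
    The fractional sample rate conversion step can be illustrated with a four-point cubic Lagrange interpolator, the polynomial kernel commonly realized in a Farrow structure. The Python sketch below resamples a tone by an arbitrary ratio; the rates and names are illustrative, and this is not the paper's hardware design.

        import numpy as np

        def cubic_lagrange_resample(x, ratio):
            # Resample x by ratio = f_out / f_in using 4-point cubic Lagrange
            # interpolation between samples n and n+1 (fractional offset mu).
            n_out = int(np.floor((len(x) - 3) * ratio))
            y = np.empty(n_out)
            for m in range(n_out):
                t = m / ratio + 1.0                  # position on the input grid
                n = int(np.floor(t))
                mu = t - n                           # fractional delay in [0, 1)
                xm1, x0, x1, x2 = x[n - 1], x[n], x[n + 1], x[n + 2]
                y[m] = (-mu * (mu - 1) * (mu - 2) / 6.0 * xm1
                        + (mu + 1) * (mu - 1) * (mu - 2) / 2.0 * x0
                        - (mu + 1) * mu * (mu - 2) / 2.0 * x1
                        + (mu + 1) * mu * (mu - 1) / 6.0 * x2)
            return y

        # Example: convert a 1 MHz tone sampled at 100 MS/s down to 61.44 MS/s.
        fs_in, fs_out = 100e6, 61.44e6
        t = np.arange(2000) / fs_in
        x = np.cos(2 * np.pi * 1e6 * t)
        y = cubic_lagrange_resample(x, fs_out / fs_in)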

  10. 75 FR 54902 - Statutorily Mandated Designation of Difficult Development Areas and Qualified Census Tracts for 2011

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-09

    ... ranking ratio, as described above, was identical (to four decimal places) to the last area selected, and... FURTHER INFORMATION CONTACT: For questions on how areas are designated and on geographic definitions... metropolitan area definitions incorporating 2000 Census data in OMB Bulletin No. 03- 04 on June 6, 2003, and...

  11. Acquisition and analysis of accelerometer data

    NASA Astrophysics Data System (ADS)

    Verges, Keith R.

    1990-08-01

    Acceleration data reduction must be undertaken with a complete understanding of the physical process, the means by which the data are acquired, and finally, the calculations necessary to put the data into a meaningful format. Discussed here are the acceleration sensor requirements dictated by the measurements desired. Sensor noise, dynamic range, and linearity will be determined from the physical parameters of the experiment. The digitizer requirements are discussed. Here the system from sensor to digital storage medium will be integrated, and rules of thumb for experiment duration, filter response, and number of bits are explained. Data reduction techniques after storage are also discussed. Time domain operations including decimating, digital filtering, and averaging are covered, as well as frequency domain methods, including windowing and the difference between power and amplitude spectra, and simple noise determination via coherence analysis. Finally, an example experiment using the Teledyne Geotech Model 44000 Seismometer to measure from 1 Hz to 10(exp -6) Hz is discussed. The sensor, data acquisition system, and example spectra are presented.

  12. Acquisition and analysis of accelerometer data

    NASA Technical Reports Server (NTRS)

    Verges, Keith R.

    1990-01-01

    Acceleration data reduction must be undertaken with a complete understanding of the physical process, the means by which the data are acquired, and finally, the calculations necessary to put the data into a meaningful format. Discussed here are the acceleration sensor requirements dictated by the measurements desired. Sensor noise, dynamic range, and linearity will be determined from the physical parameters of the experiment. The digitizer requirements are discussed. Here the system from sensor to digital storage medium will be integrated, and rules of thumb for experiment duration, filter response, and number of bits are explained. Data reduction techniques after storage are also discussed. Time domain operations including decimating, digital filtering, and averaging are covered, as well as frequency domain methods, including windowing and the difference between power and amplitude spectra, and simple noise determination via coherence analysis. Finally, an example experiment using the Teledyne Geotech Model 44000 Seismometer to measure from 1 Hz to 10(exp -6) Hz is discussed. The sensor, data acquisition system, and example spectra are presented.
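
    The decimation and spectrum-estimation steps mentioned above are routine to prototype in software. A minimal sketch (Python with SciPy) that anti-alias filters and decimates a record and then forms a windowed power spectral density is given below; the sampling rate, decimation factor, and test signal are illustrative assumptions, not the parameters of the Model 44000 experiment.

        import numpy as np
        from scipy import signal

        fs = 100.0                                       # raw sampling rate, Hz (illustrative)
        t = np.arange(0, 600, 1 / fs)
        rng = np.random.default_rng(2)
        x = np.sin(2 * np.pi * 0.5 * t) + 0.1 * rng.standard_normal(t.size)

        # Anti-alias filter plus decimation by 10 (SciPy applies a zero-phase IIR filter).
        x_dec = signal.decimate(x, 10, ftype="iir", zero_phase=True)
        fs_dec = fs / 10

        # Windowed (Hann) power spectral density of the decimated record.
        f, pxx = signal.welch(x_dec, fs=fs_dec, window="hann", nperseg=1024)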

  13. 47 CFR 32.20 - Numbering convention.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... following to the right of the decimal point indicate, respectively, the section or account. All Part 32 Account numbers contain 4 digits to-the-right-of the decimal point. (c) Cross references to accounts are made by citing the account numbers to the right of the decimal point; e.g., Account 2232 rather than...

  14. A Comparison between the Decimated Padé Approximant and Decimated Signal Diagonalization Methods for Leak Detection in Pipelines Equipped with Pressure Sensors.

    PubMed

    Lay-Ekuakille, Aimé; Fabbiano, Laura; Vacca, Gaetano; Kitoko, Joël Kidiamboko; Kulapa, Patrice Bibala; Telesca, Vito

    2018-06-04

Pipelines conveying fluids are considered strategic infrastructures to be protected and maintained. They generally serve for the transportation of important fluids such as drinkable water, waste water, oil, gas, chemicals, etc. Monitoring and continuous testing, especially on-line, are necessary to assess the condition of pipelines. The paper presents findings related to a comparison between two spectral response algorithms, based on the decimated signal diagonalization (DSD) and decimated Padé approximant (DPA) techniques, that allow one to process signals delivered by pressure sensors mounted on an experimental pipeline.

  15. Fast Offset Laser Phase-Locking System

    NASA Technical Reports Server (NTRS)

    Shaddock, Daniel; Ware, Brent

    2008-01-01

Figure 1 shows a simplified block diagram of an improved optoelectronic system for locking the phase of one laser to that of another laser with an adjustable offset frequency specified by the user. In comparison with prior systems, this system exhibits higher performance (including higher stability) and is much easier to use. The system is based on a field-programmable gate array (FPGA) and operates almost entirely digitally; hence, it is easily adaptable to many different systems. The system achieves phase stability of less than a microcycle. It was developed to satisfy the phase-stability requirement for a planned spaceborne gravitational-wave-detecting heterodyne laser interferometer (LISA). The system has potential terrestrial utility in communications, lidar, and other applications. The present system includes a fast phasemeter that is a companion to the microcycle-accurate one described in High-Accuracy, High-Dynamic-Range Phase-Measurement System (NPO-41927), NASA Tech Briefs, Vol. 31, No. 6 (June 2007), page 22. In the present system (as in the previously reported one), beams from the two lasers (here denoted the master and slave lasers) interfere on a photodiode. The heterodyne photodiode output is digitized and fed to the fast phasemeter, which produces suitably conditioned, low-latency analog control signals that lock the phase of the slave laser to that of the master laser. These control signals are used to drive a thermal and a piezoelectric transducer that adjust the frequency and phase of the slave-laser output. The output of the photodiode is a heterodyne signal at the difference between the frequencies of the two lasers. (The difference is currently required to be less than 20 MHz due to the Nyquist limit of the current sampling rate. We foresee few problems in doubling this limit using current equipment.) Within the phasemeter, the photodiode-output signal is digitized to 15 bits at a sampling frequency of 40 MHz by use of the same analog-to-digital converter (ADC) as that of the previously reported phasemeter. The ADC output is passed to the FPGA, wherein the signal is demodulated using a digitally generated oscillator signal at the offset locking frequency specified by the user. The demodulated signal is low-pass filtered, decimated to a sample rate of 1 MHz, then filtered again. The decimated and filtered signal is converted to analog outputs by 1-MHz, 16-bit digital-to-analog converters. After a simple low-pass filter, these analog signals drive the thermal and piezoelectric transducers of the laser.
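
    The digital chain described (demodulate against a numerically generated oscillator at the offset frequency, low-pass filter, decimate to 1 MHz, extract phase) is easy to mimic offline. The Python sketch below uses the 40 MHz sample rate and 1 MHz output rate quoted in the abstract, but the offset frequency, FIR design, and signal names are illustrative assumptions rather than the flight design.

        import numpy as np
        from scipy import signal

        fs = 40e6                    # ADC sample rate (from the abstract)
        f_offset = 12e6              # user-specified offset frequency (illustrative)
        n = np.arange(400_000)
        adc = np.cos(2 * np.pi * f_offset / fs * n + 0.3)   # stand-in heterodyne signal

        # Demodulate with a digitally generated complex oscillator at the offset frequency.
        lo = np.exp(-2j * np.pi * f_offset / fs * n)
        baseband = adc * lo

        # Low-pass filter, then decimate from 40 MHz to the 1 MHz control rate (factor 40).
        taps = signal.firwin(255, cutoff=400e3, fs=fs)
        filtered = signal.lfilter(taps, 1.0, baseband)
        decimated = filtered[::40]
        phase_error = np.unwrap(np.angle(decimated))        # phase estimate driving the loop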

  16. 76 FR 50276 - Self-Regulatory Organizations; OneChicago, LLC; Notice of Filing and Immediate Effectiveness of a...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-12

... for Four Decimal Point Pricing for Block and Exchange for Physical ("EFPs") Trades August 8, 2011... block trades and the futures component of EFP trades to be traded/priced in four decimal points. Regular trades (non-block or non-EFP) will continue to trade in only two decimal points. The text of the...

  17. Hong Kong Grade 6 Students' Performance and Mathematical Reasoning in Decimals Tasks: Procedurally Based or Conceptually Based?

    ERIC Educational Resources Information Center

    Lai, Mun Yee; Murray, Sara

    2015-01-01

    Most studies of students' understanding of decimals have been conducted within Western cultural settings. The broad aim of the present research was to gain insight into Chinese Hong Kong grade 6 students' general performance on a variety of decimals tasks. More specifically, the study aimed to explore students' mathematical reasoning for their use…

  18. Technical Mathematics: Restructure of Technical Mathematics.

    ERIC Educational Resources Information Center

    Flannery, Carol A.

    Designed to accompany a series of videotapes, this textbook provides information, examples, problems, and solutions relating to mathematics and its applications in technical fields. Chapter I deals with basic arithmetic, providing information on fractions, decimals, ratios, proportions, percentages, and order of operations. Chapter II focuses on…

  19. Miniature L-Band Radar Transceiver

    NASA Technical Reports Server (NTRS)

    McWatters, Dalia; Price, Douglas; Edelstein, Wendy

    2007-01-01

    A miniature L-band transceiver that operates at a carrier frequency of 1.25 GHz has been developed as part of a generic radar electronics module (REM) that would constitute one unit in an array of many identical units in a very-large-aperture phased-array antenna. NASA and the Department of Defense are considering the deployment of such antennas in outer space; the underlying principles of operation, and some of those of design, also are applicable on Earth. The large dimensions of the antennas make it advantageous to distribute radio-frequency electronic circuitry into elements of the arrays. The design of the REM is intended to implement the distribution. The design also reflects a requirement to minimize the size and weight of the circuitry in order to minimize the weight of any such antenna. Other requirements include making the transceiver robust and radiation-hard and minimizing power demand. Figure 1 depicts the functional blocks of the REM, including the L-band transceiver. The key functions of the REM include signal generation, frequency translation, amplification, detection, handling of data, and radar control and timing. An arbitrary-waveform generator that includes logic circuitry and a digital-to-analog converter (DAC) generates a linear-frequency-modulation chirp waveform. A frequency synthesizer produces local-oscillator signals used for frequency conversion and clock signals for the arbitrary-waveform generator, for a digitizer [that is, an analog-to-digital converter (ADC)], and for a control and timing unit. Digital functions include command, timing, telemetry, filtering, and high-rate framing and serialization of data for a high-speed scientific-data interface. The aforementioned digital implementation of filtering is a key feature of the REM architecture. Digital filters, in contradistinction to analog ones, provide consistent and temperature-independent performance, which is particularly important when REMs are distributed throughout a large array. Digital filtering also enables selection among multiple filter parameters as required for different radar operating modes. After digital filtering, data are decimated appropriately in order to minimize the data rate out of an antenna panel. The L-band transceiver (see Figure 2) includes a radio-frequency (RF)-to-baseband down-converter chain and an intermediate- frequency (IF)-to-RF up-converter chain. Transmit/receive (T/R) switches enable the use of a single feed to the antenna for both transmission and reception. The T/R switches also afford a built-in test capability by enabling injection of a calibration signal into the receiver chain. In order of decreasing priority, components of the transceiver were selected according to requirements of radiation hardness, then compactness, then low power. All of the RF components are radiation-hard. The noise figure (NF) was optimized to the extent that (1) a low-noise amplifier (LNA) (characterized by NF < 2 dB) was selected but (2) the receiver front-end T/R switches were selected for a high degree of isolation and acceptably low loss, regardless of the requirement to minimize noise.

  20. Pre-Service Middle School Mathematics Teachers' Understanding of Students' Knowledge: Location of Decimal Numbers on a Number Line

    ERIC Educational Resources Information Center

    Girit, Dilek; Akyuz, Didem

    2016-01-01

    Studies reveal that students as well as teachers have difficulties in understanding and learning of decimals. The purpose of this study is to investigate students' as well as pre-service teachers' solution strategies when solving a question that involves an estimation task for the value of a decimal number on the number line. We also examined the…

  1. Representational Flexibility and Problem-Solving Ability in Fraction and Decimal Number Addition: A Structural Model

    ERIC Educational Resources Information Center

    Deliyianni, Eleni; Gagatsis, Athanasios; Elia, Iliada; Panaoura, Areti

    2016-01-01

    The aim of this study was to propose and validate a structural model in fraction and decimal number addition, which is founded primarily on a synthesis of major theoretical approaches in the field of representations in Mathematics and also on previous research on the learning of fractions and decimals. The study was conducted among 1,701 primary…

  2. Advanced electronics for the CTF MEG system.

    PubMed

    McCubbin, J; Vrba, J; Spear, P; McKenzie, D; Willis, R; Loewen, R; Robinson, S E; Fife, A A

    2004-11-30

    Development of the CTF MEG system has been advanced with the introduction of a computer processing cluster between the data acquisition electronics and the host computer. The advent of fast processors, memory, and network interfaces has made this innovation feasible for large data streams at high sampling rates. We have implemented tasks including anti-alias filter, sample rate decimation, higher gradient balancing, crosstalk correction, and optional filters with a cluster consisting of 4 dual Intel Xeon processors operating on up to 275 channel MEG systems at 12 kHz sample rate. The architecture is expandable with additional processors to implement advanced processing tasks which may include e.g., continuous head localization/motion correction, optional display filters, coherence calculations, or real time synthetic channels (via beamformer). We also describe an electronics configuration upgrade to provide operator console access to the peripheral interface features such as analog signal and trigger I/O. This allows remote location of the acoustically noisy electronics cabinet and fitting of the cabinet with doors for improved EMI shielding. Finally, we present the latest performance results available for the CTF 275 channel MEG system including an unshielded SEF (median nerve electrical stimulation) measurement enhanced by application of an adaptive beamformer technique (SAM) which allows recognition of the nominal 20-ms response in the unaveraged signal.

  3. The risks of innovation in health care.

    PubMed

    Enzmann, Dieter R

    2015-04-01

    Innovation in health care creates risks that are unevenly distributed. An evolutionary analogy using species to represent business models helps categorize innovation experiments and their risks. This classification reveals two qualitative categories: early and late diversification experiments. Early diversification has prolific innovations with high risk because they encounter a "decimation" stage, during which most experiments disappear. Participants face high risk. The few decimation survivors can be sustaining or disruptive according to Christensen's criteria. Survivors enter late diversification, during which they again expand, but within a design range limited to variations of the previous surviving designs. Late diversifications carry lower risk. The exception is when disruptive survivors "diversify," which amplifies their disruption. Health care and radiology will experience both early and late diversifications, often simultaneously. Although oversimplifying Christensen's concepts, early diversifications are likely to deliver disruptive innovation, whereas late diversifications tend to produce sustaining innovations. Current health care consolidation is a manifestation of late diversification. Early diversifications will appear outside traditional care models and physical health care sites, as well as with new science such as molecular diagnostics. They warrant attention because decimation survivors will present both disruptive and sustaining opportunities to radiology. Radiology must participate in late diversification by incorporating sustaining innovations to its value chain. Given the likelihood of disruptive survivors, radiology should seriously consider disrupting itself rather than waiting for others to do so. Disruption entails significant modifications of its value chain, hence, its business model, for which lessons may become available from the pharmaceutical industry's current simultaneous experience with early and late diversifications. Copyright © 2015. Published by Elsevier Inc.

  4. Decimals, Denominators, Demons, Calculators, and Connections

    ERIC Educational Resources Information Center

    Sparrow, Len; Swan, Paul

    2005-01-01

    The authors provide activities for overcoming some fraction misconceptions using calculators specially designed for learners in primary years. The writers advocate use of the calculator as a way to engage children in thinking about mathematics. By engaging with a calculator as part of mathematics learning, children are learning about and using the…

  5. Developing Basic Math Skills for Marketing. Student Manual and Laboratory Guide.

    ERIC Educational Resources Information Center

    Klewer, Edwin D.

    Field tested with students in grades 10-12, this manual is designed to teach students in marketing courses basic mathematical concepts. The instructional booklet contains seven student assignments covering the following topics: why basic mathematics is so important, whole numbers, fractions, decimals, percentages, weights and measures, and dollars…

  6. Mathematics for the Baker.

    ERIC Educational Resources Information Center

    Bogdany, Melvin

    The curriculum guide offers a course of training in the fundamentals of mathematics as applied to baking. Problems specifically related to the baking trade are included to maintain a practical orientation. The course is designed to help the student develop proficiency in the basic computation of whole numbers, fractions, decimals, percentage,…

  7. Competency-Based Business Math. Curriculum Guide. Bulletin No. 1814.

    ERIC Educational Resources Information Center

    Louisiana State Dept. of Education, Baton Rouge. Div. of Vocational Education.

    This is a curriculum guide for a course designed to enable students to master the necessary basic mathematics and business-related mathematics skills needed for entry into office and business occupations. The guide includes 11 instructional units: (1) "Fundamental Math Skills"; (2) "Fractions"; (3) "Decimals"; (4)…

  8. Browsing Your Virtual Library: The Case of Expanding Universe.

    ERIC Educational Resources Information Center

    Daniels, Wayne; Enright, Jeanne; Mackenzie, Scott

    1997-01-01

    Describes "Expanding Universe: a classified search tool for amateur astronomy," a Web site maintained by the Metropolitan Toronto Reference Library which uses a modified form of the Dewey Decimal Classification to organize a large file of astronomy hotlinks. Highlights include structure, HTML coding, design requirements, and future…

  9. Grade 8 Spanish Math Skills Sharpeners and La Calculadora. Hojas de ejercicios (Calculator Unit. Exercise Sheets.)

    ERIC Educational Resources Information Center

    Milwaukee Public Schools, WI.

    This workbook contains "skill sharpening" math problems presented in Spanish. These problems have been designed as supplementary work for students at the eighth grade level. Functions and topics such as addition, subtraction, division, multiplication, decimals, scientific notation (exponents), fractions, symmetry, angles, the metric…

  10. Joint Improvised-Threat Defeat Agency Needs to Improve Assessment and Documentation of Counter-Improvised Explosive Device Initiatives (Redacted)

    DTIC Science & Technology

    2016-08-09

Slight rounding inconsistencies exist because auditor calculations included decimals (footnote repeated for Tables 2 and 3 of the report). Acronyms: USAF, United States Air Force; USMC, United States Marine Corps.

  11. 27 CFR 30.63 - Table 3, for determining the number of proof gallons from the weight and proof of spirituous liquor.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... decimal point one place to the left; that for 8 pounds from the column 800 pounds by moving the decimal point two places to the left. Example. A package of spirits at 86 proof weighed 3211/2 pounds net. We... to the left; that for 1 pound from the column 100 pounds by moving the decimal point two places to...

  12. 27 CFR 30.63 - Table 3, for determining the number of proof gallons from the weight and proof of spirituous liquor.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... decimal point one place to the left; that for 8 pounds from the column 800 pounds by moving the decimal point two places to the left. Example. A package of spirits at 86 proof weighed 3211/2 pounds net. We... to the left; that for 1 pound from the column 100 pounds by moving the decimal point two places to...

  13. 27 CFR 30.63 - Table 3, for determining the number of proof gallons from the weight and proof of spirituous liquor.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... decimal point one place to the left; that for 8 pounds from the column 800 pounds by moving the decimal point two places to the left. Example. A package of spirits at 86 proof weighed 3211/2 pounds net. We... to the left; that for 1 pound from the column 100 pounds by moving the decimal point two places to...

  14. 27 CFR 30.63 - Table 3, for determining the number of proof gallons from the weight and proof of spirituous liquor.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... decimal point one place to the left; that for 8 pounds from the column 800 pounds by moving the decimal point two places to the left. Example. A package of spirits at 86 proof weighed 3211/2 pounds net. We... to the left; that for 1 pound from the column 100 pounds by moving the decimal point two places to...

  15. 27 CFR 30.63 - Table 3, for determining the number of proof gallons from the weight and proof of spirituous liquor.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... decimal point one place to the left; that for 8 pounds from the column 800 pounds by moving the decimal point two places to the left. Example. A package of spirits at 86 proof weighed 3211/2 pounds net. We... to the left; that for 1 pound from the column 100 pounds by moving the decimal point two places to...

  16. Advanced Data Acquisition System Implementation for the ITER Neutron Diagnostic Use Case Using EPICS and FlexRIO Technology on a PXIe Platform

    NASA Astrophysics Data System (ADS)

    Sanz, D.; Ruiz, M.; Castro, R.; Vega, J.; Afif, M.; Monroe, M.; Simrock, S.; Debelle, T.; Marawar, R.; Glass, B.

    2016-04-01

    To aid in assessing the functional performance of ITER, Fission Chambers (FC) based on the neutron diagnostic use case deliver timestamped measurements of neutron source strength and fusion power. To demonstrate the Plant System Instrumentation & Control (I&C) required for such a system, ITER Organization (IO) has developed a neutron diagnostics use case that fully complies with guidelines presented in the Plant Control Design Handbook (PCDH). The implementation presented in this paper has been developed on the PXI Express (PXIe) platform using products from the ITER catalog of standard I&C hardware for fast controllers. Using FlexRIO technology, detector signals are acquired at 125 MS/s, while filtering, decimation, and three methods of neutron counting are performed in real-time via the onboard Field Programmable Gate Array (FPGA). Measurement results are reported every 1 ms through Experimental Physics and Industrial Control System (EPICS) Channel Access (CA), with real-time timestamps derived from the ITER Timing Communication Network (TCN) based on IEEE 1588-2008. Furthermore, in accordance with ITER specifications for CODAC Core System (CCS) application development, the software responsible for the management, configuration, and monitoring of system devices has been developed in compliance with a new EPICS module called Nominal Device Support (NDS) and RIO/FlexRIO design methodology.

  17. Recognition of Roasted Coffee Bean Levels using Image Processing and Neural Network

    NASA Astrophysics Data System (ADS)

    Nasution, T. H.; Andayani, U.

    2017-03-01

Coffee beans exhibit characteristic appearances at different roast levels, but some people cannot recognize the roast level. In this research, we propose a method to recognize the roast level of coffee beans from digital images by processing the images and classifying them with a backpropagation neural network. The steps consist of collecting the image data through image acquisition, pre-processing, feature extraction using the Gray Level Co-occurrence Matrix (GLCM) method, and finally normalization of the extracted features using decimal scaling. The decimal-scaled feature values become the input to the backpropagation neural network classifier. The results showed that the proposed method is able to identify the coffee bean roast level with an accuracy of 97.5%.
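
    Decimal scaling normalization simply divides each feature by the smallest power of ten that brings its largest absolute value below one. A minimal Python sketch, with a made-up feature matrix standing in for the GLCM features, is shown below; the function name and example values are illustrative.

        import numpy as np

        def decimal_scaling(features):
            # Normalize each column x to x / 10**j, where j is the smallest
            # non-negative integer such that max(|x|) / 10**j < 1.
            features = np.asarray(features, dtype=float)
            max_abs = np.max(np.abs(features), axis=0)
            j = np.floor(np.log10(np.where(max_abs > 0, max_abs, 1.0))) + 1
            j = np.maximum(j, 0)
            return features / (10.0 ** j)

        # Illustrative GLCM-style feature vectors (e.g., contrast, energy, homogeneity).
        X = np.array([[152.0, 0.031, 0.87],
                      [ 97.0, 0.054, 0.91]])
        print(decimal_scaling(X))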

  18. An ERP Study of the Processing of Common and Decimal Fractions: How Different They Are

    PubMed Central

    Zhang, Li; Wang, Qi; Lin, Chongde; Ding, Cody; Zhou, Xinlin

    2013-01-01

This study explored event-related potential (ERP) correlates of common fractions (1/5) and decimal fractions (0.2). Thirteen subjects performed a numerical magnitude matching task under two conditions. In the common fraction condition, subjects judged whether the magnitude of a nonsymbolic fraction matched that of a common fraction; in the decimal fraction condition, they judged whether the magnitude of a nonsymbolic fraction matched that of a decimal fraction. Behavioral results showed significant main effects of condition and numerical distance, but no significant interaction of condition and numerical distance. Electrophysiological data showed that when nonsymbolic fractions were compared to common fractions, they displayed larger N1 and P3 amplitudes than when they were compared to decimal fractions. This finding suggested that the visual identification of nonsymbolic fractions differed under the two conditions, which was not due to perceptual differences but to task demands. For symbolic fractions, the condition effect was observed in the N1 and P3 components, revealing stimulus-specific visual identification processing. The effect of numerical distance as an index of numerical magnitude representation was observed in the P2, N3 and P3 components under the two conditions. However, the topography of the distance effect was different under the two conditions, suggesting stimulus-specific semantic processing of common fractions and decimal fractions. PMID:23894491

  19. Primary teachers' subject matter knowledge: decimals

    NASA Astrophysics Data System (ADS)

    Ubuz, Behiye; Yayan, Betül

    2010-09-01

    The main objective of this study was to investigate primary teachers' subject matter knowledge in the domain of decimals and more elaborately to investigate their performance and difficulties in reading scale, ordering numbers, finding the nearest decimal and doing operations, such as addition and subtraction. The difficulties in these particular areas are analysed and suggestions are made regarding their causes. Further, factors that influence this knowledge were explored. The sample of the study was 63 primary teachers. A decimal concepts test including 18 tasks was administered and the total scores for the 63 primary teachers ranged from 3 to 18 with a mean and median of 12. Fifty per cent of the teachers were above the mean score. The detailed investigation of the responses revealed that the primary teachers faced similar difficulties that students and pre-service teachers faced. Discrepancy on teachers' knowledge revealed important differences based on educational level attained, but not the number of years of teaching experience and experience in teaching decimals. Some suggestions have been made regarding the implications for pre- and in-service teacher training.

  20. The 4M compaNy: Make Mine Metric Mystery. Fifth Grade Student Booklet.

    ERIC Educational Resources Information Center

    Hawaii State Dept. of Education, Honolulu.

    This student activity manual for elementary students is designed to teach several concepts related to the metric system and measurement. Included are activities related to length, area, volume, conversion of metric units, and computation skills with decimals (addition, subtraction, and division). Cartoons are used extensively to appeal to student…

  1. Basic Mathematics Machine Calculator Course.

    ERIC Educational Resources Information Center

    Windsor Public Schools, CT.

    This series of four text-workbooks was designed for tenth grade mathematics students who have exhibited lack of problem-solving skills. Electric desk calculators are to be used with the text. In the first five chapters of the series, students learn how to use the machine while reviewing basic operations with whole numbers, decimals, fractions, and…

  2. Fundamentals of Digital Logic, 7-1. Military Curriculum Materials for Vocational and Technical Education.

    ERIC Educational Resources Information Center

    Marine Corps, Washington, DC.

    Targeted for grades 10 through adult, these military-developed curriculum materials consist of a student lesson book with text readings and review exercises designed to prepare electronic personnel for further training in digital techniques. Covered in the five lessons are binary arithmetic (number systems, decimal systems, the mathematical form…

  3. Growth and management of a remnant stand of Engelmann oak at Los Angeles County Arboretum & Botanic Garden

    Treesearch

    James Henrich

    2015-01-01

Commercial, residential and ranch development combined with pressures from grazing, foraging and pests have decimated Engelmann oak (Quercus engelmannii) populations, resulting in this species being designated as vulnerable by the International Union for Conservation of Nature Red List of Threatened Species™. Los Angeles...

  4. Design of an optical 4-bit binary to BCD converter using electro-optic effect of lithium niobate based Mach-Zehnder interferometers

    NASA Astrophysics Data System (ADS)

    Kumar, Santosh

    2017-07-01

    A binary to binary coded decimal (BCD) converter is a basic building block for BCD processing. The last few decades have seen an exponential rise in applications of binary coded data processing in optical computing, and with it a growing demand for suitable hardware platforms. With this in mind, a novel design exploiting the switching behaviour of the Mach-Zehnder interferometer (MZI) is presented in this paper. An optical 4-bit binary to BCD converter utilizing the electro-optic effect of lithium niobate based MZIs is demonstrated. The MZI switches the optical signal from one port to the other when an appropriate voltage is applied to its electrodes. The proposed scheme is implemented using combinations of cascaded electro-optic (EO) switches. A theoretical description along with a mathematical formulation of the device is provided, and the operation is analyzed with the finite-difference beam propagation method (FD-BPM). Fabrication techniques for the device are also discussed.
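
    As a purely illustrative software sketch (the optical MZI switch network itself is not modelled here), the input/output behaviour such a 4-bit binary to BCD converter must realize can be tabulated directly: for a 4-bit input the BCD output is simply a tens digit and a units digit.

```python
def binary_to_bcd_4bit(value: int) -> tuple[int, int]:
    """Convert a 4-bit binary value (0-15) into two BCD digits (tens, units)."""
    if not 0 <= value <= 15:
        raise ValueError("input must fit in 4 bits")
    return divmod(value, 10)  # each returned digit fits in 4 bits

# Truth table that the cascaded electro-optic switch network would have to realize
for v in range(16):
    tens, units = binary_to_bcd_4bit(v)
    print(f"{v:04b} -> {tens:04b} {units:04b}")
```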

  5. Non-Maximally Decimated Filter Banks Enable Adaptive Frequency Hopping for Unmanned Aircraft Vehicles

    NASA Technical Reports Server (NTRS)

    Venosa, Elettra; Vermeire, Bert; Alakija, Cameron; Harris, Fred; Strobel, David; Sheehe, Charles J.; Krunz, Marwan

    2017-01-01

    In the last few years, radio technologies for unmanned aircraft vehicle (UAV) have advanced very rapidly. The increasing need to fly unmanned aircraft systems (UAS) in the national airspace system (NAS) to perform missions of vital importance to national security, defense, and science has pushed ahead the design and implementation of new radio platforms. However, a lot still has to be done to improve those radios in terms of performance and capabilities. In addition, an important aspect to account for is hardware cost and the feasibility to implement these radios using commercial off-the-shelf (COTS) components. UAV radios come with numerous technical challenges and their development involves contributions at different levels of the design. Cognitive algorithms need to be developed in order to perform agile communications using appropriate frequency allocation while maintaining safe and efficient operations in the NAS and, digital reconfigurable architectures have to be designed in order to ensure a prompt response to environmental changes. Command and control (C2) communications have to be preserved during "standard" operations while crew operations have to be minimized. It is clear that UAV radios have to be software-defined systems, where size, weight and power consumption (SWaP) are critical parameters. This paper provides preliminary results of the efforts performed to design a fully digital radio architecture as part of a NASA Phase I STTR. In this paper, we will explain the basic idea and technical principles behind our dynamic/adaptive frequency hopping radio for UAVs. We will present our Simulink model of the dynamic FH radio transmitter design for UAV communications and show simulation results and FPGA system analysis.

  6. Children's understanding of fraction and decimal symbols and the notation-specific relation to pre-algebra ability.

    PubMed

    Hurst, Michelle A; Cordes, Sara

    2018-04-01

    Fraction and decimal concepts are notoriously difficult for children to learn yet are a major component of elementary and middle school math curriculum and an important prerequisite for higher order mathematics (i.e., algebra). Thus, recently there has been a push to understand how children think about rational number magnitudes in order to understand how to promote rational number understanding. However, prior work investigating these questions has focused almost exclusively on fraction notation, overlooking the open questions of how children integrate rational number magnitudes presented in distinct notations (i.e., fractions, decimals, and whole numbers) and whether understanding of these distinct notations may independently contribute to pre-algebra ability. In the current study, we investigated rational number magnitude and arithmetic performance in both fraction and decimal notation in fourth- to seventh-grade children. We then explored how these measures of rational number ability predicted pre-algebra ability. Results reveal that children do represent the magnitudes of fractions and decimals as falling within a single numerical continuum and that, despite greater experience with fraction notation, children are more accurate when processing decimal notation than when processing fraction notation. Regression analyses revealed that both magnitude and arithmetic performance predicted pre-algebra ability, but magnitude understanding may be particularly unique and depend on notation. The educational implications of differences between children in the current study and previous work with adults are discussed. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Selective document image data compression technique

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1998-05-19

    A method of storing information from filled-in form-documents comprises extracting the unique user information in the foreground from the document form information in the background. The contrast of the pixels is enhanced by a gamma correction on an image array, and then the color value of each pixel is enhanced. The color pixels lying on edges of an image are converted to black and an adjacent pixel is converted to white. The distance between black pixels and other pixels in the array is determined, and a filled-edge array of pixels is created. User information is then converted to a two-color format by creating a first two-color image of the scanned image, converting all pixels darker than a threshold color value to black and all pixels lighter than the threshold color value to white. Then a second two-color image of the filled-edge file is generated by converting all pixels darker than a second threshold value to black and all pixels lighter than the second threshold color value to white. The first two-color image and the second two-color image are then combined and filtered to smooth the edges of the image. The image may be compressed with a unique Huffman coding table for that image. The image file is also decimated to create a decimated-image file which can later be interpolated back to produce a reconstructed image file using a bilinear interpolation kernel. 10 figs.
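
    The thresholding, decimation, and bilinear reconstruction steps named in the abstract can be sketched in a few lines; the threshold value and decimation factor below are illustrative assumptions, and the edge filling, Huffman coding, and image-combination steps of the patent are not reproduced.

```python
import numpy as np

def to_two_color(img: np.ndarray, threshold: float) -> np.ndarray:
    """Map pixels darker than `threshold` to black (0) and all others to white (255)."""
    return np.where(img < threshold, 0, 255).astype(np.uint8)

def decimate(img: np.ndarray, factor: int) -> np.ndarray:
    """Create a decimated-image file by keeping every `factor`-th pixel in each direction."""
    return img[::factor, ::factor]

def bilinear_upsample(img: np.ndarray, factor: int) -> np.ndarray:
    """Interpolate a decimated image back up with a bilinear kernel."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, h * factor)
    xs = np.linspace(0, w - 1, w * factor)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

gray = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(float)
two_color = to_two_color(gray, threshold=128)          # illustrative threshold
small = decimate(gray, factor=4)                       # decimated-image file
reconstructed = bilinear_upsample(small, factor=4)     # reconstructed image file
```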

  8. Selective document image data compression technique

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1998-01-01

    A method of storing information from filled-in form-documents comprises extracting the unique user information in the foreground from the document form information in the background. The contrast of the pixels is enhanced by a gamma correction on an image array, and then the color value of each pixel is enhanced. The color pixels lying on edges of an image are converted to black and an adjacent pixel is converted to white. The distance between black pixels and other pixels in the array is determined, and a filled-edge array of pixels is created. User information is then converted to a two-color format by creating a first two-color image of the scanned image, converting all pixels darker than a threshold color value to black and all pixels lighter than the threshold color value to white. Then a second two-color image of the filled-edge file is generated by converting all pixels darker than a second threshold value to black and all pixels lighter than the second threshold color value to white. The first two-color image and the second two-color image are then combined and filtered to smooth the edges of the image. The image may be compressed with a unique Huffman coding table for that image. The image file is also decimated to create a decimated-image file which can later be interpolated back to produce a reconstructed image file using a bilinear interpolation kernel.

  9. Landsat thematic mapper attitude data processing

    NASA Technical Reports Server (NTRS)

    Sehn, G. J.; Miller, S. F.

    1984-01-01

    The Landsat 4 and 5 satellites carry a new, high resolution, seven band thematic mapper imaging instrument. The spacecraft also carry two types of attitude sensors: a gyroscopic internal reference unit (IRU) which senses angular rate from dc to about 2 Hz, and an AC-coupled angular displacement sensor (ADS) measuring angular deviation above 2 Hz. The derivation of the crossover network used to combine and equalize the IRU and ADS data is described. Also described are the digital data processing algorithms which produce the time history of the satellites' attitude motion, including the finite impulse response (FIR) implementation of the G and F filters; the resampling (interpolation/decimation) and synchronization of the IRU and ADS data; and the axis rotations required as a result of the on-board sensor locations on three orthogonal axes.
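
    A minimal sketch of the resampling (interpolation/decimation) step used to bring two sensor streams onto a common time grid, using a polyphase FIR resampler; the sample rates and test signals below are illustrative assumptions, not the actual IRU/ADS rates.

```python
import numpy as np
from scipy.signal import resample_poly

# Illustrative rates only: suppose one stream is sampled at 64 Hz and the other
# at 100 Hz, and both are to be brought onto a common 200 Hz grid.
fs_iru, fs_ads, fs_out = 64, 100, 200
iru = np.sin(2 * np.pi * 1.0 * np.arange(0, 2, 1 / fs_iru))   # low-frequency motion
ads = np.sin(2 * np.pi * 5.0 * np.arange(0, 2, 1 / fs_ads))   # higher-frequency motion

# resample_poly interpolates by `up`, applies an anti-aliasing FIR, then decimates by `down`
iru_200 = resample_poly(iru, up=fs_out, down=fs_iru)   # 64 Hz  -> 200 Hz
ads_200 = resample_poly(ads, up=fs_out, down=fs_ads)   # 100 Hz -> 200 Hz

n = min(len(iru_200), len(ads_200))
combined = iru_200[:n] + ads_200[:n]   # crude stand-in for the crossover-network sum
```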

  10. Neural representations of magnitude for natural and rational numbers.

    PubMed

    DeWolf, Melissa; Chiang, Jeffrey N; Bassok, Miriam; Holyoak, Keith J; Monti, Martin M

    2016-11-01

    Humans have developed multiple symbolic representations for numbers, including natural numbers (positive integers) as well as rational numbers (both fractions and decimals). Despite a considerable body of behavioral and neuroimaging research, it is currently unknown whether different notations map onto a single, fully abstract, magnitude code, or whether separate representations exist for specific number types (e.g., natural versus rational) or number representations (e.g., base-10 versus fractions). We address this question by comparing brain metabolic response during a magnitude comparison task involving (on different trials) integers, decimals, and fractions. Univariate and multivariate analyses revealed that the strength and pattern of activation for fractions differed systematically, within the intraparietal sulcus, from that of both decimals and integers, while the latter two number representations appeared virtually indistinguishable. These results demonstrate that the two major notation formats for rational numbers, fractions and decimals, evoke distinct neural representations of magnitude, with decimal representations being more closely linked to those of integers than to those of magnitude-equivalent fractions. Our findings thus suggest that number representation (base-10 versus fractions) is an important organizational principle for the neural substrate underlying mathematical cognition. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Efficacy of a Low Dose of Hydrogen Peroxide (Peroxy Ag⁺) for Continuous Treatment of Dental Unit Water Lines: Challenge Test with Legionella pneumophila Serogroup 1 in a Simulated Dental Unit Waterline.

    PubMed

    Ditommaso, Savina; Giacomuzzi, Monica; Ricciardi, Elisa; Zotti, Carla M

    2016-07-22

    This study was designed to examine the in vitro bactericidal activity of hydrogen peroxide against Legionella. We tested hydrogen peroxide (Peroxy Ag⁺) at 600 ppm to evaluate Legionella survival in an artificially contaminated simulated dental treatment water system equipped with a Water Hygienization Equipment (W.H.E.) device. When Legionella pneumophila serogroup (sg) 1 was exposed to Peroxy Ag⁺ for 60 min, we obtained a two decimal log reduction. Higher antimicrobial efficacy was obtained with extended periods of exposure: a four decimal log reduction at 75 min and a five decimal log reduction at 15 h of exposure. Using a simulation device in which Peroxy Ag⁺ is flushed into simulated dental unit waterlines (DUWL), we obtained an average reduction of 85% in the Legionella load. The product is effective in reducing the number of Legionella cells after 75 min of contact time (99.997%) in the simulator device under test conditions. The Peroxy Ag⁺ treatment is safe for continuous use in the dental water supply system (i.e., it is safe for patient contact), so it could be used as a preventive option, and it may be useful in long-term treatments, alone or coupled with a daily or periodic shock treatment.

  12. Macrootlocus, a CAD Design Tool for Feedback Control Systems

    DTIC Science & Technology

    1989-12-01

    var Form: DecForm; Str: DecStr; begin Form.Style := FixedDecimal; Form.Digits := 0; Num2Str(Form, 1, Str); IntToStr := Str; end; { IntToStr }

  13. Being Black in America, K-12. A Multimedia Listing of the 70's.

    ERIC Educational Resources Information Center

    Dean, Frances C., Comp.

    This catalog lists over 600 sources, including books, records, kits, and filmstrips covering both black American and African history, folklore, literature, and present day life. It is designed to assist personnel in the selection of media for schools. The contents are organized according to the Dewey Decimal Classification System: Generalities;…

  14. Use of the Dewey Decimal Classification in the United States and Canada.

    ERIC Educational Resources Information Center

    Comaromi, John P.

    1978-01-01

    A summary of use of DDC in U.S. and Canadian libraries shows that 85 percent of all libraries use DDC; of these, 75 percent use the most recent full or abridged edition. Divisions needing revision are listed and discussed. Librarians want continuous revision but they do not want numerical designation meanings changed. (Author/MBR)

  15. 40 CFR 1033.240 - Demonstrating compliance with exhaust emission standards.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... significant figures to calculate the cycle-weighted emission rate to at least one more decimal place than the....245, then round the adjusted figure to the same number of decimal places as the emission standard...

  16. 40 CFR 1033.240 - Demonstrating compliance with exhaust emission standards.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... significant figures to calculate the cycle-weighted emission rate to at least one more decimal place than the....245, then round the adjusted figure to the same number of decimal places as the emission standard...

  17. 40 CFR 1033.240 - Demonstrating compliance with exhaust emission standards.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... significant figures to calculate the cycle-weighted emission rate to at least one more decimal place than the....245, then round the adjusted figure to the same number of decimal places as the emission standard...

  18. 40 CFR 1033.240 - Demonstrating compliance with exhaust emission standards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... significant figures to calculate the cycle-weighted emission rate to at least one more decimal place than the....245, then round the adjusted figure to the same number of decimal places as the emission standard...

  19. 40 CFR 1033.240 - Demonstrating compliance with exhaust emission standards.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... significant figures to calculate the cycle-weighted emission rate to at least one more decimal place than the....245, then round the adjusted figure to the same number of decimal places as the emission standard...

  20. Properties of the Tent map for decimal fractions with fixed precision

    NASA Astrophysics Data System (ADS)

    Chetverikov, V. M.

    2018-01-01

    The one-dimensional discrete Tent map is a well-known example of a map whose fixed points are all unstable on the segment [0,1]. This map leads to the positivity of the Lyapunov exponent for the corresponding recurrent sequence. Therefore, in a situation of general position, this sequence must demonstrate the properties of deterministic chaos. However, if the first term of the recurrence sequence is taken as a decimal fraction with a fixed number “k” of digits after the decimal point and all calculations are carried out exactly, then the situation turns out to be completely different. In this case, first, the Tent map does not increase the number of significant digits in the terms of the sequence, and second, it exhibits a finite number of eventually periodic orbits, which are attractors for all other decimal numbers whose number of significant digits does not exceed “k”.
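
    The finite-precision behaviour described above is easy to reproduce: writing x = n/10^k with an integer n keeps the tent-map arithmetic exact, so every orbit must eventually enter a cycle. The sketch below (with k = 3, an arbitrary choice) records the pre-period and period of each starting value.

```python
def tent_orbit(n0: int, k: int) -> tuple[int, int]:
    """Iterate the tent map exactly on x = n/10**k; return (pre-period length, period length)."""
    scale = 10 ** k
    seen = {}
    n, step = n0, 0
    while n not in seen:
        seen[n] = step
        n = 2 * n if 2 * n <= scale else 2 * (scale - n)  # exact tent map on k-digit decimals
        step += 1
    return seen[n], step - seen[n]

k = 3
periods = {tent_orbit(n, k)[1] for n in range(10 ** k + 1)}
print("cycle lengths reachable from 3-digit decimals:", sorted(periods))
```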

  1. Conceptual structure and the procedural affordances of rational numbers: relational reasoning with fractions and decimals.

    PubMed

    DeWolf, Melissa; Bassok, Miriam; Holyoak, Keith J

    2015-02-01

    The standard number system includes several distinct types of notations, which differ conceptually and afford different procedures. Among notations for rational numbers, the bipartite format of fractions (a/b) enables them to represent 2-dimensional relations between sets of discrete (i.e., countable) elements (e.g., red marbles/all marbles). In contrast, the format of decimals is inherently 1-dimensional, expressing a continuous-valued magnitude (i.e., proportion) but not a 2-dimensional relation between sets of countable elements. Experiment 1 showed that college students indeed view these 2-number notations as conceptually distinct. In a task that did not involve mathematical calculations, participants showed a strong preference to represent partitioned displays of discrete objects with fractions and partitioned displays of continuous masses with decimals. Experiment 2 provided evidence that people are better able to identify and evaluate ratio relationships using fractions than decimals, especially for discrete (or discretized) quantities. Experiments 3 and 4 found a similar pattern of performance for a more complex analogical reasoning task. When solving relational reasoning problems based on discrete or discretized quantities, fractions yielded greater accuracy than decimals; in contrast, when quantities were continuous, accuracy was lower for both symbolic notations. Whereas previous research has established that decimals are more effective than fractions in supporting magnitude comparisons, the present study reveals that fractions are relatively advantageous in supporting relational reasoning with discrete (or discretized) concepts. These findings provide an explanation for the effectiveness of natural frequency formats in supporting some types of reasoning, and have implications for teaching of rational numbers.

  2. Magnitude comparison with different types of rational numbers.

    PubMed

    DeWolf, Melissa; Grounds, Margaret A; Bassok, Miriam; Holyoak, Keith J

    2014-02-01

    An important issue in understanding mathematical cognition involves the similarities and differences between the magnitude representations associated with various types of rational numbers. For single-digit integers, evidence indicates that magnitudes are represented as analog values on a mental number line, such that magnitude comparisons are made more quickly and accurately as the numerical distance between numbers increases (the distance effect). Evidence concerning a distance effect for compositional numbers (e.g., multidigit whole numbers, fractions and decimals) is mixed. We compared the patterns of response times and errors for college students in magnitude comparison tasks across closely matched sets of rational numbers (e.g., 22/37, 0.595, 595). In Experiment 1, a distance effect was found for both fractions and decimals, but response times were dramatically slower for fractions than for decimals. Experiments 2 and 3 compared performance across fractions, decimals, and 3-digit integers. Response patterns for decimals and integers were extremely similar but, as in Experiment 1, magnitude comparisons based on fractions were dramatically slower, even when the decimals varied in precision (i.e., number of place digits) and could not be compared in the same way as multidigit integers (Experiment 3). Our findings indicate that comparisons of all three types of numbers exhibit a distance effect, but that processing often involves strategic focus on components of numbers. Fractions impose an especially high processing burden due to their bipartite (a/b) structure. In contrast to the other number types, the magnitude values associated with fractions appear to be less precise, and more dependent on explicit calculation. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  3. Software Defined Radio (SDR) and Direct Digital Synthesizer (DDS) for NMR/MRI instruments at low-field.

    PubMed

    Asfour, Aktham; Raoof, Kosai; Yonnet, Jean-Paul

    2013-11-27

    A proof-of-concept of the use of fully digital radiofrequency (RF) electronics for the design of dedicated Nuclear Magnetic Resonance (NMR) systems at low-field (0.1 T) is presented. This digital electronics is based on the use of three key elements: a Direct Digital Synthesizer (DDS) for pulse generation, a Software Defined Radio (SDR) for digital reception of the NMR signals, and a Digital Signal Processor (DSP) for system control and for the generation of the gradient signals (pulse programmer). The SDR includes direct analog-to-digital conversion and Digital Down Conversion (digital quadrature demodulation, decimation filtering, processing gain…). The various aspects of the concept and of the realization are addressed in some detail. These include both hardware design and software considerations. One of the underlying ideas is to let such NMR systems benefit from advanced technologies already developed in other research areas, especially in the telecommunications domain. Another goal is to make these systems easy to build and replicate, so as to help research groups realize dedicated NMR desktops for a large palette of new applications. We also would like to give readers an idea of the current trends in this field. The performances of the developed electronics are discussed throughout the paper. First FID (Free Induction Decay) signals are also presented. Some development perspectives of our work in the area of low-field NMR/MRI are finally addressed.
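
    A minimal sketch of the digital receive chain named above (direct digitization, digital quadrature demodulation, decimation filtering); the sample rate, signal frequency, and decimation factor are illustrative assumptions rather than the parameters of the 0.1 T system.

```python
import numpy as np
from scipy.signal import firwin, lfilter

fs = 1_000_000          # illustrative ADC sample rate (Hz)
f_rf = 200_000          # illustrative NMR signal frequency (Hz)
decim = 50              # decimation factor -> 20 kHz output rate

t = np.arange(50_000) / fs
signal = np.cos(2 * np.pi * f_rf * t) * np.exp(-t / 0.02)   # toy FID at the RF frequency

# Digital quadrature demodulation: multiply by a complex local oscillator at f_rf
baseband = signal * np.exp(-2j * np.pi * f_rf * t)

# Decimation filtering: low-pass FIR sized for the new Nyquist rate, then keep every decim-th sample
taps = firwin(numtaps=129, cutoff=0.8 * (fs / decim) / 2, fs=fs)
fid = lfilter(taps, 1.0, baseband)[::decim]   # complex FID at fs/decim, ready for further processing
```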

  4. Visual acuity testing in diabetic subjects: the decimal progression chart versus the Freiburg visual acuity test.

    PubMed

    Loumann Knudsen, Lars

    2003-08-01

    To study reproducibility and biological variation of visual acuity in diabetic maculopathy, using two different visual acuity tests, the decimal progression chart and the Freiburg visual acuity test. Twenty-two eyes in 11 diabetic subjects were examined several times within a 12-month period using both visual acuity tests. The most commonly used visual acuity test in Denmark (the decimal progression chart) was compared to the Freiburg visual acuity test (automated testing) in a paired study. Correlation analysis revealed agreement between the two methods (r² = 0.79; slope = 0.82; y-axis intercept = 0.01). The mean visual acuity was found to be 15% higher (P<0.0001) with the decimal progression chart than with the Freiburg visual acuity test. The reproducibility was the same in both tests (coefficient of variation: 12% for each test); however, the variation within the 12-month examination period differed significantly. The coefficient of variation was 17% using the decimal progression chart, 35% with the Freiburg visual acuity test. The reproducibility of the two visual acuity tests is comparable under optimal testing conditions in diabetic subjects with macular oedema. However, it appears that the Freiburg visual acuity test is significantly better for detection of biological variation.

  5. Bit error rate tester using fast parallel generation of linear recurring sequences

    DOEpatents

    Pierson, Lyndon G.; Witzke, Edward L.; Maestas, Joseph H.

    2003-05-06

    A fast method for generating linear recurring sequences by parallel linear recurring sequence generators (LRSGs) with a feedback circuit optimized to balance minimum propagation delay against maximal sequence period. Parallel generation of linear recurring sequences requires decimating the sequence (creating small contiguous sections of the sequence in each LRSG). A companion matrix form is selected depending on whether the LFSR is right-shifting or left-shifting. The companion matrix is completed by selecting a primitive irreducible polynomial with 1's most closely grouped in a corner of the companion matrix. A decimation matrix is created by raising the companion matrix to the (n*k).sup.th power, where k is the number of parallel LRSGs and n is the number of bits to be generated at a time by each LRSG. Companion matrices with 1's closely grouped in a corner will yield sparse decimation matrices. A feedback circuit comprised of XOR logic gates implements the decimation matrix in hardware. Sparse decimation matrices can be implemented with minimum number of XOR gates, and therefore a minimum propagation delay through the feedback circuit. The LRSG of the invention is particularly well suited to use as a bit error rate tester on high speed communication lines because it permits the receiver to synchronize to the transmitted pattern within 2n bits.
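
    The core linear-algebra step can be sketched directly: the companion matrix of the LFSR's primitive polynomial, raised to the (n·k)-th power over GF(2), is the decimation matrix that advances the state n·k steps at once. The polynomial x^4 + x + 1 and the values of n and k below are illustrative, not those of the patent.

```python
import numpy as np

def companion_matrix_gf2(coeffs):
    """Companion matrix over GF(2) of the recurrence s[t+m] = c0*s[t] + ... + c_{m-1}*s[t+m-1]."""
    m = len(coeffs)
    c = np.zeros((m, m), dtype=np.uint8)
    c[:-1, 1:] = np.eye(m - 1, dtype=np.uint8)  # shift part: s[t+i] -> s[t+i+1]
    c[-1, :] = coeffs                            # feedback row from the recurrence
    return c

def mat_pow_gf2(a, e):
    """Square-and-multiply matrix exponentiation with all arithmetic mod 2."""
    result = np.eye(a.shape[0], dtype=np.uint8)
    while e:
        if e & 1:
            result = (result @ a) % 2
        a = (a @ a) % 2
        e >>= 1
    return result

# Primitive polynomial x^4 + x + 1 -> recurrence s[t+4] = s[t+1] + s[t] (mod 2)
C = companion_matrix_gf2([1, 1, 0, 0])
n_bits, k_lrsg = 4, 3                    # illustrative: 3 parallel LRSGs, 4 bits each per step
D = mat_pow_gf2(C, n_bits * k_lrsg)      # decimation matrix: jumps 12 steps at once

state = np.array([1, 0, 0, 1], dtype=np.uint8)
stepped = state.copy()
for _ in range(n_bits * k_lrsg):          # advance one step at a time for comparison
    stepped = (C @ stepped) % 2
assert np.array_equal((D @ state) % 2, stepped)
```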

  6. FPGA-based Fused Smart Sensor for Real-Time Plant-Transpiration Dynamic Estimation

    PubMed Central

    Millan-Almaraz, Jesus Roberto; de Jesus Romero-Troncoso, Rene; Guevara-Gonzalez, Ramon Gerardo; Contreras-Medina, Luis Miguel; Carrillo-Serrano, Roberto Valentin; Osornio-Rios, Roque Alfredo; Duarte-Galvan, Carlos; Rios-Alcaraz, Miguel Angel; Torres-Pacheco, Irineo

    2010-01-01

    Plant transpiration is considered one of the most important physiological functions because it constitutes the plant's evolving adaptation to exchange moisture with a dry atmosphere, which can dehydrate or eventually kill the plant. Due to the importance of transpiration, accurate measurement methods are required; therefore, a smart sensor that fuses five primary sensors is proposed which can measure air temperature, leaf temperature, air relative humidity, plant out relative humidity and ambient light. A field programmable gate array based unit applies signal processing algorithms such as average decimation and infinite impulse response filtering to the primary sensor readings in order to reduce the signal noise and improve its quality. Once the primary sensor readings are filtered, transpiration dynamics such as transpiration, stomatal conductance, leaf-air temperature difference and vapor pressure deficit are calculated in real time by the smart sensor. This permits the user to observe different primary and calculated measurements at the same time, and the relationships between them, which is very useful in precision agriculture for the detection of abnormal conditions. Finally, transpiration-related stress conditions can be detected in real time because of the use of online processing and embedded communications capabilities. PMID:22163656
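
    A minimal sketch of the two pre-processing steps applied to the primary sensor readings, average decimation followed by a first-order infinite impulse response low-pass; the decimation factor, filter coefficient, and synthetic leaf-temperature signal are illustrative assumptions.

```python
import numpy as np

def average_decimate(x: np.ndarray, factor: int) -> np.ndarray:
    """Average non-overlapping blocks of `factor` samples, reducing both rate and noise."""
    n = (len(x) // factor) * factor
    return x[:n].reshape(-1, factor).mean(axis=1)

def iir_lowpass(x: np.ndarray, alpha: float) -> np.ndarray:
    """First-order IIR low-pass: y[n] = alpha*x[n] + (1 - alpha)*y[n-1]."""
    y = np.empty_like(x)
    acc = x[0]
    for i, sample in enumerate(x):
        acc = alpha * sample + (1 - alpha) * acc
        y[i] = acc
    return y

rng = np.random.default_rng(1)
leaf_temp = 24.0 + 0.5 * np.sin(np.linspace(0, 4 * np.pi, 4000)) + rng.normal(0, 0.3, 4000)
clean = iir_lowpass(average_decimate(leaf_temp, factor=10), alpha=0.05)
```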

  7. Classification in Australia.

    ERIC Educational Resources Information Center

    McKinlay, John

    Despite some inroads by the Library of Congress Classification and short-lived experimentation with Universal Decimal Classification and Bliss Classification, Dewey Decimal Classification, with its ability in recent editions to be hospitable to local needs, remains the most widely used classification system in Australia. Although supplemented at…

  8. The computation of pi to 29,360,000 decimal digits using Borweins' quartically convergent algorithm

    NASA Technical Reports Server (NTRS)

    Bailey, David H.

    1988-01-01

    The quartically convergent numerical algorithm developed by Borwein and Borwein (1987) for 1/pi is implemented via a prime-modulus-transform multiprecision technique on the NASA Ames Cray-2 supercomputer to compute the first 2.936 × 10^7 digits of the decimal expansion of pi. The history of pi computations is briefly recalled; the most recent algorithms are characterized; the implementation procedures are described; and samples of the output listing are presented. Statistical analyses show that the present decimal expansion is completely random, with only acceptable numbers of long repeating strings and single-digit runs.
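
    The Borwein quartic iteration itself is short; a sketch using mpmath for the multiprecision arithmetic (rather than the prime-modulus-transform technique used on the Cray-2) is shown below, with the working precision chosen arbitrarily.

```python
from mpmath import mp, mpf, sqrt

mp.dps = 1000                     # working precision in decimal digits (illustrative)

y = sqrt(2) - 1                   # y_0
a = 6 - 4 * sqrt(2)               # a_0; a_k converges quartically to 1/pi
for k in range(5):                # correct digits roughly quadruple per iteration
    r = (1 - y ** 4) ** mpf("0.25")
    y = (1 - r) / (1 + r)
    a = a * (1 + y) ** 4 - 2 ** (2 * k + 3) * y * (1 + y + y ** 2)

pi_estimate = 1 / a
print(mp.pi - pi_estimate)        # agrees with pi to the full working precision
```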

  9. Tracking Decimal Misconceptions: Strategic Instructional Choices

    ERIC Educational Resources Information Center

    Griffin, Linda B.

    2016-01-01

    Understanding the decimal system is challenging, requiring coordination of place-value concepts with features of whole-number and fraction knowledge (Moloney and Stacey 1997). Moreover, the learner must discern if and how previously learned concepts and procedures apply. The process is complex, and misconceptions will naturally arise. In a…

  10. Dewey Decimal Classification: A Quagmire.

    ERIC Educational Resources Information Center

    Gamaluddin, Ahmad Fouad

    1980-01-01

    A survey of 660 Pennsylvania school librarians indicates that, though there is limited professional interest in the Library of Congress Classification system, Dewey Decimal Classification (DDC) appears to be firmly entrenched. This article also discusses the relative merits of DDC, the need for a uniform system, librarianship preparation, and…

  11. MARC Coding of DDC for Subject Retrieval.

    ERIC Educational Resources Information Center

    Wajenberg, Arnold S.

    1983-01-01

    Recommends an expansion of MARC codes for decimal class numbers to enhance automated subject retrieval. Five values for a second indicator and two new subfields are suggested for encoding hierarchical relationships among decimal class numbers. Additional subfields are suggested to enhance retrieval through analysis of synthesized numbers in…

  12. Conversions Rock! Lessons & Worksheets to Build Skills in Equivalent Conversions. Poster/Teaching Guide. Expect the Unexpected with Math[R]

    ERIC Educational Resources Information Center

    Actuarial Foundation, 2013

    2013-01-01

    "Welcome to Conversions Rock" is a new math program designed to build and reinforce the important skills of converting fractions, decimals, and percents for students in grades 6-8. Developed by The Actuarial Foundation, this program seeks to provide skill-building, real-world math to help students become successful in the classroom and beyond. [A…

  13. WWC Review of the Report "Does Working Memory Moderate the Effects of Fraction Intervention? An Aptitude-Treatment Interaction." What Works Clearinghouse Single Study Review

    ERIC Educational Resources Information Center

    What Works Clearinghouse, 2014

    2014-01-01

    The 2013 study, "Does Working Memory Moderate the Effects of Fraction Intervention? An Aptitude-Treatment Interaction," examined the impacts of the fluency and conceptual versions of "Fraction Face-Off!," a math instruction program designed to improve knowledge of fractions and decimals in fourth-graders at risk for low…

  14. 40 CFR 86.609-98 - Calculation and reporting of test results.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... decimal places contained in the applicable standard expressed to one additional significant figure... decimal places contained in the applicable emission standard expressed to one additional significant figure. Rounding is done in accordance with ASTM E 29-67, (reapproved 1980) (as referenced in § 86.094-28...

  15. Decipipes: Helping Students to "Get the Point"

    ERIC Educational Resources Information Center

    Moody, Bruce

    2011-01-01

    Decipipes are a representational model that can be used to help students develop conceptual understanding of decimal place value. They provide a non-standard tool for representing length, which in turn can be represented using conventional decimal notation. They are conceptually identical to Linear Arithmetic Blocks. This article reviews theory…

  16. 39 CFR 3055.65 - Special Services.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... within the Special Services group, report the percentage of time (rounded to one decimal place) that each... report the percentage of time (rounded to one decimal place) that each service meets or exceeds its...) Additional reporting for Post Office Box Service. For Post Office Box Service, report the percentage of time...

  17. 39 CFR 3055.65 - Special Services.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... within the Special Services group, report the percentage of time (rounded to one decimal place) that each... report the percentage of time (rounded to one decimal place) that each service meets or exceeds its...) Additional reporting for Post Office Box Service. For Post Office Box Service, report the percentage of time...

  18. 39 CFR 3055.65 - Special Services.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... within the Special Services group, report the percentage of time (rounded to one decimal place) that each... report the percentage of time (rounded to one decimal place) that each service meets or exceeds its...) Additional reporting for Post Office Box Service. For Post Office Box Service, report the percentage of time...

  19. 39 CFR 3055.65 - Special Services.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... within the Special Services group, report the percentage of time (rounded to one decimal place) that each... report the percentage of time (rounded to one decimal place) that each service meets or exceeds its...) Additional reporting for Post Office Box Service. For Post Office Box Service, report the percentage of time...

  20. Skylab

    NASA Image and Video Library

    1973-01-01

    This EREP photograph of the Uncompahgre Plateau area of Colorado illustrates the land use classification using the hierarchical numbering system to depict land forms and vegetative patterns. The numerator is a three-digit number with decimal components identifying the vegetation analog or land use conditions. The denominator uses a three-component decimal system for landscape characterization.

  1. Decimal Fraction Arithmetic: Logical Error Analysis and Its Validation.

    ERIC Educational Resources Information Center

    Standiford, Sally N.; And Others

    This report illustrates procedures of item construction for addition and subtraction examples involving decimal fractions. Using a procedural network of skills required to solve such examples, an item characteristic matrix of skills analysis was developed to describe the characteristics of the content domain by projected student difficulties. Then…

  2. Which Type of Rational Numbers Should Students Learn First?

    ERIC Educational Resources Information Center

    Tian, Jing; Siegler, Robert S.

    2017-01-01

    Many children and adults have difficulty gaining a comprehensive understanding of rational numbers. Although fractions are taught before decimals and percentages in many countries, including the USA, a number of researchers have argued that decimals are easier to learn than fractions and therefore teaching them first might mitigate children's…

  3. Sunny with a Chance of Tenths! Using the Familiar Context of Temperature to Support Teaching Decimals

    ERIC Educational Resources Information Center

    Beaman, Belinda

    2013-01-01

    As teachers we are encouraged to contextualize the mathematics that we teach. In this article, Belinda Beaman explains how she used the weather as a context for developing decimal understanding. We particularly enjoyed reading how the students were involved in estimating.

  4. Discovery

    ERIC Educational Resources Information Center

    de Mestre, Neville

    2010-01-01

    All common fractions can be written in decimal form. In this Discovery article, the author suggests that teachers ask their students to calculate the decimals by actually doing the divisions themselves, and later on they can use a calculator to check their answers. This article presents a lesson based on the research of Bolt (1982).

  5. Comparing Instructional Strategies for Integrating Conceptual and Procedural Knowledge.

    ERIC Educational Resources Information Center

    Rittle-Johnson, Bethany; Koedinger, Kenneth R.

    We compared alternative instructional strategies for integrating knowledge of decimal place value and regrouping concepts with procedures for adding and subtracting decimals. The first condition was based on recent research suggesting that conceptual and procedural knowledge develop in an iterative, hand over hand fashion. In this iterative…

  6. Conceptual Knowledge of Decimal Arithmetic

    ERIC Educational Resources Information Center

    Lortie-Forgues, Hugues; Siegler, Robert S.

    2016-01-01

    In two studies (N's = 55 and 54), we examined a basic form of conceptual understanding of rational number arithmetic, the direction of effect of decimal arithmetic operations, at a level of detail useful for informing instruction. Middle school students were presented tasks examining knowledge of the direction of effects (e.g., "True or…

  7. Design of Efficient Mirror Adder in Quantum- Dot Cellular Automata

    NASA Astrophysics Data System (ADS)

    Mishra, Prashant Kumar; Chattopadhyay, Manju K.

    2018-03-01

    Low power consumption is an essential demand for portable multimedia systems using digital signal processing algorithms and architectures. Quantum dot cellular automata (QCA) is an emerging nanotechnology for the development of high performance, ultra-dense, low power digital circuits. Several efficient QCA-based binary and decimal arithmetic circuits have been implemented; however, important improvements are still possible. This paper demonstrates a mirror adder circuit design in QCA. We present a comparative study of mirror adder cells designed using the conventional CMOS technique and mirror adder cells designed using quantum-dot cellular automata. The QCA-based mirror adders are better in terms of area by about a factor of three.

  8. Sull'Integrazione delle Strutture Numeriche nella Scuola dell'Obbligo (Integrating Numerical Structures in Mandatory School).

    ERIC Educational Resources Information Center

    Bonotto, C.

    1995-01-01

    Attempted to verify knowledge regarding decimal and rational numbers in children ages 10-14. Discusses how pupils can receive and assimilate extensions of the number system from natural numbers to decimals and fractions and later can integrate this extension into a single and coherent numerical structure. (Author/MKR)

  9. 40 CFR 86.1823-01 - Durability demonstration procedures for exhaust emissions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... (including both hardware and software) must be installed and operating for the entire mileage accumulation... decimal places) from the regression analysis; the result shall be rounded to three-decimal places of... less than one shall be changed to one for the purposes of this paragraph. (2) An additive DF will be...

  10. Why Is Learning Fraction and Decimal Arithmetic so Difficult?

    ERIC Educational Resources Information Center

    Lortie-Forgues, Hugues; Tian, Jing; Siegler, Robert S.

    2015-01-01

    Fraction and decimal arithmetic are crucial for later mathematics achievement and for ability to succeed in many professions. Unfortunately, these capabilities pose large difficulties for many children and adults, and students' proficiency in them has shown little sign of improvement over the past three decades. To summarize what is known about…

  11. Ambiguity in Units and the Referents: Two Cases in Rational Number Operations

    ERIC Educational Resources Information Center

    Rathouz, Margaret

    2010-01-01

    I explore the impact of ambiguous referral to the unit on understanding of decimal and fraction operations during episodes in two different mathematics courses for pre-service teachers (PSTs). In one classroom, the instructor introduces a rectangular area diagram to help the PSTs visualize decimal multiplication. A transcript from this classroom…

  12. Dewey Decimal Classification for U. S. Conn: An Advantage?

    ERIC Educational Resources Information Center

    Marek, Kate

    This paper examines the use of the Dewey Decimal Classification (DDC) system at the U. S. Conn Library at Wayne State College (WSC) in Nebraska. Several developments in the last 20 years which have eliminated the trend toward reclassification of academic library collections from DDC to the Library of Congress (LC) classification scheme are…

  13. Modeling discrete and continuous entities with fractions and decimals.

    PubMed

    Rapp, Monica; Bassok, Miriam; DeWolf, Melissa; Holyoak, Keith J

    2015-03-01

    When people use mathematics to model real-life situations, their use of mathematical expressions is often mediated by semantic alignment (Bassok, Chase, & Martin, 1998): The entities in a problem situation evoke semantic relations (e.g., tulips and vases evoke the functionally asymmetric "contain" relation), which people align with analogous mathematical relations (e.g., the noncommutative division operation, tulips/vases). Here we investigate the possibility that semantic alignment is also involved in the comprehension and use of rational numbers (fractions and decimals). A textbook analysis and results from two experiments revealed that both mathematic educators and college students tend to align the discreteness versus continuity of the entities in word problems (e.g., marbles vs. distance) with distinct symbolic representations of rational numbers--fractions versus decimals, respectively. In addition, fractions and decimals tend to be used with nonmetric units and metric units, respectively. We discuss the importance of the ontological distinction between continuous and discrete entities to mathematical cognition, the role of symbolic notations, and possible implications of our findings for the teaching of rational numbers. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  14. Rational-number comparison across notation: Fractions, decimals, and whole numbers.

    PubMed

    Hurst, Michelle; Cordes, Sara

    2016-02-01

    Although fractions, decimals, and whole numbers can be used to represent the same rational-number values, it is unclear whether adults conceive of these rational-number magnitudes as lying along the same ordered mental continuum. In the current study, we investigated whether adults' processing of rational-number magnitudes in fraction, decimal, and whole-number notation show systematic ratio-dependent responding characteristic of an integrated mental continuum. Both reaction time (RT) and eye-tracking data from a number-magnitude comparison task revealed ratio-dependent performance when adults compared the relative magnitudes of rational numbers, both within the same notation (e.g., fractions vs. fractions) and across different notations (e.g., fractions vs. decimals), pointing to an integrated mental continuum for rational numbers across notation types. In addition, eye-tracking analyses provided evidence of an implicit whole-number bias when we compared values in fraction notation, and individual differences in this whole-number bias were related to the individual's performance on a fraction arithmetic task. Implications of our results for both cognitive development research and math education are discussed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  15. Analysis of the Mean Absolute Error (MAE) and the Root Mean Square Error (RMSE) in Assessing Rounding Model

    NASA Astrophysics Data System (ADS)

    Wang, Weijie; Lu, Yanmin

    2018-03-01

    Most existing Collaborative Filtering (CF) algorithms predict a rating as the preference of an active user toward a given item, which is always a decimal fraction. Meanwhile, the actual ratings in most data sets are integers. In this paper, we discuss and demonstrate why rounding influences these two metrics differently, and we show that rounding the predicted ratings in post-processing is necessary to eliminate model prediction bias and improve prediction accuracy. In addition, we propose two new rounding approaches based on the predicted rating probability distribution, which round each predicted rating to an optimal integer rating and achieve better prediction accuracy than the basic rounding approach. Extensive experiments on different data sets validate the correctness of our analysis and the effectiveness of the proposed rounding approaches.
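
    The effect being discussed is easy to reproduce on synthetic data: decimal-valued predictions are rounded to the nearest integer rating and the two error metrics are recomputed. The rating scale, error model, and basic rounding rule below are illustrative assumptions, not the paper's data sets or its proposed probabilistic rounding approaches.

```python
import numpy as np

rng = np.random.default_rng(42)
true_ratings = rng.integers(1, 6, size=10_000).astype(float)                # actual ratings: integers 1..5
predicted = np.clip(true_ratings + rng.normal(0, 0.7, size=10_000), 1, 5)   # CF output: decimal fractions

def mae(y, p):  return np.mean(np.abs(y - p))
def rmse(y, p): return np.sqrt(np.mean((y - p) ** 2))

rounded = np.rint(predicted)   # basic rounding to the nearest integer rating
print("raw     MAE/RMSE:", mae(true_ratings, predicted), rmse(true_ratings, predicted))
print("rounded MAE/RMSE:", mae(true_ratings, rounded),   rmse(true_ratings, rounded))
```

    Whether rounding helps or hurts each metric depends on the error distribution, which is exactly the comparison the abstract addresses.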

  16. Advanced image fusion algorithms for Gamma Knife treatment planning. Evaluation and proposal for clinical use.

    PubMed

    Apostolou, N; Papazoglou, Th; Koutsouris, D

    2006-01-01

    Image fusion is a process of combining information from multiple sensors. It is a useful tool implemented in the treatment planning programme of Gamma Knife Radiosurgery. In this paper we evaluate advanced image fusion algorithms on the Matlab platform with head images. We implement nine grayscale image fusion methods in Matlab: average, principal component analysis (PCA), discrete wavelet transform (DWT), Laplacian, filter-subtract-decimate (FSD), contrast, gradient and morphological pyramid methods, and a shift-invariant discrete wavelet transform (SIDWT) method. We test these methods qualitatively and quantitatively. The quantitative criteria we use are the Root Mean Square Error (RMSE), the Mutual Information (MI), the Standard Deviation (STD), the Entropy (H), the Difference Entropy (DH) and the Cross Entropy (CEN). The qualitative criteria are: natural appearance, brilliance contrast, presence of complementary features and enhancement of common features. Finally we make clinically useful suggestions.
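
    One level of the filter-subtract-decimate (FSD) pyramid mentioned among the methods can be sketched as follows; a Gaussian low-pass stands in for the pyramid filter, and the fusion rule that would combine detail images from two modalities is not shown.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fsd_level(img: np.ndarray, sigma: float = 1.0):
    """One filter-subtract-decimate step: detail = img - lowpass, next level = decimated lowpass."""
    lowpass = gaussian_filter(img, sigma=sigma)   # filter
    detail = img - lowpass                        # subtract (band-pass detail kept for fusion)
    next_level = lowpass[::2, ::2]                # decimate by 2 in each direction
    return detail, next_level

img = np.random.default_rng(0).random((128, 128))
levels = []
current = img
for _ in range(3):                                # a 3-level pyramid, purely illustrative
    detail, current = fsd_level(current)
    levels.append(detail)
```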

  17. Computers: Yesterday, Today & Tomorrow.

    DTIC Science & Technology

    1986-04-07

    these repetitive calculations, he progressed through several scientific stages. THE ABACUS Invented more than 4,000 years ago, the abacus is considered...by many to have been the world’s first digital calculator. It uses beads and positional values to represent quantities. The abacus served as man’s...Pascal’s mathematical digital calculator, designed around the concept of serially connected decimal counting gears. These gears were interconnected in a 10

  18. GIFTS SM EDU Data Processing and Algorithms

    NASA Technical Reports Server (NTRS)

    Tian, Jialin; Johnson, David G.; Reisse, Robert A.; Gazarik, Michael J.

    2007-01-01

    The Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) Sensor Module (SM) Engineering Demonstration Unit (EDU) is a high resolution spectral imager designed to measure infrared (IR) radiances using a Fourier transform spectrometer (FTS). The GIFTS instrument employs three Focal Plane Arrays (FPAs), which gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The raw interferogram measurements are radiometrically and spectrally calibrated to produce radiance spectra, which are further processed to obtain atmospheric profiles via retrieval algorithms. This paper describes the processing algorithms involved in the calibration stage. The calibration procedures can be subdivided into three stages. In the pre-calibration stage, a phase correction algorithm is applied to the decimated and filtered complex interferogram. The resulting imaginary part of the spectrum contains only the noise component of the uncorrected spectrum. Additional random noise reduction can be accomplished by applying a spectral smoothing routine to the phase-corrected blackbody reference spectra. In the radiometric calibration stage, we first compute the spectral responsivity based on the previous results, from which, the calibrated ambient blackbody (ABB), hot blackbody (HBB), and scene spectra can be obtained. During the post-processing stage, we estimate the noise equivalent spectral radiance (NESR) from the calibrated ABB and HBB spectra. We then implement a correction scheme that compensates for the effect of fore-optics offsets. Finally, for off-axis pixels, the FPA off-axis effects correction is performed. To estimate the performance of the entire FPA, we developed an efficient method of generating pixel performance assessments. In addition, a random pixel selection scheme is designed based on the pixel performance evaluation.
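
    A numerical sketch of the two-point radiometric calibration step, computing a spectral responsivity from the ambient (ABB) and hot (HBB) blackbody views and applying it to a scene spectrum. The Planck constants are standard, but the wavenumber grid, blackbody temperatures, and toy instrument model are illustrative assumptions, and the phase-correction and off-axis corrections are not modelled.

```python
import numpy as np

C1 = 1.191042e-5   # mW / (m^2 sr cm^-4), first radiation constant (radiance form)
C2 = 1.4387752     # cm K, second radiation constant

def planck(wavenumber_cm, temp_k):
    """Blackbody spectral radiance in mW/(m^2 sr cm^-1) at the given wavenumbers."""
    return C1 * wavenumber_cm ** 3 / np.expm1(C2 * wavenumber_cm / temp_k)

wn = np.linspace(700, 1100, 400)      # illustrative LWIR band, cm^-1
t_abb, t_hbb = 290.0, 330.0           # illustrative blackbody temperatures (K)

# Toy instrument: unknown gain and offset per spectral channel plus noise
rng = np.random.default_rng(0)
gain = 0.8 + 0.001 * (wn - wn[0])
offset = 5.0
def measure(radiance):
    return gain * radiance + offset + rng.normal(0, 0.01, wn.size)

s_abb, s_hbb = measure(planck(wn, t_abb)), measure(planck(wn, t_hbb))
s_scene = measure(planck(wn, 260.0))   # a colder scene view

# Two-point calibration: responsivity from the two blackbody views, then invert for the scene
responsivity = (s_hbb - s_abb) / (planck(wn, t_hbb) - planck(wn, t_abb))
calibrated_scene = (s_scene - s_abb) / responsivity + planck(wn, t_abb)
```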

  19. Decimals Are Not Processed Automatically, Not Even as Being Smaller than One

    ERIC Educational Resources Information Center

    Kallai, Arava Y.; Tzelgov, Joseph

    2014-01-01

    Common fractions have been found to be processed intentionally but not automatically, which led to the conclusion that they are not represented holistically in long-term memory. However, decimals are more similar to natural numbers in their form and thus might be better candidates to be holistically represented by educated adults. To test this…

  20. 20 CFR 345.302 - Definition of terms and phrases used in experience-rating.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... for the current calendar year. This ratio is computed to four decimal places. (k) Pooled credit ratio... employer for the calendar year involved in the computation. This ratio is computed to four decimal places... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Definition of terms and phrases used in...

  1. An Exploratory Study of Fifth-Grade Students' Reasoning about the Relationship between Fractions and Decimals When Using Number Line-Based Virtual Manipulatives

    ERIC Educational Resources Information Center

    Smith, Scott

    2017-01-01

    Understanding the relationship between fractions and decimals is an important step in developing an overall understanding of rational numbers. Research has demonstrated the feasibility of technology in the form of virtual manipulatives for facilitating students' meaningful understanding of rational number concepts. This exploratory dissertation…

  2. Epistemic Trust and Education: Effects of Informant Reliability on Student Learning of Decimal Concepts

    ERIC Educational Resources Information Center

    Durkin, Kelley; Shafto, Patrick

    2016-01-01

    The epistemic trust literature emphasizes that children's evaluations of informants' trustworthiness affects learning, but there is no evidence that epistemic trust affects learning in academic domains. The current study investigated how reliability affects decimal learning. Fourth and fifth graders (N = 122; M[subscript age] = 10.1 years)…

  3. Cancer Therapy (Preclinical and Clinical): A Decimal Classification, (Categories 51.1, 51.2, and 51.3).

    ERIC Educational Resources Information Center

    Schneider, John H.

    This hierarchical decimal classification of information related to cancer therapy in humans and animals (preceeded by a few general categories) is a working draft of categories taken from an extensive classification of biomedical information. Because the classification identifies very small areas of cancer information, it can be used for precise…

  4. MATHEMATICAL CONSTANTS.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robinson, H.P.; Potter, Elinor

    1971-03-01

    This collection of mathematical data consists of two tables of decimal constants arranged according to size rather than function, a third table of integers from 1 to 1000, giving some of their properties, and a fourth table listing some infinite series arranged according to increasing size of the coefficients of the terms. The decimal values of Tables I and II are given to 20 D.

  5. Individualized Math Problems in Decimals. Oregon Vo-Tech Mathematics Problem Sets.

    ERIC Educational Resources Information Center

    Cosler, Norma, Ed.

    This is one of eighteen sets of individualized mathematics problems developed by the Oregon Vo-Tech Math Project. Each of these problem packages is organized around a mathematical topic and contains problems related to diverse vocations. Solutions are provided for all problems. Problems in this volume concern use of decimals and are related to the…

  6. A design method for high performance seismic data acquisition based on oversampling delta-sigma modulation

    NASA Astrophysics Data System (ADS)

    Gao, Shanghua; Xue, Bing

    2017-04-01

    The dynamic range of the currently most widely used 24-bit seismic data acquisition devices is 10-20 dB lower than that of broadband seismometers, and this can affect the completeness of seismic waveform recordings under certain conditions. However, this problem is not easy to solve because of the lack of analog to digital converter (ADC) chips with more than 24 bits on the market. The key difficulty for higher-resolution data acquisition devices therefore lies in building an ADC circuit with more than 24 bits. In this paper, we propose a method in which an adder, an integrator, a digital to analog converter chip, a field-programmable gate array, and an existing low-resolution ADC chip are used to build a third-order 16-bit oversampling delta-sigma modulator. This modulator is equipped with a digital decimation filter, thus forming a complete analog to digital converting circuit. Experimental results show that, within the 0.1-40 Hz frequency range, the circuit board's dynamic range reaches 158.2 dB, its resolution reaches 25.99 bits, and its linearity error is below 2.5 ppm, which is better than what is achieved by the commercial 24-bit ADC chips ADS1281 and CS5371. This demonstrates that the proposed method may alleviate or even solve the amplitude-limitation problem that broadband observation systems so commonly have to face during strong earthquakes.
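
    The principle can be illustrated with a first-order, 1-bit delta-sigma modulator followed by a simple averaging decimation filter; the paper's modulator is third-order with a 16-bit quantizer and a proper decimation filter, so this is only a sketch of the oversampling idea, and the oversampling ratio and test tone are arbitrary.

```python
import numpy as np

def first_order_dsm(x: np.ndarray) -> np.ndarray:
    """First-order delta-sigma modulator: 1-bit output whose local average tracks the input."""
    integrator, feedback = 0.0, 0.0
    bits = np.empty_like(x)
    for i, sample in enumerate(x):
        integrator += sample - feedback           # integrate the quantization error
        feedback = 1.0 if integrator >= 0 else -1.0
        bits[i] = feedback                        # coarse 1-bit quantizer output
    return bits

osr = 256                                         # oversampling ratio (illustrative)
n_out = 200
t = np.arange(n_out * osr)
x = 0.5 * np.sin(2 * np.pi * 4 * t / (n_out * osr))   # slow test tone, amplitude < 1

bitstream = first_order_dsm(x)
# Decimation filter: average each block of `osr` one-bit samples (a crude single-stage CIC)
decimated = bitstream.reshape(n_out, osr).mean(axis=1)
reference = x.reshape(n_out, osr).mean(axis=1)
print("max reconstruction error:", np.max(np.abs(decimated - reference)))
```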

  7. Robust adaptive 3-D segmentation of vessel laminae from fluorescence confocal microscope images and parallel GPU implementation.

    PubMed

    Narayanaswamy, Arunachalam; Dwarakapuram, Saritha; Bjornsson, Christopher S; Cutler, Barbara M; Shain, William; Roysam, Badrinath

    2010-03-01

    This paper presents robust 3-D algorithms to segment vasculature that is imaged by labeling laminae, rather than the lumenal volume. The signal is weak, sparse, noisy, nonuniform, low-contrast, and exhibits gaps and spectral artifacts, so adaptive thresholding and Hessian filtering based methods are not effective. The structure deviates from a tubular geometry, so tracing algorithms are not effective. We propose a four-step approach. The first step detects candidate voxels using a robust hypothesis test based on a model that assumes Poisson noise and locally planar geometry. The second step performs an adaptive region growth to extract weakly labeled and fine vessels while rejecting spectral artifacts. The third step, which enables interactive visualization and estimation of features such as statistical confidence, local curvature, local thickness, and local normal, constructs an accurate mesh representation using marching tetrahedra, volume-preserving smoothing, and adaptive decimation algorithms. The final step, which enables topological analysis and efficient validation, estimates vessel centerlines using a ray casting and vote accumulation algorithm. Our algorithm lends itself to parallel processing, and yielded an 8× speedup on a graphics processor (GPU). On synthetic data, our meshes had average error per face (EPF) values of 0.1-1.6 voxels per mesh face for peak signal-to-noise ratios from 110 dB down to 28 dB. Separately, when the mesh was decimated to less than 1% of its original size, the EPF was less than 1 voxel per face. When validated on real datasets, the average recall and precision values were found to be 94.66% and 94.84%, respectively.

  8. An Efficient, Highly Flexible Multi-Channel Digital Downconverter Architecture

    NASA Technical Reports Server (NTRS)

    Goodhart, Charles E.; Soriano, Melissa A.; Navarro, Robert; Trinh, Joseph T.; Sigman, Elliott H.

    2013-01-01

    In this innovation, a digital downconverter has been created that produces a large (16 or greater) number of output channels of smaller bandwidths. Additionally, this design has the flexibility to tune each channel independently to anywhere in the input bandwidth to cover a wide range of output bandwidths (from 32 MHz down to 1 kHz). Both the flexibility in channel frequency selection and the more than four orders of magnitude range in output bandwidths (decimation rates from 32 to 640,000) presented significant challenges to be solved. The solution involved breaking the digital downconversion process into a two-stage process. The first stage is a 2× oversampled filter bank that divides the whole input bandwidth (a real input signal) into seven overlapping, contiguous channels represented with complex samples. Using the symmetry of the sine and cosine functions in a similar way to that of an FFT (fast Fourier transform), this downconversion is very efficient and gives seven channels fixed in frequency. An arbitrary number of smaller bandwidth channels can be formed from second-stage downconverters placed after the first stage of downconversion. Because of the overlapping of the first stage, there is no gap in coverage of the entire input bandwidth. The input to any of the second-stage downconverting channels has a multiplexer that chooses one of the seven wideband channels from the first stage. These second-stage downconverters take up fewer resources because they operate at lower bandwidths than doing the entire downconversion process from the input bandwidth for each independent channel. These second-stage downconverters are each independent with fine frequency control tuning, providing extreme flexibility in positioning the center frequency of a downconverted channel. Finally, these second-stage downconverters have flexible decimation factors over four orders of magnitude. The algorithm was developed to run in an FPGA (field programmable gate array) at input data sampling rates of up to 1,280 MHz. The current implementation takes a 1,280-MHz real input, and first breaks it up into seven 160-MHz complex channels, each spaced 80 MHz apart. The eighth channel at baseband was not required for this implementation, which allowed further optimization. Afterwards, 16 second-stage narrowband channels with independently tunable center frequencies and bandwidth settings are implemented. A future implementation in a larger Xilinx FPGA will hold up to 32 independent second-stage channels.
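
    A scaled-down sketch of the two-stage idea: a fixed coarse stage selects a wide sub-band and decimates, and an independently tunable second stage shifts a narrow channel to baseband and decimates further. The oversampled filter-bank structure and the real rates (1,280 MHz input, 160 MHz first-stage channels) are not reproduced; all frequencies and decimation factors below are illustrative.

```python
import numpy as np
from scipy.signal import firwin, lfilter

fs_in = 1_024_000                      # illustrative input rate (Hz); the real design uses 1,280 MHz
stage1_decim, stage2_decim = 8, 16     # overall decimation 128; the real design spans 32 to 640,000

t = np.arange(200_000) / fs_in
x = np.cos(2 * np.pi * 130_000 * t)    # a narrowband signal somewhere in the input band

def downconvert(signal, fs, f_center, decim, numtaps=101):
    """Mix f_center down to baseband, low-pass at the new Nyquist rate, and decimate."""
    n = np.arange(len(signal))
    mixed = signal * np.exp(-2j * np.pi * f_center * n / fs)
    taps = firwin(numtaps, 0.8 * (fs / decim) / 2, fs=fs)
    return lfilter(taps, 1.0, mixed)[::decim], fs / decim

# Stage 1: coarse channel selection (fixed, shared among all output channels)
coarse, fs1 = downconvert(x, fs_in, f_center=128_000, decim=stage1_decim)

# Stage 2: fine, independently tunable shift of the residual 2 kHz offset, plus further decimation
channel, fs2 = downconvert(coarse, fs1, f_center=2_000, decim=stage2_decim)
```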

  9. Cancer Biochemistry and Host-Tumor Interactions: A Decimal Classification, (Categories 51.6, 51.7, and 51.8).

    ERIC Educational Resources Information Center

    Schneider, John H.

    This is a hierarchical decimal classification of information related to cancer biochemistry, to host-tumor interactions (including cancer immunology), and to occurrence of cancer in special types of animals and plants. It is a working draft of categories taken from an extensive classification of many fields of biomedical information. Because the…

  10. Assessment of the Knowledge of the Decimal Number System Exhibited by Students with Down Syndrome

    ERIC Educational Resources Information Center

    Noda, Aurelia; Bruno, Alicia

    2017-01-01

    This paper presents an assessment of the understanding of the decimal numeral system in students with Down Syndrome (DS). We followed a methodology based on a descriptive case study involving six students with DS. We used a framework of four constructs (counting, grouping, partitioning and numerical relationships) and five levels of thinking for…

  11. Origini Concettuali di Errori che si Riscontrano Nel Confrontare Numeri Decimali e Frazioni=Conceptual Sources of Difficulties Concerning the Ordering of Decimal Numbers and the Comparison of Fractions.

    ERIC Educational Resources Information Center

    Bonotto, C.

    1993-01-01

    Examined fifth-grade students' survey responses to investigate incorrect rules that derive from children's efforts to interpret decimals as integers or as fractions. Regarding fractions, difficulties arise because only the whole-part approach to fractions is presented in elementary school. (Author/MDH)

  12. Identify Fractions and Decimals on a Number Line

    ERIC Educational Resources Information Center

    Shaughnessy, Meghan M.

    2011-01-01

    Tasks that ask students to label rational number points on a number line are common not only in curricula in the upper elementary school grades but also on state assessments. Such tasks target foundational rational number concepts: A fraction (or a decimal) is more than a shaded part of an area, a part of a pizza, or a representation using…

  13. Rationals and Decimals as Required in the School Curriculum Part 3. Rationals and Decimals as Linear Functions

    ERIC Educational Resources Information Center

    Brousseau, Guy; Brousseau, Nadine; Warfield, Virginia

    2008-01-01

    In the late seventies, Guy Brousseau set himself the goal of verifying experimentally a theory he had been building up for a number of years. The theory, consistent with what was later named (non-radical) constructivism, was that children, in suitable carefully arranged circumstances, can build their own knowledge of mathematics. The experiment,…

  14. Rationals and Decimals as Required in the School Curriculum Part 2: From Rationals to Decimals

    ERIC Educational Resources Information Center

    Brousseau, Guy; Brousseau, Nadine; Warfield, Virginia

    2007-01-01

    In the late seventies, Guy Brousseau set himself the goal of verifying experimentally a theory he had been building up for a number of years. The theory, consistent with what was later named (non-radical) constructivism, was that children, in suitable carefully arranged circumstances, can build their own knowledge of mathematics. The experiment,…

  15. 49 CFR 565.15 - Content requirements.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... = 8 J = 1 K = 2 L = 3 M = 4 N = 5 P = 7 R = 9 S = 2 T = 3 U = 4 V = 5 W = 6 X = 7 Y = 8 Z = 9 (2... Decimal Equivalent Remainder as reflected in Table V. All Decimal Equivalent Remainders in Table V are... in VIN position nine (9). Table V—Ninth Position Check Digit Values [Rounded to the nearest...

  16. Understanding Decimal Proportions: Discrete Representations, Parallel Access, and Privileged Processing of Zero

    ERIC Educational Resources Information Center

    Varma, Sashank; Karl, Stacy R.

    2013-01-01

    Much of the research on mathematical cognition has focused on the numbers 1, 2, 3, 4, 5, 6, 7, 8, and 9, with considerably less attention paid to more abstract number classes. The current research investigated how people understand decimal proportions--rational numbers between 0 and 1 expressed in the place-value symbol system. The results…

  17. A high-speed on-chip pseudo-random binary sequence generator for multi-tone phase calibration

    NASA Astrophysics Data System (ADS)

    Gommé, Liesbeth; Vandersteen, Gerd; Rolain, Yves

    2011-07-01

    An on-chip reference generator is conceived by adopting the technique of decimating a pseudo-random binary sequence (PRBS) signal into parallel sequences. This is of great benefit when high-speed generation of PRBS and PRBS-derived signals is the objective. The design is implemented with standard CMOS logic available in commercial libraries, which provides the logic functions for the generator. The design allows the user to select the periodicity of the PRBS and the PRBS-derived signals. The characterization of the on-chip generator benchmarks its performance and reveals promising specifications.
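
    For context, the decimation of a serial PRBS into parallel lower-rate sequences can be emulated in a few lines; this models only the bit pattern, not the CMOS implementation, and the PRBS7 recurrence below is one common choice.

        import numpy as np

        def prbs7(length, seed=0x7F):
            """Maximal-length PRBS from the recurrence a[n] = a[n-1] XOR a[n-7] (period 127)."""
            state = seed & 0x7F
            out = np.empty(length, dtype=np.uint8)
            for i in range(length):
                out[i] = state & 1                      # output the oldest bit
                newbit = ((state >> 6) ^ state) & 1     # feedback bit
                state = (state >> 1) | (newbit << 6)
            return out

        # One high-rate PRBS realized as several lower-rate parallel sequences,
        # here by decimating the serial stream by 4.
        serial = prbs7(127 * 4)
        parallel = [serial[k::4] for k in range(4)]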

  18. Reclassification: Rationale and Problems; Proceedings of a Conference on Reclassification held at the Center of Adult Education, University of Maryland, College Park, April 4 to 6, 1968.

    ERIC Educational Resources Information Center

    Perreault, Jean M., Ed.

    Several factors are involved in the decision to reclassify library collections and several problems and choices must be faced. The discussion of four classification schemes (Dewey Decimal, Library of Congress, Library of Congress subject-headings and Universal Decimal Classification) involved in the choices concerns their structure, currency,…

  19. Evaluation of the Retrieval of Nuclear Science Document References Using the Universal Decimal Classification as the Indexing Language for a Computer-Based System

    ERIC Educational Resources Information Center

    Atherton, Pauline; And Others

    A single issue of Nuclear Science Abstracts, containing about 2,300 abstracts, was indexed by Universal Decimal Classification (UDC) using the Special Subject Edition of UDC for Nuclear Science and Technology. The descriptive cataloging and UDC-indexing records formed a computer-stored data base. A systematic random sample of 500 additional…

  20. Delayed Learning Effects with Erroneous Examples: A Study of Learning Decimals with a Web-Based Tutor

    ERIC Educational Resources Information Center

    McLaren, Bruce M.; Adams, Deanne M.; Mayer, Richard E.

    2015-01-01

    Erroneous examples--step-by-step problem solutions with one or more errors for students to find and fix--hold great potential to help students learn. In this study, which is a replication of a prior study (Adams et al. 2014), but with a much larger population (390 vs. 208), middle school students learned about decimals either by working with…

  1. New Perspectives for Didactical Engineering: An Example for the Development of a Resource for Teaching Decimal Number System

    ERIC Educational Resources Information Center

    Tempier, Frédérick

    2016-01-01

    Many studies have shown the difficulties of learning and teaching the decimal number system for whole numbers. In the case of numbers bigger than one hundred, complexity is partly due to the multitude of possible relationships between units. This study was aimed to develop conditions of a resource which can help teachers to enhance their teaching…

  2. Dewey Decimal Classification Online Project: Interim Reports to the Council on Library Resources, April 1984, September 1984, and February 1985.

    ERIC Educational Resources Information Center

    Markey, Karen; Demeyer, Anh N.

    This research project focuses on the implementation and testing of the Dewey Decimal Classification (DDC) system as an online searcher's tool for subject access, browsing, and display in an online catalog. The research project comprises 12 activities. The three interim reports in this document cover the first seven of these activities: (1) obtain…

  3. A New Property of Repeating Decimals

    ERIC Educational Resources Information Center

    Arledge, Jane; Tekansik, Sarah

    2008-01-01

    As extended by Ginsberg, Midy's theorem says that if the repeated section of a decimal expansion of a prime is split into appropriate blocks and these are added, the result is a string of nines. We show that if the expansion of 1/p^(n+1) is treated the same way, instead of being a string of nines, the sum is related to the period of…

  4. Dewey Decimal Classification Online Project: Evaluation of a Library Schedule and Index Integrated into the Subject Searching Capabilities of an Online Catalog. Final Report.

    ERIC Educational Resources Information Center

    Markey, Karen; Demeyer, Anh N.

    In this research project, subject terms from the Dewey Decimal Classification (DDC) Schedules and Relative Index were incorporated into an online catalog as searcher's tools for subject access, browsing, and display. Four features of the DDC were employed to help searchers browse for and match their own subject terms with the online catalog's…

  5. Proceedings of the 27th International Group for the Psychology of Mathematics Education Conference Held Jointly with the 25th PME-NA Conference (Honolulu, Hawaii, July 13-18, 2003). Volume 4

    ERIC Educational Resources Information Center

    Pateman, Neil A., Ed; Dougherty, Barbara J., Ed.; Zilliox, Joseph T., Ed.

    2003-01-01

    This volume of the 27th International Group for the Psychology of Mathematics Education Conference includes the following research reports: (1) Improving Decimal Number Conception by Transfer from Fractions to Decimals (Irita Peled and Juhaina Awawdy Shahbari); (2) The Development of Student Teachers' Efficacy Beliefs in Mathematics during…

  6. Research of future network with multi-layer IP address

    NASA Astrophysics Data System (ADS)

    Li, Guoling; Long, Zhaohua; Wei, Ziqiang

    2018-04-01

    The shortage of IP addresses and the scalability of routing systems [1] are challenges for the Internet. Dividing existing IP addresses into identities and locations is one of the important research directions. This paper proposes a new decimal network architecture based on IPv9 [11], in which the decimal network IP address draws on the E.164 numbering principle of the traditional telecommunication network. The IP addresses are divided hierarchically, which helps to separate the identification and location roles of an IP address, forms a multi-layer IP address network structure, eases the scalability problem of the routing system, and offers a way out of IPv4 address exhaustion. In addition, DNS [10] is modified slightly and a digital-domain function is added, forming a DDNS [12]. At the same time, a gateway device, the IPv9 gateway, is added; the original backbone network and user networks are unchanged.

  7. Effect of input data variability on estimations of the equivalent constant temperature time for microbial inactivation by HTST and retort thermal processing.

    PubMed

    Salgado, Diana; Torres, J Antonio; Welti-Chanes, Jorge; Velazquez, Gonzalo

    2011-08-01

    Consumer demand for food safety and quality improvements, combined with new regulations, requires determining the processor's confidence level that processes lowering safety risks while retaining quality will meet consumer expectations and regulatory requirements. Monte Carlo calculation procedures incorporate input data variability to obtain the statistical distribution of the output of prediction models. This advantage was used to analyze the survival risk of Mycobacterium avium subspecies paratuberculosis (M. paratuberculosis) and Clostridium botulinum spores in high-temperature short-time (HTST) milk and canned mushrooms, respectively. The results showed an estimated 68.4% probability that the 15 sec HTST process would not achieve at least 5 decimal reductions in M. paratuberculosis counts. Although estimates of the raw milk load of this pathogen are not available to estimate the probability of finding it in pasteurized milk, the wide range of the estimated decimal reductions, reflecting the variability of the experimental data available, should be a concern to dairy processors. Knowledge of the C. botulinum initial load and decimal thermal time variability was used to estimate an 8.5 min thermal process time at 110 °C for canned mushrooms reducing the risk to 10⁻⁹ spores/container with a 95% confidence. This value was substantially higher than the one estimated using average values (6.0 min) with an unacceptable 68.6% probability of missing the desired processing objective. Finally, the benefit of reducing the variability in initial load and decimal thermal time was confirmed, achieving a 26.3% reduction in processing time when standard deviation values were lowered by 90%. In spite of novel technologies, commercialized or under development, thermal processing continues to be the most reliable and cost-effective alternative to deliver safe foods. However, the severity of the process should be assessed to avoid under- and over-processing and determine opportunities for improvement. This should include a systematic approach to consider variability in the parameters for the models used by food process engineers when designing a thermal process. The Monte Carlo procedure here presented is a tool to facilitate this task for the determination of process time at a constant lethal temperature. © 2011 Institute of Food Technologists®
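
    The Monte Carlo idea can be sketched as follows; the distributions and numbers are purely illustrative and are not the paper's data. The required time at a constant lethal temperature is the decimal reduction time multiplied by the number of log reductions needed, and the spread of that time across draws gives the confidence level.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 100_000

        # Hypothetical input variability (means and SDs are illustrative only)
        D = np.clip(rng.normal(loc=0.2, scale=0.05, size=n), 1e-3, None)  # decimal reduction time, min
        logN0 = rng.normal(loc=2.0, scale=0.5, size=n)                    # log10 initial spores per container

        log_target = -9.0                              # 10^-9 spores/container objective
        t_required = D * (logN0 - log_target)          # minutes at the constant lethal temperature

        print("time using mean inputs :", 0.2 * (2.0 - log_target))
        print("time at 95% confidence :", np.quantile(t_required, 0.95))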

  8. 31 CFR 356.12 - What are the different types of bids and do they have specific requirements or restrictions?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... the specific security being auctioned; (ii) For which the security being auctioned is one of several... increments. The third decimal must be either a zero or a five, for example, 5.320 or 5.325. We will treat any missing decimals as zero, for example, a bid of 5.32 will be treated as 5.320. The rate bid may be a...

  9. Classification Schedules as Subject Enhancement in Online Catalogs. A Review of a Conference Sponsored by Forest Press, the OCLC Online Computer Library Center, and the Council on Library Resources.

    ERIC Educational Resources Information Center

    Mandel, Carol A.

    This paper presents a synthesis of the ideas and issues developed at a conference convened to review the results of the Dewey Decimal Classification Online Project and explore the potential for future use of the Dewey Decimal Classification (DDC) and Library of Congress Classification (LCC) schedules in online library catalogs. Conference…

  10. Sharable Courseware Object Reference Model (SCORM), Version 1.0

    DTIC Science & Technology

    2000-07-01

    or query tool may provide the top-level entries of a well-established classification (LOC, UDC, DDC, and so forth). SEL 9.2.2 Taxon This subcategory...YYYY/MM/DD. CMIFeedback Structured description of student response in an interaction. CMIDecimal Number which may have a decimal point. If not...Seconds shall contain 2 digits with an optional decimal point and additional digits. CMITimespan A length of time in hours, minutes, and seconds

  11. Preservice Teachers' Understanding of the Relation between a Fraction or Integer and its Decimal Expansion: The Case of 0.9 and 1

    ERIC Educational Resources Information Center

    Dubinsky, Ed; Arnon, Ilana; Weller, Kirk

    2013-01-01

    In this article, we obtain a genetic decomposition of students' progress in developing an understanding of the decimal 0.9 and its relation to 1. The genetic decomposition appears to be valid for a high percentage of the study participants and suggests the possibility of a new stage in APOS Theory that would be the first substantial change in…

  12. Input Decimated Ensembles

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Oza, Nikunj C.; Clancy, Daniel (Technical Monitor)

    2001-01-01

    Using an ensemble of classifiers instead of a single classifier has been shown to improve generalization performance in many pattern recognition problems. However, the extent of such improvement depends greatly on the amount of correlation among the errors of the base classifiers. Therefore, reducing those correlations while keeping the classifiers' performance levels high is an important area of research. In this article, we explore input decimation (ID), a method which selects feature subsets for their ability to discriminate among the classes and uses them to decouple the base classifiers. We provide a summary of the theoretical benefits of correlation reduction, along with results of our method on two underwater sonar data sets, three benchmarks from the Proben1/UCI repositories, and two synthetic data sets. The results indicate that input decimated ensembles (IDEs) outperform ensembles whose base classifiers use all the input features; randomly selected subsets of features; and features created using principal components analysis, on a wide range of domains.
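
    A rough sketch of the input-decimation idea follows, assuming scikit-learn is available; the correlation-based ranking and the decision-tree base learner are illustrative choices rather than the exact procedure of the paper.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        def input_decimated_ensemble(X, y, n_classifiers, n_features):
            """Train base classifiers on class-discriminative feature subsets."""
            classes = np.unique(y)
            members = []
            for i in range(n_classifiers):
                # Rank features by |correlation| with a one-vs-rest class indicator.
                target = (y == classes[i % len(classes)]).astype(float)
                corr = np.nan_to_num(np.array(
                    [abs(np.corrcoef(X[:, j], target)[0, 1]) for j in range(X.shape[1])]))
                subset = np.argsort(corr)[::-1][:n_features]
                members.append((subset, DecisionTreeClassifier().fit(X[:, subset], y)))
            return members

        def predict(members, X):
            """Majority vote over the decoupled base classifiers (non-negative integer labels assumed)."""
            votes = np.stack([clf.predict(X[:, subset]) for subset, clf in members])
            return np.apply_along_axis(lambda v: np.bincount(v.astype(int)).argmax(), 0, votes)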

  13. A Fully Integrated Sensor SoC with Digital Calibration Hardware and Wireless Transceiver at 2.4 GHz

    PubMed Central

    Kim, Dong-Sun; Jang, Sung-Joon; Hwang, Tae-Ho

    2013-01-01

    A single-chip sensor system-on-a-chip (SoC) that implements radio for 2.4 GHz, complete digital baseband physical layer (PHY), 10-bit sigma-delta analog-to-digital converter and dedicated sensor calibration hardware for industrial sensing systems has been proposed and integrated in a 0.18-μm CMOS technology. The transceiver's building block includes a low-noise amplifier, mixer, channel filter, receiver signal-strength indicator, frequency synthesizer, voltage-controlled oscillator, and power amplifier. In addition, the digital building block consists of offset quadrature phase-shift keying (OQPSK) modulation, demodulation, carrier frequency offset compensation, auto-gain control, digital MAC function, sensor calibration hardware and embedded 8-bit microcontroller. The digital MAC function supports cyclic redundancy check (CRC), inter-symbol timing check, MAC frame control, and automatic retransmission. The embedded sensor signal processing block consists of calibration coefficient calculator, sensing data calibration mapper and sigma-delta analog-to-digital converter with digital decimation filter. The sensitivity of the overall receiver and the error vector magnitude (EVM) of the overall transmitter are −99 dBm and 18.14%, respectively. The proposed calibration scheme has a reduction of errors by about 45.4% compared with the improved progressive polynomial calibration (PPC) method and the maximum current consumption of the SoC is 16 mA. PMID:23698271
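
    The record mentions a sigma-delta converter followed by a digital decimation filter but does not specify the filter's structure; a cascaded integrator-comb (CIC) decimator is a common choice in such chains. A minimal sketch (stage count and rate change are illustrative):

        import numpy as np

        def cic_decimate(bitstream, R, N=3):
            """N-stage CIC decimator with rate change R, applied to a sigma-delta bitstream."""
            y = np.asarray(bitstream, dtype=np.int64)
            for _ in range(N):                 # integrators at the high input rate
                y = np.cumsum(y)
            y = y[::R]                         # decimation by R
            for _ in range(N):                 # combs (differentiators) at the low rate
                y = np.diff(y, prepend=0)
            return y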

  14. Control of Byssochlamys and Related Heat-resistant Fungi in Grape Products

    PubMed Central

    King, A. Douglas; Michener, H. David; Ito, Keith A.

    1969-01-01

    Heat-resistant strains of Byssochlamys fulva, B. nivea, and other heat-resistant fungi were isolated from vineyard soil, grapes, grape-processing lines, and waste pomace. They are known to remain in grape juice occasionally and to grow in grape juice products. Ascospores of these fungi have a D value (decimal reduction time) of about 10 min at 190 F (88 C), but in the presence of 90 μliters of SO2 per liter (normally added to the juice) the D value was cut in half. Filtration through a commercial diatomaceous filter aid (also a common processing step) entrapped all but about 0.001% of experimentally added spores. Thus, heat in the presence of SO2 and filtration together can reduce the population of these spores by several orders of magnitude. Growth was also prevented by benzoate or sorbate in low concentrations. Oxygen must be reduced to extremely low levels before lack of oxygen limits growth. Images PMID:16349856

  15. FILTSoft: A computational tool for microstrip planar filter design

    NASA Astrophysics Data System (ADS)

    Elsayed, M. H.; Abidin, Z. Z.; Dahlan, S. H.; Cholan N., A.; Ngu, Xavier T. I.; Majid, H. A.

    2017-09-01

    Filters are key components of any communication system, used to control spectrum and suppress interference. Designing a filter involves a long process as well as a good understanding of the underlying hardware technology. Hence this paper introduces an automated design tool based on a Matlab GUI, called FILTSoft (an acronym for Filter Design Software), to ease the process. FILTSoft is a user-friendly filter design tool to aid, guide, and expedite calculations from the lumped-element level to the microstrip structure. Users just have to provide the required filter specifications as well as the material description. FILTSoft will calculate and display the lumped-element details, the planar filter structure, and the expected filter response. An example lowpass filter design was calculated using FILTSoft and the results were validated through prototype measurement for comparison purposes.

  16. Design of a composite filter realizable on practical spatial light modulators

    NASA Technical Reports Server (NTRS)

    Rajan, P. K.; Ramakrishnan, Ramachandran

    1994-01-01

    Hybrid optical correlator systems use two spatial light modulators (SLM's), one at the input plane and the other at the filter plane. Currently available SLM's such as the deformable mirror device (DMD) and liquid crystal television (LCTV) SLM's exhibit arbitrarily constrained operating characteristics. Pattern recognition filters designed with the assumption that the SLM's have ideal operating characteristics may not behave as expected when implemented on the DMD or LCTV SLM's. Therefore it is necessary to incorporate the SLM constraints in the design of the filters. In this report, an iterative method is developed for the design of an unconstrained minimum average correlation energy (MACE) filter. Then, using this algorithm, a new approach for the design of an SLM-constrained distortion-invariant filter in the presence of the input SLM is developed. Two different optimization algorithms are used to maximize the objective function during filter synthesis, one based on the simplex method and the other based on the Hooke and Jeeves method. Also, the simulated annealing based filter design algorithm proposed by Khan and Rajan is refined and improved. The performance of the filter is evaluated in terms of its recognition/discrimination capabilities using computer simulations and the results are compared with a simulated annealing optimization based MACE filter. The filters are designed for different LCTV SLM operating characteristics and the correlation responses are compared. The distortion tolerance and the false class image discrimination qualities of the filter are comparable to those of the simulated annealing based filter, but the new filter design takes about 1/6 of the computer time taken by the simulated annealing filter design.

  17. Understanding decimal proportions: discrete representations, parallel access, and privileged processing of zero.

    PubMed

    Varma, Sashank; Karl, Stacy R

    2013-05-01

    Much of the research on mathematical cognition has focused on the numbers 1, 2, 3, 4, 5, 6, 7, 8, and 9, with considerably less attention paid to more abstract number classes. The current research investigated how people understand decimal proportions--rational numbers between 0 and 1 expressed in the place-value symbol system. The results demonstrate that proportions are represented as discrete structures and processed in parallel. There was a semantic interference effect: When understanding a proportion expression (e.g., "0.29"), both the correct proportion referent (e.g., 0.29) and the incorrect natural number referent (e.g., 29) corresponding to the visually similar natural number expression (e.g., "29") are accessed in parallel, and when these referents lead to conflicting judgments, performance slows. There was also a syntactic interference effect, generalizing the unit-decade compatibility effect for natural numbers: When comparing two proportions, their tenths and hundredths components are processed in parallel, and when the different components lead to conflicting judgments, performance slows. The results also reveal that zero decimals--proportions ending in zero--serve multiple cognitive functions, including eliminating semantic interference and speeding processing. The current research also extends the distance, semantic congruence, and SNARC effects from natural numbers to decimal proportions. These findings inform how people understand the place-value symbol system, and the mental implementation of mathematical symbol systems more generally. Copyright © 2013 Elsevier Inc. All rights reserved.

  18. An RC active filter design handbook

    NASA Technical Reports Server (NTRS)

    Deboo, G. J.

    1977-01-01

    The design of filters is described. Emphasis is placed on simplified procedures that can be used by the reader who has minimum knowledge about circuit design and little acquaintance with filter theory. The handbook has three main parts. The first part is a review of some information that is essential for work with filters. The second part includes design information for specific types of filter circuitry and describes simple procedures for obtaining the component values for a filter that will have a desired set of characteristics. Pertinent information relating to actual performance is given. The third part (appendix) is a review of certain topics in filter theory and is intended to provide some basic understanding of how filters are designed.

  19. Metal-Poor, Strongly Star-Forming Galaxies in the DEEP2 Survey: The Relationship Between Stellar Mass, Temperature-Based Metallicity, and Star Formation Rate

    NASA Technical Reports Server (NTRS)

    Ly, Chun; Rigby, Jane R.; Cooper, Michael; Yan, Renbin

    2015-01-01

    We report on the discovery of 28 redshift (z) approximately 0.8 metal-poor galaxies in DEEP2. These galaxies were selected for their detection of the weak [O (sub III)] lambda 4363 emission line, which provides a "direct" measure of the gas-phase metallicity. A primary goal for identifying these rare galaxies is to examine whether the fundamental metallicity relation (FMR) between stellar mass, gas metallicity, and star formation rate (SFR) extends to low stellar mass and high SFR. The FMR suggests that higher SFR galaxies have lower metallicity (at fixed stellar mass). To test this trend, we combine spectroscopic measurements of metallicity and dust-corrected SFRs, with stellar mass estimates from modeling the optical photometry. We find that these galaxies are 1.05 plus or minus 0.61 decimal exponent (dex) above the redshift (z) approximately equal to 1 stellar mass-SFR relation, and 0.23 plus or minus 0.23 decimal exponent (dex) below the local mass-metallicity relation. Relative to the FMR, the latter offset is reduced to 0.01 decimal exponent (dex), but significant dispersion remains (0.29 decimal exponent (dex) with 0.16 decimal exponent (dex) due to measurement uncertainties). This dispersion suggests that gas accretion, star formation and chemical enrichment have not reached equilibrium in these galaxies. This is evident by their short stellar mass doubling timescale of approximately 100 (+310/-75) million years that suggests stochastic star formation. Combining our sample with other redshift (z) of approximately 1 metal-poor galaxies, we find a weak positive SFR-metallicity dependence (at fixed stellar mass) that is significant at 97.3 percent confidence. We interpret this positive correlation as recent star formation that has enriched the gas, but has not had time to drive the metal-enriched gas out with feedback mechanisms.

  20. Radioastronomic signal processing cores for the SKA radio telescope

    NASA Astrophysics Data System (ADS)

    Comorett, G.; Chiarucc, S.; Belli, C.

    Modern radio telescopes require the processing of wideband signals, with sample rates from tens of MHz to tens of GHz, and are composed of hundreds up to a million individual antennas. Digital signal processing of these signals includes digital receivers (the digital equivalent of the heterodyne receiver), beamformers, channelizers, and spectrometers. FPGAs present the advantage of providing a relatively low power consumption compared with GPUs or dedicated computers, a wide signal data path, and high interconnectivity. Efficient algorithms have been developed for these applications. Here we will review some of the signal processing cores developed for the SKA telescope. The LFAA beamformer/channelizer architecture is based on an oversampling channelizer, where the channelizer output sampling rate and channel spacing can be set independently. This is useful where an overlap between adjacent channels is required to provide a uniform spectral coverage. The architecture allows for an efficient and distributed channelization scheme, with a final resolution corresponding to a million spectral channels, minimum leakage and high out-of-band rejection. An optimized filter design procedure is used to provide an equiripple response with a very large number of spectral channels. A wideband digital receiver has been designed in order to select the processed bandwidth of the SKA Mid receiver. The receiver extracts a 2.5 MHz bandwidth from a 14 GHz input bandwidth. The design allows for non-integer ratios between the input and output sampling rates, with a resource usage comparable to that of a conventional decimating digital receiver. Finally, some considerations on quantization of radioastronomic signals are presented. Due to the stochastic nature of the signal, quantization using few data bits is possible. Good accuracies and dynamic range are possible even with 2-3 bits, but the nonlinearity in the correlation process must be corrected in post-processing. With at least 6 bits it is possible to have a very linear response of the instrument, with nonlinear terms below 80 dB, provided the signal amplitude is kept within bounds.
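
    The non-integer ratio between input and output sampling rates mentioned for the wideband receiver can be illustrated in software with a complex mix followed by polyphase rational resampling; this is only a sketch of the idea, not the FPGA design, and the parameters are arbitrary.

        import numpy as np
        from scipy.signal import resample_poly

        def digital_receiver(x, fs_in, f_center, up, down):
            """Shift the band of interest to baseband, then resample by up/down."""
            n = np.arange(len(x))
            baseband = x * np.exp(-2j * np.pi * f_center / fs_in * n)  # complex mix to DC
            # resample_poly applies its own anti-alias filter, so the overall
            # (possibly non-integer) decimation factor is down/up.
            return resample_poly(baseband, up, down)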

  1. Fault Tolerant Signal Processing Using Finite Fields and Error-Correcting Codes.

    DTIC Science & Technology

    1983-06-01

    Decimation in Frequency Form, Fast Inverse Transform ... Part of Decimation in Time Form, Fast Inverse Transform ... Intermediate Variables in a Fast Inverse Transform ... component polynomials may be transformed to an equivalent series of multiplications of the related transform coefficients. The inverse transform of

  2. Introducing passive acoustic filter in acoustic based condition monitoring: Motor bike piston-bore fault identification

    NASA Astrophysics Data System (ADS)

    Jena, D. P.; Panigrahi, S. N.

    2016-03-01

    The requirement of designing a sophisticated digital band-pass filter in acoustic-based condition monitoring has been eliminated by introducing a passive acoustic filter in the present work. So far, no one has attempted to explore the possibility of implementing passive acoustic filters as a pre-conditioner in acoustic-based condition monitoring. In order to enhance acoustic-based condition monitoring, a passive acoustic band-pass filter has been designed and deployed. Towards achieving an efficient band-pass acoustic filter, a generalized design methodology has been proposed to design and optimize the desired acoustic filter using multiple filter components in series. An appropriate objective function has been identified for a genetic algorithm (GA) based optimization technique with multiple design constraints. In addition, the robustness of the proposed method has been demonstrated by designing a band-pass filter using an n-branch Quincke tube, a high-pass filter, and multiple Helmholtz resonators. The performance of the designed acoustic band-pass filter has been demonstrated by investigating the piston-bore defect of a motor-bike using the engine noise signature. Introducing a passive acoustic filter into acoustic-based condition monitoring significantly enhances machine-learning-based fault identification. This is also a first attempt of its kind.

  3. Signed-negabinary-arithmetic-based optical computing by use of a single liquid-crystal-display panel.

    PubMed

    Datta, Asit K; Munshi, Soumika

    2002-03-10

    Based on the negabinary number representation, parallel one-step arithmetic operations (that is, addition and subtraction), logical operations, and matrix-vector multiplication on data have been optically implemented, by use of a two-dimensional spatial-encoding technique. For addition and subtraction, one of the operands in decimal form is converted into the unsigned negabinary form, whereas the other decimal number is represented in the signed negabinary form. The result of the operation is obtained in the mixed negabinary form and is converted back into decimal. Matrix-vector multiplication for unsigned negabinary numbers is achieved through the convolution technique. Both of the operands for logical operation are converted to their signed negabinary forms. All operations are implemented by use of a unique optical architecture. The use of a single liquid-crystal-display panel to spatially encode the input data, operational kernels, and decoding masks has simplified the architecture as well as reduced the cost and complexity.
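
    For reference, conversion between decimal integers and the negabinary (base -2) representation used in the paper can be sketched as follows (the function names are illustrative):

        def to_negabinary(n):
            """Convert a (possibly negative) integer to a negabinary digit string."""
            if n == 0:
                return "0"
            digits = []
            while n != 0:
                n, r = divmod(n, -2)
                if r < 0:               # force a non-negative remainder digit
                    n += 1
                    r += 2
                digits.append(str(r))
            return "".join(reversed(digits))

        def from_negabinary(s):
            """Evaluate a negabinary digit string back to a decimal integer."""
            return sum(int(d) * (-2) ** i for i, d in enumerate(reversed(s)))

        assert to_negabinary(-13) == "110111" and from_negabinary("110111") == -13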

  4. [Resistance of Listeria monocytogenes to physical exposure].

    PubMed

    Augustin, J C

    1996-11-01

    The resistance of Listeria monocytogenes to physical processing, particularly heat resistance and radioresistance, is widely dependent on the method involved, the physiological state of the strain used, and, obviously, the substrate in which the organism is found. HTST pasteurization of milk would allow at least 11 decimal reductions of the potentially present population of L. monocytogenes, and thus greatly minimizes the risk of survival of the organism. On the other hand, high and low pasteurizations of egg products may involve only 4 to 5 decimal reductions and therefore appear not very reliable against Listeria. Similarly, the cooking of meat products can, in some conditions, be inadequate to totally inactivate contaminant L. monocytogenes. A 3 kGy irradiation of meat products should allow, on average, 6 decimal reductions. These results should prompt manufacturers to take into account the factors present in their products that allow L. monocytogenes to resist better, in order to adapt processing to these conditions of increased resistance.

  5. Regenerative particulate filter development

    NASA Technical Reports Server (NTRS)

    Descamp, V. A.; Boex, M. W.; Hussey, M. W.; Larson, T. P.

    1972-01-01

    Development, design, and fabrication of a prototype filter regeneration unit for regenerating clean fluid particle filter elements by using a backflush/jet impingement technique are reported. Development tests were also conducted on a vortex particle separator designed for use in zero gravity environment. A maintainable filter was designed, fabricated and tested that allows filter element replacement without any leakage or spillage of system fluid. Also described are spacecraft fluid system design and filter maintenance techniques with respect to inflight maintenance for the space shuttle and space station.

  6. Unified commutation-pruning technique for efficient computation of composite DFTs

    NASA Astrophysics Data System (ADS)

    Castro-Palazuelos, David E.; Medina-Melendrez, Modesto Gpe.; Torres-Roman, Deni L.; Shkvarko, Yuriy V.

    2015-12-01

    An efficient computation of a composite-length discrete Fourier transform (DFT), as well as a fast Fourier transform (FFT) of both time and space data sequences in uncertain (non-sparse or sparse) computational scenarios, requires specific processing algorithms. Traditional algorithms typically employ some pruning methods without any commutations, which prevents them from attaining the potential computational efficiency. In this paper, we propose an alternative unified approach with automatic commutations between three computational modalities aimed at efficient computation of pruned DFTs adapted for variable composite lengths of the non-sparse input-output data. The first modality is an implementation of the direct computation of a composite-length DFT, the second employs the second-order recursive filtering method, and the third performs the new pruned decomposed transform. The pruned decomposed transform algorithm performs decimation in time or space (DIT) in the data acquisition domain and, then, decimation in frequency (DIF). The unified combination of these three algorithms is addressed as the DFTCOMM technique. Based on the treatment of the combinational-type hypothesis-testing optimization problem of preferable allocations between all feasible commuting-pruning modalities, we have found the global optimal solution to the pruning problem, which always requires fewer or, at most, the same number of arithmetic operations as other feasible modalities. The DFTCOMM method therefore outperforms the existing competing pruning techniques reported in the literature in the attainable savings in the number of required arithmetic operations. Finally, we compare DFTCOMM with the recently developed sparse fast Fourier transform (SFFT) algorithmic family and show that, in sensing scenarios with sparse or non-sparse data Fourier spectra, the DFTCOMM technique is robust against such model uncertainties, in the sense of insensitivity to sparsity/non-sparsity restrictions and to the variability of the operating parameters.
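
    As background for the decimation-in-time step referred to above, the textbook radix-2 DIT recursion (not the DFTCOMM algorithm itself) splits a transform by sample parity:

        import numpy as np

        def fft_dit(x):
            """Radix-2 decimation-in-time FFT (length must be a power of two)."""
            x = np.asarray(x, dtype=complex)
            n = len(x)
            if n == 1:
                return x
            even = fft_dit(x[0::2])            # decimation in time: even-indexed samples
            odd = fft_dit(x[1::2])             # odd-indexed samples
            w = np.exp(-2j * np.pi * np.arange(n // 2) / n)
            return np.concatenate([even + w * odd, even - w * odd])

        x = np.random.randn(16)
        assert np.allclose(fft_dit(x), np.fft.fft(x))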

  7. Independent divergence of 13- and 17-y life cycles among three periodical cicada lineages.

    PubMed

    Sota, Teiji; Yamamoto, Satoshi; Cooley, John R; Hill, Kathy B R; Simon, Chris; Yoshimura, Jin

    2013-04-23

    The evolution of 13- and 17-y periodical cicadas (Magicicada) is enigmatic because at any given location, up to three distinct species groups (Decim, Cassini, Decula) with synchronized life cycles are involved. Each species group is divided into one 13- and one 17-y species with the exception of the Decim group, which contains two 13-y species-13-y species are Magicicada tredecim, Magicicada neotredecim, Magicicada tredecassini, and Magicicada tredecula; and 17-y species are Magicicada septendecim, Magicicada cassini, and Magicicada septendecula. Here we show that the divergence leading to the present 13- and 17-y populations differs considerably among the species groups despite the fact that each group exhibits strikingly similar phylogeographic patterning. The earliest divergence of extant lineages occurred ∼4 Mya with one branch forming the Decim species group and the other subsequently splitting 2.5 Mya to form the Cassini and Decula species groups. The earliest split of extant lineages into 13- and 17-y life cycles occurred in the Decim lineage 0.5 Mya. All three species groups experienced at least one episode of life cycle divergence since the last glacial maximum. We hypothesize that despite independent origins, the three species groups achieved their current overlapping distributions because life-cycle synchronization of invading congeners to a dominant resident population enabled escape from predation and population persistence. The repeated life-cycle divergences supported by our data suggest the presence of a common genetic basis for the two life cycles in the three species groups.

  8. A Multi Directional Perfect Reconstruction Filter Bank Designed with 2-D Eigenfilter Approach: Application to Ultrasound Speckle Reduction.

    PubMed

    Nagare, Mukund B; Patil, Bhushan D; Holambe, Raghunath S

    2017-02-01

    B-mode ultrasound images are degraded by an inherent noise called speckle, which has a considerable impact on image quality. This noise reduces the accuracy of image analysis and interpretation. Therefore, reduction of speckle noise is an essential task that improves the accuracy of clinical diagnostics. In this paper, a multi-directional perfect-reconstruction (PR) filter bank is proposed based on a 2-D eigenfilter approach. The proposed method is used to design two-dimensional (2-D) two-channel linear-phase FIR perfect-reconstruction filter banks. In this method, fan-shaped, diamond-shaped, and checkerboard-shaped filters are designed. The quadratic measure of the error function between the passband and stopband of the filter has been used as the objective function. First, the low-pass analysis filter is designed, and then the PR condition is expressed as a set of linear constraints on the corresponding synthesis low-pass filter. Subsequently, the corresponding synthesis filter is designed using the eigenfilter design method with linear constraints. The newly designed 2-D filters are used in a translation-invariant pyramidal directional filter bank (TIPDFB) for reduction of speckle noise in ultrasound images. The proposed 2-D filters give better symmetry, regularity, and frequency selectivity than existing design methods. The proposed method is validated on synthetic and real ultrasound data, showing improvement in the quality of ultrasound images and efficient suppression of speckle noise compared to existing methods.

  9. Reanalyzing Head et al. (2015): investigating the robustness of widespread p-hacking.

    PubMed

    Hartgerink, Chris H J

    2017-01-01

    Head et al. (2015) provided a large collection of p-values that, from their perspective, indicates widespread statistical significance seeking (i.e., p-hacking). This paper inspects this result for robustness. Theoretically, the p-value distribution should be a smooth, decreasing function, but the distribution of reported p-values shows systematically more reported p-values for .01, .02, .03, .04, and .05 than p-values reported to three decimal places, due to apparent tendencies to round p-values to two decimal places. Head et al. (2015) correctly argue that an aggregate p-value distribution could show a bump below .05 when left-skew p-hacking occurs frequently. Moreover, the elimination of p = .045 and p = .05, as done in the original paper, is debatable. Given that eliminating p = .045 is a result of the need for symmetric bins and systematically more p-values are reported to two decimal places than to three decimal places, I did not exclude p = .045 and p = .05. I conducted Fisher's method for .04 < p < .05 and reanalyzed the data by adjusting the bin selection to .03875 < p ≤ .04 versus .04875 < p ≤ .05. Results of the reanalysis indicate that no evidence for left-skew p-hacking remains when we look at the entire range between .04 < p < .05 or when we inspect the second decimal. Taking into account reporting tendencies when selecting the bins to compare is especially important because this dataset does not allow for the recalculation of the p-values. Moreover, inspecting the bins that include two-decimal reported p-values potentially increases sensitivity if strategic rounding down of p-values as a form of p-hacking is widespread. Given the far-reaching implications of supposed widespread p-hacking throughout the sciences (Head et al. 2015), it is important that these findings are robust to data analysis choices if the conclusion is to be considered unequivocal. Although no evidence of widespread left-skew p-hacking is found in this reanalysis, this does not mean that there is no p-hacking at all. These results nuance the conclusion by Head et al. (2015), indicating that the results are not robust and that the evidence for widespread left-skew p-hacking is ambiguous at best.

  10. Reanalyzing Head et al. (2015): investigating the robustness of widespread p-hacking

    PubMed Central

    2017-01-01

    Head et al. (2015) provided a large collection of p-values that, from their perspective, indicates widespread statistical significance seeking (i.e., p-hacking). This paper inspects this result for robustness. Theoretically, the p-value distribution should be a smooth, decreasing function, but the distribution of reported p-values shows systematically more reported p-values for .01, .02, .03, .04, and .05 than p-values reported to three decimal places, due to apparent tendencies to round p-values to two decimal places. Head et al. (2015) correctly argue that an aggregate p-value distribution could show a bump below .05 when left-skew p-hacking occurs frequently. Moreover, the elimination of p = .045 and p = .05, as done in the original paper, is debatable. Given that eliminating p = .045 is a result of the need for symmetric bins and systematically more p-values are reported to two decimal places than to three decimal places, I did not exclude p = .045 and p = .05. I conducted Fisher’s method .04 < p < .05 and reanalyzed the data by adjusting the bin selection to .03875 < p ≤ .04 versus .04875 < p ≤ .05. Results of the reanalysis indicate that no evidence for left-skew p-hacking remains when we look at the entire range between .04 < p < .05 or when we inspect the second-decimal. Taking into account reporting tendencies when selecting the bins to compare is especially important because this dataset does not allow for the recalculation of the p-values. Moreover, inspecting the bins that include two-decimal reported p-values potentially increases sensitivity if strategic rounding down of p-values as a form of p-hacking is widespread. Given the far-reaching implications of supposed widespread p-hacking throughout the sciences Head et al. (2015), it is important that these findings are robust to data analysis choices if the conclusion is to be considered unequivocal. Although no evidence of widespread left-skew p-hacking is found in this reanalysis, this does not mean that there is no p-hacking at all. These results nuance the conclusion by Head et al. (2015), indicating that the results are not robust and that the evidence for widespread left-skew p-hacking is ambiguous at best. PMID:28265523

  11. Enormous knowledge base of disease diagnosis criteria.

    PubMed

    Xiao, Z H; Xiao, Y H; Pei, J H

    1995-01-01

    One of the problems in the development of medical knowledge systems is the limitation of the system's knowledge. It is a common expectation to increase the number of diseases contained in a system. Using a high-density knowledge representation method designed by us, we have developed the Enormous Knowledge Base of Disease Diagnosis Criteria (EKBDDC). It contains diagnostic criteria for 1,001 diagnostic entities and describes nearly 4,000 diagnostic indicators. It is the core of a large medical project--the Electronic-Brain Medical Erudite (EBME). This enormous knowledge base was implemented initially on a low-cost popular microcomputer, and it can aid in prompting typical diseases and in teaching diagnosis. The knowledge base is easy to expand. One of the main goals of EKBDDC is to include as many diseases as possible using a low-cost computer with a comparatively small storage capacity. For this, we have designed a high-density knowledge representation method. Criteria of the various diagnostic entities are stored in separate records of the knowledge base. Each diagnostic entity corresponds to a diagnostic criterion data set; each data set consists of several diagnostic criterion data values (Table 1); each data value is composed of two parts, an integer and a decimal: the integer part is the coding number of the given diagnostic indicator, and the decimal part is the diagnostic value of this indicator for the disease indicated by the corresponding record number. For example, 75.02: the integer 75 is the coding number of "hemorrhagic skin rash"; the decimal 0.02 is the diagnostic value of this manifestation for diagnosing allergic purpura. TABULAR DATA, SEE PUBLISHED ABSTRACT. The algebraic sum method, a special form of weighted summation, is adopted as the mathematical model. In EKBDDC, the diagnostic values, which represent the significance of the disease manifestations for diagnosing the corresponding diseases, were determined empirically. It is of great economic, practical, and technical significance to realize enormous knowledge bases of disease diagnosis criteria on a low-cost popular microcomputer. This helps developing countries popularize medical informatics. To create an enormous international computer-aided diagnosis system, one may jointly develop unified modules of disease diagnosis criteria that can be "inlaid" into relevant computer-aided diagnosis systems. It is just like assembling a house from prefabricated panels.
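
    A toy illustration of the encoding described above follows; the criterion values other than 75.02 and the helper names are hypothetical.

        def decode(value):
            """Split a stored value like 75.02 into (indicator code, diagnostic weight)."""
            code = int(value)
            weight = round(value - code, 4)
            return code, weight

        # Hypothetical criterion set for one disease record: {indicator code: weight}
        criterion = dict(decode(v) for v in (75.02, 12.05, 33.01))

        def score(observed_codes, criterion):
            """Algebraic-sum model: add the weights of the indicators actually observed."""
            return sum(w for c, w in criterion.items() if c in observed_codes)

        print(score({75, 33}, criterion))   # 0.02 + 0.01 = 0.03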

  12. Flight prototype regenerative particulate filter system development

    NASA Technical Reports Server (NTRS)

    Green, D. C.; Garber, P. J.

    1974-01-01

    The effort to design, fabricate, and test a flight prototype Filter Regeneration Unit used to regenerate (clean) fluid particulate filter elements is reported. The design of the filter regeneration unit and the results of tests performed in both one-gravity and zero-gravity are discussed. The filter regeneration unit uses a backflush/jet impingement method of regenerating fluid filter elements that is highly efficient. A vortex particle separator and particle trap were designed for zero-gravity use, and the zero-gravity test results are discussed. The filter regeneration unit was designed for both inflight maintenance and ground refurbishment use on space shuttle and future space missions.

  13. IIR digital filter design for powerline noise cancellation of ECG signal using arduino platform

    NASA Astrophysics Data System (ADS)

    Rahmatillah, Akif; Ataulkarim

    2017-05-01

    Powerline noise has been one of the significant noise sources in electrocardiogram (ECG) signal measurement. This noise is characterized by a 50 Hz sinusoidal signal with a maximum amplitude of 0.3 mV. This paper describes the design of an IIR notch filter to reject 50 Hz power-line noise. The IIR filter coefficients were calculated using the pole-placement method with three variations of band-stop cut-off frequencies: (49-51) Hz, (48-52) Hz, and (47-53) Hz. The filter algorithm and coefficients were embedded in an Arduino DUE (32-bit ARM microcontroller). The designed IIR notch filter was able to reject power-line noise, with average squared error values of 0.225 for the (49-51) Hz design and 0.2831 for the (48-52) Hz design.
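
    A second-order notch obtained by pole placement can be sketched as below; the 500 Hz sampling rate, pole radius, and test signal are assumptions for illustration, not values from the paper.

        import numpy as np
        from scipy.signal import lfilter

        def notch_coefficients(f_notch, fs, r=0.95):
            """Zeros on the unit circle at +/-w0, poles at radius r on the same angles;
            r closer to 1 gives a narrower stopband."""
            w0 = 2 * np.pi * f_notch / fs
            b = np.array([1.0, -2.0 * np.cos(w0), 1.0])
            a = np.array([1.0, -2.0 * r * np.cos(w0), r ** 2])
            b *= np.sum(a) / np.sum(b)        # normalize to unity gain at DC
            return b, a

        fs = 500.0                            # assumed ECG sampling rate, Hz
        b, a = notch_coefficients(50.0, fs)
        t = np.arange(0, 2, 1 / fs)
        ecg_with_hum = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 50 * t)
        clean = lfilter(b, a, ecg_with_hum)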

  14. Bowtie filters for dedicated breast CT: Analysis of bowtie filter material selection.

    PubMed

    Kontson, Kimberly; Jennings, Robert J

    2015-09-01

    For a given bowtie filter design, both the selection of material and the physical design control the energy fluence, and consequently the dose distribution, in the object. Using three previously described bowtie filter designs, the goal of this work is to demonstrate the effect that different materials have on the bowtie filter performance measures. Three bowtie filter designs that compensate for one or more aspects of the beam-modifying effects due to the differences in path length in a projection have been designed. The nature of the designs allows for their realization using a variety of materials. The designs were based on a phantom, 14 cm in diameter, composed of 40% fibroglandular and 60% adipose tissue. Bowtie design #1 is based on single material spectral matching and produces nearly uniform spectral shape for radiation incident upon the detector. Bowtie design #2 uses the idea of basis-material decomposition to produce the same spectral shape and intensity at the detector, using two different materials. With bowtie design #3, it is possible to eliminate the beam hardening effect in the reconstructed image by adjusting the bowtie filter thickness so that the effective attenuation coefficient for every ray is the same. Seven different materials were chosen to represent a range of chemical compositions and densities. After calculation of construction parameters for each bowtie filter design, a bowtie filter was created using each of these materials (assuming reasonable construction parameters were obtained), resulting in a total of 26 bowtie filters modeled analytically and in the penelope Monte Carlo simulation environment. Using the analytical model of each bowtie filter, design profiles were obtained and energy fluence as a function of fan-angle was calculated. Projection images with and without each bowtie filter design were also generated using penelope and reconstructed using FBP. Parameters such as dose distribution, noise uniformity, and scatter were investigated. Analytical calculations with and without each bowtie filter show that some materials for a given design produce bowtie filters that are too large for implementation in breast CT scanners or too small to accurately manufacture. Results also demonstrate the ability to manipulate the energy fluence distribution (dynamic range) by using different materials, or different combinations of materials, for a given bowtie filter design. This feature is especially advantageous when using photon counting detector technology. Monte Carlo simulation results from penelope show that all studied material choices for bowtie design #2 achieve nearly uniform dose distribution, noise uniformity index less than 5%, and nearly uniform scatter-to-primary ratio. These same features can also be obtained using certain materials with bowtie designs #1 and #3. With the three bowtie filter designs used in this work, the selection of material is an important design consideration. An appropriate material choice can improve image quality, dose uniformity, and dynamic range.

  15. The use of linear programming techniques to design optimal digital filters for pulse shaping and channel equalization

    NASA Technical Reports Server (NTRS)

    Houts, R. C.; Burlage, D. W.

    1972-01-01

    A time domain technique is developed to design finite-duration impulse response digital filters using linear programming. Two related applications of this technique in data transmission systems are considered. The first is the design of pulse shaping digital filters to generate or detect signaling waveforms transmitted over bandlimited channels that are assumed to have ideal low pass or bandpass characteristics. The second is the design of digital filters to be used as preset equalizers in cascade with channels that have known impulse response characteristics. Example designs are presented which illustrate that excellent waveforms can be generated with frequency-sampling filters and the ease with which digital transversal filters can be designed for preset equalization.
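
    The report develops a time-domain formulation; purely as a flavor of how linear programming yields minimax FIR designs, here is a frequency-domain sketch using scipy.optimize.linprog (band edges, grid size, and the function name are illustrative).

        import numpy as np
        from scipy.optimize import linprog

        def lp_lowpass(numtaps, wp, ws, grid=200):
            """Minimax linear-phase low-pass FIR via an LP; numtaps must be odd.
            Unknowns: cosine coefficients a[0..M] of A(w) = a0 + 2*sum a[m]cos(mw), plus bound d."""
            M = (numtaps - 1) // 2
            w = np.concatenate([np.linspace(0, wp, grid), np.linspace(ws, np.pi, grid)])
            desired = np.concatenate([np.ones(grid), np.zeros(grid)])
            C = np.cos(np.outer(w, np.arange(M + 1)))
            C[:, 1:] *= 2.0
            # |C a - desired| <= d, written as two sets of linear inequalities
            A_ub = np.block([[C, -np.ones((len(w), 1))], [-C, -np.ones((len(w), 1))]])
            b_ub = np.concatenate([desired, -desired])
            c = np.zeros(M + 2); c[-1] = 1.0                 # minimize the error bound d
            res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                          bounds=[(None, None)] * (M + 1) + [(0, None)])
            a = res.x[:M + 1]
            return np.concatenate([a[:0:-1], [a[0]], a[1:]])  # symmetric impulse response

        h = lp_lowpass(31, wp=0.3 * np.pi, ws=0.45 * np.pi)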

  16. Bowtie filters for dedicated breast CT: Theory and computational implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kontson, Kimberly, E-mail: Kimberly.Kontson@fda.hhs.gov; Jennings, Robert J.

    Purpose: To design bowtie filters with improved properties for dedicated breast CT to improve image quality and reduce dose to the patient. Methods: The authors present three different bowtie filters designed for a cylindrical 14-cm diameter phantom with a uniform composition of 40/60 breast tissue, which vary in their design objectives and performance improvements. Bowtie design #1 is based on single material spectral matching and produces nearly uniform spectral shape for radiation incident upon the detector. Bowtie design #2 uses the idea of basis material decomposition to produce the same spectral shape and intensity at the detector, using two different materials. Bowtie design #3 eliminates the beam hardening effect in the reconstructed image by adjusting the bowtie filter thickness so that the effective attenuation coefficient for every ray is the same. All three designs are obtained using analytical computational methods and linear attenuation coefficients. Thus, the designs do not take into account the effects of scatter. The authors considered this to be a reasonable approach to the filter design problem since the use of Monte Carlo methods would have been computationally intensive. The filter profiles for a cone-angle of 0° were used for the entire length of each filter because the differences between those profiles and the correct cone-beam profiles for the cone angles in our system are very small, and the constant profiles allowed construction of the filters with the facilities available to us. For evaluation of the filters, we used Monte Carlo simulation techniques and the full cone-beam geometry. Images were generated with and without each bowtie filter to analyze the effect on dose distribution, noise uniformity, and contrast-to-noise ratio (CNR) homogeneity. Line profiles through the reconstructed images generated from the simulated projection images were also used as validation for the filter designs. Results: Examples of the three designs are presented. Initial verification of performance of the designs was done using analytical computations of HVL, intensity, and effective attenuation coefficient behind the phantom as a function of fan-angle with a cone-angle of 0°. The performance of the designs depends only weakly on incident spectrum and tissue composition. For all designs, the dynamic range requirement on the detector was reduced compared to the no-bowtie-filter case. Further verification of the filter designs was achieved through analysis of reconstructed images from simulations. Simulation data also showed that the use of our bowtie filters can reduce peripheral dose to the breast by 61% and provide uniform noise and CNR distributions. The bowtie filter design concepts validated in this work were then used to create a computational realization of a 3D anthropomorphic bowtie filter capable of achieving a constant effective attenuation coefficient behind the entire field-of-view of an anthropomorphic breast phantom. Conclusions: Three different bowtie filter designs that vary in performance improvements were described and evaluated using computational and simulation techniques. Results indicate that the designs are robust against variations in breast diameter, breast composition, and tube voltage, and that the use of these filters can reduce patient dose and improve image quality compared to the no-bowtie-filter case.

  17. Bowtie filters for dedicated breast CT: Analysis of bowtie filter material selection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kontson, Kimberly, E-mail: Kimberly.Kontson@fda.hhs.gov; Jennings, Robert J.

    Purpose: For a given bowtie filter design, both the selection of material and the physical design control the energy fluence, and consequently the dose distribution, in the object. Using three previously described bowtie filter designs, the goal of this work is to demonstrate the effect that different materials have on the bowtie filter performance measures. Methods: Three bowtie filter designs that compensate for one or more aspects of the beam-modifying effects due to the differences in path length in a projection have been designed. The nature of the designs allows for their realization using a variety of materials. The designs were based on a phantom, 14 cm in diameter, composed of 40% fibroglandular and 60% adipose tissue. Bowtie design #1 is based on single material spectral matching and produces nearly uniform spectral shape for radiation incident upon the detector. Bowtie design #2 uses the idea of basis-material decomposition to produce the same spectral shape and intensity at the detector, using two different materials. With bowtie design #3, it is possible to eliminate the beam hardening effect in the reconstructed image by adjusting the bowtie filter thickness so that the effective attenuation coefficient for every ray is the same. Seven different materials were chosen to represent a range of chemical compositions and densities. After calculation of construction parameters for each bowtie filter design, a bowtie filter was created using each of these materials (assuming reasonable construction parameters were obtained), resulting in a total of 26 bowtie filters modeled analytically and in the PENELOPE Monte Carlo simulation environment. Using the analytical model of each bowtie filter, design profiles were obtained and energy fluence as a function of fan-angle was calculated. Projection images with and without each bowtie filter design were also generated using PENELOPE and reconstructed using FBP. Parameters such as dose distribution, noise uniformity, and scatter were investigated. Results: Analytical calculations with and without each bowtie filter show that some materials for a given design produce bowtie filters that are too large for implementation in breast CT scanners or too small to accurately manufacture. Results also demonstrate the ability to manipulate the energy fluence distribution (dynamic range) by using different materials, or different combinations of materials, for a given bowtie filter design. This feature is especially advantageous when using photon counting detector technology. Monte Carlo simulation results from PENELOPE show that all studied material choices for bowtie design #2 achieve nearly uniform dose distribution, noise uniformity index less than 5%, and nearly uniform scatter-to-primary ratio. These same features can also be obtained using certain materials with bowtie designs #1 and #3. Conclusions: With the three bowtie filter designs used in this work, the selection of material is an important design consideration. An appropriate material choice can improve image quality, dose uniformity, and dynamic range.

  18. Reflective Filters Design for Self-Filtering Narrowband Ultraviolet Imaging Experiment Wide-Field Surveys (NUVIEWS) Project

    NASA Technical Reports Server (NTRS)

    Park, Jung-Ho; Kim, Jongmin; Zukic, Muamer; Torr, Douglas G.

    1994-01-01

    We report the design of multilayer reflective filters for the self-filtering cameras of the NUVIEWS project. Wide angle self-filtering cameras were designed to image the C IV (154.9 nm) line emission, and H2 Lyman band fluorescence (centered at 161 nm) over a 20 deg x 30 deg field of view. A key element of the filter design includes the development of pi-multilayers optimized to provide maximum reflectance at 154.9 nm and 161 nm for the respective cameras without significant spectral sensitivity to the large cone angle of the incident radiation. We applied self-filtering concepts to design NUVIEWS telescope filters that are composed of three reflective mirrors and one folding mirror. The filters, with narrow bandwidths of 6 and 8 nm at 154.9 and 161 nm, respectively, have net throughputs of more than 50% with average blocking of out-of-band wavelengths better than 3 x 10(exp -4)%.

  19. G.A.M.E.: GPU-accelerated mixture elucidator.

    PubMed

    Schurz, Alioune; Su, Bo-Han; Tu, Yi-Shu; Lu, Tony Tsung-Yu; Lin, Olivia A; Tseng, Yufeng J

    2017-09-15

    GPU acceleration is useful in solving complex chemical information problems. Identifying unknown structures from the mass spectra of natural product mixtures has been a desirable yet unresolved issue in metabolomics. However, this elucidation process has been hampered by complex experimental data and the inability of instruments to completely separate different compounds. Fortunately, with current high-resolution mass spectrometry, one feasible strategy is to define this problem as extending a scaffold database with sidechains of different probabilities to match the high-resolution mass obtained from a high-resolution mass spectrum. By introducing a dynamic programming (DP) algorithm, it is possible to solve this NP-complete problem in pseudo-polynomial time. However, the running time of the DP algorithm grows by orders of magnitude as the number of mass decimal digits increases, thus limiting the boost in structural prediction capabilities. By harnessing the heavily parallel architecture of modern GPUs, we designed a "compute unified device architecture" (CUDA)-based GPU-accelerated mixture elucidator (G.A.M.E.) that considerably improves the performance of the DP, allowing up to five decimal digits for input mass data. As exemplified by four testing datasets with verified constitutions from natural products, G.A.M.E. allows for efficient and automatic structural elucidation of unknown mixtures for practical procedures.

  20. Real-valued composite filters for correlation-based optical pattern recognition

    NASA Technical Reports Server (NTRS)

    Rajan, P. K.; Balendra, Anushia

    1992-01-01

    Advances in the technology of optical devices such as spatial light modulators (SLMs) have influenced the research and growth of optical pattern recognition. In the research leading to this report, the design of real-valued composite filters that can be implemented using currently available SLMs for optical pattern recognition and classification was investigated. The design of a real-valued minimum average correlation energy (RMACE) filter was investigated. Proper selection of the phase of the output response was shown to reduce the correlation energy. The performance of the filter was evaluated using computer simulations and compared with the complex filters. It was found that the performance degraded only slightly. Continuing the above investigation, the design of a real filter that minimizes the output correlation energy and the output variance due to noise was developed. Simulation studies showed that this filter had better tolerance to distortion and noise compared to that of the RMACE filter. Finally, the space domain design of the RMACE filter was developed and implemented on the computer. It was found that the sharpness of the correlation peak was slightly reduced but the filter design was more computationally efficient than the complex filter.

  1. Tunable Microwave Filter Design Using Thin-Film Ferroelectric Varactors

    NASA Astrophysics Data System (ADS)

    Haridasan, Vrinda

    Military, space, and consumer-based communication markets alike are moving towards multi-functional, multi-mode, and portable transceiver units. Ferroelectric-based tunable filter designs in RF front-ends are a relatively new area of research that provides a potential solution to support wideband and compact transceiver units. This work presents design methodologies developed to optimize a tunable filter design for system-level integration, and to improve the performance of a ferroelectric-based tunable bandpass filter. An investigative approach to find the origins of high insertion loss exhibited by these filters is also undertaken. A system-aware design guideline and figure of merit for ferroelectric-based tunable bandpass filters are developed. The guideline does not constrain the filter bandwidth as long as it falls within the range of the analog bandwidth of a system's analog to digital converter. A figure of merit (FOM) that optimizes filter design for a specific application is presented. It considers the worst-case filter performance parameters and a tuning sensitivity term that captures the relation between frequency tunability and the underlying material tunability. A non-tunable parasitic fringe capacitance associated with ferroelectric-based planar capacitors is confirmed by simulated and measured results. The fringe capacitance is an appreciable proportion of the tunable capacitance at frequencies of X-band and higher. As ferroelectric-based tunable capacitors form tunable resonators in the filter design, a proportionally higher fringe capacitance reduces the capacitance tunability, which in turn reduces the frequency tunability of the filter. Methods to reduce the fringe capacitance can thus increase frequency tunability or indirectly reduce the filter insertion loss by trading the increased tunability for lower loss. A new two-pole tunable filter topology with high frequency tunability (> 30%), steep filter skirts, wide stopband rejection, and constant bandwidth is designed, simulated, fabricated and measured. The filters are fabricated using barium strontium titanate (BST) varactors. Electromagnetic simulations and measured results of the tunable two-pole ferroelectric filter are analyzed to explore the origins of high insertion loss in ferroelectric filters. The results indicate that the high permittivity of the BST (a ferroelectric) not only makes the filters tunable and compact, but also increases the conductive loss of the ferroelectric-based tunable resonators, which translates into high insertion loss in ferroelectric filters.

  2. Design of an S band narrow-band bandpass BAW filter

    NASA Astrophysics Data System (ADS)

    Gao, Yang; Zhao, Kun-li; Han, Chao

    2017-11-01

    An S-band narrowband bandpass BAW filter with a center frequency of 2.460 GHz, a bandwidth of 41 MHz, an in-band insertion loss of -1.154 dB, a passband ripple of 0.9 dB, and out-of-band rejection of about -42.5 dB at 2.385 GHz and -45.5 dB at 2.506 GHz was designed for potential UAV measurement and control applications. According to the design specifications, the design proceeded as follows: each FBAR's stack in the BAW filter was designed using the Mason model; each FBAR's shape was designed with the apodization electrode method; and the layout of the BAW filter was designed. An acoustic-electromagnetic co-simulation model was built to validate the performance of the designed BAW filter. The presented design procedure is a general one, with two notable characteristics: 1) an acoustic and electromagnetic (A and EM) co-simulation method is used for final BAW filter performance validation in the design stage, which ensures that over-optimistic designs produced by the bare 1D Mason model are found and rejected in time; 2) an in-house developed auto-layout method is used to obtain a compact BAW filter layout, which simplifies iterative trial-and-error work and outputs the necessary in-plane geometry information to the A and EM co-simulation model.

  3. A Compact High Frequency Doppler Radio Scatterometer for Coastal Oceanography

    NASA Astrophysics Data System (ADS)

    Flament, P. J.; Harris, D.; Flament, M.; Fernandez, I. Q.; Hlivak, R.; Flores-vidal, X.; Marié, L.

    2016-12-01

    A low-power High Frequency Doppler Radar has been designed for large series production. The use of commercial-off-the-shelf components is maximized to minimize overall cost. Power consumption is reduced to 130 W in full duty and 20 W in stand-by under 20-36 V-DC, thus enabling solar/wind and/or fuel cell operation by default. For 8 channels, commercial components and sub-assemblies cost less than k20 excluding coaxial antenna cables, and less than four man-weeks of technician time suffice for integration, testing and calibration, suggesting a final cost of about k36, based on production batches of 25 units. The instrument is integrated into passively-cooled 90x60x20 cm3 field-deployable enclosures, combining signal generation, transmitter, receiver, A/D converter and computer, alleviating the need for additional protection such as a container or building. It uses frequency-ramped continuous wave signals and phased-array transmissions to decouple the direct path to the receivers. Five sub-assemblies are controlled by a Linux embedded computer: (i) direct digital synthesis of transmit and orthogonal local oscillator signals, derived from a low phase noise oven-controlled crystal; (ii) distributed power amplifiers totaling 5 W, integrated into λ/8 passive transmit antenna monopoles; (iii) λ/12 compact active receive antenna monopoles with embedded out-of-band rejection filters; (iv) analog receivers based on complex demodulation by double-balanced mixers, translating the HF spectrum to the audio band; (v) 24-bit analog-to-digital sigma-delta conversion at 12 kHz with 512x oversampling, followed by decimation to a final sampling frequency of 750 Hz. Except for the HF interference rejection filters, the electronics can operate between 3 and 50 MHz with no modification. At 13.5 MHz, 5 W transmit power, 15 min integration time, the high signal-to-noise ratio permits a typical range of 120 km for current measurements with 8-antenna beam-forming. The University of Hawaii HFR has been used since 2013 with 100% reliability, and has been deployed operationally at 7 sites in Hawaii, 4 sites in Baja California, and 1 site in France.
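
    The final stage quoted above (sigma-delta conversion at 12 kHz followed by decimation to 750 Hz) amounts to a 16:1 rate reduction behind an anti-alias filter. The minimal sketch below illustrates that step in isolation; the use of scipy.signal.decimate, the FIR anti-alias filter, and the synthetic test tone are illustrative assumptions, not the instrument's actual firmware.

      # Decimating a 12 kHz demodulated channel to the 750 Hz output rate (factor 16).
      # Hypothetical example; the radar's actual decimation filter is not specified above.
      import numpy as np
      from scipy import signal

      fs_in, fs_out = 12_000, 750        # Hz, rates quoted in the abstract
      q = fs_in // fs_out                # overall decimation factor: 16

      t = np.arange(2 * fs_in) / fs_in   # two seconds of synthetic data
      x = np.cos(2 * np.pi * 50 * t) + 0.01 * np.random.randn(t.size)  # 50 Hz tone + noise

      # FIR anti-alias filtering and downsampling in one call; zero_phase avoids group delay.
      y = signal.decimate(x, q, ftype='fir', zero_phase=True)
      print(x.size, "input samples ->", y.size, "output samples at", fs_out, "Hz")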

  4. Design of miniature type parallel coupled microstrip hairpin filter in UHF range

    NASA Astrophysics Data System (ADS)

    Hasan, Adib Belhaj; Rahman, Maj Tarikur; Kahhar, Azizul; Trina, Tasnim; Saha, Pran Kanai

    2017-12-01

    A microstrip parallel coupled line bandpass filter is designed in the UHF range, and the filter size is reduced by using a microstrip hairpin structure. An FR4 substrate is used as the base material of the filter. The filter is analyzed with both ADS and CST Design Studio over the frequency range of 500 MHz to 650 MHz. The bandwidth is found to be 13.27% with a center frequency of 570 MHz. Simulations from both ADS and CST show very good agreement in the performance of the filter.

  5. The intractable cigarette 'filter problem'.

    PubMed

    Harris, Bradford

    2011-05-01

    When lung cancer fears emerged in the 1950s, cigarette companies initiated a shift in cigarette design from unfiltered to filtered cigarettes. Both the ineffectiveness of cigarette filters and the tobacco industry's misleading marketing of the benefits of filtered cigarettes have been well documented. However, during the 1950s and 1960s, American cigarette companies spent millions of dollars to solve what the industry identified as the 'filter problem'. These extensive filter research and development efforts suggest a phase of genuine optimism among cigarette designers that cigarette filters could be engineered to mitigate the health hazards of smoking. This paper explores the early history of cigarette filter research and development in order to elucidate why and when seemingly sincere filter engineering efforts devolved into manipulations in cigarette design to sustain cigarette marketing and mitigate consumers' concerns about the health consequences of smoking. Relevant word and phrase searches were conducted in the Legacy Tobacco Documents Library online database, Google Patents, and media and medical databases including ProQuest, JSTOR, Medline and PubMed. 13 tobacco industry documents were identified that track prominent developments involved in what the industry referred to as the 'filter problem'. These reveal a period of intense focus on the 'filter problem' that persisted from the mid-1950s to the mid-1960s, featuring collaborations between cigarette producers and large American chemical and textile companies to develop effective filters. In addition, the documents reveal how cigarette filter researchers' growing scientific knowledge of smoke chemistry led to increasing recognition that filters were unlikely to offer significant health protection. One of the primary concerns of cigarette producers was to design cigarette filters that could be economically incorporated into the massive scale of cigarette production. The synthetic plastic cellulose acetate became the fundamental cigarette filter material. By the mid-1960s, the meaning of the phrase 'filter problem' changed, such that the effort to develop effective filters became a campaign to market cigarette designs that would sustain the myth of cigarette filter efficacy. This study indicates that cigarette designers at Philip Morris, British-American Tobacco, Lorillard and other companies believed for a time that they might be able to reduce some of the most dangerous substances in mainstream smoke through advanced engineering of filter tips. In their attempts to accomplish this, they developed the now ubiquitous cellulose acetate cigarette filter. By the mid-1960s cigarette designers realised that the intractability of the 'filter problem' derived from a simple fact: that which is harmful in mainstream smoke and that which provides the smoker with 'satisfaction' are essentially one and the same. Only in the wake of this realisation did the agenda of cigarette designers appear to transition away from mitigating the health hazards of smoking and towards the perpetuation of the notion that cigarette filters are effective in reducing these hazards. Filters became a marketing tool, designed to keep and recruit smokers as consumers of these hazardous products.

  6. Miniaturized dielectric waveguide filters

    NASA Astrophysics Data System (ADS)

    Sandhu, Muhammad Y.; Hunter, Ian C.

    2016-10-01

    Design techniques for a new class of integrated monolithic high-permittivity ceramic waveguide filters are presented. These filters enable a size reduction of 50% compared to air-filled transverse electromagnetic filters with the same unloaded Q-factor. Designs for Chebyshev and asymmetric generalised Chebyshev filters and a diplexer are presented, with experimental results for an 1800 MHz Chebyshev filter and a 1700 MHz generalised Chebyshev filter showing excellent agreement with theory.

  7. Design Techniques for Uniform-DFT, Linear Phase Filter Banks

    NASA Technical Reports Server (NTRS)

    Sun, Honglin; DeLeon, Phillip

    1999-01-01

    Uniform-DFT filter banks are an important class of filter banks and their theory is well known. One notable characteristic is their very efficient implementation when using polyphase filters and the FFT. Separately, linear phase filter banks, i.e., filter banks in which the analysis filters have linear phase, are also an important class of filter banks and are desired in many applications. Unfortunately, it has been proved that one cannot design critically-sampled, uniform-DFT, linear phase filter banks and achieve perfect reconstruction. In this paper, we present a least-squares solution to this problem and in addition prove that oversampled, uniform-DFT, linear phase filter banks (which are also useful in many applications) can be constructed for perfect reconstruction. Design examples are included to illustrate the methods.
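
    The "polyphase filters and the FFT" implementation mentioned above can be sketched in a few lines. Below is a minimal critically sampled uniform-DFT analysis bank; the firwin prototype, the channel count, and the IDFT orientation are illustrative assumptions, not the paper's least-squares or oversampled designs.

      # Uniform-DFT analysis bank: delay chain + decimation + polyphase filters + M-point (I)DFT.
      import numpy as np
      from scipy import signal

      def uniform_dft_analysis(x, h, M):
          """Split x into M subbands, each decimated by M, using lowpass prototype h."""
          h = np.asarray(h, dtype=float)
          E = [h[k::M] for k in range(M)]                     # type-1 polyphase components
          Npad = int(np.ceil(len(x) / M)) * M
          xp = np.concatenate([x, np.zeros(Npad - len(x))])
          P = Npad // M
          V = np.zeros((M, P))
          for k in range(M):
              xk = np.concatenate([np.zeros(k), xp])[:Npad]   # branch k sees x delayed by k
              V[k] = np.convolve(xk[::M], E[k])[:P]           # decimate, then polyphase filter
          return M * np.fft.ifft(V, axis=0)                   # DFT across branches -> channels

      M = 8
      h = signal.firwin(numtaps=8 * M, cutoff=1.0 / M)        # lowpass prototype (assumed design)
      x = np.random.randn(4096)
      Y = uniform_dft_analysis(x, h, M)

      # Channel 0 must equal "filter with the prototype, then decimate by M".
      y0_direct = np.convolve(x, h)[: (len(x) // M) * M : M]
      assert np.allclose(Y[0].real, y0_direct)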

  8. Design of multiplier-less sharp transition width non-uniform filter banks using gravitational search algorithm

    NASA Astrophysics Data System (ADS)

    Bindiya T., S.; Elias, Elizabeth

    2015-01-01

    In this paper, multiplier-less near-perfect reconstruction tree-structured filter banks are proposed. Filters with sharp transition width are preferred in filter banks in order to reduce the aliasing between adjacent channels. When sharp transition width filters are designed as conventional finite impulse response filters, the order of the filters will become very high leading to increased complexity. The frequency response masking (FRM) method is known to result in linear-phase sharp transition width filters with low complexity. It is found that the proposed design method, which is based on FRM, gives better results compared to the earlier reported results, in terms of the number of multipliers when sharp transition width filter banks are needed. To further reduce the complexity and power consumption, the tree-structured filter bank is made totally multiplier-less by converting the continuous filter bank coefficients to finite precision coefficients in the signed power of two space. This may lead to performance degradation and calls for the use of a suitable optimisation technique. In this paper, gravitational search algorithm is proposed to be used in the design of the multiplier-less tree-structured uniform as well as non-uniform filter banks. This design method results in uniform and non-uniform filter banks which are simple, alias-free, linear phase and multiplier-less and have sharp transition width.

  9. Design of coupled mace filters for optical pattern recognition using practical spatial light modulators

    NASA Technical Reports Server (NTRS)

    Rajan, P. K.; Khan, Ajmal

    1993-01-01

    Spatial light modulators (SLMs) are being used in correlation-based optical pattern recognition systems to implement the Fourier domain filters. Currently available SLMs have certain limitations with respect to the realizability of these filters. Therefore, it is necessary to incorporate the SLM constraints in the design of the filters. The design of a SLM-constrained minimum average correlation energy (SLM-MACE) filter using the simulated annealing-based optimization technique was investigated. The SLM-MACE filter was synthesized for three different types of constraints. The performance of the filter was evaluated in terms of its recognition (discrimination) capabilities using computer simulations. The correlation plane characteristics of the SLM-MACE filter were found to be reasonably good. The SLM-MACE filter yielded far better results than the analytical MACE filter implemented on practical SLMs using the constrained magnitude technique. Further, the filter performance was evaluated in the presence of noise in the input test images. This work demonstrated the need to include the SLM constraints in the filter design. Finally, a method is suggested to reduce the computation time required for the synthesis of the SLM-MACE filter.

  10. A comparison of methods for DPLL loop filter design

    NASA Technical Reports Server (NTRS)

    Aguirre, S.; Hurd, W. J.; Kumar, R.; Statman, J.

    1986-01-01

    Four design methodologies for loop filters for a class of digital phase-locked loops (DPLLs) are presented. The first design maps an optimum analog filter into the digital domain; the second approach designs a filter that minimizes, in discrete time, a weighted combination of the variance of the phase error due to noise and the sum square of the deterministic phase error component; the third method uses Kalman filter estimation theory to design a filter composed of a least squares fading memory estimator and a predictor. The last design relies on classical theory, including rules for the design of compensators. Linear analysis is used throughout the article to compare the different designs, and includes stability, steady state performance and transient behavior of the loops. Design methodology is not critical when the loop update rate can be made high relative to the loop bandwidth, as the performance approaches that of continuous time. For low update rates, however, the minimization method is significantly superior to the other methods.
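
    The first methodology above, mapping an optimum analog loop filter into the digital domain, is commonly done with the bilinear (Tustin) transform. The sketch below applies it to a standard second-order PLL loop filter F(s) = (1 + s*tau2)/(s*tau1); the prototype form, time constants, and update rate are illustrative assumptions, not values from the article.

      import numpy as np
      from scipy import signal

      fs = 1000.0                 # loop update rate, Hz (assumed)
      tau1, tau2 = 0.05, 0.005    # integrator and zero time constants, s (assumed)

      b_analog = [tau2, 1.0]      # numerator of F(s) = (tau2*s + 1) / (tau1*s)
      a_analog = [tau1, 0.0]      # denominator: a pure integrator
      b_dig, a_dig = signal.bilinear(b_analog, a_analog, fs=fs)
      print("digital loop filter coefficients:", b_dig, a_dig)

      # Well below the Nyquist rate the digital filter should track the analog prototype.
      w, h_dig = signal.freqz(b_dig, a_dig, worN=512, fs=fs)
      mask = (w > 0) & (w < fs / 10)
      _, h_an = signal.freqs(b_analog, a_analog, worN=2 * np.pi * w[mask])
      print("max relative error below fs/10:",
            np.max(np.abs(h_dig[mask] - h_an) / np.abs(h_an)))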

  11. Digital phased array beamforming using single-bit delta-sigma conversion with non-uniform oversampling.

    PubMed

    Kozak, M; Karaman, M

    2001-07-01

    Digital beamforming based on oversampled delta-sigma (delta sigma) analog-to-digital (A/D) conversion can reduce the overall cost, size, and power consumption of phased array front-end processing. The signal resampling involved in dynamic delta sigma beamforming, however, disrupts synchronization between the modulators and demodulator, causing significant degradation in the signal-to-noise ratio. As a solution to this, we have explored a new digital beamforming approach based on non-uniform oversampling delta sigma A/D conversion. Using this approach, the echo signals received by the transducer array are sampled at time instants determined by the beamforming timing and then digitized by single-bit delta sigma A/D conversion prior to the coherent beam summation. The timing information involves a non-uniform sampling scheme employing different clocks at each array channel. The delta sigma coded beamsums obtained by adding the delayed 1-bit coded RF echo signals are then processed through a decimation filter to produce final beamforming outputs. The performance and validity of the proposed beamforming approach are assessed by means of emulations using experimental raw RF data.
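
    The chain described above, single-bit delta-sigma coding followed by a decimation filter that recovers a multi-bit signal at the lower rate, can be illustrated with a uniform-sampling toy model. The modulator order, oversampling ratio, FIR decimation filter, and test tone below are assumptions for illustration; the non-uniform, beam-steered sampling of the paper is not reproduced.

      import numpy as np
      from scipy import signal

      osr = 64                                  # oversampling ratio (assumed)
      fs = 48_000 * osr                         # modulator clock rate
      t = np.arange(fs // 100) / fs             # 10 ms of data
      x = 0.5 * np.sin(2 * np.pi * 1000 * t)    # 1 kHz test tone

      def delta_sigma_1bit(x):
          """First-order single-bit delta-sigma modulator."""
          y = np.empty_like(x)
          integ = 0.0
          for n, xn in enumerate(x):
              integ += xn - (y[n - 1] if n else 0.0)   # integrate input minus fed-back output
              y[n] = 1.0 if integ >= 0 else -1.0       # 1-bit quantizer
          return y

      bits = delta_sigma_1bit(x)

      # Decimation filter: lowpass FIR, then downsample back to the signal rate.
      h = signal.firwin(numtaps=8 * osr + 1, cutoff=1.0 / osr)
      rec = signal.lfilter(h, 1.0, bits)[::osr]

      d = (len(h) - 1) // (2 * osr)             # FIR group delay in decimated samples
      err = rec[d:] - x[::osr][: len(rec) - d]
      print("rms reconstruction error:", np.sqrt(np.mean(err ** 2)))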

  12. Design and Specification of Optical Bandpass Filters for Hubble Space Telescope (HST) Advanced Camera for Surveys (ACS)

    NASA Technical Reports Server (NTRS)

    Leviton, Douglas B.; Tsevetanov, Zlatan; Woodruff, Bob; Mooney, Thomas A.

    1998-01-01

    Advanced optical bandpass filters for the Hubble Space Telescope (HST) Advanced Camera for Surveys (ACS) have been developed on a filter-by-filter basis through detailed studies which take into account the instrument's science goals, available optical filter fabrication technology, and developments in ACS's charge-coupled-device (CCD) detector technology. These filters include a subset of filters for the Sloan Digital Sky Survey (SDSS) which are optimized for astronomical photometry using today's charge-coupled-devices (CCD's). In order for ACS to be truly advanced, these filters must push the state-of-the-art in performance in a number of key areas at the same time. Important requirements for these filters include outstanding transmitted wavefront, high transmittance, uniform transmittance across each filter, spectrally structure-free bandpasses, exceptionally high out of band rejection, a high degree of parfocality, and immunity to environmental degradation. These constitute a very stringent set of requirements indeed, especially for filters which are up to 90 mm in diameter. The highly successful paradigm in which final specifications for flight filters were derived through interaction amongst the ACS Science Team, the instrument designer, the lead optical engineer, and the filter designer and vendor is described. Examples of iterative design trade studies carried out in the context of science needs and budgetary and schedule constraints are presented. An overview of the final design specifications for the ACS bandpass and ramp filters is also presented.

  13. Initial Ares I Bending Filter Design

    NASA Technical Reports Server (NTRS)

    Jang, Jiann-Woei; Bedrossian, Nazareth; Hall, Robert; Norris, H. Lee; Hall, Charles; Jackson, Mark

    2007-01-01

    The Ares-I launch vehicle represents a challenging flex-body structural environment for control system design. Software filtering of the inertial sensor output will be required to ensure control system stability and adequate performance. This paper presents a design methodology employing numerical optimization to develop the Ares-I bending filters. The filter design methodology was based on a numerical constrained optimization approach to maximize stability margins while meeting performance requirements. The resulting bending filter designs achieved stability by adding lag to the first structural frequency and hence phase stabilizing the first Ares-I flex mode. To minimize rigid body performance impacts, a priority was placed, via constraints in the optimization algorithm, on minimizing the bandwidth decrease caused by the addition of the bending filters. The bending filters presented here have been demonstrated to provide a stable first stage control system in both the frequency domain and the MSFC MAVERIC time domain simulation.
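
    The phase-stabilization idea described above, adding lag at the first structural frequency, can be illustrated with a simple first-order lag compensator; the flex frequency, pole/zero placement, and filter form below are assumptions for illustration, not the optimized Ares-I filters.

      import numpy as np
      from scipy import signal

      f_flex = 3.0                          # assumed first bending-mode frequency, Hz
      w_flex = 2 * np.pi * f_flex

      # Lag compensator C(s) = (s/wz + 1) / (s/wp + 1) with wp < wz: unity DC gain,
      # phase lag and gain attenuation between the pole and the zero.
      wz, wp = 2.0 * w_flex, w_flex / 4.0
      lag = signal.TransferFunction([1 / wz, 1.0], [1 / wp, 1.0])

      w, mag, phase = signal.bode(lag, w=np.logspace(-1, 2, 400) * 2 * np.pi)
      i = np.argmin(np.abs(w - w_flex))
      print(f"at {f_flex} Hz: phase {phase[i]:.1f} deg, gain {mag[i]:.1f} dB")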

  14. Use of decimal assay for additivity to demonstrate synergy in pair combinations of econazole, nikkomycin Z, and ibuprofen against Candida albicans in vitro.

    PubMed Central

    Tariq, V N; Scott, E M; McCain, N E

    1995-01-01

    Interactions between six compounds (econazole, miconazole, amphotericin B, nystatin, nikkomycin Z, and ibuprofen) were investigated for their antifungal activities against Candida albicans by using pair combinations in an in vitro decimal assay for additivity based on disk diffusion. Additive interactions were observed between miconazole and econazole, amphotericin B and nystatin, and amphotericin B and ibuprofen, while an antagonistic interaction was observed between econazole and amphotericin B. Synergistic interactions were recorded for the combinations of econazole and ibuprofen, econazole and nikkomycin Z, and ibuprofen and nikkomycin Z. PMID:8592989

  15. Counting spanning trees on fractal graphs and their asymptotic complexity

    NASA Astrophysics Data System (ADS)

    Anema, Jason A.; Tsougkas, Konstantinos

    2016-09-01

    Using the method of spectral decimation and a modified version of Kirchhoff's matrix-tree theorem, a closed form solution to the number of spanning trees on approximating graphs to a fully symmetric self-similar structure on a finitely ramified fractal is given in theorem 3.4. We show how spectral decimation implies the existence of the asymptotic complexity constant and obtain some bounds for it. Examples calculated include the Sierpiński gasket, a non-post critically finite analog of the Sierpiński gasket, the Diamond fractal, and the hexagasket. For each example, the asymptotic complexity constant is found.
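
    Kirchhoff's matrix-tree theorem, the other ingredient named above, states that the number of spanning trees equals any cofactor of the graph Laplacian. The sketch below applies it to a triangle and to an assumed construction of the level-1 Sierpinski gasket approximating graph; the vertex labelling is illustrative and not the paper's notation.

      import numpy as np

      def spanning_trees(edges, n):
          """Count spanning trees of a simple graph on n vertices via a Laplacian cofactor."""
          L = np.zeros((n, n))
          for i, j in edges:
              L[i, i] += 1
              L[j, j] += 1
              L[i, j] -= 1
              L[j, i] -= 1
          return round(np.linalg.det(L[1:, 1:]))   # delete one row/column, take the determinant

      triangle = [(0, 1), (1, 2), (2, 0)]
      assert spanning_trees(triangle, 3) == 3      # K3 has exactly 3 spanning trees

      # Level-1 Sierpinski gasket graph: corners 0-2, edge midpoints 3-5, nine edges.
      sg1 = [(0, 3), (0, 5), (3, 5), (1, 3), (1, 4), (3, 4), (2, 4), (2, 5), (4, 5)]
      print("spanning trees of the level-1 gasket graph:", spanning_trees(sg1, 6))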

  16. The filter and calibration wheel for the ATHENA wide field imager

    NASA Astrophysics Data System (ADS)

    Rataj, M.; Polak, S.; Palgan, T.; Kamisiński, T.; Pilch, A.; Eder, J.; Meidinger, N.; Plattner, M.; Barbera, M.; Parodi, G.; D'Anca, Fabio

    2016-07-01

    The planned filter and calibration wheel for the Wide Field Imager (WFI) instrument on Athena is presented. With four selectable positions it provides the necessary functions, in particular a UV/VIS blocking filter for the WFI detectors and a calibration source. Challenges for the filter wheel design are the large volume and mass of the subsystem, the implementation of a robust mechanism and the protection of the ultra-thin filter with an area of 160 mm square. This paper describes the trade-offs performed, based on simulation results, and describes the baseline design in detail. Reliable solutions are envisaged for the conceptual design of the filter and calibration wheel. Four different variants with different positions of the filter are presented. Risk mitigation and compliance with the design requirements are demonstrated.

  17. Input filter compensation for switching regulators

    NASA Technical Reports Server (NTRS)

    Lee, F. C.; Kelkar, S. S.

    1982-01-01

    The problems caused by the interaction between the input filter, output filter, and the control loop are discussed. The input filter design is made more complicated by the need to avoid performance degradation while staying within the weight and loss limitations. Conventional input filter design techniques are then discussed. The concept of pole zero cancellation is reviewed; this concept is the basis for an approach to control the peaking of the output impedance of the input filter and thus mitigate some of the problems caused by the input filter. The proposed approach for controlling the peaking of the output impedance of the input filter is to use a feedforward loop working in conjunction with feedback loops, thus forming a total state control scheme. The design of the feedforward loop for a buck regulator is described. A possible implementation of the feedforward loop design is suggested.

  18. The intractable cigarette ‘filter problem’

    PubMed Central

    2011-01-01

    Background When lung cancer fears emerged in the 1950s, cigarette companies initiated a shift in cigarette design from unfiltered to filtered cigarettes. Both the ineffectiveness of cigarette filters and the tobacco industry's misleading marketing of the benefits of filtered cigarettes have been well documented. However, during the 1950s and 1960s, American cigarette companies spent millions of dollars to solve what the industry identified as the ‘filter problem’. These extensive filter research and development efforts suggest a phase of genuine optimism among cigarette designers that cigarette filters could be engineered to mitigate the health hazards of smoking. Objective This paper explores the early history of cigarette filter research and development in order to elucidate why and when seemingly sincere filter engineering efforts devolved into manipulations in cigarette design to sustain cigarette marketing and mitigate consumers' concerns about the health consequences of smoking. Methods Relevant word and phrase searches were conducted in the Legacy Tobacco Documents Library online database, Google Patents, and media and medical databases including ProQuest, JSTOR, Medline and PubMed. Results 13 tobacco industry documents were identified that track prominent developments involved in what the industry referred to as the ‘filter problem’. These reveal a period of intense focus on the ‘filter problem’ that persisted from the mid-1950s to the mid-1960s, featuring collaborations between cigarette producers and large American chemical and textile companies to develop effective filters. In addition, the documents reveal how cigarette filter researchers' growing scientific knowledge of smoke chemistry led to increasing recognition that filters were unlikely to offer significant health protection. One of the primary concerns of cigarette producers was to design cigarette filters that could be economically incorporated into the massive scale of cigarette production. The synthetic plastic cellulose acetate became the fundamental cigarette filter material. By the mid-1960s, the meaning of the phrase ‘filter problem’ changed, such that the effort to develop effective filters became a campaign to market cigarette designs that would sustain the myth of cigarette filter efficacy. Conclusions This study indicates that cigarette designers at Philip Morris, British-American Tobacco, Lorillard and other companies believed for a time that they might be able to reduce some of the most dangerous substances in mainstream smoke through advanced engineering of filter tips. In their attempts to accomplish this, they developed the now ubiquitous cellulose acetate cigarette filter. By the mid-1960s cigarette designers realised that the intractability of the ‘filter problem’ derived from a simple fact: that which is harmful in mainstream smoke and that which provides the smoker with ‘satisfaction’ are essentially one and the same. Only in the wake of this realisation did the agenda of cigarette designers appear to transition away from mitigating the health hazards of smoking and towards the perpetuation of the notion that cigarette filters are effective in reducing these hazards. Filters became a marketing tool, designed to keep and recruit smokers as consumers of these hazardous products. PMID:21504917

  19. Non-uniform cosine modulated filter banks using meta-heuristic algorithms in CSD space.

    PubMed

    Kalathil, Shaeen; Elias, Elizabeth

    2015-11-01

    This paper presents an efficient design of non-uniform cosine modulated filter banks (CMFB) using canonic signed digit (CSD) coefficients. CMFB has an easy and efficient design approach. Non-uniform decomposition can be easily obtained by merging the appropriate filters of a uniform filter bank. Only the prototype filter needs to be designed and optimized. In this paper, the prototype filter is designed using the window method, weighted Chebyshev approximation and weighted constrained least squares approximation. The coefficients are quantized into CSD using a look-up table. The finite-precision CSD rounding deteriorates the filter bank performance. The performance of the filter bank is improved using suitably modified meta-heuristic algorithms. The meta-heuristic algorithms that are modified and used in this paper are the Artificial Bee Colony algorithm, the Gravitational Search algorithm, the Harmony Search algorithm and the Genetic algorithm; they result in filter banks with lower implementation complexity, power consumption and area requirements when compared with those of the conventional continuous-coefficient non-uniform CMFB.
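
    The CSD quantization step referred to above can be illustrated directly: each rounded coefficient is re-coded with digits in {-1, 0, +1} and no two adjacent nonzero digits, so a multiplication becomes a handful of shift-and-add operations. The word length and example coefficient below are assumptions, and the paper's look-up-table and meta-heuristic optimization are not reproduced.

      def to_csd(x):
          """Re-code integer x as canonic signed digits, LSB first, digits in {-1, 0, +1}."""
          digits = []
          while x != 0:
              if x % 2 == 0:
                  d = 0
              else:
                  d = 1 if x % 4 == 1 else -1   # choice that leaves the next digit zero
              digits.append(d)
              x = (x - d) // 2
          return digits

      B = 10                                    # fractional word length (assumed)
      coeff = 0.361                             # an illustrative prototype-filter tap
      q = round(coeff * 2 ** B)                 # fixed-point integer value
      digits = to_csd(q)
      adders = sum(d != 0 for d in digits)      # each nonzero digit costs one shift-and-add
      value = sum(d * 2.0 ** (k - B) for k, d in enumerate(digits))
      print(f"{coeff} -> {value} in CSD with {adders} nonzero digits")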

  20. Non-uniform cosine modulated filter banks using meta-heuristic algorithms in CSD space

    PubMed Central

    Kalathil, Shaeen; Elias, Elizabeth

    2014-01-01

    This paper presents an efficient design of non-uniform cosine modulated filter banks (CMFB) using canonic signed digit (CSD) coefficients. CMFB has an easy and efficient design approach. Non-uniform decomposition can be easily obtained by merging the appropriate filters of a uniform filter bank. Only the prototype filter needs to be designed and optimized. In this paper, the prototype filter is designed using the window method, weighted Chebyshev approximation and weighted constrained least squares approximation. The coefficients are quantized into CSD using a look-up table. The finite-precision CSD rounding deteriorates the filter bank performance. The performance of the filter bank is improved using suitably modified meta-heuristic algorithms. The meta-heuristic algorithms that are modified and used in this paper are the Artificial Bee Colony algorithm, the Gravitational Search algorithm, the Harmony Search algorithm and the Genetic algorithm; they result in filter banks with lower implementation complexity, power consumption and area requirements when compared with those of the conventional continuous-coefficient non-uniform CMFB. PMID:26644921

  1. Designing an Inverter-based Operational Transconductance Amplifier-capacitor Filter with Low Power Consumption for Biomedical Applications

    PubMed Central

    Yousefinezhad, Sajad; Kermani, Saeed; Hosseinnia, Saeed

    2018-01-01

    The operational transconductance amplifier-capacitor (OTA-C) filter is one of the best structures for implementing continuous-time filters. It is particularly important to design a universal OTA-C filter capable of generating the desired filter response via a single structure, thus reducing the filter circuit power consumption as well as noise and the occupied space on the electronic chip. In this study, an inverter-based universal OTA-C filter with very low power consumption and acceptable noise was designed with applications in bioelectric and biomedical equipment for recording biomedical signals. The very low power consumption of the proposed filter was achieved by biasing the MOSFET transistors in the subthreshold region. The proposed filter is also capable of simultaneously providing low-, band-, and high-pass filter responses. The performance of the proposed filter was simulated and analyzed via HSPICE software (level 49) and 180 nm complementary metal-oxide-semiconductor technology. The power consumption and noise obtained from simulations are 7.1 nW and 10.18 nA, respectively, so this filter offers reduced noise as well as reduced power consumption. The proposed universal OTA-C filter was designed with the minimum number of transconductance blocks, using an inverter circuit and three transconductance blocks (OTAs). PMID:29535925

  2. Designing an Inverter-based Operational Transconductance Amplifier-capacitor Filter with Low Power Consumption for Biomedical Applications.

    PubMed

    Yousefinezhad, Sajad; Kermani, Saeed; Hosseinnia, Saeed

    2018-01-01

    The operational transconductance amplifier-capacitor (OTA-C) filter is one of the best structures for implementing continuous-time filters. It is particularly important to design a universal OTA-C filter capable of generating the desired filter response via a single structure, thus reducing the filter circuit power consumption as well as noise and the occupied space on the electronic chip. In this study, an inverter-based universal OTA-C filter with very low power consumption and acceptable noise was designed with applications in bioelectric and biomedical equipment for recording biomedical signals. The very low power consumption of the proposed filter was achieved by biasing the MOSFET transistors in the subthreshold region. The proposed filter is also capable of simultaneously providing low-, band-, and high-pass filter responses. The performance of the proposed filter was simulated and analyzed via HSPICE software (level 49) and 180 nm complementary metal-oxide-semiconductor technology. The power consumption and noise obtained from simulations are 7.1 nW and 10.18 nA, respectively, so this filter offers reduced noise as well as reduced power consumption. The proposed universal OTA-C filter was designed with the minimum number of transconductance blocks, using an inverter circuit and three transconductance blocks (OTAs).

  3. Optimally designed narrowband guided-mode resonance reflectance filters for mid-infrared spectroscopy

    PubMed Central

    Liu, Jui-Nung; Schulmerich, Matthew V.; Bhargava, Rohit; Cunningham, Brian T.

    2011-01-01

    An alternative to the well-established Fourier transform infrared (FT-IR) spectrometry, termed discrete frequency infrared (DFIR) spectrometry, has recently been proposed. This approach uses narrowband mid-infrared reflectance filters based on guided-mode resonance (GMR) in waveguide gratings, but the filters designed and fabricated to date have not attained the spectral selectivity (≤ 32 cm−1) commonly employed for measurements of condensed matter using FT-IR spectroscopy. With the incorporation of dispersion and optical absorption of materials, we present here an optimal design of double-layer surface-relief silicon nitride-based GMR filters in the mid-IR for various narrow bandwidths below 32 cm−1. Both the shift of the filter resonance wavelengths arising from the dispersion effect and the reduction of peak reflection efficiency and electric field enhancement due to the absorption effect show that the optical characteristics of materials must be taken into consideration rigorously for accurate design of narrowband GMR filters. By incorporating considerations for background reflections, the optimally designed GMR filters can have a bandwidth narrower than that of a filter designed by the antireflection equivalence method with the same index modulation magnitude, without sacrificing low sideband reflections near resonance. The reported work will enable the use of GMR-filter-based instrumentation for common measurements of condensed matter, including tissues and polymer samples. PMID:22109445

  4. Contingency designs for attitude determination of TRMM

    NASA Technical Reports Server (NTRS)

    Crassidis, John L.; Andrews, Stephen F.; Markley, F. Landis; Ha, Kong

    1995-01-01

    In this paper, several attitude estimation designs are developed for the Tropical Rainfall Measurement Mission (TRMM) spacecraft. A contingency attitude determination mode is required in the event of a primary sensor failure. The final design utilizes a full sixth-order Kalman filter. However, due to initial software concerns, the need to investigate simpler designs arose. The algorithms presented in this paper can be utilized in place of a full Kalman filter and impose a smaller computational burden. These algorithms are based on filtered deterministic approaches and simplified Kalman filter approaches. Comparative performances of all designs are shown by simulating the TRMM spacecraft in mission mode. Comparisons of the simulation results indicate that accuracy comparable to a full Kalman filter design is possible.
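
    For reference, the predict/update recursion at the core of all the Kalman-filter-based designs above is compact; the single-axis angle/gyro-bias model, noise levels, and measurement setup below are illustrative assumptions, not the TRMM sixth-order filter.

      import numpy as np

      dt = 0.1
      F = np.array([[1.0, -dt],      # angle propagated with (gyro rate - bias)
                    [0.0, 1.0]])     # gyro bias modeled as a random walk
      B = np.array([[dt], [0.0]])
      H = np.array([[1.0, 0.0]])     # the sensor measures the angle directly
      Q = np.diag([1e-5, 1e-7])      # process noise covariance (assumed)
      R = np.array([[1e-3]])         # measurement noise covariance (assumed)

      x = np.zeros((2, 1))           # state estimate: [angle, gyro bias]
      P = np.eye(2)                  # estimate covariance

      def kf_step(x, P, gyro_rate, angle_meas):
          # Predict: propagate the state using the gyro measurement as a control input.
          x = F @ x + B * gyro_rate
          P = F @ P @ F.T + Q
          # Update: fold in the angle measurement.
          y = angle_meas - H @ x
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          x = x + K @ y
          P = (np.eye(2) - K @ H) @ P
          return x, P

      x, P = kf_step(x, P, gyro_rate=0.02, angle_meas=0.001)
      print("angle and bias estimate:", x.ravel())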

  5. Software Would Largely Automate Design of Kalman Filter

    NASA Technical Reports Server (NTRS)

    Chuang, Jason C. H.; Negast, William J.

    2005-01-01

    Embedded Navigation Filter Automatic Designer (ENFAD) is a computer program being developed to automate the most difficult tasks in designing embedded software to implement a Kalman filter in a navigation system. The most difficult tasks are the selection of the error states of the filter and the tuning of filter parameters, which are time-consuming trial-and-error tasks that require expertise and rarely yield optimum results. An optimum selection of error states and filter parameters depends on navigation-sensor and vehicle characteristics, and on filter processing time. ENFAD would include a simulation module that would incorporate all possible error states with respect to a given set of vehicle and sensor characteristics. The first of two iterative optimization loops would vary the selection of error states until the best filter performance was achieved in Monte Carlo simulations. For a fixed selection of error states, the second loop would vary the filter parameter values until an optimal performance value was obtained. Design constraints would be satisfied in the optimization loops. Users would supply vehicle and sensor test data that would be used to refine digital models in ENFAD. Filter processing time and filter accuracy would be computed by ENFAD.

  6. Mechanical design and qualification of IR filter mounts and filter wheel of INSAT-3D sounder for low temperature

    NASA Astrophysics Data System (ADS)

    Vora, A. P.; Rami, J. B.; Hait, A. K.; Dewan, C. P.; Subrahmanyam, D.; Kirankumar, A. S.

    2017-11-01

    The next-generation Indian meteorological satellite will carry a Sounder instrument with a filter-wheel subsystem measuring Ø260 mm and carrying 18 filters arranged in three concentric rings. These filters, made from germanium, are used to separate spectral channels in the IR band. The filter wheel is required to be cooled to 214 K and rotated at 600 rpm. This paper discusses the challenges faced in the mechanical design of the filter wheel, mainly the filter mount design to protect the brittle germanium filters from failure under stresses due to very low temperature, the compactness of the wheel and casings for improved thermal efficiency, survival under vibration loads, and material selection to keep the wheel light in weight. Properties of titanium, Kovar, Invar and aluminium are considered for the design. The mount has been designed to accommodate both thermal and dynamic loadings without introducing significant aberrations into the optics or incurring permanent alignment shifts. Detailed finite element analysis of the mounts was carried out for stress verification. Results of the qualification tests are discussed for the given temperature range of 100 K and vibration loads of 12 g in sine and 11.8 grms in random at the mount level. Results of the filter wheel qualification as mounted in the Electro Optics Module (EOM) are also presented.

  7. Design of efficient circularly symmetric two-dimensional variable digital FIR filters.

    PubMed

    Bindima, Thayyil; Elias, Elizabeth

    2016-05-01

    Circularly symmetric two-dimensional (2D) finite impulse response (FIR) filters find extensive use in image and medical applications, especially for isotropic filtering. Moreover, the design and implementation of 2D digital filters with variable fractional delay and variable magnitude responses without redesigning the filter has become a crucial topic of interest due to its significance in low-cost applications. Recently, the design using fixed word-length coefficients has gained importance due to the replacement of multipliers by shifters and adders, which reduces the hardware complexity. Among the various approaches to 2D design, transforming a one-dimensional (1D) filter to 2D by a transformation is reported to be an efficient technique. In this paper, 1D variable digital filters (VDFs) with tunable cut-off frequencies are designed using a Farrow structure based interpolation approach, and the sub-filter coefficients in the Farrow structure are made multiplier-less using canonic signed digit (CSD) representation. The resulting performance degradation in the filters is overcome by using artificial bee colony (ABC) optimization. Finally, the optimized 1D VDFs are mapped to 2D using the generalized McClellan transformation, resulting in low-complexity, circularly symmetric 2D VDFs with real-time tunability.
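
    The Farrow structure named above keeps a small set of fixed sub-filters and combines their outputs with powers of the delay parameter, so the fractional delay is tunable at run time without redesigning the filter. The sketch below uses standard cubic-Lagrange sub-filters as an assumed example; the paper's optimized, CSD-coded sub-filters are not reproduced.

      import numpy as np

      # Rows are the Farrow sub-filters c_m (m = power of mu); columns are the 4 taps.
      # Together they realize cubic Lagrange interpolation with a total delay of (1 + mu) samples.
      C = np.array([[ 0.0,  1.0,  0.0,  0.0],
                    [-1/3, -1/2,  1.0, -1/6],
                    [ 1/2, -1.0,  1/2,  0.0],
                    [-1/6,  1/2, -1/2,  1/6]])

      def farrow_delay(x, mu):
          """Delay x by (1 + mu) samples, mu in [0, 1], via the Farrow structure."""
          branches = [np.convolve(x, c)[:len(x)] for c in C]
          y = branches[-1]
          for b in reversed(branches[:-1]):      # Horner evaluation in mu
              y = y * mu + b
          return y

      fs, f0, mu = 100.0, 3.0, 0.37
      n = np.arange(400)
      x = np.sin(2 * np.pi * f0 * n / fs)
      y = farrow_delay(x, mu)
      ideal = np.sin(2 * np.pi * f0 * (n - 1 - mu) / fs)
      print("max error (ignoring the start-up transient):", np.max(np.abs(y[4:] - ideal[4:])))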

  8. Design of efficient circularly symmetric two-dimensional variable digital FIR filters

    PubMed Central

    Bindima, Thayyil; Elias, Elizabeth

    2016-01-01

    Circularly symmetric two-dimensional (2D) finite impulse response (FIR) filters find extensive use in image and medical applications, especially for isotropic filtering. Moreover, the design and implementation of 2D digital filters with variable fractional delay and variable magnitude responses without redesigning the filter has become a crucial topic of interest due to its significance in low-cost applications. Recently, the design using fixed word-length coefficients has gained importance due to the replacement of multipliers by shifters and adders, which reduces the hardware complexity. Among the various approaches to 2D design, transforming a one-dimensional (1D) filter to 2D by a transformation is reported to be an efficient technique. In this paper, 1D variable digital filters (VDFs) with tunable cut-off frequencies are designed using a Farrow structure based interpolation approach, and the sub-filter coefficients in the Farrow structure are made multiplier-less using canonic signed digit (CSD) representation. The resulting performance degradation in the filters is overcome by using artificial bee colony (ABC) optimization. Finally, the optimized 1D VDFs are mapped to 2D using the generalized McClellan transformation, resulting in low-complexity, circularly symmetric 2D VDFs with real-time tunability. PMID:27222739

  9. Optical Correlation of Images With Signal-Dependent Noise Using Constrained-Modulation Filter Devices

    NASA Technical Reports Server (NTRS)

    Downie, John D.

    1995-01-01

    Images with signal-dependent noise present challenges beyond those of images with additive white or colored signal-independent noise in terms of designing the optimal 4-f correlation filter that maximizes correlation-peak signal-to-noise ratio, or combinations of correlation-peak metrics. Determining the proper design becomes more difficult when the filter is to be implemented on a constrained-modulation spatial light modulator device. The design issues involved for updatable optical filters for images with signal-dependent film-grain noise and speckle noise are examined. It is shown that although design of the optimal linear filter in the Fourier domain is impossible for images with signal-dependent noise, proper nonlinear preprocessing of the images allows the application of previously developed design rules for optimal filters to be implemented on constrained-modulation devices. Thus the nonlinear preprocessing becomes necessary for correlation in optical systems with current spatial light modulator technology. These results are illustrated with computer simulations of images with signal-dependent noise correlated with binary-phase-only filters and ternary-phase-amplitude filters.

  10. Computer-aided design of nano-filter construction using DNA self-assembly

    NASA Astrophysics Data System (ADS)

    Mohammadzadegan, Reza; Mohabatkar, Hassan

    2007-01-01

    Computer-aided design plays a fundamental role in both top-down and bottom-up nano-system fabrication. This paper presents a bottom-up nano-filter patterning process based on DNA self-assembly. In this study we designed a new method to construct fully designed nano-filters with the pores between 5 nm and 9 nm in diameter. Our calculations illustrated that by constructing such a nano-filter we would be able to separate many molecules.

  11. Ares-I Bending Filter Design using a Constrained Optimization Approach

    NASA Technical Reports Server (NTRS)

    Hall, Charles; Jang, Jiann-Woei; Hall, Robert; Bedrossian, Nazareth

    2008-01-01

    The Ares-I launch vehicle represents a challenging flex-body structural environment for control system design. Software filtering of the inertial sensor output is required to ensure adequate stable response to guidance commands while minimizing trajectory deviations. This paper presents a design methodology employing numerical optimization to develop the Ares-I bending filters. The design objectives include attitude tracking accuracy and robust stability with respect to rigid body dynamics, propellant slosh, and flex. Under the assumption that the Ares-I time-varying dynamics and control system can be frozen over a short period of time, the bending filters are designed to stabilize all the selected frozen-time launch control systems in the presence of parameter uncertainty. To ensure adequate response to guidance commands, step response specifications are introduced as constraints in the optimization problem. Imposing these constraints minimizes performance degradation caused by the addition of the bending filters. The first stage bending filter design achieves stability by adding lag to the first structural frequency to phase stabilize the first flex mode while gain stabilizing the higher modes. The upper stage bending filter design gain stabilizes all the flex bending modes. The bending filter designs provided here have been demonstrated to provide stable first and second stage control systems in both the Draper Ares Stability Analysis Tool (ASAT) and the MSFC MAVERIC 6DOF nonlinear time domain simulation.

  12. Labyrinth double split open loop resonator based bandpass filter design for S, C and X-band application

    NASA Astrophysics Data System (ADS)

    Alam, Jubaer; Faruque, Mohammad Rashed Iqbal; Tariqul Islam, Mohammad

    2018-07-01

    Nested circular shaped Labyrinth double split open loop resonators (OLRs) are introduced in this article to design a triple bandpass filter for 3.01 GHz, 7.39 GHz and 12.88 GHz applications. A Rogers RT-5880 is used as a substrate to design the proposed passband filter which has a succinct structure where the attainment of the resonator is explored both integrally and experimentally. The same structure is designed on both sides of the substrate and an analysis is made on the current distribution. Based on the proposed resonator, a bandpass filter is designed and fabricated to justify the perception focusing on 3.01 GHz, 7.39 GHz and 12.88 GHz. It has also been observed by the Nicolson–Ross–Weir approach at the filtering frequencies. The effective electromagnetic parameters retrieved from the simulation of the S-parameters imply that the OLR metamaterial filter shows negative refraction bands. Having an auspicious design and double negative characteristics, this structure is suitable for triple passband filters, particularly for S, C and X-band applications.

  13. Design of 2.5 GHz broad bandwidth microwave bandpass filter at operating frequency of 10 GHz using HFSS

    NASA Astrophysics Data System (ADS)

    Jasim, S. E.; Jusoh, M. A.; Mahmud, S. N. S.; Zamani, A. H.

    2018-04-01

    Development of low-loss, small-size, broad-bandwidth microwave bandpass filters operating at higher frequencies is an active area of research. This paper presents a new route to design and simulate a microwave bandpass filter using finite element modelling, realizing a broad-bandwidth, low-loss, compact microwave bandpass filter operating at 10 GHz using the return-loss method. The filter circuit was designed using computer-aided design (CAD) in the Ansoft HFSS software, with a four parallel coupled line model and small dimensions (10 × 10 mm2) on a LaAlO3 substrate. The response of the microwave filter circuit showed a high return loss of -50 dB at an operating frequency of 10.4 GHz and a broad bandwidth of 2.5 GHz from 9.5 to 12 GHz. The results indicate that the filter design and simulation using HFSS are reliable and have the potential to be transferred from laboratory experiments to industry.

  14. Major and EDXRF Trace Element Chemical Analyses of Volcanic Rocks from Lassen Volcanic National Park and Vicinity, California

    USGS Publications Warehouse

    Clynne, Michael A.; Muffler, L.J.P.; Siems, D.F.; Taggart, J.E.; Bruggman, Peggy

    2008-01-01

    This open-file report presents WDXRF major-element chemical data for late Pliocene to Holocene volcanic rocks collected from Lassen Volcanic National Park and vicinity, California. Data for Rb, Sr, Ba, Y, Zr, Nb, Ni, Cr, Zn and Cu obtained by EDXRF are included for many samples. Data are presented in an EXCEL spreadsheet and are keyed to rock units as displayed on the Geologic Map of Lassen Volcanic National Park and vicinity (Clynne and Muffler, in press). Location of the samples is given in latitude and longitude in degrees and decimal minutes and in decimal degrees.

  15. The USAF Stability and Control Digital DATCOM. Volume II. Implementation of Datcom Methods

    DTIC Science & Technology

    1979-04-01

    ... program capabilities, input and output ... is located, XX is the primary overlay number in decimal, and YY is the secondary overlay number in decimal. Hence, each overlay is written to a disk

  16. Survival of Salmonella typhimurium and Escherichia coli O157:H7 in poultry manure and manure slurry at sublethal temperatures.

    PubMed

    Himathongkham, S; Riemann, H; Bahari, S; Nuanualsuwan, S; Kass, P; Cliver, D O

    2000-01-01

    Exponential inactivation was observed for Salmonella typhimurium and Escherichia coli O157:H7 in poultry manure with decimal reduction times ranging from half a day at 37°C to 1-2 wk at 4°C. There was no material difference in inactivation rates between S. typhimurium and E. coli O157:H7. Inactivation was slower in slurries made by mixing two parts of water with one part of manure; decimal reduction times (time required for 90% destruction) ranged from 1-2 days at 37°C to 6-22 wk at 4°C. Escherichia coli O157:H7 consistently exhibited slightly slower inactivation than S. typhimurium. Log decimal reduction time for both strains was a linear function of storage temperature for manure and slurries. Chemical analysis indicated that accumulation of free ammonia in poultry manure was an important factor in inactivation of the pathogens. This finding was experimentally confirmed for S. typhimurium by adding ammonia directly to peptone water or to bovine manure, which was naturally low in ammonia, and adjusting pH to achieve predetermined levels of free ammonia.
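
    The decimal reduction times quoted above follow directly from the log-linear (exponential) inactivation model: D is the negative reciprocal of the slope of log10(count) versus time. A minimal sketch with made-up counts (not the study's data):

        import numpy as np

        t_days = np.array([0, 1, 2, 3, 4], dtype=float)        # storage time
        counts = np.array([1e7, 2.0e6, 4.5e5, 9.0e4, 2.0e4])   # CFU/g (hypothetical)

        slope, intercept = np.polyfit(t_days, np.log10(counts), 1)
        D_value = -1.0 / slope     # days for a 1-log10 (90%) reduction
        print(f"D-value ~ {D_value:.2f} days")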

  17. Lumped element filters for electronic warfare systems

    NASA Astrophysics Data System (ADS)

    Morgan, D.; Ragland, R.

    1986-02-01

    Among the increasing demands that future generations of electronic warfare (EW) systems must satisfy is a reduction in the size of the equipment. The present paper is concerned with lumped element filters which can make a significant contribution to the downsizing of advanced EW systems. Lumped element filter design makes it possible to obtain very small package sizes by utilizing classical low frequency inductive and capacitive components which are small compared to the size of a wavelength. Cost-effective, temperature-stable devices can be obtained on the basis of new design techniques. Attention is given to aspects of design flexibility, an interdigital filter equivalent circuit diagram, conditions for which the use of lumped element filters can be recommended, construction techniques, a design example, and questions regarding the application of lumped element filters to EW processing systems.

  18. Design considerations for a suboptimal Kalman filter

    NASA Astrophysics Data System (ADS)

    Difilippo, D. J.

    1995-06-01

    In designing a suboptimal Kalman filter, the designer must decide how to simplify the system error model without causing the filter estimation errors to increase to unacceptable levels. Deletion of certain error states and decoupling of error state dynamics are the two principal model simplifications that are commonly used in suboptimal filter design. For the most part, the decisions as to which error states can be deleted or decoupled are based on the designer's understanding of the physics of the particular system. Consequently, the details of a suboptimal design are usually unique to the specific application. In this paper, the process of designing a suboptimal Kalman filter is illustrated for the case of an airborne transfer-of-alignment (TOA) system used for synthetic aperture radar (SAR) motion compensation. In this application, the filter must continuously transfer the alignment of an onboard Doppler-damped master inertial navigation system (INS) to a strapdown navigator that processes information from a less accurate inertial measurement unit (IMU) mounted on the radar antenna. The IMU is used to measure spurious antenna motion during the SAR imaging interval, so that compensating phase corrections can be computed and applied to the radar returns, thereby preventing image degradation that would otherwise result from such motions. The principles of SAR are described in many references. The primary function of the TOA Kalman filter in a SAR motion compensation system is to control strapdown navigator attitude errors, and to a lesser degree, velocity and heading errors. Unlike a classical navigation application, absolute positional accuracy is not important. The motion compensation requirements for SAR imaging are discussed in some detail. This TOA application is particularly appropriate as a vehicle for discussing suboptimal filter design, because the system contains features that can be exploited to allow both deletion and decoupling of error states. In Section 2, a high-level background description of a SAR motion compensation system that incorporates a TOA Kalman filter is given. The optimal TOA filter design is presented in Section 3 with some simulation results to indicate potential filter performance. In Section 4, the suboptimal Kalman filter configuration is derived. Simulation results are also shown in this section to allow comparison between suboptimal and optimal filter performances. Conclusions are contained in Section 5.
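
    The abstract's two simplifications can be pictured against a generic discrete Kalman filter step: deleting error states removes rows and columns of F, Q, H and P, while decoupling zeroes selected off-diagonal blocks of F and P. The sketch below is a textbook filter step, not the TOA filter of the paper.

        import numpy as np

        def kf_step(x, P, F, Q, H, R, z):
            """One predict/update cycle of a discrete Kalman filter."""
            # predict
            x = F @ x
            P = F @ P @ F.T + Q
            # update with measurement z
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (z - H @ x)
            P = (np.eye(len(x)) - K @ H) @ P
            return x, P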

  19. On the design of recursive digital filters

    NASA Technical Reports Server (NTRS)

    Shenoi, K.; Narasimha, M. J.; Peterson, A. M.

    1976-01-01

    A change of variables is described which transforms the problem of designing a recursive digital filter to that of approximation by a ratio of polynomials on a finite interval. Some analytic techniques for the design of low-pass filters are presented, illustrating the use of the transformation. Also considered are methods for the design of phase equalizers.

  20. Design-Filter Selection for H2 Control of Microgravity Isolation Systems: A Single-Degree-of-Freedom Case Study

    NASA Technical Reports Server (NTRS)

    Hampton, R. David; Whorton, Mark S.

    2000-01-01

    Many microgravity space-science experiments require active vibration isolation to attain suitably low levels of background acceleration for useful experimental results. The design of state-space controllers by optimal control methods requires judicious choices of frequency-weighting design filters. Kinematic coupling among states greatly clouds designer intuition in the choices of these filters, and the masking effects of the state observations cloud the process further. Recent research into the practical application of H2 synthesis methods to such problems indicates that certain steps can lead to state frequency-weighting design-filter choices with substantially improved promise of usefulness, even in the face of these difficulties. In choosing these filters on the states, one considers their relationships to corresponding design filters on appropriate pseudo-sensitivity- and pseudo-complementary-sensitivity functions. This paper investigates the application of these considerations to a single-degree-of-freedom microgravity vibration-isolation test case. Significant observations that were noted during the design process are presented, along with explanations based on the existing theory for such problems.

  1. Concentric Split Flow Filter

    NASA Technical Reports Server (NTRS)

    Stapleton, Thomas J. (Inventor)

    2015-01-01

    A concentric split flow filter may be configured to remove odor and/or bacteria from pumped air used to collect urine and fecal waste products. For instance, the filter may be designed to effectively fill the volume that was previously considered wasted surrounding the transport tube of a waste management system. The concentric split flow filter may be configured to split the air flow, with substantially half of the air flow to be treated traveling through a first bed of filter media and substantially the other half traveling through a second bed of filter media. This split flow design reduces the air velocity by 50%. In this way, the pressure drop of the filter may be reduced by as much as a factor of 4 as compared to the conventional design.

  2. Travelers' Health: Water Disinfection for Travelers

    MedlinePlus

    ... hand-pump or gravity-drip filters with various designs and types of filter media are commercially available ... salts, thus achieving desalination. One new portable filter design incorporates hollow fiber technology, which is a cluster ...

  3. Design of Microstrip Bandpass Filters Using SIRs with Even-Mode Harmonics Suppression for Cellular Systems

    NASA Astrophysics Data System (ADS)

    Theerawisitpong, Somboon; Suzuki, Toshitatsu; Morita, Noboru; Utsumi, Yozo

    The design of microstrip bandpass filters using stepped-impedance resonators (SIRs) is examined. The passband center frequency for the WCDMA-FDD (uplink band) Japanese cellular system is 1950MHz with a 60-MHz bandwidth. The SIR physical characteristic can be designed using a SIR characteristic chart based on second harmonic suppression. In our filter design, passband design charts were obtained through the design procedure. Tchebycheff and maximally flat bandpass filters of any bandwidth and any number of steps can be designed using these passband design charts. In addition, sharp skirt characteristics in the passband can be realized by having two transmission zeros at both adjacent frequency bands by using open-ended quarter-wavelength stubs at input and output ports. A new even-mode harmonics suppression technique is proposed to enable a wide rejection band having a high suppression level. The unloaded quality factor of the resonator used in the proposed filters is greater than 240.

  4. Pattern recognition with composite correlation filters designed with multi-object combinatorial optimization

    DOE PAGES

    Awwal, Abdul; Diaz-Ramirez, Victor H.; Cuevas, Andres; ...

    2014-10-23

    Composite correlation filters are used for solving a wide variety of pattern recognition problems. These filters are given by a combination of several training templates chosen by a designer in an ad hoc manner. In this work, we present a new approach for the design of composite filters based on multi-objective combinatorial optimization. Given a vast search space of training templates, an iterative algorithm is used to synthesize a filter with an optimized performance in terms of several competing criteria. Furthermore, by employing a suggested binary-search procedure a filter bank with a minimum number of filters can be constructed for a prespecified trade-off of performance metrics. Computer simulation results obtained with the proposed method in recognizing geometrically distorted versions of a target in cluttered and noisy scenes are discussed and compared in terms of recognition performance and complexity with existing state-of-the-art filters.

  5. Pattern recognition with composite correlation filters designed with multi-object combinatorial optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Awwal, Abdul; Diaz-Ramirez, Victor H.; Cuevas, Andres

    Composite correlation filters are used for solving a wide variety of pattern recognition problems. These filters are given by a combination of several training templates chosen by a designer in an ad hoc manner. In this work, we present a new approach for the design of composite filters based on multi-objective combinatorial optimization. Given a vast search space of training templates, an iterative algorithm is used to synthesize a filter with an optimized performance in terms of several competing criteria. Furthermore, by employing a suggested binary-search procedure a filter bank with a minimum number of filters can be constructed for a prespecified trade-off of performance metrics. Computer simulation results obtained with the proposed method in recognizing geometrically distorted versions of a target in cluttered and noisy scenes are discussed and compared in terms of recognition performance and complexity with existing state-of-the-art filters.

  6. Selection vector filter framework

    NASA Astrophysics Data System (ADS)

    Lukac, Rastislav; Plataniotis, Konstantinos N.; Smolka, Bogdan; Venetsanopoulos, Anastasios N.

    2003-10-01

    We provide a unified framework of nonlinear vector techniques that output the lowest-ranked vector. The proposed framework constitutes a generalized filter class for multichannel signal processing. The new class of nonlinear selection filters is based on robust order-statistic theory and the minimization of a weighted distance function to the other input samples. The proposed method can be designed to perform a variety of filtering operations, including previously developed techniques such as the vector median, basic vector directional filter, directional distance filter, weighted vector median filters and weighted directional filters. A wide range of filtering operations is guaranteed by the filter structure, which has two independent weight vectors for the angular and distance domains of the vector space. In order to adapt the filter parameters to varying signal and noise statistics, we also provide generalized optimization algorithms that take advantage of weighted median filters and the relationship between the standard median filter and the vector median filter. Thus, we can deal with both statistical and deterministic aspects of the filter design process. It is shown that the proposed method holds the required properties, such as the capability of modelling the underlying system in the application at hand, robustness with respect to errors in the model of the underlying system, the availability of a training procedure and, finally, the simplicity of filter representation, analysis, design and implementation. Simulation studies also indicate that the new filters are computationally attractive and perform excellently in environments corrupted by bit errors and impulsive noise.
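
    A concrete instance of the "lowest ranked vector" idea described above is the vector median filter: over a window of vector-valued samples, output the sample whose summed distance to all the others is smallest. A minimal unweighted sketch (the framework's angular and distance weight vectors are omitted):

        import numpy as np

        def vector_median(window):
            """window: (N, C) array of N vector-valued samples (e.g. RGB pixels)."""
            d = np.linalg.norm(window[:, None, :] - window[None, :, :], axis=-1)
            return window[np.argmin(d.sum(axis=1))]

        # Example: a 3x3 RGB neighborhood flattened to 9 samples.
        win = np.random.randint(0, 256, size=(9, 3)).astype(float)
        print(vector_median(win))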

  7. TU-CD-207-10: Dedicated Cone-Beam Breast CT: Design of a 3-D Beam-Shaping Filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vedantham, S; Shi, L; Karellas, A

    2015-06-15

    Purpose: To design a 3-D beam-shaping filter for cone-beam breast CT for equalizing x-ray photon fluence incident on the detector along both fan and cone angle directions. Methods: The 3-D beam-shaping filter was designed as the sum of two filters: a bow-tie filter assuming cylindrical breast and a 3D difference filter equivalent to the difference in projected thickness between the cylinder and the real breast. Both filters were designed with breast-equivalent material and converted to Al for the targeted x-ray spectrum. The bow-tie was designed for the largest diameter cylindrical breast by determining the fan-angle dependent path-length and the filter thickness needed to equalize the fluence. A total of 23,760 projections (180 projections of 132 binary breast CT volumes) were averaged, scaled for the largest breast, and subtracted from the projection of the largest diameter cylindrical breast to provide the 3D difference filter. The 3-D beam-shaping filter was obtained by summing the two filters. Numerical simulations with semi-ellipsoidal breasts of 10–18 cm diameter (chest-wall to nipple length=0.75 x diameter) were conducted to evaluate beam equalization. Results: The proposed 3-D beam-shaping filter showed a 140%-300% improvement in equalizing the photon fluence along the chest-wall to nipple (cone-angle) direction compared to a bow-tie filter. The improvement over bow-tie filter was larger for breasts with longer chest-wall to nipple length. Along the radial (fan-angle) direction, the performance of the 3-D beam-shaping filter was marginally better than the bow-tie filter, with 4%-10% improvement in equalizing the photon fluence. For a ray traversing the chest-wall diameter of the breast, the filter transmission ratio was >0.95. Conclusion: The 3-D beam-shaping filter provided substantial advantage over bow-tie filter in equalizing the photon fluence along the cone-angle direction. In conjunction with a 2-axis positioner, the filter can accommodate breasts of varying dimensions and chest-wall inclusion. Supported in part by NIH R01 CA128906 and R21 CA134128. The contents are solely the responsibility of the authors and do not reflect the official views of the NIH or NCI.

  8. Report of Operation FITZWILLIAM. Volume 2. Nuclear Detection by Airborne Filters

    DTIC Science & Technology

    1948-01-01

    The filter device was designed so as to expose a pair of filter papers simultaneously, one in each side of the assembly. The unit was mounted ... minute at the normal cruising speed of the aircraft. The filter device was designed to operate on the venturi principle. Comparison ... radioactivity by a special wrap-around Geiger-Mueller counter designed specifically for this purpose. Measured activities of filter papers at any one

  9. Fast estimate of Hartley entropy in image sharpening

    NASA Astrophysics Data System (ADS)

    Krbcová, Zuzana; Kukal, Jaromír; Svihlik, Jan; Fliegel, Karel

    2016-09-01

    Two classes of linear IIR filters, the Laplacian of Gaussian (LoG) and the Difference of Gaussians (DoG), are frequently used as high-pass filters for contextual vision and edge detection. They are also used for image sharpening when linearly combined with the original image. The resulting sharpening filters are radially symmetric in the spatial and frequency domains. Our approach is based on a radial approximation of the unknown optimal filter, which is designed as a weighted sum of Gaussian filters with various radii. The novel filter is designed for MRI image enhancement, where the image intensity represents anatomical structure plus additive noise. We prefer the gradient norm of the Hartley entropy of the whole image intensity as the measure to be maximized for the best sharpening. The entropy estimation procedure is as fast as the FFT included in the filter, but the estimate is a continuous function of the enhanced image intensities. A physically motivated heuristic is used for optimum sharpening filter design by tuning its parameters. Our approach is compared with the Wiener filter on MRI images.
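
    The sharpening scheme described above, a high-pass response linearly combined with the original image, can be sketched with a plain difference of Gaussians; the sigmas and weight below are arbitrary placeholders rather than the paper's entropy-optimized values.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def dog_sharpen(img, sigma1=1.0, sigma2=2.0, weight=1.5):
            # High-pass the image with a difference of Gaussians and add it back.
            dog = gaussian_filter(img, sigma1) - gaussian_filter(img, sigma2)
            return img + weight * dog

        img = np.random.rand(64, 64)          # stand-in for an MRI slice
        sharp = dog_sharpen(img)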

  10. Multi-DSP and FPGA based Multi-channel Direct IF/RF Digital receiver for atmospheric radar

    NASA Astrophysics Data System (ADS)

    Yasodha, Polisetti; Jayaraman, Achuthan; Kamaraj, Pandian; Durga rao, Meka; Thriveni, A.

    2016-07-01

    Modern phased array radars depend heavily on digital signal processing (DSP) to extract the echo signal information and to achieve reliability along with programmability and flexibility. The advent of ASIC technology has made it possible to realize various digital signal processing steps in one DSP chip, which can be programmed as per the application and can handle high data rates, for use in the radar receiver to process the received signal. Further, present-day field programmable gate array (FPGA) chips, which can be re-programmed, also present an opportunity to process the radar signal. A multi-channel direct IF/RF digital receiver (MCDRx) has been developed at NARL, taking advantage of high speed ADCs and high performance DSP chips/FPGAs, to be used for atmospheric radars working in the HF/VHF bands. Multiple channels allow the radar to be operated in multi-receiver modes and also to obtain the wind vector with improved time resolution, without switching the antenna beam. The MCDRx has six channels, implemented on a custom-built digital board, which is realized using six ADCs for simultaneous processing of the six input signals, a Xilinx Virtex-5 FPGA and a Spartan-6 FPGA, and two ADSP-TS201 DSP chips, each of which performs one phase of processing. The MCDRx unit interfaces with the data storage/display computer via two gigabit ethernet (GbE) links. One of the six channels is used for the Doppler beam swinging (DBS) mode and the other five channels are dedicated to multi-receiver mode operations. Each channel has (i) an ADC block, to digitize the RF/IF signal, (ii) a DDC block for digital down conversion of the digitized signal, (iii) a decoding block to decode the phase-coded signal, and (iv) a coherent integration block for integrating the data with phase preserved. The ADC block consists of Analog Devices AD9467 16-bit ADCs, which digitize the input signal at 80 MSPS. The output of the ADC is centered around (80 MHz - input frequency). The digitized data is fed to the DDC block, which down-converts the data to baseband. The DDC block has an NCO, a mixer and two chains of Bessel filters (a fifth-order cascaded integrator-comb filter, two FIR filters, two half-band filters and programmable FIR filters) for the in-phase (I) and quadrature-phase (Q) channels. The NCO has 32 bits and is set to match the output frequency of the ADC. Further, the DDC down-samples (decimates) the data and reduces the data rate to 16 MSPS. This data is further decimated, and the data rate is reduced to 4/2/1/0.5/0.25/0.125/0.0625 MSPS for baud lengths of 0.25/0.5/1/2/4/8/16 μs respectively. The down-sampled data is then fed to the decoding block, which performs cross correlation to achieve pulse compression of the binary-phase coded data to obtain better range resolution with maximum possible height coverage. This step improves the signal power by a factor equal to the length of the code. The coherent integration block integrates the decoded data coherently over successive pulses, which improves the signal-to-noise ratio and reduces the data volume. The DDC, decoding and coherent integration blocks are implemented in the Xilinx Virtex-5 FPGA. Up to this point, the function of all six channels is the same for the DBS and multi-receiver modes. Data from the Virtex-5 FPGA is transferred to the PC via the GbE-1 interface for the multi-receiver modes, or to the two Analog Devices ADSP-TS201 DSP chips (A and B) via link port for the DBS mode. 
The ADSP-TS201 chips perform normalization, DC removal, windowing, FFT computation and spectral averaging on the data, which is transferred to the storage/display PC via the GbE-2 interface for real-time data display and storage. The physical layer of the GbE interface is implemented in an external chip (Marvell 88E1111) and the MAC layer is implemented inside the Virtex-5 FPGA. The MCDRx has a total of 4 GB of DDR2 memory for data storage. The Spartan-6 FPGA is used for generating the timing signals required for basic operation of the radar and for testing of the MCDRx.
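
    A toy model of the per-channel DDC flow described above (NCO, mixer, low-pass filtering and decimation from 80 MSPS to 16 MSPS) is sketched below; a single FIR stands in for the CIC/half-band/programmable-FIR chain, and the IF value is assumed.

        import numpy as np
        from scipy import signal

        fs = 80e6                  # ADC sample rate (80 MSPS)
        f_if = 10e6                # assumed IF of the digitized signal
        dec = 5                    # decimation factor, 80 -> 16 MSPS

        n = np.arange(20_000)
        x = np.cos(2 * np.pi * f_if / fs * n)          # stand-in for the ADC output

        nco = np.exp(-2j * np.pi * f_if / fs * n)      # numerically controlled oscillator
        bb = x * nco                                   # complex mix to baseband

        lp = signal.firwin(128, 0.8 * (fs / dec) / 2, fs=fs)   # anti-alias low-pass
        bb_dec = signal.lfilter(lp, 1.0, bb)[::dec]            # filter, then downsample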

  11. On the relationship between matched filter theory as applied to gust loads and phased design loads analysis

    NASA Technical Reports Server (NTRS)

    Zeiler, Thomas A.; Pototzky, Anthony S.

    1989-01-01

    A theoretical basis and example calculations are given that demonstrate the relationship between the Matched Filter Theory approach to the calculation of time-correlated gust loads and Phased Design Load Analysis in common use in the aerospace industry. The relationship depends upon the duality between Matched Filter Theory and Random Process Theory and upon the fact that Random Process Theory is used in Phased Design Loads Analysis in determining an equiprobable loads design ellipse. Extensive background information describing the relevant points of Phased Design Loads Analysis, calculating time-correlated gust loads with Matched Filter Theory, and the duality between Matched Filter Theory and Random Process Theory is given. It is then shown that the time histories of two time-correlated gust load responses, determined using the Matched Filter Theory approach, can be plotted as parametric functions of time and that the resulting plot, when superposed upon the design ellipse corresponding to the two loads, is tangent to the ellipse. The question is raised of whether or not it is possible for a parametric load plot to extend outside the associated design ellipse. If it is possible, then the use of the equiprobable loads design ellipse will not be a conservative design practice in some circumstances.

  12. Optimal design of a bank of spatio-temporal filters for EEG signal classification.

    PubMed

    Higashi, Hiroshi; Tanaka, Toshihisa

    2011-01-01

    The spatial weights for electrodes, called the common spatial pattern (CSP), are known to be effective in EEG signal classification for motor imagery based brain-computer interfaces (MI-BCI). To achieve accurate classification with CSP, the frequency filter should be properly designed, and several methods for designing the filter have been proposed. However, the existing methods cannot account for multiple brain activities described by different frequency bands and different spatial patterns, such as activities of the mu and beta rhythms. In order to efficiently extract these brain activities, we propose a method to design multiple filters and spatial weights that extract the desired brain activity. The proposed method designs finite impulse response (FIR) filters and the associated spatial weights by optimizing an objective function that is a natural extension of CSP. Moreover, we show by a classification experiment that a bank of FIR filters designed by introducing an orthogonality constraint into the objective function can extract good discriminative features. The experimental results also suggest that the proposed method can automatically detect and extract brain activities related to motor imagery.
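
    For context, the baseline CSP spatial weights that the proposed method extends are the generalized eigenvectors of the two class covariance matrices; a minimal sketch with placeholder EEG arrays follows (the paper's joint FIR-filter/spatial-weight optimization is not reproduced here).

        import numpy as np
        from scipy.linalg import eigh

        def csp_weights(class_a, class_b):
            """Trials are shaped (n_trials, n_channels, n_samples)."""
            cov = lambda X: np.mean([t @ t.T / np.trace(t @ t.T) for t in X], axis=0)
            Ca, Cb = cov(class_a), cov(class_b)
            vals, vecs = eigh(Ca, Ca + Cb)          # generalized eigenproblem
            return vecs[:, np.argsort(vals)[::-1]]  # columns = spatial filters

        A = np.random.randn(20, 8, 256)   # 20 trials, 8 channels, 256 samples
        B = np.random.randn(20, 8, 256)
        W = csp_weights(A, B)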

  13. Design and Simulation of Microstrip Hairpin Bandpass Filter with Open Stub and Defected Ground Structure (DGS) at X-Band Frequency

    NASA Astrophysics Data System (ADS)

    Hariyadi, T.; Mulyasari, S.; Mukhidin

    2018-02-01

    In this paper we have designed and simulated a bandpass filter (BPF) at X-band frequency. The filter is designed for an X-band weather radar application with a 9500 MHz center frequency and a -3 dB bandwidth of 120 MHz. The filter design was performed using a hairpin microstrip combined with an open stub and a defected ground structure (DGS). The substrate used is Rogers RT5880 with a dielectric constant of 2.2 and a thickness of 1.575 mm. Based on the simulation results, it is found that the filter operates over 9.44-9.56 GHz with an insertion loss in the passband of -1.57 dB.

  14. Optimal design of FIR triplet halfband filter bank and application in image coding.

    PubMed

    Kha, H H; Tuan, H D; Nguyen, T Q

    2011-02-01

    This correspondence proposes an efficient semidefinite programming (SDP) method for the design of a class of linear phase finite impulse response triplet halfband filter banks whose filters have optimal frequency selectivity for a prescribed regularity order. The design problem is formulated as the minimization of the least square error subject to peak error constraints and regularity constraints. By using the linear matrix inequality characterization of the trigonometric semi-infinite constraints, it can then be exactly cast as a SDP problem with a small number of variables and, hence, can be solved efficiently. Several design examples of the triplet halfband filter bank are provided for illustration and comparison with previous works. Finally, the image coding performance of the filter bank is presented.

  15. Parsley: a Command-Line Parser for Astronomical Applications

    NASA Astrophysics Data System (ADS)

    Deich, William

    Parsley is a sophisticated keyword + value parser, packaged as a library of routines that offers an easy method for providing command-line arguments to programs. It makes it easy for the user to enter values, and it makes it easy for the programmer to collect and validate the user's entries. Parsley is tuned for astronomical applications: for example, dates entered in Julian, Modified Julian, calendar, or several other formats are all recognized without special effort by the user or by the programmer; angles can be entered using decimal degrees or dd:mm:ss; time-like intervals as decimal hours, hh:mm:ss, or a variety of other units. Vectors of data are accepted as readily as scalars.
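
    As a rough illustration of the kind of conversion such a parser performs (this is not Parsley's actual API, and the function name is hypothetical), a sexagesimal angle or time string can be reduced to a decimal value as follows.

        def to_decimal(text: str) -> float:
            """Convert 'dd:mm:ss' (or 'hh:mm:ss') to decimal degrees (or hours)."""
            parts = [abs(float(p)) for p in text.split(":")]
            sign = -1.0 if text.lstrip().startswith("-") else 1.0
            while len(parts) < 3:
                parts.append(0.0)
            return sign * (parts[0] + parts[1] / 60.0 + parts[2] / 3600.0)

        print(to_decimal("-12:30:36"))   # -12.51
        print(to_decimal("45.25"))       # 45.25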

  16. System reliability analysis of granular filter for protection against piping in dams

    NASA Astrophysics Data System (ADS)

    Srivastava, A.; Sivakumar Babu, G. L.

    2015-09-01

    Granular filters are provided for the safety of water-retaining structures as protection against piping failure. Piping is triggered when the base soil to be protected starts migrating in the direction of seepage flow under the influence of the seepage force. To protect the base soil from migration, the voids in the filter media should be small enough, but not so small as to block the smooth passage of seeping water. Fulfilling these two contradictory design requirements at the same time is a major concern for the successful performance of granular filter media. Since the Terzaghi era, the particle size distribution (PSD) of granular filters has conventionally been designed based on the particle size distribution characteristics of the base soil to be protected. The design approach provides a range of D15f values in which the PSD of the granular filter media should fall, and there exist infinite possibilities. Further, safety against the two critical design requirements cannot be ensured. Although used successfully for many decades, the existing filter design guidelines are purely empirical in nature, accompanied by experience and good engineering judgment. In the present study, the analytical solutions proposed by the authors for obtaining the factor of safety with respect to base soil particle migration and soil permeability are first discussed. The solution takes into consideration the basic geotechnical properties of the base soil and filter media as well as the existing hydraulic conditions, and provides a comprehensive approach to granular filter design with the ability to assess stability in terms of a factor of safety. Considering that geotechnical properties are variable in nature, a probabilistic analysis is further suggested to evaluate the system reliability of the filter media, which may help in risk assessment and risk management for decision making.

  17. Inverse design of high-Q wave filters in two-dimensional phononic crystals by topology optimization.

    PubMed

    Dong, Hao-Wen; Wang, Yue-Sheng; Zhang, Chuanzeng

    2017-04-01

    Topology optimization of a waveguide-cavity structure in phononic crystals for designing narrow band filters under the given operating frequencies is presented in this paper. We show that it is possible to obtain an ultra-high-Q filter by only optimizing the cavity topology without introducing any other coupling medium. The optimized cavity with highly symmetric resonance can be utilized as the multi-channel filter, raising filter and T-splitter. In addition, most optimized high-Q filters have the Fano resonances near the resonant frequencies. Furthermore, our filter optimization based on the waveguide and cavity, and our simple illustration of a computational approach to wave control in phononic crystals can be extended and applied to design other acoustic devices or even opto-mechanical devices. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. A design aid for determining width of filter strips

    Treesearch

    M.G. Dosskey; M.J. Helmers; D.E. Eisenhauer

    2008-01-01

    Watershed planners need a tool for determining the width of filter strips that is accurate enough for developing cost-effective site designs and easy enough to use for making quick determinations on a large number and variety of sites. This study employed the process-based Vegetative Filter Strip Model to evaluate the relationship between filter strip width and trapping...

  19. Design and Analysis of a Micromachined LC Low Pass Filter For 2.4GHz Application

    NASA Astrophysics Data System (ADS)

    Saroj, Samruddhi R.; Rathee, Vishal R.; Pande, Rajesh S.

    2018-02-01

    This paper reports the design and analysis of a passive low-pass filter with a cut-off frequency of 2.4 GHz using MEMS (Micro-Electro-Mechanical Systems) technology. Passive components such as suspended spiral inductors and a metal-insulator-metal (MIM) capacitor are arranged in a T-network form to implement the LC low-pass filter design. The design employs a simple suspension approach that reduces parasitic losses, eliminating the performance-degrading effects caused by integrating an off-chip inductor in the filter circuit, and is proposed to be developed on a low-cost silicon substrate using RF-MEMS components. The filter occupies only a 2.1 mm x 0.66 mm die area and is designed using a micro-strip transmission line placed on a silicon substrate. The design is implemented in High Frequency Structural Simulator (HFSS) software, and a fabrication flow is proposed for its implementation. The simulated results show that the design has an insertion loss of -4.98 dB and a return loss of -2.60 dB.
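
    For orientation, textbook prototype scaling gives ballpark element values for a 3rd-order Butterworth LC low-pass in the T configuration (series L, shunt C, series L) at a 2.4 GHz cutoff in a 50-ohm system; these are illustrative values, not the fabricated MEMS design.

        import math

        fc, z0 = 2.4e9, 50.0
        g = [1.0, 2.0, 1.0]                    # 3rd-order Butterworth prototype values
        wc = 2 * math.pi * fc

        L1 = g[0] * z0 / wc                    # series inductor
        C2 = g[1] / (z0 * wc)                  # shunt capacitor
        L3 = g[2] * z0 / wc                    # series inductor

        print(f"L1 = L3 = {L1*1e9:.2f} nH, C2 = {C2*1e12:.2f} pF")

    The suspended spiral inductors and MIM capacitor would then be sized to realize element values of roughly this order.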

  20. Thermal control design of the Lightning Mapper Sensor narrow-band spectral filter

    NASA Technical Reports Server (NTRS)

    Flannery, Martin R.; Potter, John; Raab, Jeff R.; Manlief, Scott K.

    1992-01-01

    The performance of the Lightning Mapper Sensor is dependent on the temperature shifts of its narrowband spectral filter. To perform over a 10 degree FOV with an 0.8 nm bandwidth, the filter must be 15 cm in diameter and mounted externally to the telescope optics. The filter thermal control required a filter design optimized for minimum bandpass shift with temperature, a thermal analysis of substrate materials for maximum temperature uniformity, and a thermal radiation analysis to determine the parameter sensitivity of the radiation shield for the filter, the filter thermal recovery time after occultation, and heater power to maintain filter performance in the earth-staring geosynchronous environment.

  1. The Lockheed alternate partial polarizer universal filter

    NASA Technical Reports Server (NTRS)

    Title, A. M.

    1976-01-01

    A tunable birefringent filter using an alternate partial polarizer design has been built. The filter has a transmission of 38% in polarized light. Its full width at half maximum is 0.09 Å at 5500 Å. It is tunable from 4500 to 8500 Å by means of stepping-motor-actuated rotating half-wave plates and polarizers. Wavelength commands and thermal compensation commands are generated by a PDP-11/10 minicomputer. The alternate partial polarizer universal filter is compared with the universal birefringent filter, and the design techniques, construction methods, and filter performance are discussed in some detail. Based on the experience with this filter, some conclusions regarding the future of birefringent filters are elaborated.

  2. Effects of frequency compression and frequency transposition on fricative and affricate perception in listeners with normal hearing and mild to moderate hearing loss

    PubMed Central

    Alexander, Joshua M.; Kopun, Judy G.; Stelmachowicz, Patricia G.

    2014-01-01

    Summary: Listeners with normal hearing and mild to moderate loss identified fricatives and affricates that were recorded through hearing aids with frequency transposition (FT) or nonlinear frequency compression (NFC). FT significantly degraded performance for both groups. When frequencies up to ~9 kHz were lowered with NFC and with a novel frequency compression algorithm, spectral envelope decimation, performance significantly improved relative to conventional amplification (NFC-off) and was equivalent to wideband speech. Significant differences between most conditions could be largely attributed to an increase or decrease in confusions for /s/ and /z/. Objectives: Stelmachowicz and colleagues have demonstrated that the limited bandwidth associated with conventional hearing aid amplification prevents useful high-frequency speech information from being transmitted. The purpose of this study was to examine the efficacy of two popular frequency-lowering algorithms and one novel algorithm (spectral envelope decimation) in adults with mild-to-moderate sensorineural hearing loss and in normal-hearing controls. Design: Participants listened monaurally through headphones to recordings of nine fricatives and affricates spoken by three women in a vowel-consonant (VC) context. Stimuli were mixed with speech-shaped noise at 10 dB SNR and recorded through a Widex Inteo IN-9 and a Phonak Naída UP V behind-the-ear (BTE) hearing aid. Frequency transposition (FT) is used in the Inteo and nonlinear frequency compression (NFC) used in the Naída. Both devices were programmed to lower frequencies above 4 kHz, but neither device could lower frequencies above 6-7 kHz. Each device was tested under four conditions: frequency lowering deactivated (FT-off and NFC-off), frequency lowering activated (FT and NFC), wideband (WB), and a fourth condition unique to each hearing aid. The WB condition was constructed by mixing recordings from the first condition with high-pass filtered versions of the source stimuli. For the Inteo, the fourth condition consisted of recordings made with the same settings as the first, but with the noise reduction feature activated (FT-off). For the Naída, the fourth condition was the same as the first condition except that source stimuli were pre-processed by a novel frequency compression algorithm, spectral envelope decimation (SED), designed in MATLAB that allowed for a more complete lowering of the 4-10 kHz input band. A follow up experiment with NFC used Phonak’s Naída SP V BTE, which could also lower a greater range of input frequencies. Results: For normal-hearing (NH) and hearing-impaired (HI) listeners, performance with FT was significantly worse compared to the other conditions. Consistent with previous findings, performance for the HI listeners in the WB condition was significantly better than in the FT-off condition. In addition, performance in the SED and WB conditions were both significantly better than the NFC-off condition and the NFC condition with 6 kHz input bandwidth. There were no significant differences between SED and WB, indicating that improvements in fricative identification obtained by increasing bandwidth can also be obtained using this form of frequency compression. Significant differences between most conditions could be largely attributed to an increase or decrease in confusions for the phonemes /s/ and /z/. 
In the follow up experiment, performance in the NFC condition with 10 kHz input bandwidth was significantly better than NFC-off, replicating the results obtained with SED. Furthermore, listeners who performed poorly with NFC-off tended to show the most improvement with NFC. Conclusions: Improvements in the identification of stimuli chosen to be sensitive to the effects of frequency lowering have been demonstrated using two forms of frequency compression (NFC and SED) in individuals with mild to moderate high-frequency SNHL. However, negative results caution against using FT for this population. Results also indicate that the advantage of an extended bandwidth as reported here and elsewhere applies to the input bandwidth for frequency compression (NFC/SED) when the start frequency is ≥ 4 kHz. PMID:24699702

  3. Improvement of sand filter and constructed wetland design using an environmental decision support system.

    PubMed

    Turon, Clàudia; Comas, Joaquim; Torrens, Antonina; Molle, Pascal; Poch, Manel

    2008-01-01

    With the aim of improving effluent quality of waste stabilization ponds, different designs of vertical flow constructed wetlands and intermittent sand filters were tested on an experimental full-scale plant within the framework of a European project. The information extracted from this study was completed and updated with heuristic and bibliographic knowledge. The data and knowledge acquired were difficult to integrate into mathematical models because they involve qualitative information and expert reasoning. Therefore, it was decided to develop an environmental decision support system (EDSS-Filter-Design) as a tool to integrate mathematical models and knowledge-based techniques. This paper describes the development of this support tool, emphasizing the collection of data and knowledge and representation of this information by means of mathematical equations and a rule-based system. The developed support tool provides the main design characteristics of filters: (i) required surface, (ii) media type, and (iii) media depth. These design recommendations are based on wastewater characteristics, applied load, and required treatment level data provided by the user. The results of the EDSS-Filter-Design provide appropriate and useful information and guidelines on how to design filters, according to the expert criteria. The encapsulation of the information into a decision support system reduces the design period and provides a feasible, reasoned, and positively evaluated proposal.

  4. A systematic investigation of the link between rational number processing and algebra ability.

    PubMed

    Hurst, Michelle; Cordes, Sara

    2018-02-01

    Recent research suggests that fraction understanding is predictive of algebra ability; however, the relative contributions of various aspects of rational number knowledge are unclear. Furthermore, whether this relationship is notation-dependent or rather relies upon a general understanding of rational numbers (independent of notation) is an open question. In this study, college students completed a rational number magnitude task, procedural arithmetic tasks in fraction and decimal notation, and an algebra assessment. Using these tasks, we measured three different aspects of rational number ability in both fraction and decimal notation: (1) acuity of underlying magnitude representations, (2) fluency with which symbols are mapped to the underlying magnitudes, and (3) fluency with arithmetic procedures. Analyses reveal that when looking at the measures of magnitude understanding, the relationship between adults' rational number magnitude performance and algebra ability is dependent upon notation. However, once performance on arithmetic measures is included in the relationship, individual measures of magnitude understanding are no longer unique predictors of algebra performance. Furthermore, when including all measures simultaneously, results revealed that arithmetic fluency in both fraction and decimal notation each uniquely predicted algebra ability. Findings are the first to demonstrate a relationship between rational number understanding and algebra ability in adults while providing a clearer picture of the nature of this relationship. © 2017 The British Psychological Society.

  5. Calibration sources and filters of the soft x-ray spectrometer instrument on the Hitomi spacecraft

    NASA Astrophysics Data System (ADS)

    de Vries, Cor P.; Haas, Daniel; Yamasaki, Noriko Y.; Herder, Jan-Willem den; Paltani, Stephane; Kilbourne, Caroline; Tsujimoto, Masahiro; Eckart, Megan E.; Leutenegger, Maurice A.; Costantini, Elisa; Dercksen, Johannes P. C.; Dubbeldam, Luc; Frericks, Martin; Laubert, Phillip P.; van Loon, Sander; Lowes, Paul; McCalden, Alec J.; Porter, Frederick S.; Ruijter, Jos; Wolfs, Rob

    2018-01-01

    The soft x-ray spectrometer was designed to operate onboard the Japanese Hitomi (ASTRO-H) satellite. In the beam of this instrument, there was a filter wheel containing x-ray filters and active calibration sources. This paper describes this filter wheel. We show the purpose of the filters and the preflight calibrations performed. In addition, we present the calibration source design and measured performance. Finally, we conclude with prospects for future missions.

  6. Landsat real-time processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, E.L.

    A novel method for performing real-time acquisition and processing of Landsat/EROS data covers all aspects including radiometric and geometric corrections of multispectral scanner or return-beam vidicon inputs, image enhancement, statistical analysis, feature extraction, and classification. Radiometric transformations include bias/gain adjustment, noise suppression, calibration, scan angle compensation, and illumination compensation, including topography and atmospheric effects. Correction or compensation for geometric distortion includes sensor-related distortions, such as centering, skew, size, scan nonlinearity, radial symmetry, and tangential symmetry. Also included are object image-related distortions such as aspect angle (altitude), scale distortion (altitude), terrain relief, and earth curvature. Ephemeral corrections are also applied to compensate for satellite forward movement, earth rotation, altitude variations, satellite vibration, and mirror scan velocity. Image enhancement includes high-pass, low-pass, and Laplacian mask filtering and data restoration for intermittent losses. Resource classification is provided by statistical analysis including histograms, correlational analysis, matrix manipulations, and determination of spectral responses. Feature extraction includes spatial frequency analysis, which is used in parallel discriminant functions in each array processor for rapid determination. The technique uses integrated parallel array processors that decimate the tasks concurrently under supervision of a control processor. The operator-machine interface is optimized for programming ease and graphics image windowing.

  7. Boundary implications for frequency response of interval FIR and IIR filters

    NASA Technical Reports Server (NTRS)

    Bose, N. K.; Kim, K. D.

    1991-01-01

    It is shown that vertex implication results in parameter space apply to interval trigonometric polynomials. Subsequently, it is shown that the frequency responses of both interval FIR and IIR filters are bounded by the frequency responses of certain extreme filters. The results apply directly in the evaluation of properties of designed filters, especially because it is more realistic to bound the filter coefficients from above and below instead of determining those with infinite precision because of finite arithmetic effects. Illustrative examples are provided to show how the extreme filters might be easily derived in any specific interval FIR or IIR filter design problem.

  8. A Critical Review of Available Retrievable Inferior Vena Cava Filters and Future Directions

    PubMed Central

    Montgomery, Jennifer P.; Kaufman, John A.

    2016-01-01

    Inferior vena cava filters have been placed in patients for decades for protection against pulmonary embolism. The widespread use of filters has dramatically increased owing at least in part to the approval of retrievable vena cava filters. Retrievable filters have the potential to protect against pulmonary embolism and then be retrieved once no longer needed to avoid potential long-term complications. There are several retrievable vena cava filters available for use. This article discusses the different filter designs as well as the published data on these available filters. When selecting a filter for use, it is important to consider the potential short-term complications and the filters' window for retrieval. Understanding potential long-term complications is also critical, as these devices are approved for permanent placement and many filters are not retrieved. Finally, this article will address research into new designs that may be the future of vena cava filtration. PMID:27247475

  9. A Critical Review of Available Retrievable Inferior Vena Cava Filters and Future Directions.

    PubMed

    Montgomery, Jennifer P; Kaufman, John A

    2016-06-01

    Inferior vena cava filters have been placed in patients for decades for protection against pulmonary embolism. The widespread use of filters has dramatically increased owing at least in part to the approval of retrievable vena cava filters. Retrievable filters have the potential to protect against pulmonary embolism and then be retrieved once no longer needed to avoid potential long-term complications. There are several retrievable vena cava filters available for use. This article discusses the different filter designs as well as the published data on these available filters. When selecting a filter for use, it is important to consider the potential short-term complications and the filters' window for retrieval. Understanding potential long-term complications is also critical, as these devices are approved for permanent placement and many filters are not retrieved. Finally, this article will address research into new designs that may be the future of vena cava filtration.

  10. Hard X-ray Wiggler Front End Filter Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schulte-Schrepping, Horst; Hahn, Ulrich

    2007-01-19

    The front end filter design and implementation for the new HARWI-II hard X-ray wiggler at DORIS-III at HASYLAB/DESY is presented. The device emits a total power of 30 kW at 150 mA storage ring current. The beam has a horizontal width of 3.8 mrad and a central power density of 54 W/mm2 at a distance of 26 m from the source. The filter section located in the ring tunnel has been introduced to tailor the thermal loads on the downstream optical components. The high power density and the high total power at the filter section are handled with a layered design. Glassy carbon filters convert the absorbed power into thermal radiation to lower the heat load to a level acceptable for water-cooled copper filters. The requirements in beam size and filtering are addressed by separating the filter functions into three units which are switched individually into the beam.

  11. Media filter drain : modified design evaluation and existing design longevity evaluation.

    DOT National Transportation Integrated Search

    2014-02-01

    The media filter drain (MFD), a stormwater water quality treatment best management practice, consists of media made up of aggregate, perlite, gypsum and dolomite in a trench located along roadway shoulders with gravel and vegetative pre-filtering...

  12. An improved design method based on polyphase components for digital FIR filters

    NASA Astrophysics Data System (ADS)

    Kumar, A.; Kuldeep, B.; Singh, G. K.; Lee, Heung No

    2017-11-01

    This paper presents an efficient design of a digital finite impulse response (FIR) filter based on polyphase components and swarm optimisation techniques (SOTs). For this purpose, the design problem is formulated as the mean square error between the actual and ideal responses in the frequency domain, using the polyphase components of a prototype filter. To achieve a more precise frequency response at specified frequencies, fractional derivative constraints (FDCs) are applied, and optimal FDCs are computed using SOTs such as the cuckoo search and modified cuckoo search algorithms. A comparative study with well-proven swarm optimisation techniques, namely particle swarm optimisation and the artificial bee colony algorithm, is made. The merit of the proposed method is evaluated using several important filter attributes, and the comparative study demonstrates its effectiveness for FIR filter design.
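
    To make the polyphase-component formulation concrete, the sketch below decomposes a prototype FIR into M polyphase branches and verifies that branch filtering of the decimated input phases reproduces the plain filter-then-downsample output; the prototype and decimation factor are arbitrary, and the paper's optimization step is not reproduced here.

        import numpy as np
        from scipy import signal

        M = 4                                               # decimation factor
        h = signal.firwin(64, 1.0 / M)                      # prototype low-pass FIR
        poly = [h[k::M] for k in range(M)]                  # polyphase components E_k

        x = np.random.randn(1024)
        ref = signal.lfilter(h, 1.0, x)[::M]                # reference: filter, then downsample

        # Polyphase form: filter each decimated input phase with its component, then sum.
        phases = [x[::M]] + [np.concatenate(([0.0], x[M - k::M])) for k in range(1, M)]
        y = sum(signal.lfilter(poly[k], 1.0, p[: len(ref)]) for k, p in enumerate(phases))

        print(np.allclose(y, ref))                          # True: identical outputs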

  13. Design and implementation of a multiband digital filter using FPGA to extract the ECG signal in the presence of different interference signals.

    PubMed

    Aboutabikh, Kamal; Aboukerdah, Nader

    2015-07-01

    In this paper, we propose a practical way to synthesize and filter an ECG signal in the presence of four types of interference signals: (1) those arising from power networks with a fundamental frequency of 50 Hz, (2) those arising from respiration, with a frequency range from 0.05 to 0.5 Hz, (3) muscle signals with a frequency of 25 Hz, and (4) white noise present within the ECG signal band. This was done by implementing a multiband digital filter (seven bands) of the FIR multiband least-squares type using a programmable digital device (Cyclone II EP2C70F896C6 FPGA, Altera) placed on an education and development board (DE2-70, Terasic). The filter was designed using the VHDL language in the Quartus II 9.1 design environment. The proposed method depends on Direct Digital Frequency Synthesizers (DDFS) designed to synthesize the ECG signal and the various interference signals. So that the filtered synthetic ECG would be closer to actual ECG signals, we designed a single multiband digital filter instead of using three separate digital filters (LPF, HPF, BSF). Thus all interference signals were removed with a single digital filter. The multiband digital filter results were studied using a digital oscilloscope to characterize the input and output signals in the presence of differing sinusoidal interference signals and white noise. Copyright © 2015 Elsevier Ltd. All rights reserved.
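
    A software analogue of the seven-band least-squares FIR described above can be sketched with scipy's firls; the sampling rate, band edges and tap count below are assumptions for illustration, not the parameters of the VHDL design.

        import numpy as np
        from scipy import signal

        fs = 500.0                                    # assumed sampling rate, Hz
        # Seven alternating stop/pass bands: reject baseline/respiration below ~0.5 Hz,
        # notch the 25 Hz muscle band and the 50 Hz mains component, cut noise above ~90 Hz.
        bands   = [0, 0.3,  0.7, 23,  24, 26,  27, 47,  48, 52,  53, 90,  95, fs / 2]
        desired = [0, 0,    1,   1,   0,  0,   1,  1,   0,  0,   1,  1,   0,  0]
        taps = signal.firls(1001, bands, desired, fs=fs)

        ecg_plus_noise = np.random.randn(5000)        # placeholder input signal
        clean = signal.lfilter(taps, 1.0, ecg_plus_noise)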

  14. Validation of prostate-specific antigen laboratory values recorded in Surveillance, Epidemiology, and End Results registries.

    PubMed

    Adamo, Margaret Peggy; Boten, Jessica A; Coyle, Linda M; Cronin, Kathleen A; Lam, Clara J K; Negoita, Serban; Penberthy, Lynne; Stevens, Jennifer L; Ward, Kevin C

    2017-02-15

    Researchers have used prostate-specific antigen (PSA) values collected by central cancer registries to evaluate tumors for potential aggressive clinical disease. An independent study collecting PSA values suggested a high error rate (18%) related to implied decimal points. To evaluate the error rate in the Surveillance, Epidemiology, and End Results (SEER) program, a comprehensive review of PSA values recorded across all SEER registries was performed. Consolidated PSA values for eligible prostate cancer cases in SEER registries were reviewed and compared with text documentation from abstracted records. Four types of classification errors were identified: implied decimal point errors, abstraction or coding implementation errors, nonsignificant errors, and changes related to "unknown" values. A total of 50,277 prostate cancer cases diagnosed in 2012 were reviewed. Approximately 94.15% of cases did not have meaningful changes (85.85% correct, 5.58% with a nonsignificant change of <1 ng/mL, and 2.80% with no clinical change). Approximately 5.70% of cases had meaningful changes (1.93% due to implied decimal point errors, 1.54% due to abstract or coding errors, and 2.23% due to errors related to unknown categories). Only 419 of the original 50,277 cases (0.83%) resulted in a change in disease stage due to a corrected PSA value. The implied decimal error rate was only 1.93% of all cases in the current validation study, with a meaningful error rate of 5.81%. The reasons for the lower error rate in SEER are likely due to ongoing and rigorous quality control and visual editing processes by the central registries. The SEER program currently is reviewing and correcting PSA values back to 2004 and will re-release these data in the public use research file. Cancer 2017;123:697-703. © 2016 American Cancer Society. © 2016 The Authors. Cancer published by Wiley Periodicals, Inc. on behalf of American Cancer Society.

  15. UWB Bandpass Filter with Ultra-wide Stopband based on Ring Resonator

    NASA Astrophysics Data System (ADS)

    Kazemi, Maryam; Lotfi, Saeedeh; Siahkamari, Hesam; Mohammadpanah, Mahmood

    2018-04-01

    An ultra-wideband (UWB) bandpass filter with ultra-wide stopband based on a rectangular ring resonator is presented. The filter is designed for the operational frequency band from 4.10 GHz to 10.80 GHz with an ultra-wide stopband from 11.23 GHz to 40 GHz. The even and odd equivalent circuits are used to achieve a suitable analysis of the proposed filter performance. To verify the design and analysis, the proposed bandpass filter is simulated using full-wave EM simulator Advanced Design System and fabricated on a 20mil thick Rogers_RO4003 substrate with relative permittivity of 3.38 and a loss tangent of 0.0021. The proposed filter behavior is investigated and simulation results are in good agreement with measurement results.

  16. A New Scheme for the Design of Hilbert Transform Pairs of Biorthogonal Wavelet Bases

    NASA Astrophysics Data System (ADS)

    Shi, Hongli; Luo, Shuqian

    2010-12-01

    In designing Hilbert transform pairs of biorthogonal wavelet bases, it has been shown that equal magnitude responses and a half-sample phase offset between the lowpass filters are the necessary and sufficient conditions. In this paper, the relationship between the phase offset and the vanishing-moment difference of the biorthogonal scaling filters is derived, which suggests a simple way to choose the vanishing moments so that the phase response requirement is satisfied structurally. The magnitude response requirement is approximately achieved by a constrained optimization procedure, where the objective function and constraints are all expressed in terms of the auxiliary filters of the scaling filters rather than the scaling filters directly. In general, the computational burden of the design implementation is less than that of existing schemes. The integral of the magnitude response difference between the primal and dual scaling filters is chosen as the objective function, which expresses the magnitude response requirement over the whole frequency range. Two design examples illustrate that the biorthogonal wavelet bases designed by the proposed scheme are very close to Hilbert transform pairs.

  17. Monitoring lightning from space with TARANIS

    NASA Astrophysics Data System (ADS)

    Farges, T.; Blanc, E.; Pinçon, J.

    2010-12-01

    Recent space experiments, e.g. OTD and LIS, have shown the strong interest in lightning monitoring from space and the efficiency of optical measurements. Future instruments are now defined for the next generation of geostationary meteorology satellites. Calibration of these instruments requires ground-truth events provided by lightning location networks, such as NLDN in the US and EUCLID or LINET in Europe, using electromagnetic observations at a regional scale. One of the most challenging objectives is the continuous monitoring of lightning activity over the tropical zone (Africa, America, and Indonesia). However, one difficulty is the lack of regional-scale lightning location networks in these areas to validate the data quality. TARANIS (Tool for the Analysis of Radiations from lightNings and Sprites) is a CNES micro-satellite project. It is dedicated to the study of impulsive transfers of energy between the Earth's atmosphere and the space environment, based on nadir observations of Transient Luminous Events (TLEs), Terrestrial Gamma-ray Flashes (TGFs) and other possible associated emissions. Its orbit will be sun-synchronous at 10:30 local time; its altitude will be 700 km. Its lifetime will be nominally 2 years. Its payload is composed of several electromagnetic instruments covering different wavelengths: X- and gamma-ray detectors, optical cameras and photometers, and electromagnetic wave sensors from DC to 30 MHz, complemented by high-energy electron detectors. The optical instrument includes 2 cameras and 4 photometers. All sensors are equipped with filters for sprite and lightning differentiation. The filters of the cameras are designed for sprite and lightning observations at 762 nm and 777 nm, respectively. However, unlike the OTD or LIS instruments, the filter bandwidth and the exposure time (respectively 10 nm and 91 ms) prevent lightning optical observations during daytime. The camera field of view is a square of 500 km at ground level with a spatial sampling frequency of about 1 km. One of the photometers will measure precisely the lightning radiance in a wide spectral range from 600 to 900 nm with a sampling frequency of 20 kHz. We suggest using the Event and mainly the Survey mode of the MCP instrument to monitor lightning activity and compare it to the geostationary satellite lightning mapper data. In the Event mode, data are recorded at their highest resolution. In the camera Survey mode, every image is archived using a specific compression algorithm. The photometer Survey mode consists of decimating the data by a factor of 10 and reducing the data dynamic range. Nevertheless, it remains well adapted to provide a good continuous characterization of lightning activity. The use of other instruments, for example the 0+ whistler detector, will complete the lightning characterization.
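
    As a point of reference for the Survey-mode data reduction mentioned above, the sketch below decimates a 20 kHz photometer-like stream by a factor of 10 with an anti-alias filter; the FIR filter choice and the synthetic data are assumptions for illustration, not the onboard algorithm.

      import numpy as np
      from scipy.signal import decimate

      fs = 20_000                      # photometer sampling rate from the abstract (Hz)
      t = np.arange(2 * fs) / fs       # two seconds of synthetic data
      x = np.random.randn(t.size)      # stand-in for the lightning radiance signal

      # Survey-mode style decimation by 10: low-pass (anti-alias) filter, then keep every 10th sample.
      # An FIR anti-alias filter with zero_phase=True is one reasonable choice; the flight algorithm may differ.
      y = decimate(x, 10, ftype="fir", zero_phase=True)
      print(x.size, "->", y.size, "samples; new rate =", fs // 10, "Hz")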

  18. Systematic Biological Filter Design with a Desired I/O Filtering Response Based on Promoter-RBS Libraries.

    PubMed

    Hsu, Chih-Yuan; Pan, Zhen-Ming; Hu, Rei-Hsing; Chang, Chih-Chun; Cheng, Hsiao-Chun; Lin, Che; Chen, Bor-Sen

    2015-01-01

    In this study, robust biological filters with an external control to match a desired input/output (I/O) filtering response are engineered based on well-characterized promoter-RBS libraries and a cascade gene circuit topology. In the field of synthetic biology, the biological filter system serves as a powerful detector or sensor to sense different molecular signals and produces a specific output response only if the concentration of the input molecular signal is higher or lower than a specified threshold. The proposed systematic design method for robust biological filters is summarized in three steps. First, several well-characterized promoter-RBS libraries are established for biological filter design by identifying and collecting the quantitative and qualitative characteristics of their promoter-RBS components via a nonlinear parameter estimation method. Then, the topology of the synthetic biological filter is decomposed into three cascade gene regulatory modules, and an appropriate promoter-RBS library is selected for each module to achieve the desired I/O specification of the biological filter. Finally, based on the proposed systematic method, a robust externally tunable biological filter is engineered by searching the promoter-RBS component libraries and a control inducer concentration library to achieve the optimal match to the specified I/O filtering response.

  19. Precision of a CAD/CAM technique for the production of zirconium dioxide copings.

    PubMed

    Coli, Pierluigi; Karlsson, Stig

    2004-01-01

    The precision of a computer-aided design/manufacturing (CAD/CAM) system to manufacture zirconium dioxide copings with a predetermined internal space was investigated. Two master models were produced in acrylic resin. One was directly scanned by the Decim Reader. The Decim Producer then manufactured 10 copings from prefabricated zirconium dioxide blocks. Five copings were prepared, aiming for an internal space to the master of 45 µm. The other five copings were prepared for an internal space of 90 µm. The second test model was used to try in the copings produced. The obtained internal space of the ceramic copings was evaluated by separate measurements of the master models and inner surfaces of the copings. The master models were measured at predetermined points with an optical instrument. The zirconium dioxide copings were measured with a contact instrument at the corresponding sites measured in the masters. The first group of copings had a mean internal space to the scanned master of 41 µm and of 53 µm to the try-in master. In general, the internal space along the axial walls of the masters was smaller than that along the occlusal walls. The second group had a mean internal space of 82 µm to the scanned master and of 90 µm to the try-in master. The aimed-for internal space of the copings was achieved by the manufacturer. The CAD/CAM technique tested provided high precision in the manufacture of zirconium dioxide copings.

  20. Chirped-cavity dispersion-compensation filter design.

    PubMed

    Li, Ya-Ping; Chen, Sheng-Hui; Lee, Cheng-Chung

    2006-03-01

    A new basic structure of a dispersive-compensation filter, called a chirped-cavity dispersion-compensator (CCDC) filter, was designed to offer the advantages of small ripples in both reflectance and group-delay dispersion (GDD). This filter provides a high dispersion compensation, like the Gires-Tournois interferometer (GTI) filter, and a wide working bandwidth, like the chirped mirror (CM). The structure of the CCDC is a cavity-type Fabry-Perot filter with a spacer layer (2 mH or 2 mL) and a chirped high reflector. The CCDC filter can provide a negative GDD of -50 fs² over a bandwidth of 56 THz with half the optical thickness of the CM or the GTI.

  1. Chirped-cavity dispersion-compensation filter design

    NASA Astrophysics Data System (ADS)

    Li, Ya-Ping; Chen, Sheng-Hui; Lee, Cheng-Chung

    2006-03-01

    A new basic structure of a dispersive-compensation filter, called a chirped-cavity dispersion-compensator (CCDC) filter, was designed to offer the advantages of small ripples in both reflectance and group-delay dispersion (GDD). This filter provides a high dispersion compensation, like the Gires-Tournois interferometer (GTI) filter, and a wide working bandwidth, like the chirped mirror (CM). The structure of the CCDC is a cavity-type Fabry-Perot filter with a spacer layer (2 mH or 2 mL) and a chirped high reflector. The CCDC filter can provide a negative GDD of -50 fs² over a bandwidth of 56 THz with half the optical thickness of the CM or the GTI.

  2. Data extraction for complex meta-analysis (DECiMAL) guide.

    PubMed

    Pedder, Hugo; Sarri, Grammati; Keeney, Edna; Nunes, Vanessa; Dias, Sofia

    2016-12-13

    As more complex meta-analytical techniques such as network and multivariate meta-analyses become increasingly common, further pressures are placed on reviewers to extract data in a systematic and consistent manner. Failing to do this appropriately wastes time, resources and jeopardises accuracy. This guide (data extraction for complex meta-analysis (DECiMAL)) suggests a number of points to consider when collecting data, primarily aimed at systematic reviewers preparing data for meta-analysis. Network meta-analysis (NMA), multiple outcomes analysis and analysis combining different types of data are considered in a manner that can be useful across a range of data collection programmes. The guide has been shown to be both easy to learn and useful in a small pilot study.

  3. Designer Infrared Filters using Stacked Metal Lattices

    NASA Technical Reports Server (NTRS)

    Smith, Howard A.; Rebbert, M.; Sternberg, O.

    2003-01-01

    We have designed and fabricated infrared filters for use at wavelengths greater than or equal to 15 microns. Unlike conventional dielectric filters used at shorter wavelengths, ours are made from stacked metal grids, spaced at a very small fraction of the performance wavelengths. The individual lattice layers are gold, the spacers are polyimide, and they are assembled using integrated-circuit processing techniques; they resemble some metallic photonic band-gap structures. We simulate the filter performance accurately, including the coupling of the propagating, near-field electromagnetic modes, using computer-aided design codes. We find no anomalous absorption. The geometrical parameters of the grids are easily altered in practice, allowing for the production of tuned filters with predictable, useful transmission characteristics. Although developed for astronomical instrumentation, the filters are broadly applicable in systems across the infrared and terahertz bands.

  4. Nitinol Embolic Protection Filters: Design Investigation by Finite Element Analysis

    NASA Astrophysics Data System (ADS)

    Conti, Michele; de Beule, Matthieu; Mortier, Peter; van Loo, Denis; Verdonck, Pascal; Vermassen, Frank; Segers, Patrick; Auricchio, Ferdinando; Verhegghe, Benedict

    2009-08-01

    The widespread acceptance of carotid artery stenting (CAS) to treat carotid artery stenosis and its effectiveness compared with its surgical counterpart, carotid endarterectomy (CEA), is still a matter of debate. Transient or permanent neurological deficits may develop in patients undergoing CAS due to distal embolization or hemodynamic changes. Design, development, and usage of embolic protection devices (EPDs), such as embolic protection filters, appear to have a significant impact on the success of CAS. Unfortunately, some drawbacks, such as filtering failure, inability to cross tortuous high-grade stenoses, malpositioning and vessel injury, still remain and require design improvement. Currently, many different designs of such devices are available on the rapidly growing dedicated market. In spite of such growing commercial interest, there is a significant need for design tools as well as for careful engineering investigations and design analyses of such nitinol devices. The present study aims to investigate embolic protection filter design by finite element analysis. We first developed a parametric computer-aided design model of an embolic filter based on micro-CT scans of the Angioguard™ XP (Cordis Endovascular, FL) EPD by means of the open-source pyFormex software. Subsequently, we used the finite element method to simulate the deployment of the nitinol filter as it exits the delivery sheath. Micro-CT images of the real device exiting the catheter showed excellent correspondence with our simulations. Finally, we evaluated circumferential basket-vessel wall apposition of a 4 mm size filter in straight vessels of different sizes and shapes. We conclude that the proposed methodology offers a useful tool to evaluate and to compare current or new designs of EPDs. Further simulations will investigate vessel wall apposition in a realistic tortuous anatomy.

  5. Radar range data signal enhancement tracker

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The design, fabrication, and performance characteristics of two digital data signal enhancement filters, capable of being inserted between the Space Shuttle navigation sensor outputs and the guidance computer, are described. Commonality of interfaces has been stressed so that the filters may be evaluated through operation with simulated sensors or with actual prototype sensor hardware. The filters will provide both a smoothed range and range-rate output. Different conceptual approaches are utilized for each filter. The first filter is based on a combination of a low-pass nonrecursive filter and a cascaded simple-average smoother for range and range rate, respectively. The second filter is a tracking filter capable of following transient data of the type encountered during burn periods. A test simulator was also designed which generates typical Shuttle navigation sensor data.

  6. Design of thin-film filters for resolution improvements in filter-array based spectrometers using DSP

    NASA Astrophysics Data System (ADS)

    Lee, Woong-Bi; Kim, Cheolsun; Ju, Gun Wu; Lee, Yong Tak; Lee, Heung-No

    2016-05-01

    Miniature spectrometers have been widely developed for various academic and industrial applications such as biomedical, chemical and environmental engineering. As a family of spectrometers, optical filter-array based spectrometers fabricated using CMOS or nano technology provide miniaturization, superior portability and cost effectiveness. In filter-array based spectrometers, the resolution, which represents how closely two neighboring spectral lines can be resolved, depends on the number of filters and the characteristics of the transmission functions (TFs) of the filters. In practice, due to the small-size and low-cost fabrication, the number of filters is limited and the shape of the TF of each filter is non-ideal. With the development of modern digital signal processing (DSP), spectrometers are equipped with DSP algorithms not only to alleviate distortions due to unexpected noise or interference among filters but also to reconstruct the original signal spectrum. For high-resolution spectrum reconstruction by the DSP, the TFs of the filters need to be sufficiently uncorrelated with each other. In this paper, we present a design of optical thin-film filters which have uncorrelated TFs. Each filter consists of multiple layers of high- and low-refractive-index materials deposited on a substrate. The proposed design helps the DSP algorithm to improve resolution with a small number of filters. We demonstrate that a resolution of 5 nm within a range from 500 nm to 1100 nm can be achieved with only 64 filters.
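
    The reconstruction step the DSP performs can be pictured as solving y = T s, where the rows of T are the filter transmission functions and y are the 64 filter-array readings. The sketch below uses Gaussian TFs and Tikhonov-regularized least squares purely as illustrative assumptions; the paper's actual TFs and reconstruction algorithm are not reproduced.

      import numpy as np

      rng = np.random.default_rng(0)
      wl = np.linspace(500, 1100, 601)                 # wavelength grid (nm), 1 nm spacing
      num_filters = 64

      # Illustrative transmission functions: broad, partially overlapping Gaussians.
      centers = np.linspace(520, 1080, num_filters)
      T = np.exp(-0.5 * ((wl[None, :] - centers[:, None]) / 40.0) ** 2)

      # A synthetic "true" spectrum and the filter-array measurements with a little noise.
      s_true = np.exp(-0.5 * ((wl - 800) / 30.0) ** 2) + 0.5 * np.exp(-0.5 * ((wl - 650) / 25.0) ** 2)
      y = T @ s_true + 0.01 * rng.standard_normal(num_filters)

      # Tikhonov-regularized least squares: s_hat = argmin ||T s - y||^2 + lam ||s||^2
      lam = 1e-2
      s_hat = np.linalg.solve(T.T @ T + lam * np.eye(wl.size), T.T @ y)
      print("relative reconstruction error:", np.linalg.norm(s_hat - s_true) / np.linalg.norm(s_true))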

  7. Wideband dichroic-filter design for LED-phosphor beam-combining

    DOEpatents

    Falicoff, Waqidi

    2010-12-28

    A general method is disclosed of designing two-component dichroic short-pass filters operable for incidence angle distributions over the 0-30° range, and specific preferred embodiments are listed. The method is based on computer optimization algorithms for an N-layer design, specifically the N-dimensional conjugate-gradient minimization of a merit function based on difference from a target transmission spectrum, as well as subsequent cycles of needle synthesis for increasing N. A key feature of the method is the initial filter design, upon which the algorithm proceeds to iterate successive design candidates with smaller merit functions. This initial design, with high-index material H and low-index L, is (0.75 H, 0.5 L, 0.75 H)^m, denoting m (20-30) repetitions of a three-layer motif, giving rise to a filter with N = 2m+1.
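
    The initial-design statement can be checked by expanding the motif into an explicit layer list, merging the adjacent H layers at each repeat boundary; this small sketch only reproduces that bookkeeping (thicknesses in quarter-wave optical thickness units), not the conjugate-gradient or needle-synthesis optimization.

      # Expand the initial design motif (0.75 H, 0.5 L, 0.75 H)^m into an explicit layer stack.
      # Adjacent H layers at the boundary between repeats merge, giving N = 2*m + 1 layers,
      # consistent with the patent's statement.
      def initial_stack(m):
          motif = [("H", 0.75), ("L", 0.5), ("H", 0.75)]
          stack = []
          for _ in range(m):
              for material, qwot in motif:
                  if stack and stack[-1][0] == material:
                      stack[-1] = (material, stack[-1][1] + qwot)  # merge same-material neighbors
                  else:
                      stack.append((material, qwot))
          return stack

      stack = initial_stack(25)          # m in the 20-30 range per the patent
      print(len(stack), "layers")        # -> 51 = 2*25 + 1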

  8. Distributed digital signal processors for multi-body structures

    NASA Technical Reports Server (NTRS)

    Lee, Gordon K.

    1990-01-01

    Several digital filter designs were investigated which may be used to process sensor data from large space structures and to design digital hardware to implement the distributed signal processing architecture. Several experimental test articles are available at NASA Langley Research Center to evaluate these designs. A summary of some of the digital filter designs is presented, an evaluation of their characteristics relative to control design is discussed, and candidate hardware microcontroller/microcomputer components are given. Future activities include software evaluation of the digital filter designs and actual hardware implementation of some of the signal processor algorithms on an experimental testbed at NASA Langley.

  9. Symmetric/Asymmetrical SIRs Dual-Band BPF Design for WLAN Applications

    NASA Astrophysics Data System (ADS)

    Ho, Min-Hua; Ho, Hao-Hung; Chen, Mingchih

    This paper presents a dual-band bandpass filter (BPF) design composed of λ/2 and symmetrically/asymmetrically paired λ/4 stepped-impedance resonators (SIRs) for WLAN applications. The filters cover both operating frequencies of 2.45 and 5.2 GHz. A dual-coupling mechanism is used in the filter design to provide alternative routes for signals of selected frequencies. A prototype filter is composed of λ/2 and symmetrical λ/4 SIRs. An enhanced wide-stopband filter is then developed from this filter by replacing the symmetrical λ/4 SIRs with asymmetrical ones. The asymmetrical λ/4 SIRs have their higher resonance frequencies isolated from the adjacent I/O SIRs, extending the enhanced filter's upper stopband beyond ten times the fundamental frequency. Also, the filter may possess a cross-coupling structure which introduces transmission zeros near the passband edges to improve the signal selectivity. A tapped-line feed is adopted in this circuit to create additional attenuation poles for improving the stopband rejection levels. Experiments are conducted to verify the circuit performance.

  10. Ordered fast Fourier transforms on a massively parallel hypercube multiprocessor

    NASA Technical Reports Server (NTRS)

    Tong, Charles; Swarztrauber, Paul N.

    1991-01-01

    The present evaluation of alternative designs of ordered radix-2 decimation-in-frequency FFT algorithms for massively parallel hypercube processors gives attention to reducing communication, which dominates computation time. A combination of the ordering and computational phases of the FFT is accordingly employed, in conjunction with sequence-to-processor maps which reduce communication. Two orderings, 'standard' and 'cyclic', in which the order of the transform is the same as that of the input sequence, can be implemented with ease on the Connection Machine (where orderings are determined by geometries and priorities). A parallel method for trigonometric coefficient computation is presented which does not employ trigonometric functions or interprocessor communication.
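
    For reference, a radix-2 decimation-in-frequency FFT is the butterfly-then-recurse structure sketched below (naturally ordered output obtained by interleaving the two half-size transforms). This is only the serial textbook algorithm; the ordering strategies and sequence-to-processor maps that are the subject of the paper are not represented.

      import numpy as np

      def dif_fft(x):
          """Radix-2 decimation-in-frequency FFT (naturally ordered output via recursion)."""
          x = np.asarray(x, dtype=complex)
          n = x.size
          if n == 1:
              return x
          assert n % 2 == 0, "length must be a power of two"
          half = n // 2
          twiddle = np.exp(-2j * np.pi * np.arange(half) / n)
          top = x[:half] + x[half:]                   # feeds the even-indexed outputs
          bottom = (x[:half] - x[half:]) * twiddle    # feeds the odd-indexed outputs
          X = np.empty(n, dtype=complex)
          X[0::2] = dif_fft(top)
          X[1::2] = dif_fft(bottom)
          return X

      x = np.random.randn(64) + 1j * np.random.randn(64)
      print(np.allclose(dif_fft(x), np.fft.fft(x)))   # -> True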

  11. Impedance Matched Absorptive Thermal Blocking Filters

    NASA Technical Reports Server (NTRS)

    Wollack, E. J.; Chuss, D. T.; U-Yen, K.; Rostem, K.

    2014-01-01

    We have designed, fabricated and characterized absorptive thermal blocking filters for cryogenic microwave applications. The transmission line filter's input characteristic impedance is designed to match 50 Ω and its response has been validated from 0 to 50 GHz. The observed return loss in the 0-to-20 GHz design band is greater than 20 dB and shows graceful degradation with frequency. Design considerations and equations are provided that enable this approach to be scaled and modified for use in other applications.

  12. Impedance Matched Absorptive Thermal Blocking Filters

    NASA Technical Reports Server (NTRS)

    Wollack, E. J.; Chuss, D. T.; Rostem, K.; U-Yen, K.

    2014-01-01

    We have designed, fabricated and characterized absorptive thermal blocking filters for cryogenic microwave applications. The transmission line filter's input characteristic impedance is designed to match 50 Ω and its response has been validated from 0 to 50 GHz. The observed return loss in the 0-to-20 GHz design band is greater than 20 dB and shows graceful degradation with frequency. Design considerations and equations are provided that enable this approach to be scaled and modified for use in other applications.

  13. Iterative design of one- and two-dimensional FIR digital filters. [Finite duration Impulse Response

    NASA Technical Reports Server (NTRS)

    Suk, M.; Choi, K.; Algazi, V. R.

    1976-01-01

    The paper describes a new iterative technique for designing FIR (finite duration impulse response) digital filters using a frequency weighted least squares approximation. The technique is as easy to implement (via FFT) and as effective in two dimensions as in one dimension, and there are virtually no limitations on the class of filter frequency spectra approximated. An adaptive adjustment of the frequency weight to achieve other types of design approximation such as Chebyshev type design is discussed.
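
    The frequency-weighted least-squares criterion the paper iterates on can be made concrete with a direct, non-iterative design of a linear-phase FIR filter on a dense frequency grid. The sketch below is only that baseline (with an assumed cutoff and weighting), not the authors' iterative FFT-based technique or its two-dimensional extension.

      import numpy as np

      def weighted_ls_fir(numtaps, grid, desired, weight):
          """Linear-phase (symmetric, odd-length) FIR by frequency-weighted least squares.

          grid    : angular frequencies in [0, pi]
          desired : desired real amplitude response on the grid
          weight  : nonnegative weights on the grid
          """
          assert numtaps % 2 == 1
          M = (numtaps - 1) // 2
          # Amplitude response A(w) = a0 + 2*sum_{k=1..M} a_k cos(k w), with a0 = h[M], a_k = h[M-k].
          C = np.cos(np.outer(grid, np.arange(M + 1)))
          C[:, 1:] *= 2.0
          sw = np.sqrt(weight)
          a, *_ = np.linalg.lstsq(C * sw[:, None], desired * sw, rcond=None)
          return np.concatenate([a[:0:-1], a])      # symmetric impulse response

      # Example: lowpass with cutoff 0.3*pi, stopband weighted 10x more than the passband.
      grid = np.linspace(0, np.pi, 1024)
      desired = (grid <= 0.3 * np.pi).astype(float)
      weight = np.where(grid >= 0.35 * np.pi, 10.0, 1.0)
      h = weighted_ls_fir(101, grid, desired, weight)
      print(h.shape)   # (101,)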

  14. Input filter compensation for switching regulators

    NASA Technical Reports Server (NTRS)

    Kelkar, S. S.; Lee, F. C.

    1983-01-01

    A novel input filter compensation scheme for a buck regulator that eliminates the interaction between the input filter output impedance and the regulator control loop is presented. The scheme is implemented using a feedforward loop that senses the input filter state variables and uses this information to modulate the duty-cycle signal. The feedforward design process presented is straightforward, and the feedforward loop is easy to implement. Extensive experimental data, supported by analytical results, show that significant performance improvement is achieved with the use of feedforward in the following performance categories: loop stability, audiosusceptibility, output impedance and transient response. The use of feedforward isolates the switching regulator from its power source, thus eliminating all interaction between the regulator and equipment upstream. In addition, the use of feedforward removes some of the input filter design constraints and makes the input filter design process simpler, thus making it possible to optimize the input filter. The concept of feedforward compensation can also be extended to other types of switching regulators.

  15. Attitude determination and calibration using a recursive maximum likelihood-based adaptive Kalman filter

    NASA Technical Reports Server (NTRS)

    Kelly, D. A.; Fermelia, A.; Lee, G. K. F.

    1990-01-01

    An adaptive Kalman filter design that utilizes recursive maximum likelihood parameter identification is discussed. At the center of this design is the Kalman filter itself, which has the responsibility for attitude determination. At the same time, the identification algorithm is continually identifying the system parameters. The approach is applicable to nonlinear, as well as linear systems. This adaptive Kalman filter design has much potential for real time implementation, especially considering the fast clock speeds, cache memory and internal RAM available today. The recursive maximum likelihood algorithm is discussed in detail, with special attention directed towards its unique matrix formulation. The procedure for using the algorithm is described along with comments on how this algorithm interacts with the Kalman filter.
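
    At the center of such a design sits the ordinary linear Kalman predict/update cycle; the sketch below shows only that core for an assumed scalar constant-velocity attitude model, with fixed Q and R. The recursive maximum-likelihood identification that would adapt those parameters online is not reproduced here.

      import numpy as np

      # state = [angle, rate]; only the angle is measured. Q and R are the quantities a
      # recursive maximum-likelihood identifier would adapt; here they are fixed assumptions.
      dt = 0.1
      F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
      H = np.array([[1.0, 0.0]])              # measurement matrix
      Q = 1e-4 * np.eye(2)                    # process noise covariance (assumed)
      R = np.array([[1e-2]])                  # measurement noise covariance (assumed)

      def kalman_step(x, P, z):
          # Predict
          x = F @ x
          P = F @ P @ F.T + Q
          # Update
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          x = x + K @ (z - H @ x)
          P = (np.eye(2) - K @ H) @ P
          return x, P

      x, P = np.zeros(2), np.eye(2)
      rng = np.random.default_rng(1)
      for k in range(100):
          z = np.array([0.05 * k * dt + 0.1 * rng.standard_normal()])  # synthetic angle measurement
          x, P = kalman_step(x, P, z)
      print("estimated angle, rate:", x)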

  16. Ultra-compact UHF Band-pass Filter Designed by Archimedes Spiral Capacitor and Shorted-loaded Stubs

    NASA Astrophysics Data System (ADS)

    Peng, Lin; Jiang, Xing

    2015-01-01

    UHF microstrip band-pass filters (BPFs) that are much smaller than previously reported BPFs are proposed in this communication. To achieve compactness, an Archimedes spiral capacitor and ground-loaded stubs are utilized to enhance the capacitances and inductance of the filter. Two compact BPFs, denoted BPF 1 and BPF 2, are designed by applying these techniques. The sizes of BPF 1 and BPF 2 are 0.062 λg × 0.056 λg and 0.047 λg × 0.043 λg, respectively, where λg is the guided wavelength at the centre frequency of the corresponding filter. The proposed filters were constructed and measured, and the measured results are in good agreement with the simulated ones.

  17. Optical Fourier filtering for whole lens assessment of progressive power lenses.

    PubMed

    Spiers, T; Hull, C C

    2000-07-01

    Four binary filter designs for use in an optical Fourier filtering set-up were evaluated when taking quantitative measurements and when qualitatively mapping the power variation of progressive power lenses (PPLs). The binary filters tested were concentric ring, linear grating, grid and "chevron" designs. The chevron filter was considered best for quantitative measurements since it permitted a vernier acuity task to be used for measuring the fringe spacing, significantly reducing errors, and it also gave information on the polarity of the lens power. The linear grating filter was considered best for qualitatively evaluating the power variation. Optical Fourier filtering and a Nidek automatic focimeter were then used to measure the powers in the distance and near portions of five PPLs of differing design. Mean measurement error was 0.04 D with a maximum value of 0.13 D. Good qualitative agreement was found between the iso-cylinder plots provided by the manufacturer and the Fourier filter fringe patterns for the PPLs indicating that optical Fourier filtering provides the ability to map the power distribution across the entire lens aperture without the need for multiple point measurements. Arguments are presented that demonstrate that it should be possible to derive both iso-sphere and iso-cylinder plots from the binary filter patterns.

  18. Low-loss interference filter arrays made by plasma-assisted reactive magnetron sputtering (PARMS) for high-performance multispectral imaging

    NASA Astrophysics Data System (ADS)

    Broßmann, Jan; Best, Thorsten; Bauer, Thomas; Jakobs, Stefan; Eisenhammer, Thomas

    2016-10-01

    Optical remote sensing of the earth from air and space typically utilizes several channels in the visible and near-infrared spectrum. Thin-film optical interference filters, mostly of the narrow bandpass type, are applied to select these channels. The filters are arranged in filter wheels, arrays of discrete stripe filters mounted in frames, or patterned arrays on a monolithic substrate. Such multi-channel filter assemblies can be mounted close to the detector, which allows a compact and lightweight camera design. Recent progress in image resolution and sensor sensitivity requires improvements in optical filter performance. Higher demands on blocking in the UV and NIR and between the spectral channels, on in-band transmission and filter edge steepness, and on scattering lead to more complex filter coatings with thicknesses in the range of 10-25 μm. Technological limits of the conventionally used ion-assisted evaporation process (IAD) can be overcome only by more precise and higher-energy coating technologies like plasma-assisted reactive magnetron sputtering (PARMS) in combination with optical broadband monitoring. Optics Balzers has developed a photolithographic patterning process for coating thicknesses up to 15 μm that is fully compatible with the advanced PARMS coating technology. This provides the possibility of depositing multiple complex high-performance filters on a monolithic substrate. We present an overview of the performance of recently developed filters with improved spectral performance designed for both monolithic filter arrays and stripe filters mounted in frames. The pros and cons as well as the resulting limits of the filter designs for both configurations are discussed.

  19. A robust approach to optimal matched filter design in ultrasonic non-destructive evaluation (NDE)

    NASA Astrophysics Data System (ADS)

    Li, Minghui; Hayward, Gordon

    2017-02-01

    The matched filter was demonstrated to be a powerful yet efficient technique to enhance defect detection and imaging in ultrasonic non-destructive evaluation (NDE) of coarse-grain materials, provided that the filter is properly designed and optimized. In the literature, the design utilized the real excitation signals in order to accurately approximate the defect echoes, which made it time-consuming and less straightforward to implement in practice. In this paper, we present a more robust and flexible approach to optimal matched filter design using simulated excitation signals, in which the control parameters are chosen and optimized based on the actual array transducer, the transmitter-receiver system response, and the test sample; as a result, the filter response is optimized for, and depends on, the material characteristics. Experiments on industrial samples are conducted and the results confirm the great benefits of the method.
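
    The core operation, matched filtering as cross-correlation of the received A-scan with a simulated excitation template, can be sketched as below. The 5 MHz Gaussian-windowed pulse, sampling rate and echo amplitude are illustrative assumptions, not the paper's optimized parameters.

      import numpy as np
      from scipy.signal import correlate, gausspulse

      fs = 50e6                                # sampling rate (Hz), illustrative for an ultrasonic array
      t = np.arange(0, 20e-6, 1 / fs)

      # Simulated excitation / reference echo: 5 MHz Gaussian-windowed pulse.
      ref = gausspulse(np.arange(-2e-6, 2e-6, 1 / fs), fc=5e6, bw=0.6)

      # Synthetic A-scan: a weak defect echo (centered near 10 us) buried in grain-like noise.
      rng = np.random.default_rng(2)
      ascan = 0.05 * rng.standard_normal(t.size)
      delay = int(8e-6 * fs)                   # echo waveform starts at 8 us, peaks ~2 us later
      ascan[delay:delay + ref.size] += 0.2 * ref

      # Matched filter = correlation with the (simulated) reference waveform.
      out = correlate(ascan, ref, mode="same")
      print("estimated echo center: %.2f us (expected near 10 us)"
            % (np.argmax(np.abs(out)) / fs * 1e6))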

  20. Phase Response Design of Recursive All-Pass Digital Filters Using a Modified PSO Algorithm

    PubMed Central

    2015-01-01

    This paper develops a new design scheme for the phase response of an all-pass recursive digital filter. A variant of the particle swarm optimization (PSO) algorithm is utilized for solving this kind of filter design problem. It is here called the modified PSO (MPSO) algorithm, in which an additional adjusting factor is introduced in the velocity updating formula in order to improve the searching ability. In the proposed method, all of the designed filter coefficients are first collected into a parameter vector and this vector is regarded as a particle of the algorithm. The MPSO with the modified velocity formula forces all particles to move toward the optimal or near-optimal solution by minimizing a defined objective function of the optimization problem. To show the effectiveness of the proposed method, two different kinds of linear phase response design examples are illustrated and the general PSO algorithm is compared as well. The obtained results show that the MPSO is superior to the general PSO for the phase response design of digital recursive all-pass filters. PMID:26366168
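
    A generic PSO loop for this kind of phase-matching problem is sketched below: the particles are the two coefficients of a second-order all-pass section, and the cost is the squared phase error against a target. The extra velocity term weighted by gamma stands in for "an additional adjusting factor" and is an assumption for illustration; the paper's exact MPSO formula and design examples are not reproduced.

      import numpy as np
      from scipy.signal import freqz

      a_true = np.array([0.5, -0.3])                      # target all-pass coefficients to recover
      w = np.linspace(0.05 * np.pi, 0.95 * np.pi, 128)    # frequency grid (rad/sample)

      def allpass_phase(a):
          b = np.array([a[1], a[0], 1.0])                 # numerator = reversed denominator -> all-pass
          den = np.array([1.0, a[0], a[1]])
          _, H = freqz(b, den, worN=w)
          return np.unwrap(np.angle(H))

      target = allpass_phase(a_true)
      def cost(a):
          return np.mean((allpass_phase(a) - target) ** 2)

      rng = np.random.default_rng(3)
      n_particles, n_iter = 20, 100
      pos = rng.uniform(-0.9, 0.9, (n_particles, 2))
      vel = np.zeros_like(pos)
      pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
      gbest = pbest[np.argmin(pbest_cost)].copy()

      w_in, c1, c2, gamma = 0.7, 1.5, 1.5, 0.3            # gamma = extra adjusting factor (illustrative)
      for _ in range(n_iter):
          r1, r2, r3 = rng.random((3, n_particles, 2))
          vel = (w_in * vel
                 + c1 * r1 * (pbest - pos)
                 + c2 * r2 * (gbest - pos)
                 + gamma * r3 * (pbest.mean(axis=0) - pos))   # modified term toward the swarm mean
          pos = np.clip(pos + vel, -0.99, 0.99)
          c = np.array([cost(p) for p in pos])
          improved = c < pbest_cost
          pbest[improved], pbest_cost[improved] = pos[improved], c[improved]
          gbest = pbest[np.argmin(pbest_cost)].copy()

      print("recovered coefficients:", gbest, "true:", a_true)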

  1. An improved ultra-wideband bandpass filter design using split ring resonator with coupled microstrip line

    NASA Astrophysics Data System (ADS)

    Umeshkumar, Dubey Suhmita; Kumar, Manish

    2018-04-01

    This paper presents an improved ultra-wideband bandpass filter design using split ring resonators (SRRs) along with coupled microstrip lines. The use of split ring resonators and a shunt stepped-impedance open-circuit stub enhances the stability due to transmission zeros at the band edges. The filter design and parameter simulation are carried out using Ansoft HFSS 13.0 on an RT/Duroid 6002 substrate with a dielectric constant of 2.94. The design utilizes a frequency band from 22 GHz to 29 GHz. This band is reserved for automotive radar systems and sensors as per FCC specifications. The proposed design demonstrates an insertion loss of less than 0.6 dB and a return loss better than 12 dB at the mid-band frequency of 24.4 GHz. The reflection coefficient shows high stability of about 12.47 dB at the mid-band frequency. The fractional bandwidth of the proposed filter is about 28.7%, and the filter is compact owing to the 0.127 mm substrate thickness.

  2. Design optimisation of powers-of-two FIR filter using self-organising random immigrants GA

    NASA Astrophysics Data System (ADS)

    Chandra, Abhijit; Chattopadhyay, Sudipta

    2015-01-01

    In this communication, we propose a novel design strategy for a multiplier-less low-pass finite impulse response (FIR) filter with the aid of a recent evolutionary optimisation technique known as the self-organising random immigrants genetic algorithm. Individual impulse response coefficients of the proposed filter have been encoded as sums of signed powers of two. During the formulation of the cost function for the optimisation algorithm, both the frequency response characteristic and the hardware cost of the discrete-coefficient FIR filter have been considered. The role of the crossover probability of the optimisation technique has been evaluated on the overall performance of the proposed strategy. For this purpose, the convergence characteristic of the optimisation technique has been included in the simulation results. In our analysis, two design examples of different specifications have been taken into account. In order to substantiate the efficiency of our proposed structure, a number of state-of-the-art design strategies for multiplier-less FIR filters have also been included in this article for the purpose of comparison. Critical analysis of the results unambiguously establishes the usefulness of our proposed approach for the hardware-efficient design of digital filters.
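
    The signed powers-of-two encoding that makes such a filter multiplier-less can be illustrated with a standard canonical signed digit (non-adjacent form) conversion; this is one common way to obtain the encoding, not necessarily the representation searched by the authors' genetic algorithm.

      def to_csd(n):
          """Return a list of signed powers of two summing to integer n (canonical signed digit form).

          Example: 7 -> [(1, 3), (-1, 0)]  meaning  +2**3 - 2**0.
          """
          terms, k = [], 0
          while n != 0:
              if n & 1:
                  d = 2 - (n & 3)          # +1 if n mod 4 == 1, -1 if n mod 4 == 3
                  terms.append((d, k))
                  n -= d
              n >>= 1
              k += 1
          return terms[::-1]

      # A quantized FIR coefficient, e.g. round(0.82 * 2**8) = 210, expressed with shifts and adds only:
      coeff = 210
      csd = to_csd(coeff)
      print(csd)                                        # [(1, 8), (-1, 6), (1, 4), (1, 1)]
      print(sum(s * 2**p for s, p in csd) == coeff)     # True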

  3. Cryptosporidium: A Guide to Water Filters

    MedlinePlus

    ... Filtering Tap ... absolute pore size of 1 micron or smaller. Filters designed to remove Crypto (any of the four ...

  4. Design, construction and operation of a new filter approach for treatment of surface waters in Southeast Asia

    NASA Astrophysics Data System (ADS)

    Frankel, R. J.

    1981-05-01

    A simple, inexpensive, and efficient method of water treatment for rural communities in Southeast Asia was developed using local materials as filter media. The filter utilizes coconut fiber and burnt rice husks in a two-stage filtering process designed as a gravity-fed system without the need for backwashing, and in most cases it eliminates the need for any chemicals. The first-stage filter with coconut fiber acts essentially as a substitute for the coagulation and sedimentation phases of conventional water-treatment plants. The second-stage filter, using burnt rice husks, is similar to slow sand filtration with the additional benefits of taste, color and odor removal through the adsorption properties of the activated carbon in the medium. This paper reports on the design, construction costs, and operating results of several village-size units in Thailand and in the Philippines.

  5. Digitally Controllable Current Amplifier and Current Conveyors in Practical Application of Controllable Frequency Filter

    NASA Astrophysics Data System (ADS)

    Polak, Josef; Jerabek, Jan; Langhammer, Lukas; Sotner, Roman; Dvorak, Jan; Panek, David

    2016-07-01

    This paper presents simulation results in comparison with measured results for a practical realization of a multifunctional second-order frequency filter with a Digitally Adjustable Current Amplifier (DACA) and two Dual-Output Controllable Current Conveyors (CCCII+/-). The filter is designed for operation in the current mode and is of the single-input multiple-output (SIMO) type; it therefore has one input and three outputs with individual filtering functions. The DACA element used in the newly proposed circuit is available as an integrated chip, and the current conveyors are implemented using the Universal Current Conveyor (UCC) chip designated UCC-N1B. The proposed frequency filter enables independent control of the pole frequency via parameters of the two current conveyors, and also independent control of the quality factor by changing the current gain of the DACA.

  6. Hardware-efficient implementation of digital FIR filter using fast first-order moment algorithm

    NASA Astrophysics Data System (ADS)

    Cao, Li; Liu, Jianguo; Xiong, Jun; Zhang, Jing

    2018-03-01

    As the digital finite impulse response (FIR) filter can be transformed into the shift-add form of multiple small-sized first-order moments, this paper presents, based on the existing fast first-order moment algorithm, a novel multiplier-less structure to calculate any number of sequential filtering results in parallel. The theoretical analysis of its hardware and time complexities reveals that, by appropriately setting the degree of parallelism and the decomposition factor for a fixed word width, the proposed structure may achieve better area-time efficiency than the existing two-dimensional (2-D) memoryless-based filter. To evaluate the performance concretely, the proposed designs for different tap counts, along with the existing 2-D memoryless-based filters, are synthesized with Synopsys Design Compiler using a 0.18-μm SMIC library. The comparisons show that the proposed design has lower area-time complexity and power consumption when the number of filter taps is larger than 48.

  7. Teaching-learning-based Optimization Algorithm for Parameter Identification in the Design of IIR Filters

    NASA Astrophysics Data System (ADS)

    Singh, R.; Verma, H. K.

    2013-12-01

    This paper presents a teaching-learning-based optimization (TLBO) algorithm to solve parameter identification problems in the design of digital infinite impulse response (IIR) filters. TLBO-based filter modelling is applied to calculate the parameters of an unknown plant in simulations. Unlike other heuristic search algorithms, the TLBO algorithm is an algorithm-specific parameter-less algorithm. In this paper, big bang-big crunch (BB-BC) optimization and PSO algorithms are also applied to the filter design for comparison. The unknown filter parameters are treated as a vector to be optimized by these algorithms. MATLAB programming is used for the implementation of the proposed algorithms. Experimental results show that the TLBO is more accurate in estimating the filter parameters than the BB-BC optimization algorithm and has a faster convergence rate than the PSO algorithm. TLBO is preferable where accuracy is more essential than convergence speed.

  8. A high-power spatial filter for Thomson scattering stray light reduction

    NASA Astrophysics Data System (ADS)

    Levesque, J. P.; Litzner, K. D.; Mauel, M. E.; Maurer, D. A.; Navratil, G. A.; Pedersen, T. S.

    2011-03-01

    The Thomson scattering diagnostic on the High Beta Tokamak-Extended Pulse (HBT-EP) is routinely used to measure electron temperature and density during plasma discharges. Avalanche photodiodes in a five-channel interference filter polychromator measure scattered light from a 6 ns, 800 mJ, 1064 nm Nd:YAG laser pulse. A low cost, high-power spatial filter was designed, tested, and added to the laser beamline in order to reduce stray laser light to levels which are acceptable for accurate Rayleigh calibration. A detailed analysis of the spatial filter design and performance is given. The spatial filter can be easily implemented in an existing Thomson scattering system without the need to disturb the vacuum chamber or significantly change the beamline. Although apertures in the spatial filter suffer substantial damage from the focused beam, with proper design they can last long enough to permit absolute calibration.

  9. Multi-Bandwidth Frequency Selective Surfaces for Near Infrared Filtering: Design and Optimization

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Fernandez, Salvador; Ksendzov, A.; LaBaw, Clayton C.; Maker, Paul D.; Muller, Richard E.

    1999-01-01

    Frequency selective surfaces are widely used in the microwave and millimeter wave regions of the spectrum for filtering signals. They are used in telecommunication systems for multi-frequency operation or in instrument detectors for spectroscopy. The frequency selective surface operation depends on a periodic array of elements resonating at prescribed wavelengths producing a filter response. The size of the elements is on the order of half the electrical wavelength, and the array period is typically less than a wavelength for efficient operation. When operating in the optical region, diffraction gratings are used for filtering. In this regime the period of the grating may be several wavelengths producing multiple orders of light in reflection or transmission. In regions between these bands (specifically in the infrared band) frequency selective filters consisting of patterned metal layers fabricated using electron beam lithography are beginning to be developed. The operation is completely analogous to surfaces made in the microwave and millimeter wave region except for the choice of materials used and the fabrication process. In addition, the lithography process allows an arbitrary distribution of patterns corresponding to resonances at various wavelengths to be produced. The design of sub-millimeter filters follows the design methods used in the microwave region. Exacting modal matching, integral equation or finite element methods can be used for design. A major difference though is the introduction of material parameters and thicknesses that may not be important in longer wavelength designs. This paper describes the design of multi-bandwidth filters operating in the 1-5 micrometer wavelength range. This work follows on previous designs [1,2]. In this paper extensions based on further optimization and an examination of the specific shape of the element in the periodic cell will be reported. Results from the design, manufacture and test of linear wedge filters built using micro-lithographic techniques and used in spectral imaging applications will be presented.

  10. Multi-Bandwidth Frequency Selective Surfaces for Near Infrared Filtering: Design and Optimization

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Fernandez, Salvador; Ksendzov, A.; LaBaw, Clayton C.; Maker, Paul D.; Muller, Richard E.

    1998-01-01

    Frequency selective surfaces are widely used in the microwave and millimeter wave regions of the spectrum for filtering signals. They are used in telecommunication systems for multi-frequency operation or in instrument detectors for spectroscopy. The frequency selective surface operation depends on a periodic array of elements resonating at prescribed wavelengths producing a filter response. The size of the elements is on the order of half the electrical wavelength, and the array period is typically less than a wavelength for efficient operation. When operating in the optical region, diffraction gratings are used for filtering. In this regime the period of the grating may be several wavelengths producing multiple orders of light in reflection or transmission. In regions between these bands (specifically in the infrared band) frequency selective filters consisting of patterned metal layers fabricated using electron beam lithography are beginning to be developed. The operation is completely analogous to surfaces made in the microwave and millimeter wave region except for the choice of materials used and the fabrication process. In addition, the lithography process allows an arbitrary distribution of patterns corresponding to resonances at various wavelengths to be produced. The design of sub-millimeter filters follows the design methods used in the microwave region. Exacting modal matching, integral equation or finite element methods can be used for design. A major difference though is the introduction of material parameters and thicknesses that may not be important in longer wavelength designs. This paper describes the design of multi- bandwidth filters operating in the 1-5 micrometer wavelength range. This work follows on a previous design. In this paper extensions based on further optimization and an examination of the specific shape of the element in the periodic cell will be reported. Results from the design, manufacture and test of linear wedge filters built using microlithographic techniques and used in spectral imaging applications will be presented.

  11. On the Global Regularity of a Helical-Decimated Version of the 3D Navier-Stokes Equations

    NASA Astrophysics Data System (ADS)

    Biferale, Luca; Titi, Edriss S.

    2013-06-01

    We study the global regularity, for all time and all initial data in H^{1/2}, of a recently introduced decimated version of the incompressible 3D Navier-Stokes (dNS) equations. The model is based on a projection of the dynamical evolution of the Navier-Stokes (NS) equations into the subspace where helicity (the L^2 scalar product of velocity and vorticity) is sign-definite. The presence of a second (besides energy) sign-definite inviscid conserved quadratic quantity, which is equivalent to the H^{1/2} Sobolev norm, allows us to demonstrate global existence and uniqueness of space-periodic solutions, together with continuity with respect to the initial conditions, for this decimated 3D model. This is achieved thanks to the establishment of two new estimates for this 3D model, which show that the H^{1/2} norm and the time average of the square of the H^{3/2} norm of the velocity field remain finite. Such additional bounds are known, in the spirit of the work of H. Fujita and T. Kato (Arch. Ration. Mech. Anal. 16:269-315, 1964; Rend. Semin. Mat. Univ. Padova 32:243-260, 1962), to be sufficient for showing well-posedness for the 3D NS equations. Furthermore, they are directly linked to the helicity evolution for the dNS model, and therefore have a clear physical meaning and consequences.

  12. ASME AG-1 Section FC Qualified HEPA Filters; a Particle Loading Comparison - 13435

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stillo, Andrew; Ricketts, Craig I.

    High Efficiency Particulate Air (HEPA) filters used to protect personnel, the public and the environment from airborne radioactive materials are designed, manufactured and qualified in accordance with ASME AG-1 Code section FC (HEPA Filters) [1]. The qualification process requires that filters manufactured in accordance with this ASME AG-1 code section meet several performance requirements. These requirements include performance specifications for resistance to airflow, aerosol penetration, resistance to rough handling, resistance to pressure (including high humidity and water droplet exposure), resistance to heated air, spot flame resistance and a visual/dimensional inspection. None of these requirements evaluates the particle loading capacity of a HEPA filter design. Concerns over the particle loading capacity of the different designs included within the ASME AG-1 section FC code [1] have been voiced in the recent past. Additionally, the ability of a filter to maintain its integrity, if subjected to severe operating conditions such as elevated relative humidity, fog conditions or elevated temperature after loading in use over long service intervals, is also a major concern. Although currently qualified HEPA filter media are likely to have similar loading characteristics when evaluated independently, filter pleat geometry can have a significant impact on the in-situ particle loading capacity of filter packs. Aerosol particle characteristics, such as size and composition, may also have a significant impact on filter loading capacity. Test results comparing filter loading capacities for three different aerosol particles and three different filter pack configurations are reviewed. The information presented represents an empirical performance comparison among the filter designs tested. The results may serve as a basis for further discussion toward the possible development of a particle loading test to be included in the qualification requirements of ASME AG-1 Code sections FC and FK [1]. (authors)

  13. Hybrid Filter Membrane

    NASA Technical Reports Server (NTRS)

    Laicer, Castro; Rasimick, Brian; Green, Zachary

    2012-01-01

    Cabin environmental control is an important issue for a successful Moon mission. Due to the unique environment of the Moon, lunar dust control is one of the main problems that significantly diminishes the air quality inside spacecraft cabins. Therefore, this innovation was motivated by NASA's need to minimize the negative health impact that air-suspended lunar dust particles have on astronauts in spacecraft cabins. It is based on fabrication of a hybrid filter comprising nanofiber nonwoven layers coated on porous polymer membranes with uniform cylindrical pores. This design results in a high-efficiency gas particulate filter with low pressure drop and the ability to be easily regenerated to restore filtration performance. A hybrid filter was developed consisting of a porous membrane with uniform, micron-sized, cylindrical pore channels coated with a thin nanofiber layer. Compared to conventional filter media such as a high-efficiency particulate air (HEPA) filter, this filter is designed to provide high particle efficiency, low pressure drop, and the ability to be regenerated. These membranes have well-defined micron-sized pores and can be used independently as air filters with a discrete particle-size cut-off, or coated with nanofiber layers for filtration of ultrafine nanoscale particles. The filter has a thin design intended to facilitate filter regeneration by localized air pulsing. The main feature of this invention is the combination of a micro-engineered straight-pore membrane with nanofibers. The micro-engineered straight-pore membrane can be prepared with extremely high precision. Because the resulting membrane pores are straight and not tortuous like those found in conventional filters, the pressure drop across the filter is significantly reduced. The nanofiber layer is applied as a very thin coating to enhance filtration efficiency for fine nanoscale particles. Additionally, the thin nanofiber coating is designed to promote capture of dust particles on the filter surface and to facilitate dust removal with pulsed or back airflow.

  14. Improvement of the fringe analysis algorithm for wavelength scanning interferometry based on filter parameter optimization.

    PubMed

    Zhang, Tao; Gao, Feng; Muhamedsalih, Hussam; Lou, Shan; Martin, Haydn; Jiang, Xiangqian

    2018-03-20

    The phase-slope method, which estimates height from the fringe pattern frequency, and the algorithm that estimates height from the fringe phase are the fringe analysis algorithms widely used in interferometry. Generally, both extract the phase information by filtering the signal in the frequency domain after a Fourier transform. Among the numerous papers in the literature about these algorithms, the design of the filter, which plays an important role, has never been discussed in detail. This paper focuses on the filter design in these algorithms for wavelength scanning interferometry (WSI), seeking to optimize the parameters to acquire the optimal results. The spectral characteristics of the interference signal are analyzed first. The effective signal is found to be narrow-band (near single frequency), and the central frequency is calculated theoretically; therefore, the position of the filter pass-band is determined. The width of the filter window is optimized through simulation to balance noise elimination against filter ringing. Experimental validation of the approach is provided, and the results agree very well with the simulation. The experiments show that accuracy can be improved by optimizing the filter design, especially when the signal quality, i.e., the signal-to-noise ratio (SNR), is low. The proposed method also shows the potential to improve immunity to environmental noise by adapting the filter to the signal, through the design of an adaptive filter, once the signal SNR can be estimated accurately.
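
    The frequency-domain filtering step shared by both algorithms can be sketched as follows: transform the interference signal, keep a window around the near-single-frequency carrier, transform back, and estimate phase or frequency. The Gaussian window and its width sigma stand in for the filter parameters the paper optimizes; the signal model and numbers are illustrative assumptions.

      import numpy as np

      # Synthetic wavelength-scanning interference signal: near single-frequency fringe plus noise.
      n = 1024
      k = np.arange(n)                       # frame (wavenumber-scan) index
      f0 = 72 / n                            # true fringe frequency (cycles/frame), unknown in practice
      rng = np.random.default_rng(4)
      signal = 1.0 + 0.8 * np.cos(2 * np.pi * f0 * k + 0.4) + 0.2 * rng.standard_normal(n)

      # Frequency-domain filtering: keep a window around the positive-frequency carrier only.
      S = np.fft.fft(signal)
      freqs = np.fft.fftfreq(n)
      pos = freqs > 0
      peak = freqs[pos][np.argmax(np.abs(S[pos]))]     # locate the carrier among positive frequencies
      sigma = 0.01                                     # filter window width -- the parameter to optimize
      window = np.exp(-0.5 * ((freqs - peak) / sigma) ** 2)

      analytic = np.fft.ifft(S * window)               # complex fringe (one-sided spectrum kept)
      phase = np.unwrap(np.angle(analytic))
      slope = np.polyfit(k, phase, 1)[0] / (2 * np.pi)
      print("estimated fringe frequency: %.4f (true %.4f)" % (slope, f0))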

  15. Novel programmable microwave photonic filter with arbitrary filtering shape and linear phase.

    PubMed

    Zhu, Xiaoqi; Chen, Feiya; Peng, Huanfa; Chen, Zhangyuan

    2017-04-17

    We propose and demonstrate a novel optical frequency comb (OFC) based microwave photonic filter which is able to realize an arbitrary filtering shape with a linear phase response. The shape of the filter response is software-programmable using the finite impulse response (FIR) filter design method. By shaping the OFC spectrum using a programmable waveshaper, we can realize the designed amplitudes of the FIR taps. Positive and negative signs of the FIR taps are achieved by balanced photodetection. Double-sideband (DSB) modulation and a symmetric distribution of the filter taps are used to maintain the linear-phase condition. In the experiment, we realize a fully programmable filter in the range from DC to 13.88 GHz. Four basic types of filters (lowpass, highpass, bandpass and bandstop) with different bandwidths, cut-off frequencies and central frequencies are generated. A triple-passband filter is also realized in our experiment. To the best of our knowledge, this is the first demonstration of a programmable multiple-passband MPF with a linear phase response. The experiment shows good agreement with the theoretical results.
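
    The electrical design side reduces to ordinary FIR tap synthesis: symmetric taps guarantee linear phase, tap magnitudes are programmed into the comb lines, and negative taps are routed to the complementary port of the balanced detector. The sketch below uses a standard equiripple design with an assumed tap delay and band edges, not the paper's values.

      import numpy as np
      from scipy.signal import remez, freqz

      T = 50e-12                         # assumed tap delay (s)  -> free spectral range = 20 GHz
      fsr = 1.0 / T
      num_taps = 31

      # Linear-phase bandpass: pass 4-6 GHz within the first Nyquist zone (0 .. FSR/2).
      taps = remez(num_taps, [0, 3e9, 4e9, 6e9, 7e9, fsr / 2], [0, 1, 0], fs=fsr)
      print("symmetric taps (linear phase):", np.allclose(taps, taps[::-1]))
      print("taps needing the negative (balanced-detection) branch:", int(np.sum(taps < 0)))

      f, H = freqz(taps, worN=2048, fs=fsr)
      print("response at 5 GHz: %.2f dB" % (20 * np.log10(abs(H[np.argmin(abs(f - 5e9))]))))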

  16. A class of systolizable IIR digital filters and its design for proper scaling and minimum output roundoff noise

    NASA Technical Reports Server (NTRS)

    Lei, Shaw-Min; Yao, Kung

    1990-01-01

    A class of infinite impulse response (IIR) digital filters with a systolizable structure is proposed and its synthesis is investigated. The systolizable structure consists of pipelineable regular modules with local connections and is suitable for VLSI implementation. It is capable of achieving high performance as well as high throughput. This class of filter structure provides certain degrees of freedom that can be used to obtain some desirable properties for the filter. Techniques of evaluating the internal signal powers and the output roundoff noise of the proposed filter structure are developed. Based upon these techniques, a well-scaled IIR digital filter with minimum output roundoff noise is designed using a local optimization approach. The internal signals of all the modes of this filter are scaled to unity in the l2-norm sense. Compared to the Rao-Kailath (1984) orthogonal digital filter and the Gray-Markel (1973) normalized-lattice digital filter, this filter has better scaling properties and lower output roundoff noise.

  17. Economical Implementation of a Filter Engine in an FPGA

    NASA Technical Reports Server (NTRS)

    Kowalski, James E.

    2009-01-01

    A logic design has been conceived for a field-programmable gate array (FPGA) that would implement a complex system of multiple digital state-space filters. The main innovative aspect of this design lies in providing for reuse of parts of the FPGA hardware to perform different parts of the filter computations at different times, in such a manner as to enable the timely performance of all required computations in the face of limitations on available FPGA hardware resources. The implementation of the digital state-space filter involves matrix vector multiplications, which, in the absence of the present innovation, would ordinarily necessitate some multiplexing of vector elements and/or routing of data flows along multiple paths. The design concept calls for implementing vector registers as shift registers to simplify operand access to multipliers and accumulators, obviating both multiplexing and routing of data along multiple paths. Each vector register would be reused for different parts of a calculation. Outputs would always be drawn from the same register, and inputs would always be loaded into the same register. A simple state machine would control each filter. The output of a given filter would be passed to the next filter, accompanied by a "valid" signal, which would start the state machine of the next filter. Multiple filter modules would share a multiplication/accumulation arithmetic unit. The filter computations would be timed by use of a clock having a frequency high enough, relative to the input and output data rate, to provide enough cycles for matrix and vector arithmetic operations. This design concept could prove beneficial in numerous applications in which digital filters are used and/or vectors are multiplied by coefficient matrices. Examples of such applications include general signal processing, filtering of signals in control systems, processing of geophysical measurements, and medical imaging. For these and other applications, it could be advantageous to combine compact FPGA digital filter implementations with other application-specific logic implementations on single integrated-circuit chips. An FPGA could readily be tailored to implement a variety of filters because the filter coefficients would be loaded into memory at startup.
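
    The per-sample arithmetic each state-space filter module must schedule onto the shared multiply-accumulate unit is x[k+1] = A x[k] + B u[k], y[k] = C x[k] + D u[k]. The sketch below only restates that recursion in software; the A, B, C, D values are illustrative placeholders, and the register reuse, state machine and "valid" handshaking of the FPGA design are not modeled.

      import numpy as np

      A = np.array([[0.90, 0.10],
                    [-0.10, 0.90]])      # state transition (placeholder values)
      B = np.array([[1.0],
                    [0.0]])
      C = np.array([[0.5, 0.5]])
      D = np.array([[0.0]])

      def run_filter(u):
          x = np.zeros((A.shape[0], 1))
          y = np.empty(len(u))
          for k, uk in enumerate(u):
              y[k] = (C @ x + D * uk).item()   # output equation
              x = A @ x + B * uk               # state update
          return y

      u = np.r_[1.0, np.zeros(19)]        # impulse input
      print(run_filter(u))                # impulse response of one filter section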

  18. Multiplier less high-speed squaring circuit for binary numbers

    NASA Astrophysics Data System (ADS)

    Sethi, Kabiraj; Panda, Rutuparna

    2015-03-01

    The squaring operation is important in many applications in signal processing, cryptography, etc. In general, squaring circuits reported in the literature use fast multipliers. A novel idea of a squaring circuit without using multipliers is proposed in this paper. An ancient Indian method for squaring decimal numbers is extended here to binary numbers. The key to our success is that no multiplier is used. Instead, one squaring circuit is used. The hardware architecture of the proposed squaring circuit is presented. The design is coded in VHDL and synthesised and simulated in Xilinx ISE Design Suite 10.1 (Xilinx Inc., San Jose, CA, USA). It is implemented in a Xilinx Virtex 4vls15sf363-12 device (Xilinx Inc.). The results in terms of time delay and area are compared with both the modified Booth's algorithm and a squaring circuit using Vedic multipliers. Our proposed squaring circuit seems to have better performance in terms of both speed and area.
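
    The central claim, that squaring requires no general multiplier, can be illustrated with a plain shift-and-add routine (a generic scheme for illustration, not the paper's ancient-Indian/duplex architecture): x*x is the sum of x shifted left by the position of each set bit of x.

```python
def square_shift_add(x: int) -> int:
    """Square a non-negative integer using only shifts and adds (no multiplier)."""
    assert x >= 0
    acc, shift, v = 0, 0, x
    while v:
        if v & 1:               # for every set bit of x ...
            acc += x << shift   # ... add x shifted by that bit position
        v >>= 1
        shift += 1
    return acc

# A hardware version would realize the same partial products with an adder tree.
assert all(square_shift_add(n) == n * n for n in range(1024))
```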

  19. Time-Reversal Based Range Extension Technique for Ultra-wideband (UWB) Sensors and Applications in Tactical Communications and Networking

    DTIC Science & Technology

    2009-04-16

    Excerpt: "...the transmitted waveform, then spectral mask, notch line of Arbitrary Notch Filter, the designed waveforms and multipath impulse response represented..." (the same caption text recurs for Figures 5.4 and 5.7; frequency axes in MHz).

  20. Design of recursive digital filters having specified phase and magnitude characteristics

    NASA Technical Reports Server (NTRS)

    King, R. E.; Condon, G. W.

    1972-01-01

    A method for the computer-aided design of a class of optimum filters, having frequency-domain specifications of both magnitude and phase, is described. The method, an extension of the work of Steiglitz, uses the Fletcher-Powell algorithm to minimize a weighted squared magnitude-and-phase criterion. Results obtained with the algorithm are presented for filters with a specified phase alone, as well as for designs that compromise between specified magnitude and phase.
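
    The same idea can be re-created today with a quasi-Newton optimizer (SciPy's BFGS, a descendant of the Fletcher-Powell method) applied to a weighted squared magnitude-and-phase error; the filter order, target response, and weights below are invented purely for illustration and do not come from the article.

```python
import numpy as np
from scipy.signal import freqz
from scipy.optimize import minimize

# Frequency grid and a made-up target: lowpass magnitude with linear phase
w = np.linspace(0.01, np.pi, 256)
H_des = np.where(w < 0.3 * np.pi, 1.0, 0.0) * np.exp(-1j * 3 * w)
W_mag, W_ph = 1.0, 0.3                                  # relative weights

def cost(p):
    b, a = p[:4], np.concatenate(([1.0], p[4:]))        # 3rd-order IIR: b0..b3, a1..a3
    _, H = freqz(b, a, worN=w)
    e_mag = np.abs(H) - np.abs(H_des)
    e_ph = np.angle(H * np.conj(H_des))                 # wrapped phase error
    return np.sum(W_mag * e_mag**2 + W_ph * (np.abs(H_des) * e_ph)**2)

p0 = np.array([0.1, 0.1, 0.1, 0.1, 0.0, 0.0, 0.0])
res = minimize(cost, p0, method="BFGS")                 # quasi-Newton minimization
b_opt, a_opt = res.x[:4], np.concatenate(([1.0], res.x[4:]))
```

    As in the original formulation, nothing here enforces stability of the resulting IIR filter; a practical design would add a pole-radius constraint or penalty.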

  1. Design and implementation of a hybrid digital phase-locked loop with a TMS320C25: An application to a transponder receiver breadboard

    NASA Technical Reports Server (NTRS)

    Yeh, H.-G.; Nguyen, T. M.

    1994-01-01

    Design, modeling, analysis, and simulation of a phase-locked loop (PLL) with a digital loop filter are presented in this article. A TMS320C25 digital signal processor (DSP) is used to implement this digital loop filter. To maintain compatibility, the main design goal was to replace the loop filter of the Deep-Space Transponder (DST) receiver breadboard's analog PLL (APLL) with a digital loop filter without changing anything else. This replacement results in a hybrid digital PLL (HDPLL). Both the original APLL and the designed HDPLL are Type I second-order systems. The real-time performance of the HDPLL and the receiver is provided and evaluated.

  2. Creating Magic Squares.

    ERIC Educational Resources Information Center

    Lyon, Betty Clayton

    1990-01-01

    One method of making magic squares using a prolongated square is illustrated. Discussed are third-order magic squares, fractional magic squares, fifth-order magic squares, decimal magic squares, and even magic squares. (CW)

  3. Usefulness of Pulse Oximeter That Can Measure SpO2 to One Digit After Decimal Point.

    PubMed

    Yamamoto, Akihiro; Burioka, Naoto; Eto, Aritoshi; Amisaki, Takashi; Shimizu, Eiji

    2017-06-01

    Pulse oximeters are used to noninvasively measure oxygen saturation in arterial blood (SaO 2 ). Although arterial oxygen saturation measured by pulse oximeter (SpO 2 ) is usually indicated in 1% increments, the value of SaO 2 from arterial blood gas analysis is not an integer. We have developed a new pulse oximeter that can measure SpO 2 to one digit after the decimal point. The values of SpO 2 from the newly developed pulse oximeter are highly correlated with the values of SaO 2 from arterial blood gas analysis (SpO 2 = 0.899 × SaO 2 + 9.944, r = 0.887, P < 0.0001). This device may help improve the evaluation of pathological conditions in patients.

  4. Optical filter for highlighting spectral features part I: design and development of the filter for discrimination of human skin with and without an application of cosmetic foundation.

    PubMed

    Nishino, Ken; Nakamura, Mutsuko; Matsumoto, Masayuki; Tanno, Osamu; Nakauchi, Shigeki

    2011-03-28

    Light reflected from an object's surface contains much information about its physical and chemical properties. Changes in the physical properties of an object are therefore reflected in its spectrum. Conventional trichromatic systems, on the other hand, cannot detect most spectral features because spectral information is compressively represented as trichromatic signals forming a three-dimensional subspace. We propose a method for designing a filter that optically modulates a camera's spectral sensitivity to find an alternative subspace highlighting an object's spectral features more effectively than the original trichromatic space. We designed and developed a filter that detects cosmetic foundations on the human face. Results confirmed that the filter can visualize and nondestructively inspect the foundation distribution.

  5. ENVIRONMENTAL TECHNOLOGY VERIFICATION REPORT: IN-DRAIN TREATMENT DEVICE. HYDRO INTERNATIONAL UP-FLO™ FILTER

    EPA Science Inventory

    Verification testing of the Hydro International Up-Flo™ Filter with one filter module and CPZ Mix™ filter media was conducted at the Penn State Harrisburg Environmental Engineering Laboratory in Middletown, Pennsylvania. The Up-Flo™ Filter is designed as a passive, modular filtr...

  6. Sand Type Filters for Swimming Pools. Standard No. 10, Revised October, 1966.

    ERIC Educational Resources Information Center

    National Sanitation Foundation, Ann Arbor, MI.

    Sand type filters are covered in this standard. The filters described are intended to be designed and used specifically for swimming pool water filtration, both public and residential. Included are the basic components which are a necessary part of the sand type filter, such as the filter housing, upper and lower distribution systems, filter media,…

  7. Design of almost symmetric orthogonal wavelet filter bank via direct optimization.

    PubMed

    Murugesan, Selvaraaju; Tay, David B H

    2012-05-01

    It is a well-known fact that (compact-support) dyadic wavelets [based on the two channel filter banks (FBs)] cannot be simultaneously orthogonal and symmetric. Although orthogonal wavelets have the energy preservation property, biorthogonal wavelets are preferred in image processing applications because of their symmetric property. In this paper, a novel method is presented for the design of almost symmetric orthogonal wavelet FB. Orthogonality is structurally imposed by using the unnormalized lattice structure, and this leads to an objective function, which is relatively simple to optimize. The designed filters have good frequency response, flat group delay, almost symmetric filter coefficients, and symmetric wavelet function.
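
    One way to see how orthogonality can be "structurally imposed" by a lattice is the classical paraunitary lattice factorization of the polyphase matrix (shown here with normalized rotations; the paper uses an unnormalized variant, and the angles below are arbitrary). Whatever angles are chosen, the resulting two-channel filter bank is orthogonal, which the double-shift orthogonality check at the end confirms.

```python
import numpy as np

def _poly_add(p, q):
    """Add two polynomials in z^-1 given as ascending-power coefficient arrays."""
    n = max(len(p), len(q))
    return np.pad(p, (0, n - len(p))) + np.pad(q, (0, n - len(q)))

def lattice_to_orthogonal_fb(thetas):
    """2-channel paraunitary (orthogonal) filter bank from lattice rotation angles.

    Polyphase factorization E(z) = R(theta_J) L(z) ... R(theta_1) L(z) R(theta_0),
    with R a 2x2 rotation and L(z) = diag(1, z^-1).
    """
    def rot(t):
        c, s = np.cos(t), np.sin(t)
        return np.array([[c, s], [-s, c]])

    R0 = rot(thetas[0])
    E = [[np.array([R0[i, j]]) for j in range(2)] for i in range(2)]  # 2x2 of polynomials
    for t in thetas[1:]:
        E[1] = [np.concatenate(([0.0], p)) for p in E[1]]   # L(z): delay the second row
        R = rot(t)
        E = [[_poly_add(R[i, 0] * E[0][j], R[i, 1] * E[1][j]) for j in range(2)]
             for i in range(2)]

    def interleave(e0, e1):        # H_i(z) = E_i0(z^2) + z^-1 E_i1(z^2)
        h = np.zeros(2 * len(e0))
        h[0::2], h[1::2] = e0, e1
        return h
    return interleave(E[0][0], E[0][1]), interleave(E[1][0], E[1][1])

h0, h1 = lattice_to_orthogonal_fb([0.3, -0.7, 1.1])
corr = np.correlate(h0, h0, mode="full")[len(h0) - 1::2]    # even-lag autocorrelation
print(np.round(corr, 6))          # ~ [1, 0, 0]: orthogonality holds by construction
```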

  8. Planar Superconducting Millimeter-Wave/Terahertz Channelizing Filter

    NASA Technical Reports Server (NTRS)

    Ehsan, Negar; U-yen, Kongpop; Brown, Ari; Hsieh, Wen-Ting; Wollack, Edward; Moseley, Samuel

    2013-01-01

    This innovation is a compact, superconducting, channelizing bandpass filter on a single-crystal (0.45 m thick) silicon substrate, which operates from 300 to 600 GHz. This device consists of four channels with center frequencies of 310, 380, 460, and 550 GHz, with approximately 50-GHz bandwidth per channel. The filter concept is inspired by the mammalian cochlea, which is a channelizing filter that covers three decades of bandwidth and 3,000 channels in a very small physical space. By using a simplified physical cochlear model, and its electrical analog of a channelizing filter covering multiple octaves bandwidth, a large number of output channels with high inter-channel isolation and high-order upper stopband response can be designed. A channelizing filter is a critical component used in spectrometer instruments that measure the intensity of light at various frequencies. This embodiment was designed for MicroSpec in order to increase the resolution of the instrument (with four channels, the resolution will be increased by a factor of four). MicroSpec is a revolutionary wafer-scale spectrometer that is intended for the SPICA (Space Infrared Telescope for Cosmology and Astrophysics) Mission. In addition to being a vital component of MicroSpec, the channelizing filter itself is a low-resolution spectrometer when integrated with only an antenna at its input, and a detector at each channel s output. During the design process for this filter, the available characteristic impedances, possible lumped element ranges, and fabrication tolerances were identified for design on a very thin silicon substrate. Iterations between full-wave and lumped-element circuit simulations were performed. Each channel s circuit was designed based on the availability of characteristic impedances and lumped element ranges. This design was based on a tabular type bandpass filter with no spurious harmonic response. Extensive electromagnetic modeling for each channel was performed. Four channels, with 50-GHz bandwidth, were designed, each using multiple transmission line media such as microstrip, coplanar waveguide, and quasi-lumped components on 0.45- m thick silicon. In the design process, modeling issues had to be overcome. Due to the extremely high frequencies, very thin Si substrate, and the superconducting metal layers, most commercially available software fails in various ways. These issues were mitigated by using alternative software that was capable of handling them at the expense of greater simulation time. The design of on-chip components for the filter characterization, such as a broadband antenna, Wilkinson power dividers, attenuators, detectors, and transitions has been completed.

  9. Phage-based biomolecular filter for the capture of bacterial pathogens in liquid streams

    NASA Astrophysics Data System (ADS)

    Du, Songtao; Chen, I.-Hsuan; Horikawa, Shin; Lu, Xu; Liu, Yuzhe; Wikle, Howard C.; Suh, Sang Jin; Chin, Bryan A.

    2017-05-01

    This paper investigates a phage-based biomolecular filter that enables the evaluation of large volumes of liquids for the presence of small quantities of bacterial pathogens. The filter is a planar arrangement of phage-coated, strip-shaped magnetoelastic (ME) biosensors (4 mm × 0.8 mm × 0.03 mm), magnetically coupled to a filter frame structure, through which a liquid of interest flows. This "phage filter" is designed to capture specific bacterial pathogens and allow non-specific debris to pass, eliminating the common clogging issue in conventional bead filters. ANSYS Maxwell was used to simulate the magnetic field pattern required to hold ME biosensors densely and to optimize the frame design. Based on the simulation results, a phage filter structure was constructed, and a proof-in-concept experiment was conducted where a Salmonella solution of known concentration were passed through the filter, and the number of captured Salmonella was quantified by plate counting.

  10. Resilient filtering for time-varying stochastic coupling networks under the event-triggering scheduling

    NASA Astrophysics Data System (ADS)

    Wang, Fan; Liang, Jinling; Dobaie, Abdullah M.

    2018-07-01

    The resilient filtering problem is considered for a class of time-varying networks with stochastic coupling strengths. An event-triggered strategy is adopted to save network resources by scheduling the signal transmission from the sensors to the filters based on certain prescribed rules. Moreover, the filter parameters to be designed are subject to gain perturbations. The primary aim of the addressed problem is to determine a resilient filter that ensures an acceptable filtering performance for the considered network with event-triggering scheduling. To handle such an issue, an upper bound on the estimation error variance is established for each node according to the stochastic analysis. Subsequently, the resilient filter is designed by locally minimizing the derived upper bound at each iteration. Moreover, rigorous analysis shows the monotonicity of the minimal upper bound with respect to the triggering threshold. Finally, a simulation example is presented to show the effectiveness of the established filter scheme.
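
    A toy version of event-triggered estimation (a scalar Kalman-style filter with a send-on-delta rule; this is only a schematic of the scheduling idea, not the paper's resilient, variance-constrained design, and every number below is assumed) looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)
a, c = 0.95, 1.0          # scalar state / measurement model (hypothetical)
q, r = 0.01, 0.04         # process and measurement noise variances
delta = 0.3               # event-triggering threshold on the innovation

x, x_hat, p, sent = 1.0, 0.0, 1.0, 0
for k in range(200):
    x = a * x + rng.normal(0, np.sqrt(q))
    y = c * x + rng.normal(0, np.sqrt(r))
    x_pred, p_pred = a * x_hat, a * a * p + q          # time update
    if abs(y - c * x_pred) > delta:                    # sensor transmits only on a large innovation
        sent += 1
        g = p_pred * c / (c * c * p_pred + r)          # measurement update
        x_hat, p = x_pred + g * (y - c * x_pred), (1 - g * c) * p_pred
    else:                                              # no transmission: coast on the prediction
        x_hat, p = x_pred, p_pred
print(f"transmissions: {sent}/200")
```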

  11. X-band preamplifier filter

    NASA Technical Reports Server (NTRS)

    Manshadi, F.

    1986-01-01

    A low-loss bandstop filter designed and developed for the Deep Space Network's 34-meter high-efficiency antennas is described. The filter is used for protection of the X-band traveling wave masers from the 20-kW transmitter signal. A combination of empirical and theoretical techniques was employed, as well as computer simulation to verify the design before fabrication.

  12. Cryogenic filter wheel design for an infrared instrument

    NASA Astrophysics Data System (ADS)

    Azcue, Joaquín.; Villanueva, Carlos; Sánchez, Antonio; Polo, Cristina; Reina, Manuel; Carretero, Angel; Torres, Josefina; Ramos, Gonzalo; Gonzalez, Luis M.; Sabau, Maria D.; Najarro, Francisco; Pintado, Jesús M.

    2014-09-01

    In the last two decades, Spain has built up a strong IR community which has successfully contributed to space instruments, reaching Co-PI level in the SPICA mission (Space Infrared Telescope for Cosmology and Astrophysics). Under the SPICA mission, INTA has designed a cryogenic, low-dissipation filter wheel with six positions, focused on the SAFARI instrument requirements but highly adaptable to other missions, taking as a starting point the team's past experience with the OSIRIS instrument (ROSETTA mission) filter wheels and adapting the design to work at cryogenic temperatures. One of the main goals of the mechanism is to use commercial components as much as possible and to test them at cryogenic temperature. This paper is focused on the design of the filter wheel, including the material selection for each of the main components of the mechanism, the design of an elastic mount for the filter assembly, and a positioner device designed to provide positional accuracy and repeatability to the filter, allowing the position to be locked without dissipation. In order to know the position of the wheel at every moment, a position sensor based on a Hall sensor was developed. A series of cryogenic tests have been performed in order to validate the material configuration selected, the ball bearing lubrication and the selection of the motor. A stepper motor characterization campaign was performed, including heat dissipation measurements. The result is a six-position filter wheel highly adaptable to different configurations and motors using commercial components. The mechanism was successfully tested at INTA facilities at 20 K at breadboard level.

  13. Fixed-frequency and Frequency-agile (au, HTS) Microstrip Bandstop Filters for L-band Applications

    NASA Technical Reports Server (NTRS)

    Saenz, Eileen M.; Subramanyam, Guru; VanKeuls, Fred W.; Chen, Chonglin; Miranda, Felix A.

    2001-01-01

    In this work, we report on the performance of a highly selective, compact 1.83 x 2.08 cm(exp 2) (approx. 0.72 x 0.82 in(exp 2)) microstrip line bandstop filter of YBa2Cu3O(7-delta) (YBCO) on a LaAlO3 (LAO) substrate. The filter is designed for a center frequency of 1.623 GHz for a bandwidth at 3 dB from reference baseline of less than 5.15 MHz, and a bandstop rejection of 30 dB or better. The design and optimization of the filter were performed using Zeland's IE3D circuit simulator. The optimized design was used to fabricate gold (Au) and High-Temperature Superconductor (HTS) versions of the filter. We have also studied an electronically tunable version of the same filter. Tunability of the bandstop characteristics is achieved by the integration of a thin film conductor (Au or HTS) and the nonlinear dielectric ferroelectric SrTiO3 in a conductor/ferroelectric/dielectric modified microstrip configuration. The performance of these filters and comparison with the simulated data will be presented.

  14. Spatial filters for high-peak-power multistage laser amplifiers.

    PubMed

    Potemkin, A K; Barmashova, T V; Kirsanov, A V; Martyanov, M A; Khazanov, E A; Shaykin, A A

    2007-07-10

    We describe spatial filters used in a Nd:glass laser with an output pulse energy up to 300 J and a pulse duration of 1 ns. This laser is designed for pumping of a chirped-pulse optical parametric amplifier. We present data required to choose the shape and diameter of a spatial filter lens, taking into account aberrations caused by spherical surfaces. Calculation of the optimal pinhole diameter is presented. Design features of the spatial filters and the procedure of their alignment are discussed in detail.

  15. Filter line wiring designs in aircraft

    NASA Astrophysics Data System (ADS)

    Rowe, Richard M.

    1990-10-01

    The paper presents a harness design using a filter-line wire technology and appropriate termination methods to help meet high-energy radiated electromagnetic field (HERF) requirements for protection against the adverse effects of EMI on electrical and avionic systems. Filter-line interconnect harnessing systems discussed consist of high-performance wires and cables; when properly wired they suppress conducted and radiated EMI above 100 MHz. Filter-line termination devices include backshell adapters, braid splicers, and shield terminators providing 360-degree low-impedance terminations and enhancing maintainability of the system.

  16. Nonlinear inversion of potential-field data using a hybrid-encoding genetic algorithm

    USGS Publications Warehouse

    Chen, C.; Xia, J.; Liu, J.; Feng, G.

    2006-01-01

    Using a genetic algorithm to solve an inverse problem of complex nonlinear geophysical equations is advantageous because it does not require computing gradients of models or "good" initial models. The multi-point search of a genetic algorithm makes it easier to find the globally optimal solution while avoiding falling into a local extremum. As is the case in other optimization approaches, the search efficiency for a genetic algorithm is vital in finding desired solutions successfully in a multi-dimensional model space. A binary-encoding genetic algorithm is hardly ever used to resolve an optimization problem such as a simple geophysical inversion with only three unknowns. The encoding mechanism, genetic operators, and population size of the genetic algorithm greatly affect search processes in the evolution. It is clear that improved operators and a proper population size promote convergence. Nevertheless, not all genetic operations perform perfectly while searching under either a uniform binary or a decimal encoding system. With the binary encoding mechanism, the crossover scheme may produce more new individuals than with the decimal encoding. On the other hand, the mutation scheme in a decimal encoding system will create new genes larger in scope than those in the binary encoding. This paper discusses approaches to exploiting the search potential of genetic operations in the two encoding systems and presents an approach with a hybrid-encoding mechanism, multi-point crossover, and dynamic population size for geophysical inversion. We present a method that is based on the routine in which the mutation operation is conducted in the decimal code and the multi-point crossover operation in the binary code. The mixed-encoding algorithm is called the hybrid-encoding genetic algorithm (HEGA). HEGA provides better genes with a higher probability by a mutation operator and improves genetic algorithms in resolving complicated geophysical inverse problems. Another significant result is that the final solution is determined by the average model derived from multiple trials instead of one computation, due to the randomness in a genetic algorithm procedure. These advantages were demonstrated by synthetic and real-world examples of inversion of potential-field data. © 2005 Elsevier Ltd. All rights reserved.
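
    The hybrid-encoding idea (crossover performed on a binary code, mutation performed on a decimal code) can be sketched for a toy two-parameter objective as follows; the encodings, ranges, rates, and fitness function are all invented for illustration and do not reproduce HEGA itself.

```python
import random
random.seed(1)

BITS, DIGITS, LO, HI = 16, 5, -5.0, 5.0

def to_bin(x):                               # real value -> fixed-point binary string
    q = int(round((x - LO) / (HI - LO) * (2**BITS - 1)))
    return format(q, f"0{BITS}b")

def from_bin(s):
    return LO + int(s, 2) / (2**BITS - 1) * (HI - LO)

def crossover_binary(p1, p2):                # two-point crossover on the binary code
    s1, s2 = "".join(map(to_bin, p1)), "".join(map(to_bin, p2))
    i, j = sorted(random.sample(range(len(s1)), 2))
    child = s1[:i] + s2[i:j] + s1[j:]
    return [from_bin(child[k*BITS:(k+1)*BITS]) for k in range(len(p1))]

def mutate_decimal(p, rate=0.1):             # digit-wise mutation on the decimal code
    out = []
    for x in p:
        digits = list(f"{(x - LO) / (HI - LO):.{DIGITS}f}"[2:])
        for k in range(DIGITS):
            if random.random() < rate:
                digits[k] = str(random.randint(0, 9))
        out.append(LO + float("0." + "".join(digits)) * (HI - LO))
    return out

def fitness(p):                              # toy objective: minimize a quadratic
    return -((p[0] - 1.2)**2 + (p[1] + 0.7)**2)

pop = [[random.uniform(LO, HI) for _ in range(2)] for _ in range(40)]
for gen in range(60):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:20]
    children = [mutate_decimal(crossover_binary(*random.sample(parents, 2)))
                for _ in range(20)]
    pop = parents + children
print(max(pop, key=fitness))                 # converges near [1.2, -0.7]
```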

  17. Acoustic Wave Filter Technology-A Review.

    PubMed

    Ruppel, Clemens C W

    2017-09-01

    Today, acoustic filters are the filter technology of choice for meeting the performance requirements dictated by cellular phone standards within the available form factor. Around two billion cellular phones are sold every year, and smart phones account for a very high percentage of them, approximately two-thirds. Smart phones require a very high number of filter functions, ranging from the low double digits today up to almost triple-digit numbers in the near future. In the frequency range up to 1 GHz, surface acoustic wave (SAW) filters are almost exclusively employed, while in the higher frequency range, bulk acoustic wave (BAW) and SAW filters are competing for their shares. Prerequisites for the success of acoustic filters were the availability of high-quality substrates, advanced and highly reproducible fabrication technologies, optimum filter techniques, precise simulation software, and advanced design tools that allow fast and efficient design according to customer specifications. This paper will try to focus on innovations leading to high volume applications of intermediate frequency (IF) and radio frequency (RF) acoustic filters, e.g., TV IF filters, IF filters for cellular phones, and SAW/BAW RF filters for the RF front-end of cellular phones.

  18. Indium phosphide all air-gap Fabry-Pérot filters for near-infrared spectroscopic applications

    NASA Astrophysics Data System (ADS)

    Ullah, A.; Butt, M. A.; Fomchenkov, S. A.; Khonina, S. N.

    2016-08-01

    Food quality can be characterized by noninvasive techniques such as spectroscopy in the Near Infrared wavelength range. For example, the 930-1450 nm wavelength range can be used to detect diseases and differentiate between meat samples. Miniaturization of such NIR spectrometers is useful for quick and mobile characterization of food samples. Spectrometers can be miniaturized, without compromising the spectral resolution, using Fabry-Pérot (FP) filters consisting of two highly reflecting mirrors with a central cavity in between. The most commonly used mirrors in the design of FP filters are Distributed Bragg Reflectors (DBRs) consisting of alternating high and low refractive index material pairs, due to their high reflectivity compared to metal mirrors. However, DBRs have high reflectivity for a selected range of wavelengths known as the stopband of the DBR. This range is usually much smaller than the sensitivity range of the spectrometer detector. Therefore, a bandpass filter is usually required to restrict wavelengths outside the stopband of the FP DBRs. Such bandpass filters are difficult to design and implement. Alternatively, high index contrast materials can be used to broaden the stopband width of the FP DBRs. In this work, Indium phosphide all air-gap filters are proposed in conjunction with InGaAs based detectors. The designed filter has a wide stopband covering the entire InGaAs detector sensitivity range. The filter can be tuned over the 950-1450 nm range with single-mode operation. The designed filter can hence be used for noninvasive meat quality control.
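
    The benefit of high index contrast can be checked with the standard quarter-wave DBR stopband estimate, Δλ/λ0 ≈ (4/π)·arcsin((nH − nL)/(nH + nL)); the InP refractive index used below (~3.17 near 1.3 µm) is an assumed value, not taken from the paper.

```python
import numpy as np

def dbr_fractional_stopband(n_high, n_low):
    """Relative stopband width of a quarter-wave DBR (textbook estimate)."""
    return (4 / np.pi) * np.arcsin((n_high - n_low) / (n_high + n_low))

bw = dbr_fractional_stopband(3.17, 1.0)       # assumed InP index vs. an air gap
print(f"fractional stopband ~ {bw:.2f} (~{bw * 1200:.0f} nm around a 1200-nm center)")
# ~0.70, i.e. comfortably wider than the 950-1450 nm tuning range quoted above
```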

  19. Model Adaptation for Prognostics in a Particle Filtering Framework

    NASA Technical Reports Server (NTRS)

    Saha, Bhaskar; Goebel, Kai Frank

    2011-01-01

    One of the key motivating factors for using particle filters for prognostics is the ability to include model parameters as part of the state vector to be estimated. This performs model adaptation in conjunction with state tracking, and thus, produces a tuned model that can be used for long-term predictions. This feature of particle filters works in large part due to the fact that they are not subject to the "curse of dimensionality", i.e. the exponential growth of computational complexity with state dimension. However, in practice, this property holds for "well-designed" particle filters only as dimensionality increases. This paper explores the notion of wellness of design in the context of predicting remaining useful life for individual discharge cycles of Li-ion batteries. Prognostic metrics are used to analyze the tradeoff between different model designs and prediction performance. Results demonstrate how sensitivity analysis may be used to arrive at a well-designed prognostic model that can take advantage of the model adaptation properties of a particle filter.
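
    A minimal illustration of the model-adaptation idea (an SIR particle filter whose state vector is augmented with an unknown model parameter; the scalar decay model below is made up and is not the paper's battery model) is:

```python
import numpy as np
rng = np.random.default_rng(3)

theta_true, x = 0.97, 5.0          # truth: x_{k+1} = theta*x_k + w,  y_k = x_k + v
N = 500
px = rng.normal(5.0, 1.0, N)       # state particles
pt = rng.uniform(0.90, 1.00, N)    # parameter particles (the augmented part of the state)

for k in range(100):
    x = theta_true * x + rng.normal(0, 0.05)
    y = x + rng.normal(0, 0.2)
    pt = pt + rng.normal(0, 0.002, N)          # small artificial jitter lets theta adapt
    px = pt * px + rng.normal(0, 0.05, N)      # propagate states with each particle's theta
    w = np.exp(-0.5 * ((y - px) / 0.2) ** 2)   # measurement likelihood
    w /= w.sum()
    idx = rng.choice(N, N, p=w)                # resample (SIR)
    px, pt = px[idx], pt[idx]

print(f"estimated theta ~ {pt.mean():.3f} (true {theta_true})")
```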

  20. Design of adaptive control systems by means of self-adjusting transversal filters

    NASA Technical Reports Server (NTRS)

    Merhav, S. J.

    1986-01-01

    The design of closed-loop adaptive control systems based on nonparametric identification was addressed. Implementation is by self-adjusting Least Mean Square (LMS) transversal filters. The design concept is Model Reference Adaptive Control (MRAC). Major issues are to preserve the linearity of the error equations of each LMS filter, and to prevent estimation bias that is due to process or measurement noise, thus providing necessary conditions for the convergence and stability of the control system. The controlled element is assumed to be asymptotically stable and minimum phase. Because of the nonparametric Finite Impulse Response (FIR) estimates provided by the LMS filters, a-priori information on the plant model is needed only in broad terms. Following a survey of control system configurations and filter design considerations, system implementation is shown here in Single Input Single Output (SISO) format which is readily extendable to multivariable forms. In extensive computer simulation studies the controlled element is represented by a second-order system with widely varying damping, natural frequency, and relative degree.
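
    The core building block, a self-adjusting LMS transversal (FIR) filter, reduces to a few lines; the sketch below identifies a made-up FIR "plant" and does not reproduce the paper's MRAC loop structure.

```python
import numpy as np
rng = np.random.default_rng(0)

plant = np.array([0.5, -0.3, 0.2, 0.1])       # unknown FIR plant (hypothetical)
M, mu = 8, 0.05                               # transversal filter length, LMS step size
w = np.zeros(M)                               # adaptive weights
x_buf = np.zeros(M)                           # tapped delay line

for n in range(5000):
    u = rng.normal()                          # excitation
    x_buf = np.roll(x_buf, 1); x_buf[0] = u
    d = plant @ x_buf[:len(plant)] + 0.01 * rng.normal()   # desired signal (plant output)
    e = d - w @ x_buf                         # estimation error
    w += mu * e * x_buf                       # LMS weight update

print(np.round(w[:4], 3))                     # approaches the plant coefficients
```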

  1. Filters for the International Solar Terrestrial Physics (ISTP) mission far ultraviolet imager

    NASA Technical Reports Server (NTRS)

    Zukic, Muamer; Torr, Douglas G.; Kim, Jongmin; Spann, James F.; Torr, Marsha R.

    1993-01-01

    The far ultraviolet (FUV) imager for the International Solar Terrestrial Physics (ISTP) mission is designed to image four features of the aurora: O I lines at 130.4 nm and 135.6 nm and the N2 Lyman-Birge-Hopfield (LBH) bands between 140 nm - 160 nm (LBH short) and 160 nm - 180 nm (LBH long). In this paper we report the design and fabrication of narrow-band and broadband filters for the ISTP FUV imager. Narrow-band filters designed and fabricated for the O I lines have a bandwidth of less than 5 nm and a peak transmittance of 23.9 percent and 38.3 percent at 130.4 nm and 135.6 nm, respectively. Broadband filters designed and fabricated for the LBH bands have a transmittance close to 60 percent. Blocking of out-of-band wavelengths for all filters is better than 5x10(exp -3) percent, with a transmittance at 121.6 nm of less than 10(exp -6) percent.

  2. Tunable Optical Filters for Space Exploration

    NASA Technical Reports Server (NTRS)

    Crandall, Charles; Clark, Natalie; Davis, Patricia P.

    2007-01-01

    Spectrally tunable liquid crystal filters provide numerous advantages and several challenges in space applications. We discuss the tradeoffs in design elements for tunable liquid crystal birefringent filters with special consideration required for space exploration applications. In this paper we present a summary of our development of tunable filters for NASA space exploration. In particular we discuss the application of tunable liquid crystals in guidance navigation and control in space exploration programs. We present a summary of design considerations for improving speed, field of view, transmission of liquid crystal tunable filters for space exploration. In conclusion, the current state of the art of several NASA LaRC assembled filters is presented and their performance compared to the predicted spectra using our PolarTools modeling software.

  3. FILTER PLANT DESIGN FOR ASBESTOS FIBER REMOVAL

    EPA Science Inventory

    Water filtration plants used to remove asbestos fibers should be designed to produce filtered water with very low turbidity (0.10 ntu or lower). Flexibility in plant operation, especially with respect to conditioning raw water for filtration, is a key design factor. Preconditioni...

  4. LSST Camera Optics Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riot, V J; Olivier, S; Bauman, B

    2012-05-24

    The Large Synoptic Survey Telescope (LSST) uses a novel, three-mirror, telescope design feeding a camera system that includes a set of broad-band filters and three refractive corrector lenses to produce a flat field at the focal plane with a wide field of view. Optical design of the camera lenses and filters is integrated with the optical design of the telescope mirrors to optimize performance. We discuss the rationale for the LSST camera optics design, describe the methodology for fabricating, coating, mounting and testing the lenses and filters, and present the results of detailed analyses demonstrating that the camera optics will meet their performance goals.

  5. A novel low-complexity digital filter design for wearable ECG devices

    PubMed Central

    Mehrnia, Alireza

    2017-01-01

    Wearable and implantable Electrocardiograph (ECG) devices are becoming prevailing tools for continuous real-time personal health monitoring. The ECG signal can be contaminated by various types of noise and artifacts (e.g., powerline interference, baseline wandering) that must be removed or suppressed for accurate ECG signal processing. Limited device size, power consumption and cost are critical issues that need to be carefully considered when designing any portable health monitoring device, including a battery-powered ECG device. This work presents a novel low-complexity noise suppression reconfigurable finite impulse response (FIR) filter structure for wearable ECG and heart monitoring devices. The design relies on a recently introduced optimally-factored FIR filter method. The new filter structure and several of its useful features are presented in detail. We also studied the hardware complexity of the proposed structure and compared it with the state-of-the-art. The results showed that the new ECG filter has a lower hardware complexity relative to the state-of-the-art ECG filters. PMID:28384272
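
    For orientation, a plain SciPy FIR chain for the two interferences named above (baseline wander and powerline hum) is sketched below; it is not the optimally-factored reconfigurable structure of the paper, and the sampling rate, cutoffs, and synthetic signal are assumptions.

```python
import numpy as np
from scipy.signal import firwin, lfilter

fs = 360.0                                              # assumed ECG sampling rate (Hz)
hp = firwin(301, 0.5, fs=fs, pass_zero=False)           # highpass: remove baseline wander
bs = firwin(301, [49.0, 51.0], fs=fs)                   # bandstop: remove 50-Hz powerline hum

t = np.arange(0, 10, 1 / fs)                            # synthetic "ECG": spikes + drift + hum
ecg = (np.abs(((t * 1.2) % 1.0) - 0.5) < 0.02).astype(float)
noisy = ecg + 0.3 * np.sin(2 * np.pi * 0.3 * t) + 0.2 * np.sin(2 * np.pi * 50 * t)

clean = lfilter(bs, 1.0, lfilter(hp, 1.0, noisy))       # cascade of the two FIR filters
```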

  6. Optimizing binary phase and amplitude filters for PCE, SNR, and discrimination

    NASA Technical Reports Server (NTRS)

    Downie, John D.

    1992-01-01

    Binary phase-only filters (BPOFs) have generated much study because of their implementation on currently available spatial light modulator devices. On polarization-rotating devices such as the magneto-optic spatial light modulator (SLM), it is also possible to encode binary amplitude information into two SLM transmission states, in addition to the binary phase information. This is done by varying the rotation angle of the polarization analyzer following the SLM in the optical train. Through this parameter, a continuum of filters may be designed that span the space of binary phase and amplitude filters (BPAFs) between BPOFs and binary amplitude filters. In this study, we investigate the design of optimal BPAFs for the key correlation characteristics of peak sharpness (through the peak-to-correlation energy (PCE) metric), signal-to-noise ratio (SNR), and discrimination between in-class and out-of-class images. We present simulation results illustrating improvements obtained over conventional BPOFs, and trade-offs between the different performance criteria in terms of the filter design parameter.
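
    A small numerical picture of a BPOF and of the PCE metric mentioned above (the polarizer-angle BPAF encoding itself is not modeled, and the scene is synthetic):

```python
import numpy as np
rng = np.random.default_rng(1)

ref = np.zeros((64, 64)); ref[24:40, 28:36] = 1.0        # reference object
scene = np.roll(ref, (5, -7), axis=(0, 1)) + 0.1 * rng.normal(size=ref.shape)

F_ref = np.fft.fft2(ref)
bpof = np.where(np.real(F_ref) >= 0, 1.0, -1.0)          # binary phase-only filter (+/-1)
corr = np.fft.ifft2(np.fft.fft2(scene) * bpof)           # correlation plane
plane = np.abs(corr) ** 2
pce = plane.max() / plane.sum()                          # peak-to-correlation energy
print(f"PCE = {pce:.4f}, peak at {np.unravel_index(plane.argmax(), plane.shape)}")
```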

  7. A novel low-complexity digital filter design for wearable ECG devices.

    PubMed

    Asgari, Shadnaz; Mehrnia, Alireza

    2017-01-01

    Wearable and implantable Electrocardiograph (ECG) devices are becoming prevailing tools for continuous real-time personal health monitoring. The ECG signal can be contaminated by various types of noise and artifacts (e.g., powerline interference, baseline wandering) that must be removed or suppressed for accurate ECG signal processing. Limited device size, power consumption and cost are critical issues that need to be carefully considered when designing any portable health monitoring device, including a battery-powered ECG device. This work presents a novel low-complexity noise suppression reconfigurable finite impulse response (FIR) filter structure for wearable ECG and heart monitoring devices. The design relies on a recently introduced optimally-factored FIR filter method. The new filter structure and several of its useful features are presented in detail. We also studied the hardware complexity of the proposed structure and compared it with the state-of-the-art. The results showed that the new ECG filter has a lower hardware complexity relative to the state-of-the-art ECG filters.

  8. Modernisation of the Narod fluxgate electronics at Budkov Geomagnetic Observatory

    NASA Astrophysics Data System (ADS)

    Vlk, Michal

    2013-04-01

    From the signal point of view, a fluxgate unit is a low-frequency parametric up-converter in which the output signal is picked up in bands near the second harmonic of the pump frequency fp (sometimes called the idler for historic reasons), and the purity of the idler is augmented by the orthogonal construction of the pump and pick-up coils. In our concept, the pump source uses a Heegner quartz oscillator near 8 MHz, a synchronous divider to 16 kHz (fp), and a switched current booster. A rectangular pulse is used to feed the original ferroresonant pump source, with a neutralizing transformer in the case of symmetric shielded cabling. The input transformer has a split primary winding for use with symmetrical shielded input cabling and a secondary winding tuned by a polystyrene capacitor and loaded by an inverting integrator bridged by a capacitor. This structure behaves like a resistor cooled to a low temperature. The next stage is a bandpass filter (differentiator) with its gain tuned to 2 fp, built with leaky FDNRs and followed by a current booster. Another part of the system is a low-noise peak-elimination and bias circuit. The heart of the system is a 120-V precision source which uses a 3.3-V Zener diode chain and thermistor bridge in the feedback. The peak-elimination logic consists of an envelope detector, comparators, an asynchronous counter in hardwired logic, a set of weighted resistor chains, and discrete MOS switches operating in current mode. All HV components are air-mounted to prevent ground leakage. After a 200-m-long coaxial line, the signal is galvanically isolated by a transformer and fed into the A/D converter, which is an ordinary HD audio (96 kHz) sound card. The real sample rate is reconstructed by a posteriori data processing once the statistical properties of the incoming samples are known. The sampled signal is band-pass filtered with a 200-Hz filter centered at 2 fp. The signal is then fed through a first-order allpass centered at 2 fp. The result approximates the Hilbert transform sufficiently well for detecting the envelope via a root-sum-square rule. The signal is further decimated via IIR filters to a sample rate of 187.5 Hz. Raw instrument data are saved hourly in floating-point binary files and are marked by time stamps obtained from an NTP server. A posteriori processing of the (plesiochronous) instrument data consists of downsampling by IIR filters to 12 Hz, irrational (time-mark-driven) upsampling to 13 Hz, and then applying the INTERMAGNET standard FIR filter (5 sec to 1 min) to obtain 1-min data. Because the range of the signal processing system is about 60 nT (the range of the peak-elimination circuit is 3.8 μT), the resulting magnetograms look like the La Cour ones.
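
    A compressed SciPy sketch of the post-detection steps described above (bandpass around 2 fp, envelope extraction, then IIR decimation) is given below. It is an illustration only: the observatory's actual filters, rates, and test signal are not reproduced, and all parameters here are assumed.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert, decimate

fs, fp = 96_000, 16_000                  # sound-card rate and pump frequency, as in the text
t = np.arange(0, 1.0, 1 / fs)
field = 1.0 + 0.05 * np.sin(2 * np.pi * 0.3 * t)           # slowly varying field (synthetic)
sig = field * np.cos(2 * np.pi * 2 * fp * t) + 0.01 * np.random.randn(t.size)

sos = butter(4, [2 * fp - 100, 2 * fp + 100], btype="bandpass", fs=fs, output="sos")
narrow = sosfiltfilt(sos, sig)           # 200-Hz band around 2*fp
env = np.abs(hilbert(narrow))            # envelope via Hilbert-transform magnitude
slow = decimate(env, 8, ftype="iir")     # IIR decimation stages: 96 kHz -> 12 kHz ...
slow = decimate(slow, 8, ftype="iir")    # ... -> 1.5 kHz; further stages would reach 187.5 Hz
```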

  9. Determination of tailored filter sets to create rayfiles including spatial and angular resolved spectral information.

    PubMed

    Rotscholl, Ingo; Trampert, Klaus; Krüger, Udo; Perner, Martin; Schmidt, Franz; Neumann, Cornelius

    2015-11-16

    To simulate and optimize optical designs regarding perceived color and homogeneity in commercial ray tracing software, realistic light source models are needed. Spectral rayfiles provide angularly and spatially varying spectral information. We propose a spectral reconstruction method with a minimum of time-consuming goniophotometric near-field measurements with optical filters for the purpose of creating spectral rayfiles. Our discussion focuses on the selection of the ideal optical filter combination for any arbitrary spectrum out of a given filter set by considering measurement uncertainties with Monte Carlo simulations. We minimize the simulation time by a preselection of all filter combinations, which is based on a factorial design.

  10. Imaging spectrometer using a liquid crystal tunable filter

    NASA Astrophysics Data System (ADS)

    Chrien, Thomas G.; Chovit, Christopher; Miller, Peter J.

    1993-09-01

    A demonstration imaging spectrometer using a liquid crystal tunable filter (LCTF) was built and tested on a hot air balloon platform. The LCTF is a tunable polarization interference or Lyot filter. The LCTF enables a small, light weight, low power, band sequential imaging spectrometer design. An overview of the prototype system is given along with a description of balloon experiment results. System model performance predictions are given for a future LCTF based imaging spectrometer design. System design considerations of LCTF imaging spectrometers are discussed.

  11. Design optimization of integrated BiDi triplexer optical filter based on planar lightwave circuit.

    PubMed

    Xu, Chenglin; Hong, Xiaobin; Huang, Wei-Ping

    2006-05-29

    Design optimization of a novel integrated bi-directional (BiDi) triplexer filter based on planar lightwave circuit (PLC) for fiber-to-the premise (FTTP) applications is described. A multi-mode interference (MMI) device is used to filter the up-stream 1310nm signal from the down-stream 1490nm and 1555nm signals. An array waveguide grating (AWG) device performs the dense WDM function by further separating the two down-stream signals. The MMI and AWG are built on the same substrate with monolithic integration. The design is validated by simulation, which shows excellent performance in terms of filter spectral characteristics (e.g., bandwidth, cross-talk, etc.) as well as insertion loss.

  12. Design optimization of integrated BiDi triplexer optical filter based on planar lightwave circuit

    NASA Astrophysics Data System (ADS)

    Xu, Chenglin; Hong, Xiaobin; Huang, Wei-Ping

    2006-05-01

    Design optimization of a novel integrated bi-directional (BiDi) triplexer filter based on planar lightwave circuit (PLC) for fiber-to-the premise (FTTP) applications is described. A multi-mode interference (MMI) device is used to filter the up-stream 1310nm signal from the down-stream 1490nm and 1555nm signals. An array waveguide grating (AWG) device performs the dense WDM function by further separating the two down-stream signals. The MMI and AWG are built on the same substrate with monolithic integration. The design is validated by simulation, which shows excellent performance in terms of filter spectral characteristics (e.g., bandwidth, cross-talk, etc.) as well as insertion loss.

  13. An exact algorithm for optimal MAE stack filter design.

    PubMed

    Dellamonica, Domingos; Silva, Paulo J S; Humes, Carlos; Hirata, Nina S T; Barrera, Junior

    2007-02-01

    We propose a new algorithm for optimal MAE stack filter design. It is based on three main ingredients. First, we show that the dual of the integer programming formulation of the filter design problem is a minimum cost network flow problem. Next, we present a decomposition principle that can be used to break this dual problem into smaller subproblems. Finally, we propose a specialization of the network Simplex algorithm based on column generation to solve these smaller subproblems. Using our method, we were able to efficiently solve instances of the filter problem with window size up to 25 pixels. To the best of our knowledge, this is the largest dimension for which this problem was ever solved exactly.
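
    For readers unfamiliar with stack filters, the sketch below shows the defining construction (threshold decomposition plus a positive Boolean function, here a window-3 majority, which reproduces the running median); it illustrates the filter class, not the MAE design algorithm of the paper.

```python
import numpy as np

def stack_filter_median3(x, levels):
    """Window-3 running median realized as a stack filter."""
    x = np.asarray(x)
    xp = np.pad(x, 1, mode="edge")
    out = np.zeros_like(x)
    for t in range(1, levels + 1):
        b = (xp >= t).astype(int)                            # threshold decomposition
        maj = (b[:-2] + b[1:-1] + b[2:] >= 2).astype(int)    # positive Boolean function (majority)
        out += maj                                           # stacking: sum over threshold levels
    return out

x = np.array([3, 7, 2, 5, 5, 1, 6, 6, 0, 4])
print(stack_filter_median3(x, levels=7))
print(np.array([np.median(np.pad(x, 1, mode="edge")[i:i + 3]) for i in range(len(x))]).astype(int))
```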

  14. An ultra-low-power filtering technique for biomedical applications.

    PubMed

    Zhang, Tan-Tan; Mak, Pui-In; Vai, Mang-I; Mak, Peng-Un; Wan, Feng; Martins, R P

    2011-01-01

    This paper describes an ultra-low-power filtering technique for biomedical applications, designated for T-wave sensing in heart-activity detection systems. The topology is based on a source-follower-based Biquad operating in the sub-threshold region. With the intrinsic advantages of simplicity and high linearity of the source-follower, ultra-low-cutoff filtering can be achieved, simultaneously with ultra low power and good linearity. An 8th-order, 2.4-Hz lowpass filter design example, optimized in a 0.35-μm CMOS process, achieves over 85-dB dynamic range and 74-dB stopband attenuation while consuming only 0.36 nW at a 3-V supply.

  15. Design of a terahertz photonic crystal transmission filter containing ferroelectric material.

    PubMed

    King, Tzu-Chyang; Chen, Jian-Jie; Chang, Kai-Chun; Wu, Chien-Jang

    2016-10-10

    The ferroelectric material KTaO3 (KTO) has a very high refractive index, which is advantageous to the photonic crystal (PC) design. Polycrystalline KTO has a high extinction coefficient. In this work, we perform a theoretical study of the transmission properties of a PC bandpass filter made of polycrystalline KTO at terahertz (THz) frequencies. Our results show that the defect modes of usual PC narrowband filters no longer exist because of the high loss. We provide a new PC structure for high-extinction materials and show that it has defect modes in its transmittance spectra, providing a possible bandpass filter design in the THz region.

  16. Photonic compressed sensing nyquist folding receiver

    DTIC Science & Technology

    2017-09-01

    Excerpt: "...filter. Two independent photonic receiver architectures are designed and analyzed over the course of this research. Both receiver designs are... undersamples the signals using an optical modulator configuration at 1550 nm and collects the detected samples in a low-pass interpolation filter..." (followed by a fragment of the report's abbreviation list: EW, FM, LNA, LPF, MZI, NYFR).

  17. Evolution of an interfacial crack on the concrete-embankment boundary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glascoe, Lee; Antoun, Tarabay; Kanarska, Yuliya

    2013-07-10

    Failure of a dam can have subtle beginnings. A small crack or dislocation at the interface of the concrete dam and the surrounding embankment soil initiated by, for example, a seismic or an explosive event can lead to a catastrophic failure of the dam. The dam may ‘self-rehabilitate’ if a properly designed granular filter is engineered around the embankment. Currently, the design criteria for such filters have only been based on experimental studies. We demonstrate the numerical prediction of filter effectiveness at the soil grain scale. This joint LLNL-ERDC basic research project, funded by the Department of Homeland Security’s Science and Technology Directorate (DHS S&T), consists of validating advanced high performance computer simulations of soil erosion and transport of grain- and dam-scale models to detailed centrifuge and soil erosion tests. Validated computer predictions highlight that a resilient filter is consistent with the current design specifications for dam filters. These predictive simulations, unlike the design specifications, can be used to assess filter success or failure under different soil or loading conditions and can lead to meaningful estimates of the timing and nature of full-scale dam failure.

  18. A Unified Fisher's Ratio Learning Method for Spatial Filter Optimization.

    PubMed

    Li, Xinyang; Guan, Cuntai; Zhang, Haihong; Ang, Kai Keng

    To detect the mental task of interest, spatial filtering has been widely used to enhance the spatial resolution of electroencephalography (EEG). However, the effectiveness of spatial filtering is undermined due to the significant nonstationarity of EEG. Based on regularization, most of the conventional stationary spatial filter design methods address the nonstationarity at the cost of the interclass discrimination. Moreover, spatial filter optimization is inconsistent with feature extraction when EEG covariance matrices could not be jointly diagonalized due to the regularization. In this paper, we propose a novel framework for a spatial filter design. With Fisher's ratio in feature space directly used as the objective function, the spatial filter optimization is unified with feature extraction. Given its ratio form, the selection of the regularization parameter could be avoided. We evaluate the proposed method on a binary motor imagery data set of 16 subjects, who performed the calibration and test sessions on different days. The experimental results show that the proposed method yields improvement in classification performance for both single broadband and filter bank settings compared with conventional nonunified methods. We also provide a systematic attempt to compare different objective functions in modeling data nonstationarity with simulation studies.
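
    The underlying criterion can be made concrete with the classical generalized-eigenvalue form of a Fisher-like ratio in channel space (maximize w^T S_b w / w^T S_a w over spatial filters w); this is a bare-bones stand-in, not the paper's unified feature-space formulation, and the toy EEG data below are simulated.

```python
import numpy as np
from scipy.linalg import eigh
rng = np.random.default_rng(0)

C, T, trials = 8, 200, 40                        # channels, samples per trial, trials per class
def simulate(scale):                             # toy EEG: channel 0 gains power in class b
    X = rng.normal(size=(trials, C, T))
    X[:, 0, :] *= scale
    return X

Xa, Xb = simulate(1.0), simulate(2.0)
Sa = np.mean([x @ x.T / T for x in Xa], axis=0)  # class-average spatial covariance matrices
Sb = np.mean([x @ x.T / T for x in Xb], axis=0)

vals, vecs = eigh(Sb, Sa)                        # generalized eigenproblem Sb w = lambda Sa w
w = vecs[:, -1]                                  # spatial filter maximizing the power ratio
print(np.round(w / np.abs(w).max(), 2))          # dominated by channel 0, as expected
```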

  19. The Bellevue Classification System: nursing's voice upon the library shelves*†

    PubMed Central

    Mages, Keith C

    2011-01-01

    This article examines the inspiration, construction, and meaning of the Bellevue Classification System (BCS), created during the 1930s for use in the Bellevue School of Nursing Library. Nursing instructor Ann Doyle, with assistance from librarian Mary Casamajor, designed the BCS after consulting with library leaders and examining leading contemporary classification systems, including the Dewey Decimal Classification and Library of Congress, Ballard, and National Health Library classification systems. A close textual reading of the classes, subclasses, and subdivisions of these classification systems against those of the resulting BCS, reveals Doyle's belief that the BCS was created not only to organize the literature, but also to promote the burgeoning intellectualism and professionalism of early twentieth-century American nursing. PMID:21243054

  20. Design considerations for near-infrared filter photometry: effects of noise sources and selectivity.

    PubMed

    Tarumi, Toshiyasu; Amerov, Airat K; Arnold, Mark A; Small, Gary W

    2009-06-01

    Optimal filter design of two-channel near-infrared filter photometers is investigated for simulated two-component systems consisting of an analyte and a spectrally overlapping interferent. The degree of overlap between the analyte and interferent bands is varied over three levels. The optimal design is obtained for three cases: a source or background flicker noise limited case, a shot noise limited case, and a detector noise limited case. Conventional photometers consist of narrow-band optical filters with their bands located at discrete wavelengths. However, the use of broadband optical filters with overlapping responses has been proposed to obtain as much signal as possible from a weak and broad analyte band typical of near-infrared absorptions. One question regarding the use of broadband optical filters with overlapping responses is the selectivity achieved by such filters. The selectivity of two-channel photometers is evaluated on the basis of the angle between the analyte and interferent vectors in the space spanned by the relative change recorded for each of the two detector channels. This study shows that for the shot noise limited or detector noise limited cases, the slight decrease in selectivity with the use of broadband optical filters can be compensated by the higher signal-to-noise ratio afforded by the use of such filters. For the source noise limited case, the best quantitative results are obtained with the use of narrow-band non-overlapping optical filters.
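
    The selectivity measure described (the angle between the analyte and interferent response vectors of a two-channel photometer) is a one-line computation; the response numbers below are hypothetical.

```python
import numpy as np

def selectivity_angle(analyte_resp, interferent_resp):
    """Angle in degrees between two-channel response vectors; 90 deg = fully selective."""
    a, b = np.asarray(analyte_resp, float), np.asarray(interferent_resp, float)
    cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

print(selectivity_angle([0.80, 0.20], [0.35, 0.65]))   # broad, overlapping filters
print(selectivity_angle([1.00, 0.02], [0.03, 1.00]))   # narrow, non-overlapping filters
```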

  1. Giardia and Drinking Water from Private Wells

    MedlinePlus

    ... boiling water is using a point-of-use filter. Not all home water filters remove Giardia . Filters that are designed to remove the parasite should ... learn more, visit CDC’s A Guide to Water Filters page. As you consider ways to disinfect your ...

  2. 50 CFR 216.93 - Tracking and verification program.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... canning company in the 50 states, Puerto Rico, or American Samoa receives a domestic or imported shipment... short tons to the fourth decimal, ocean area of capture (ETP, western Pacific, Indian, eastern and...

  3. 50 CFR 216.93 - Tracking and verification program.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... canning company in the 50 states, Puerto Rico, or American Samoa receives a domestic or imported shipment... short tons to the fourth decimal, ocean area of capture (ETP, western Pacific, Indian, eastern and...

  4. Reasoning strategies with rational numbers revealed by eye tracking.

    PubMed

    Plummer, Patrick; DeWolf, Melissa; Bassok, Miriam; Gordon, Peter C; Holyoak, Keith J

    2017-07-01

    Recent research has begun to investigate the impact of different formats for rational numbers on the processes by which people make relational judgments about quantitative relations. DeWolf, Bassok, and Holyoak (Journal of Experimental Psychology: General, 144(1), 127-150, 2015) found that accuracy on a relation identification task was highest when fractions were presented with countable sets, whereas accuracy was relatively low for all conditions where decimals were presented. However, it is unclear what processing strategies underlie these disparities in accuracy. We report an experiment that used eye-tracking methods to externalize the strategies that are evoked by different types of rational numbers for different types of quantities (discrete vs. continuous). Results showed that eye-movement behavior during the task was jointly determined by image and number format. Discrete images elicited a counting strategy for both fractions and decimals, but this strategy led to higher accuracy only for fractions. Continuous images encouraged magnitude estimation and comparison, but to a greater degree for decimals than fractions. This strategy led to decreased accuracy for both number formats. By analyzing participants' eye movements when they viewed a relational context and made decisions, we were able to obtain an externalized representation of the strategic choices evoked by different ontological types of entities and different types of rational numbers. Our findings using eye-tracking measures enable us to go beyond previous studies based on accuracy data alone, demonstrating that quantitative properties of images and the different formats for rational numbers jointly influence strategies that generate eye-movement behavior.

  5. CORRIGENDUM: Universal variational expansion for high-precision bound-state calculations in three-body systems. Applications to weakly bound, adiabatic and two-shell cluster systems

    NASA Astrophysics Data System (ADS)

    Bailey, David H.; Frolov, Alexei M.

    2003-12-01

    Since the above paper was published we have received a suggestion from T K Rebane that our variational energy, -402.261 928 652 266 220 998 au, for the 3S(L = 0) state from table 4 (right-hand column) is wrong in the fourth and fifth decimal digits. Our original variational energies were E(2000) = -402.192 865 226 622 099 583 au and E(3000) = -402.192 865 226 622 099 838 au. Unfortunately, table 4 contains a simple typographic error. The first two digits after the decimal point (26) in the published energies must be removed. Then the results exactly coincide with the original energies. These digits (26) were left in table 4 from the original version, which also included the 2S(L = 0) states of the helium-muonic atoms. A similar typographic error was found in table 4 of another paper by A M Frolov (2001 J. Phys. B: At. Mol. Opt. Phys. 34 3813). The computed ground state energy for the ppµ muonic molecular ion was -0.494 386 820 248 934 546 94 mau. In table 4 of that paper the first figure '8' (fifth digit after the decimal point) was lost from the energy value presented in this table. We wish to thank T K Rebane of the Fock Physical Institute in St Petersburg for pointing out the misprint related to the helium(4)-muonic atom.

  6. The Renormalization Group and Its Applications to Generating Coarse-Grained Models of Large Biological Molecular Systems.

    PubMed

    Koehl, Patrice; Poitevin, Frédéric; Navaza, Rafael; Delarue, Marc

    2017-03-14

    Understanding the dynamics of biomolecules is the key to understanding their biological activities. Computational methods ranging from all-atom molecular dynamics simulations to coarse-grained normal-mode analyses based on simplified elastic networks provide a general framework to studying these dynamics. Despite recent successes in studying very large systems with up to a 100,000,000 atoms, those methods are currently limited to studying small- to medium-sized molecular systems due to computational limitations. One solution to circumvent these limitations is to reduce the size of the system under study. In this paper, we argue that coarse-graining, the standard approach to such size reduction, must define a hierarchy of models of decreasing sizes that are consistent with each other, i.e., that each model contains the information of the dynamics of its predecessor. We propose a new method, Decimate, for generating such a hierarchy within the context of elastic networks for normal-mode analysis. This method is based on the concept of the renormalization group developed in statistical physics. We highlight the details of its implementation, with a special focus on its scalability to large systems of up to millions of atoms. We illustrate its application on two large systems, the capsid of a virus and the ribosome translation complex. We show that highly decimated representations of those systems, containing down to 1% of their original number of atoms, still capture qualitatively and quantitatively their dynamics. Decimate is available as an OpenSource resource.

  7. Virtual screening filters for the design of type II p38 MAP kinase inhibitors: a fragment based library generation approach.

    PubMed

    Badrinarayan, Preethi; Sastry, G Narahari

    2012-04-01

    In this work, we introduce the development and application of a three-step scoring and filtering procedure for the design of type II p38 MAP kinase leads using allosteric fragments extracted from virtual screening hits. The design of the virtual screening filters is based on a thorough evaluation of docking methods, DFG-loop conformation, binding interactions and chemotype specificity of the 138 p38 MAP kinase inhibitors from Protein Data Bank bound to DFG-in and DFG-out conformations using Glide, GOLD and CDOCKER. A 40 ns molecular dynamics simulation with the apo, type I with DFG-in and type II with DFG-out forms was carried out to delineate the effects of structural variations on inhibitor binding. The designed docking-score and sub-structure filters were first tested on a dataset of 249 potent p38 MAP kinase inhibitors from seven diverse series and 18,842 kinase inhibitors from PDB, to gauge their capacity to discriminate between kinase and non-kinase inhibitors and likewise to selectively filter-in target-specific inhibitors. The designed filters were then applied in the virtual screening of a database of ten million (10⁷) compounds resulting in the identification of 100 hits. Based on their binding modes, 98 allosteric fragments were extracted from the hits and a fragment library was generated. New type II p38 MAP kinase leads were designed by tailoring the existing type I ATP site binders with allosteric fragments using a common urea linker. Target specific virtual screening filters can thus be easily developed for other kinases based on this strategy to retrieve target selective compounds. Copyright © 2012 Elsevier Inc. All rights reserved.

  8. Design certification tests: High Pressure Oxygen Filter (HPOF) program. Summary report

    NASA Technical Reports Server (NTRS)

    Smith, I. D.

    1976-01-01

    Design and acceptance certification test procedures and results are presented for a high pressure oxygen filter developed to protect the sealing surfaces in emergency oxygen systems. Equipment specifications are included.

  9. Solid state electro-optic color filter and iris

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A pair of solid state electro-optic filters (SSEF) in a binocular holder were designed and fabricated for evaluation of field sequential stereo TV applications. The electronic circuitry for use with the stereo goggles was designed and fabricated, requiring only an external video input. A polarizing screen suitable for attachment to various size TV monitors for use in conjunction with the stereo goggles was designed and fabricated. An improved engineering model 2 filter was fabricated using the bonded holder technique developed previously and integrated to a GCTA color TV camera. An engineering model color filter was fabricated and assembled using PLZT control elements. In addition, a ruggedized holder assembly was designed, fabricated and tested. This assembly provides electrical contacts, high voltage protection, and support for the fragile PLZT disk, and also permits mounting and optical alignment of the associated polarizers.

  10. Evaluation of beam delivery and ripple filter design for non-isocentric proton and carbon ion therapy.

    PubMed

    Grevillot, L; Stock, M; Vatnitsky, S

    2015-10-21

    This study aims at selecting and evaluating a ripple filter design compatible with non-isocentric proton and carbon ion scanning beam treatment delivery for a compact nozzle. The use of non-isocentric treatments, in which the patient is shifted as close as possible towards the nozzle exit, allows for a reduction in the air gap and thus an improvement in the quality of scanning proton beam treatment delivery. Reducing the air gap is less important for scanning carbon ions, but ripple filters are still necessary for scanning carbon ion beams to reduce the number of energy steps required to deliver a homogeneous spread-out Bragg peak (SOBP). The proper selection of ripple filters also allows a reduction in the possible transverse and depth-dose inhomogeneities that could appear in non-isocentric conditions in particular. A thorough review of existing ripple filter designs over the past 16 years is performed and a design for non-isocentric treatment delivery is presented. A unique ripple filter quality index (QIRiFi), independent of the particle type and energy and representative of the ratio between energy modulation and induced scattering, is proposed. The Bragg peak width evaluated at the 80% dose level (BPW80) is proposed to relate the energy modulation of the delivered Bragg peaks and the energy layer step size allowing the production of a homogeneous SOBP. Gate/Geant4 Monte Carlo simulations have been validated for carbon ion and ripple filter simulations based on measurements performed at CNAO and subsequently used for a detailed analysis of the proposed ripple filter design. A combination of two ripple filters in series has been validated for non-isocentric delivery and did not show significant transverse and depth-dose inhomogeneities. Non-isocentric conditions allow a significant reduction in the spot size at the patient entrance (up to 350% and 200% for protons and carbon ions with range shifter, respectively), and therefore in the lateral penumbra in the patients.
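
    As an illustration of the BPW80 metric, the width of a depth-dose curve at the 80% dose level can be computed as sketched below. This is a sketch under assumed sampling and interpolation conventions, not the authors' analysis code, and the toy Bragg-like curve is hypothetical.

        # Illustrative sketch: Bragg peak width at the 80% dose level (BPW80),
        # using linear interpolation of the proximal and distal 80% crossings.
        import numpy as np

        def bpw80(depth_cm, dose):
            dose = np.asarray(dose, dtype=float)
            depth = np.asarray(depth_cm, dtype=float)
            level = 0.8 * dose.max()
            above = dose >= level
            i_first = np.argmax(above)
            i_last = len(dose) - 1 - np.argmax(above[::-1])
            # interpolate the proximal (rising) and distal (falling) crossings
            d_prox = np.interp(level, dose[i_first - 1:i_first + 1], depth[i_first - 1:i_first + 1])
            d_dist = np.interp(level, dose[i_last + 1:i_last - 1:-1], depth[i_last + 1:i_last - 1:-1])
            return d_dist - d_prox

        # toy Bragg-like curve: slow build-up plus a narrow peak (hypothetical values)
        z = np.linspace(0, 4.5, 500)                       # depth in cm
        d = 0.3 + 0.2 * z + np.exp(-((z - 4.0) / 0.08) ** 2)
        print(f"BPW80 = {bpw80(z, d):.3f} cm")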

  11. Cryogenic metal mesh bandpass filters for submillimeter astronomy

    NASA Technical Reports Server (NTRS)

    Dragovan, M.

    1984-01-01

    The design and performance of a tunable double-half-wave bandpass filter centered at 286 μm (Δλ/λ = 0.16) and operating at cryogenic temperatures (for astronomy applications) are presented. The operating principle is explained, and the fabrication of the device, which comprises two identical mutually coupled Fabry-Perot filters with electroformed Ni-mesh reflectors and is tuned by means of variable spacers, is described. A drawing of the design and graphs of computed and measured performance are provided. Significantly improved bandpass characteristics are obtained relative to the single Fabry-Perot filter.

  12. Design of tunable thermo-optic C-band filter based on coated silicon slab

    NASA Astrophysics Data System (ADS)

    Pinhas, Hadar; Malka, Dror; Danan, Yossef; Sinvani, Moshe; Zalevsky, Zeev

    2018-03-01

    Optical filters are required to have narrow band-pass filtering in the spectral C-band for applications such as signal tracking, sub-band filtering or noise suppression. These requirements have led to a variety of filters, such as silica Mach-Zehnder interferometer interleavers, which exploit the thermo-optic effect for optical switching but lack adequate thermal and optical efficiency. In this paper we propose a tunable thermo-optic filtering device based on a coated silicon slab resonator with increased Q-factor for C-band optical switching. The device can be designed either for long-range wavelength tuning or for short-range tuning with increased wavelength resolution. A theoretical examination of the thermal parameters affecting the filtering process is presented together with experimental results. Proper channel isolation with an extinction ratio of 20 dB is achieved with a spectral bandpass width of 0.07 nm.

  13. Filter Leaf. Operational Control Tests for Wastewater Treatment Facilities. Instructor's Manual [and] Student Workbook.

    ERIC Educational Resources Information Center

    Wooley, John F.

    In the operation of vacuum filters and belt filters, it is desirable to evaluate the performance of different types of filter media and conditioning processes. The filter leaf test, which is used to evaluate these items, is described. Designed for individuals who have completed National Pollutant Discharge Elimination System (NPDES) level 1…

  14. Diatomite Type Filters for Swimming Pools. Standard No. 9, Revised October, 1966.

    ERIC Educational Resources Information Center

    National Sanitation Foundation, Ann Arbor, MI.

    Pressure and vacuum diatomite type filters are covered in this standard. The filters herein described are intended to be designed and used specifically for swimming pool water filtration, both public and residential. Included are the basic components which are a necessary part of the diatomite type filter such as filter housing, element supports,…

  15. Quick-Change Optical-Filter Holder

    NASA Technical Reports Server (NTRS)

    Leone, Peter

    1988-01-01

    Dark slide and interlock protect against ambient light. The quick-change filter holder contains an interlocking mechanism preventing simultaneous removal of both the dark slide and the filter drawer. Designed for use with bandpass optical filters in 10 channels leading to photomultiplier tubes in a water-vapor lidar/ozone instrument, the mechanism can be modified to operate in other optical systems requiring quick changes of optical filters.

  16. Genetically Engineered Microelectronic Infrared Filters

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Klimeck, Gerhard

    1998-01-01

    A genetic algorithm is used for the design of infrared filters and in the understanding of the material structure of a resonant tunneling diode. These two components are examples of microdevices and nanodevices that can be numerically simulated using fundamental mathematical and physical models. Because the number of parameters that can be used in the design of one of these devices is large, and because experimental exploration of the design space is infeasible, reliable software models integrated with global optimization methods are examined. The genetic algorithm and engineering design codes have been implemented on massively parallel computers to exploit their high performance. Design results are presented for the infrared filter showing a new and optimized device design. Results for nanodevices are presented in a companion paper at this workshop.
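
    A minimal sketch of the kind of genetic-algorithm loop described above, applied to a vector of filter layer thicknesses. The merit function here is a hypothetical placeholder; the referenced work couples the optimizer to full electromagnetic device simulations on parallel hardware.

        # Minimal GA sketch for filter-stack design (illustrative only); `merit`
        # is a hypothetical placeholder, not the referenced simulation code.
        import numpy as np

        rng = np.random.default_rng(1)
        N_LAYERS, POP, GENS = 8, 40, 60
        LO, HI = 50.0, 500.0            # layer thickness bounds in nm (assumed)

        def merit(thicknesses):
            # placeholder objective: pretend the target is a known thickness profile
            target = np.linspace(100, 400, N_LAYERS)
            return -np.sum((thicknesses - target) ** 2)

        pop = rng.uniform(LO, HI, size=(POP, N_LAYERS))
        for _ in range(GENS):
            fit = np.array([merit(ind) for ind in pop])
            # tournament selection
            parents = pop[[max(rng.choice(POP, 2), key=lambda i: fit[i]) for _ in range(POP)]]
            # single-point crossover
            cut = rng.integers(1, N_LAYERS, size=POP // 2)
            children = parents.copy()
            for k, c in enumerate(cut):
                a, b = 2 * k, 2 * k + 1
                children[a, c:], children[b, c:] = parents[b, c:], parents[a, c:].copy()
            # Gaussian mutation, clipped to bounds
            children += rng.normal(0, 10.0, children.shape) * (rng.random(children.shape) < 0.1)
            pop = np.clip(children, LO, HI)

        best = max(pop, key=merit)
        print("best merit:", merit(best))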

  17. Design of surface acoustic wave filters for the multiplex transmission system of multilevel inverter circuits

    NASA Astrophysics Data System (ADS)

    Kubo, Keita; Kanai, Nanae; Kobayashi, Fumiya; Goka, Shigeyoshi; Wada, Keiji; Kakio, Shoji

    2017-07-01

    We designed surface acoustic wave (SAW) filters for a multiplex transmission system of multilevel inverter circuits, and applied them to a single-phase three-level inverter. To reduce the transmission delay time of the SAW filters, a four-channel SAW filter array was fabricated and its characteristics were measured. The delay time of the SAW filters was <350 ns, and the delay time difference was reduced to ≤184 ns, less than half that previously reported. The SAW filters withstood up to 990 V, which is sufficient for the inverters used in most domestic appliances. A single-phase three-level inverter with the fabricated SAW filters worked with a total delay time shorter than our target delay time of 2.5 µs. The delay time difference of the proposed system was 0.26 µs, which is sufficient for preventing the inverter circuit from short-circuiting. The SAW filters controlled a multilevel inverter system with simple signal wiring and high dielectric withstanding voltages.

  18. Spectroscopic imaging using acousto-optic tunable filters

    NASA Astrophysics Data System (ADS)

    Bouhifd, Mounir; Whelan, Maurice

    2007-07-01

    We report on novel hyper-spectral imaging filter-modules based on acousto-optic tuneable filters (AOTF). The AOTF functions as a full-field tuneable bandpass filter which offers fast continuous or random access tuning with high filtering efficiency. Due to the diffractive nature of the device, the unfiltered zero-order and the filtered first-order images are geometrically separated. The modules developed exploit this feature to simultaneously route both the transmitted white-light image and the filtered fluorescence image to two separate cameras. Incorporation of prisms in the optical paths and careful design of the relay optics in the filter module have overcome a number of aberrations inherent to imaging through AOTFs, leading to excellent spatial resolution. A number of practical uses of this technique, both for in vivo auto-fluorescence endoscopy and in vitro fluorescence microscopy were demonstrated. We describe the operational principle and design of recently improved prototype instruments for fluorescence-based diagnostics and demonstrate their performance by presenting challenging hyper-spectral fluorescence imaging applications.

  19. Coplanar Waveguide Radial Line Double Stub and Application to Filter Circuits

    NASA Technical Reports Server (NTRS)

    Simons, R. N.; Taub, S. R.

    1993-01-01

    Coplanar waveguide (CPW) and grounded coplanar waveguide (GCPW) radial line double stub resonators are experimentally characterized with respect to stub radius and sector angle. A simple closed-form design equation, which predicts the resonance radius of the stub, is presented. Use of a double stub resonator as a lowpass filter or as a harmonic suppression filter is demonstrated, and design rules are given.

  20. Air-Gapped Structures as Magnetic Elements for Use in Power Processing Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Ohri, A. K.

    1977-01-01

    Methodical approaches to the design of inductors for use in LC filters and dc-to-dc converters using air gapped magnetic structures are presented. Methods for the analysis and design of full wave rectifier LC filter circuits operating with the inductor current in both the continuous conduction and the discontinuous conduction modes are also described. In the continuous conduction mode, linear circuit analysis techniques are employed, while in the case of the discontinuous mode, the method of analysis requires computer solutions of the piecewise linear differential equations which describe the filter in the time domain. Procedures for designing filter inductors using air gapped cores are presented. The first procedure requires digital computation to yield a design which is optimized in the sense of minimum core volume and minimum number of turns. The second procedure does not yield an optimized design as defined above, but the design can be obtained by hand calculations or with a small calculator. The third procedure is based on the use of specially prepared magnetic core data and provides an easy way to quickly reach a workable design.
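
    For context, the magnetic-circuit (reluctance) approximation that underlies gapped-core inductor sizing can be sketched as below; the component values are illustrative and not taken from the thesis.

        # Magnetic-circuit approximation for a gapped-core inductor:
        # L = N^2 / (R_core + R_gap), fringing neglected. Values are illustrative.
        import math

        MU0 = 4e-7 * math.pi          # H/m

        def gapped_inductance(turns, core_len_m, gap_len_m, area_m2, mu_r):
            r_core = core_len_m / (MU0 * mu_r * area_m2)   # core reluctance
            r_gap = gap_len_m / (MU0 * area_m2)            # air-gap reluctance
            return turns ** 2 / (r_core + r_gap)

        def peak_flux_density(turns, i_peak, core_len_m, gap_len_m, area_m2, mu_r):
            # B = N * I / (A * total reluctance); compare against core saturation
            r_total = core_len_m / (MU0 * mu_r * area_m2) + gap_len_m / (MU0 * area_m2)
            return turns * i_peak / (area_m2 * r_total)

        L = gapped_inductance(turns=60, core_len_m=0.10, gap_len_m=1e-3, area_m2=1e-4, mu_r=2000)
        B = peak_flux_density(60, 5.0, 0.10, 1e-3, 1e-4, 2000)
        print(f"L = {L*1e3:.2f} mH, B_peak = {B:.2f} T")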

  1. A reflective-type, quasi-optical metasurface filter

    NASA Astrophysics Data System (ADS)

    Sima, Boyu; Momeni Hasan Abadi, Seyed Mohamad Amin; Behdad, Nader

    2017-08-01

    We introduce a new technique for designing quasi-optical, reflective-type spatial filters. The proposed filter is a reflective metasurface with a one dimensional, frequency-dependent phase gradient along the aperture. By careful design of each unit cell of the metasurface, the phase shift gradient provided by the adjacent unit cells can be engineered to steer the beam towards a desired, anomalous reflection direction over the passband region of the filter. Outside of that range, the phase shift gradient required to produce the anomalous reflection is not present and hence, the wave is reflected towards the specular reflection direction. This way, the metasurface acts as a reflective filter in a quasi-optical system where the detector is placed along the direction of anomalous reflection. The spectral selectivity of this filter is determined by the frequency dispersion of the metasurface's phase response. Based on this principle, a prototype of the proposed metasurface filter, which operates at 10 GHz and has a bandwidth of 3%, is designed. The device is modeled using a combination of theoretical analysis using the phased-array theory and full-wave electromagnetic simulations. A prototype of this device is also fabricated and characterized using a free-space measurement system. Experimental results agree well with the simulations.
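
    The frequency-dependent steering described above can be illustrated with the generalized law of reflection, sin(theta_r) = sin(theta_i) + (lambda/2*pi)(dPhi/dx). The sketch below assumes a linear, frequency-independent phase gradient, which is a simplification of the dispersive metasurface in the paper; parameter values are illustrative.

        # Illustrative sketch: anomalous reflection angle of a phase-gradient surface
        # from the generalized law of reflection. The constant gradient assumed here
        # is a simplification; in the reported filter it exists only in the passband.
        import math

        C = 3.0e8  # m/s

        def anomalous_angle_deg(freq_hz, theta_i_deg, dphi_dx_rad_per_m):
            lam = C / freq_hz
            s = math.sin(math.radians(theta_i_deg)) + lam * dphi_dx_rad_per_m / (2 * math.pi)
            if abs(s) > 1:
                return None          # no propagating anomalous order
            return math.degrees(math.asin(s))

        # Example: gradient chosen so a normally incident 10 GHz wave reflects at ~45 deg.
        dphi_dx = 2 * math.pi * math.sin(math.radians(45)) / (C / 10e9)   # rad/m
        for f in (8e9, 10e9, 12e9):
            print(f"{f/1e9:.0f} GHz -> {anomalous_angle_deg(f, 0.0, dphi_dx)} deg")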

  2. Efficient and Accurate Optimal Linear Phase FIR Filter Design Using Opposition-Based Harmony Search Algorithm

    PubMed Central

    Saha, S. K.; Dutta, R.; Choudhury, R.; Kar, R.; Mandal, D.; Ghoshal, S. P.

    2013-01-01

    In this paper, opposition-based harmony search has been applied for the optimal design of linear phase FIR filters. RGA, PSO, and DE have also been adopted for the sake of comparison. The original harmony search algorithm is chosen as the parent one, and opposition-based approach is applied. During the initialization, randomly generated population of solutions is chosen, opposite solutions are also considered, and the fitter one is selected as a priori guess. In harmony memory, each such solution passes through memory consideration rule, pitch adjustment rule, and then opposition-based reinitialization generation jumping, which gives the optimum result corresponding to the least error fitness in multidimensional search space of FIR filter design. Incorporation of different control parameters in the basic HS algorithm results in the balancing of exploration and exploitation of search space. Low pass, high pass, band pass, and band stop FIR filters are designed with the proposed OHS and other aforementioned algorithms individually for comparative optimization performance. A comparison of simulation results reveals the optimization efficacy of the OHS over the other optimization techniques for the solution of the multimodal, nondifferentiable, nonlinear, and constrained FIR filter design problems. PMID:23844390

  3. Efficient and accurate optimal linear phase FIR filter design using opposition-based harmony search algorithm.

    PubMed

    Saha, S K; Dutta, R; Choudhury, R; Kar, R; Mandal, D; Ghoshal, S P

    2013-01-01

    In this paper, opposition-based harmony search has been applied for the optimal design of linear phase FIR filters. RGA, PSO, and DE have also been adopted for the sake of comparison. The original harmony search algorithm is chosen as the parent one, and opposition-based approach is applied. During the initialization, randomly generated population of solutions is chosen, opposite solutions are also considered, and the fitter one is selected as a priori guess. In harmony memory, each such solution passes through memory consideration rule, pitch adjustment rule, and then opposition-based reinitialization generation jumping, which gives the optimum result corresponding to the least error fitness in multidimensional search space of FIR filter design. Incorporation of different control parameters in the basic HS algorithm results in the balancing of exploration and exploitation of search space. Low pass, high pass, band pass, and band stop FIR filters are designed with the proposed OHS and other aforementioned algorithms individually for comparative optimization performance. A comparison of simulation results reveals the optimization efficacy of the OHS over the other optimization techniques for the solution of the multimodal, nondifferentiable, nonlinear, and constrained FIR filter design problems.
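
    A sketch of the opposition-based initialization step only (not the full opposition-based harmony search with memory consideration and pitch adjustment): for each random coefficient vector x in [lo, hi], the opposite vector lo + hi - x is also evaluated and the fitter of the two is kept. A least-squares error against an ideal lowpass response stands in here for the error fitness; all parameters are illustrative.

        # Opposition-based initialization sketch for FIR coefficient search
        # (illustrative; not the full OHS algorithm from the paper).
        import numpy as np

        rng = np.random.default_rng(0)
        N_TAPS, POP = 21, 30
        LO, HI = -1.0, 1.0
        w = np.linspace(0, np.pi, 256)
        ideal = (w <= 0.4 * np.pi).astype(float)          # ideal lowpass, cutoff 0.4*pi

        def fitness(h):
            # magnitude response |H(e^jw)| of the FIR filter with taps h
            n = np.arange(len(h))
            H = np.abs(np.exp(-1j * np.outer(w, n)) @ h)
            return -np.sum((H - ideal) ** 2)              # higher is better

        population = rng.uniform(LO, HI, size=(POP, N_TAPS))
        opposites = LO + HI - population                  # element-wise opposition
        keep_original = (np.array([fitness(x) for x in population]) >=
                         np.array([fitness(x) for x in opposites]))
        init = np.where(keep_original[:, None], population, opposites)
        print("best initial fitness:", max(fitness(x) for x in init))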

  4. Optimal Design of Passive Power Filters Based on Pseudo-parallel Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Li, Pei; Li, Hongbo; Gao, Nannan; Niu, Lin; Guo, Liangfeng; Pei, Ying; Zhang, Yanyan; Xu, Minmin; Chen, Kerui

    2017-05-01

    The economic costs together with the filter efficiency are taken as targets to optimize the parameters of the passive filter. Furthermore, a method combining a pseudo-parallel genetic algorithm with an adaptive genetic algorithm is adopted in this paper. In the early stages, the pseudo-parallel genetic algorithm is introduced to increase the population diversity, and the adaptive genetic algorithm is used in the late stages to reduce the workload. At the same time, the migration rate of the pseudo-parallel genetic algorithm is modified to change adaptively with population diversity. Simulation results show that the filter designed by the proposed method has a better filtering effect with lower economic cost, and can be used in engineering.

  5. Rugate filter for light-trapping in solar cells.

    PubMed

    Fahr, Stephan; Ulbrich, Carolin; Kirchartz, Thomas; Rau, Uwe; Rockstuhl, Carsten; Lederer, Falk

    2008-06-23

    We suggest a design for a coating that could be applied on top of any solar cell having at least one diffusing surface. This coating acts as an angle- and wavelength-selective filter, which increases the average path length and absorptance at long wavelengths without altering the solar cell performance at short wavelengths. The filter design is based on a continuous variation of the refractive index in order to minimize undesired reflection losses. Numerical procedures are used to optimize the filter for a 10 μm thick monocrystalline silicon solar cell, which lifts the efficiency above the Auger limit for unconcentrated illumination. The feasibility of fabricating such filters is also discussed, considering a finite available refractive index range.
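
    A generic sinusoidal rugate index profile illustrates the continuous-index idea; the period follows the usual Bragg condition, period = lambda0 / (2 * n_avg). The values below are assumptions and do not reproduce the paper's angle- and wavelength-selective optimization.

        # Generic sinusoidal rugate index profile (illustrative values only).
        # n(z) = n_avg + (dn/2) * sin(2*pi*z / period), period = lambda0 / (2*n_avg),
        # reflects around lambda0 without the higher-order stopbands of a discrete stack.
        import numpy as np

        def rugate_profile(lambda0_nm, n_avg, dn, n_periods, samples_per_period=50):
            period = lambda0_nm / (2.0 * n_avg)                  # Bragg condition
            z = np.linspace(0, n_periods * period, n_periods * samples_per_period)
            n = n_avg + 0.5 * dn * np.sin(2 * np.pi * z / period)
            return z, n

        z, n = rugate_profile(lambda0_nm=1100.0, n_avg=2.0, dn=0.3, n_periods=40)
        print(f"total thickness: {z[-1]:.0f} nm, index range: {n.min():.2f}-{n.max():.2f}")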

  6. Synthesis of correlation filters: a generalized space-domain approach for improved filter characteristics

    NASA Astrophysics Data System (ADS)

    Sudharsanan, Subramania I.; Mahalanobis, Abhijit; Sundareshan, Malur K.

    1990-12-01

    Discrete frequency domain design of Minimum Average Correlation Energy filters for optical pattern recognition introduces an implementational limitation of circular correlation. An alternative methodology which uses space domain computations to overcome this problem is presented. The technique is generalized to construct an improved synthetic discriminant function which satisfies the conflicting requirements of reduced noise variance and sharp correlation peaks to facilitate ease of detection. A quantitative evaluation of the performance characteristics of the new filter is conducted and is shown to compare favorably with the well known Minimum Variance Synthetic Discriminant Function and the space domain Minimum Average Correlation Energy filter, which are special cases of the present design.
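
    For context, the classical frequency-domain MACE synthesis h = D^{-1} X (X^H D^{-1} X)^{-1} c, with X holding the FFTs of the training images and D the diagonal average power spectrum, can be sketched as below; the space-domain formulation proposed in the paper is not reproduced here, and the random training images are purely illustrative.

        # Classical frequency-domain MACE filter sketch (for comparison only).
        import numpy as np

        def mace_filter(train_images):
            """train_images: array of shape (N, H, W). Returns filter in freq. domain."""
            N = train_images.shape[0]
            X = np.stack([np.fft.fft2(im).ravel() for im in train_images], axis=1)  # (HW, N)
            d = np.mean(np.abs(X) ** 2, axis=1)                   # avg power spectrum (diag of D)
            c = np.ones(N)                                        # constrained peak values
            Xd = X / d[:, None]                                   # D^{-1} X
            A = X.conj().T @ Xd                                   # X^H D^{-1} X  (N x N)
            h = Xd @ np.linalg.solve(A, c)
            return h.reshape(train_images.shape[1:])

        rng = np.random.default_rng(0)
        imgs = rng.random((5, 32, 32))
        H = mace_filter(imgs)
        # the linear constraint X^H h = c is met: for the first training image this is ~1
        value = np.real(np.vdot(H, np.fft.fft2(imgs[0])))
        print(f"constraint value for image 0: {value:.3f}")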

  7. Investigation of Dual-Mode Microstrip Bandpass Filter Based on SIR Technique

    PubMed Central

    Mezaal, Yaqeen S.; Ali, Jawad K.

    2016-01-01

    In this paper, a new bandpass filter design is presented using a simple topology of a stepped impedance square loop resonator. The proposed bandpass filter has been simulated and fabricated using a substrate with a dielectric constant of 10.8, a thickness of 1.27 mm and a loss tangent of 0.0023, at a center frequency of 5.8 GHz. The simulation results have been evaluated using the Sonnet simulator, which is extensively adopted in microwave analysis and implementation. The output frequency results demonstrate that the proposed filter has high-quality frequency responses in addition to an isolated second harmonic frequency. Besides, this filter has a very small surface area and narrowband response features that meet the requirements of recent wireless communication systems. Various filter specifications have been compared for different magnitudes of the perturbation element dimension. Furthermore, the phase scattering response and current intensity distribution of the proposed filter have been discussed. The simulated and experimental results are well matched. Lastly, the features of the proposed filter have been compared with other microstrip filters designed in the literature. PMID:27798675

  8. WE-F-16A-02: Design, Fabrication, and Validation of a 3D-Printed Proton Filter for Range Spreading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Remmes, N; Courneyea, L; Corner, S

    2014-06-15

    Purpose: To design, fabricate and test a 3D-printed filter for proton range spreading in scanned proton beams. The narrow Bragg peak in lower-energy synchrotron-based scanned proton beams can result in longer treatment times for shallow targets due to energy switching time and plan quality degradation due to minimum monitor unit limitations. A filter with variable thicknesses patterned on the same scale as the beam's lateral spot size will widen the Bragg peak. Methods: The filter consists of pyramids dimensioned to have a Gaussian distribution in thickness. The pyramids are 2.5 mm wide at the base, 0.6 mm wide at the peak, 5 mm tall, and are repeated in a 2.5 mm pseudo-hexagonal lattice. Monte Carlo simulations of the filter in a proton beam were run using TOPAS to assess the change in depth profiles and lateral beam profiles. The prototypes were constrained to a 2.5 cm diameter disk to allow for micro-CT imaging of promising prototypes. Three different 3D printers were tested. Depth-doses with and without the prototype filter were then measured in a ~70 MeV proton beam using a multilayer ion chamber. Results: The simulation results were consistent with design expectations. Prototypes printed on one printer were clearly unacceptable on visual inspection. Prototypes on a second printer looked acceptable, but the micro-CT image showed unacceptable voids within the pyramids. Prototypes from the third printer appeared acceptable visually and on micro-CT imaging. Depth dose scans using the prototype from the third printer were consistent with simulation results. Bragg peak width increased by about 3x. Conclusions: A prototype 3D printer pyramid filter for range spreading was successfully designed, fabricated and tested. The filter has greater design flexibility and lower prototyping and production costs compared to traditional ridge filters. Printer and material selection played a large role in the successful development of the filter.

  9. 50 CFR 216.93 - Tracking and verification program.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... in the 50 states, Puerto Rico, or American Samoa receives a domestic or imported shipment of ETP..., dressed, gilled and gutted, other), weight in short tons to the fourth decimal, ocean area of capture (ETP...

  10. 50 CFR 216.93 - Tracking and verification program.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... in the 50 states, Puerto Rico, or American Samoa receives a domestic or imported shipment of ETP..., dressed, gilled and gutted, other), weight in short tons to the fourth decimal, ocean area of capture (ETP...

  11. 25 CFR 169.11 - Affidavit and certificate.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... of road construction, and a certificate by the State or county engineer or other authorized State or... decimals, the line of route for which the right-of-way application is made. (b) Maps covering roads built...

  12. Design of a robust thin-film interference filter for erbium-doped fiber amplifier gain equalization

    NASA Astrophysics Data System (ADS)

    Verly, Pierre G.

    2002-06-01

    Gain-flattening filters (GFFs) are key wavelength division multiplexing components in fiber-optics telecommunications. Challenging issues in the design of thin-film GFFs were recently the subject of a contest organized at the 2001 Conference on Optical Interference Coatings. The interest and main difficulty of the proposed problem was to minimize the sensitivity of a GFF to simulated fabrication errors. A high-yield solution and its design philosophy are described. The approach used to control the filter robustness is explained and illustrated by numerical results.

  13. Design of a robust thin-film interference filter for erbium-doped fiber amplifier gain equalization.

    PubMed

    Verly, Pierre G

    2002-06-01

    Gain-flattening filters (GFFs) are key wavelength division multiplexing components in fiber-optics telecommunications. Challenging issues in the design of thin-film GFFs were recently the subject of a contest organized at the 2001 Conference on Optical Interference Coatings. The interest and main difficulty of the proposed problem was to minimize the sensitivity of a GFF to simulated fabrication errors. A high-yield solution and its design philosophy are described. The approach used to control the filter robustness is explained and illustrated by numerical results.

  14. Computational tools for multi-linked flexible structures

    NASA Technical Reports Server (NTRS)

    Lee, Gordon K. F.; Brubaker, Thomas A.; Shults, James R.

    1990-01-01

    A software module which designs and tests controllers and filters in Kalman Estimator form, based on a polynomial state-space model is discussed. The user-friendly program employs an interactive graphics approach to simplify the design process. A variety of input methods are provided to test the effectiveness of the estimator. Utilities are provided which address important issues in filter design such as graphical analysis, statistical analysis, and calculation time. The program also provides the user with the ability to save filter parameters, inputs, and outputs for future use.
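
    A textbook discrete-time Kalman filter predict/update cycle illustrates the estimator form the module designs and tests; the module's own interface and polynomial state-space model are not reproduced, and the constant-velocity tracking example is illustrative.

        # Generic discrete-time Kalman filter sketch (not the module's own code).
        import numpy as np

        def kalman_step(x, P, z, F, H, Q, R):
            # predict
            x_pred = F @ x
            P_pred = F @ P @ F.T + Q
            # update
            S = H @ P_pred @ H.T + R
            K = P_pred @ H.T @ np.linalg.inv(S)
            x_new = x_pred + K @ (z - H @ x_pred)
            P_new = (np.eye(len(x)) - K @ H) @ P_pred
            return x_new, P_new

        # constant-velocity tracking with noisy position measurements (illustrative)
        dt = 0.1
        F = np.array([[1.0, dt], [0.0, 1.0]]); H = np.array([[1.0, 0.0]])
        Q = 1e-4 * np.eye(2); R = np.array([[0.05]])
        x, P = np.zeros(2), np.eye(2)
        rng = np.random.default_rng(0)
        for k in range(50):
            true_pos = 0.5 * k * dt
            z = np.array([true_pos + rng.normal(0, 0.2)])
            x, P = kalman_step(x, P, z, F, H, Q, R)
        print("estimated [position, velocity]:", x)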

  15. CBR/TIC Filter Design and Evaluation

    DTIC Science & Technology

    2006-12-29

    Report excerpt (figure list and construction fragments): aged CK filter life predictions for CBR/TIC filters; the CBR/TIC HEPA filter uses potting adhesive, with neoprene rubber face gaskets adhered to each end of the particulate filter for airtight sealing; the element is attached with adhesive to the bottom end cover and spaced by EPDM foam compression pads placed on the top end.

  16. Different depth intermittent sand filters for laboratory treatment of synthetic wastewater with concentrations close to measured septic tank effluent.

    PubMed

    Rodgers, M; Walsh, G; Healy, M G

    2011-01-01

    The objective of this study was to apply hydraulic and chemical oxygen demand (COD) loading rates at the upper limits of the design criteria for buried sand filters to test the sand filter depth design criteria. Over a 274-day study duration, synthetic effluent with a strength of domestic wastewater was intermittently dosed onto two sand filters of 0.2 m diameter, with depths of 0.3 and 0.4 m. Hydraulic and organic carbon loading rates of 105 L m⁻² d⁻¹ and 40 g COD m⁻² d⁻¹, respectively, were applied to the filters. The filters did not clog and had good effluent removal capabilities for 274 and 190 days, respectively. However, the 0.3 m-deep filter did experience a reduced performance towards the end of the study period. In the 0.3 and 0.4 m-deep filters, the effluent COD and SS concentrations were less than 86 and 31 mg L⁻¹, respectively, and nitrification was nearly complete in both these columns. Ortho-phosphorus (PO₄-P) removal in fine sand and laterite 'upflow' filters, receiving effluent from the 0.3 m-deep filter, was 10% and 44%, respectively.

  17. A robust spatial filtering technique for multisource localization and geoacoustic inversion.

    PubMed

    Stotts, S A

    2005-07-01

    Geoacoustic inversion and source localization using beamformed data from a ship of opportunity has been demonstrated with a bottom-mounted array. An alternative approach, which lies within a class referred to as spatial filtering, transforms element level data into beam data, applies a bearing filter, and transforms back to element level data prior to performing inversions. Automation of this filtering approach is facilitated for broadband applications by restricting the inverse transform to the degrees of freedom of the array, i.e., the effective number of elements, for frequencies near or below the design frequency. A procedure is described for nonuniformly spaced elements that guarantees filter stability well above the design frequency. Monitoring energy conservation with respect to filter output confirms filter stability. Filter performance with both uniformly spaced and nonuniformly spaced array elements is discussed. Vertical (range and depth) and horizontal (range and bearing) ambiguity surfaces are constructed to examine filter performance. Examples that demonstrate this filtering technique with both synthetic data and real data are presented along with comparisons to inversion results using beamformed data. Examinations of cost functions calculated within a simulated annealing algorithm reveal the efficacy of the approach.
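
    A rough sketch of the element-to-beam-to-element round trip described above, using a plane-wave steering matrix, a bearing mask, and a pseudo-inverse to return to element-level data. The array geometry and frequencies are assumptions, and the paper's automated restriction to the array's degrees of freedom is only mimicked here by forming no more beams than elements.

        # Illustrative element -> beam -> bearing mask -> element round trip
        # (not the paper's implementation; geometry and frequency are assumed).
        import numpy as np

        rng = np.random.default_rng(0)
        N_EL, N_BEAMS, F, C = 16, 16, 300.0, 1500.0          # elements, beams, Hz, m/s
        d = 2.0                                              # element spacing, m
        x_pos = np.arange(N_EL) * d
        bearings = np.linspace(-90, 90, N_BEAMS)

        # steering matrix: columns are plane-wave replicas toward each candidate bearing
        k = 2 * np.pi * F / C
        A = np.exp(1j * k * np.outer(x_pos, np.sin(np.radians(bearings))))   # (N_EL, N_BEAMS)

        def bearing_filter(elem_data, keep_deg, width_deg=15.0):
            beams = A.conj().T @ elem_data                   # element -> beam domain
            mask = np.abs(bearings - keep_deg) <= width_deg  # bearing filter
            return np.linalg.pinv(A.conj().T) @ (beams * mask)   # back to element domain

        def plane_wave(theta_deg):
            return np.exp(1j * k * x_pos * np.sin(np.radians(theta_deg)))

        # two plane-wave sources at +20 and -50 degrees; keep only the +20 deg one
        data = plane_wave(20.0) + plane_wave(-50.0)
        filtered = bearing_filter(data, keep_deg=20.0)
        print("relative response toward the rejected -50 deg source:",
              np.abs(plane_wave(-50.0).conj() @ filtered) / N_EL)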

  18. Spectral analysis and filtering techniques in digital spatial data processing

    USGS Publications Warehouse

    Pan, Jeng-Jong

    1989-01-01

    A filter toolbox has been developed at the EROS Data Center, US Geological Survey, for retrieving or removing specified frequency information from two-dimensional digital spatial data. This filter toolbox provides capabilities to compute the power spectrum of a given data set and to design various filters in the frequency domain. Three types of filters are available in the toolbox: point filter, line filter, and area filter. Both the point and line filters employ Gaussian-type notch filters, and the area filter includes the capabilities to perform high-pass, band-pass, low-pass, and wedge filtering techniques. These filters are applied to analyze satellite multispectral scanner data, airborne visible and infrared imaging spectrometer (AVIRIS) data, gravity data, and digital elevation model (DEM) data. -from Author
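
    A sketch of a Gaussian-type notch ("point") filter applied in the two-dimensional frequency domain, the kind of operation the toolbox provides; the toolbox's actual interface is not reproduced, and the striped test image is synthetic.

        # 2-D Gaussian notch filter applied via the FFT (illustrative sketch).
        import numpy as np

        def gaussian_notch_filter(image, notch_uv, sigma=3.0):
            """Suppress the frequency at (u, v) cycles/image and its mirror."""
            rows, cols = image.shape
            u = np.fft.fftfreq(rows)[:, None] * rows           # frequency grids, cycles/image
            v = np.fft.fftfreq(cols)[None, :] * cols
            H = np.ones((rows, cols))
            for (u0, v0) in (notch_uv, (-notch_uv[0], -notch_uv[1])):
                H *= 1.0 - np.exp(-((u - u0) ** 2 + (v - v0) ** 2) / (2 * sigma ** 2))
            return np.real(np.fft.ifft2(np.fft.fft2(image) * H))

        # toy image with striping at 32 cycles per image along the row direction
        rng = np.random.default_rng(0)
        img = rng.random((256, 256)) + 0.5 * np.sin(2 * np.pi * 32 * np.arange(256) / 256)[:, None]
        clean = gaussian_notch_filter(img, notch_uv=(32.0, 0.0), sigma=2.0)
        print("stripe amplitude before/after:",
              np.abs(np.fft.fft2(img)[32, 0]) / 256**2,
              np.abs(np.fft.fft2(clean)[32, 0]) / 256**2)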

  19. Arbitrary frequency tunable radio frequency bandpass filter based on nano-patterned Permalloy coplanar waveguide (invited)

    NASA Astrophysics Data System (ADS)

    Wang, Tengxing; Rahman, B. M. Farid; Peng, Yujia; Xia, Tian; Wang, Guoan

    2015-05-01

    A well-designed coplanar waveguide (CPW) based, center-frequency-tunable bandpass filter (BPF) at 4 GHz, enabled by a patterned Permalloy (Py) thin film, has been implemented. The operating frequency of the BPF is tunable with only a DC current, without the use of any external magnetic field. An electromagnetic bandgap resonator structure is adopted in the BPF so that an external DC current can be applied between the input and output of the filter for tuning of the Py permeability. Special configurations of resonators with multiple narrow parallel sections have been considered for larger inductance tunability; the tunability of CPW transmission lines of different widths, with patterned Py thin film on top of the signal lines, is compared and measured. Py thin film patterned as bars is deposited on top of the multiple narrow parallel sections of the designed filter. No extra area is required for the designed filter configuration. The filter is measured, and the results show that its center frequency can be tuned from 4 GHz to 4.02 GHz when the DC current is increased from 0 mA to 400 mA.

  20. Destriping of Landsat MSS images by filtering techniques

    USGS Publications Warehouse

    Pan, Jeng-Jong; Chang, Chein-I

    1992-01-01

    The removal of striping noise encountered in Landsat Multispectral Scanner (MSS) images can generally be done using frequency filtering techniques. Frequency domain filtering has, however, several problems, such as storage limitations of the data required for fast Fourier transforms, ringing artifacts appearing at high-intensity discontinuities, and edge effects between adjacent filtered data sets. One way of circumventing the above difficulties is to design a spatial filter to convolve with the images. Because it is known that the striping always appears at frequencies of 1/6, 1/3, and 1/2 cycles per line, it is possible to design a simple one-dimensional spatial filter to take advantage of this a priori knowledge to cope with the above problems. The desired filter is of the finite impulse response type, which can be designed by linear programming and Remez's exchange algorithm coupled with an adaptive technique. In addition, a four-step spatial filtering technique with an appropriate adaptive approach is also presented which may be particularly useful for geometrically rectified MSS images.
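
    A sketch of such a destriping filter using the Remez exchange design available in SciPy, with stopbands centered on 1/6, 1/3, and 1/2 cycles per line. The band edges and tap count are illustrative, and the paper's linear-programming formulation and adaptive four-step procedure are not reproduced.

        # 1-D FIR destriping filter via the Remez exchange algorithm (illustrative
        # band edges and tap count; not the paper's exact design procedure).
        import numpy as np
        from scipy.signal import remez, freqz

        # band edges in cycles/sample (Nyquist = 0.5): pass/stop/pass/stop/pass/stop
        bands   = [0.0, 0.12, 0.15, 0.185, 0.215, 0.30, 0.315, 0.35, 0.38, 0.46, 0.49, 0.5]
        desired = [1, 0, 1, 0, 1, 0]
        taps = remez(numtaps=101, bands=bands, desired=desired, fs=1.0)

        w, h = freqz(taps, worN=2048, fs=1.0)
        for f0 in (1 / 6, 1 / 3, 1 / 2):
            att = 20 * np.log10(np.abs(h[np.argmin(np.abs(w - f0))]))
            print(f"attenuation at {f0:.3f} cycles/line: {att:.1f} dB")

        # to destripe an image `img`, filter each column (line direction), e.g.:
        #   destriped = np.apply_along_axis(lambda col: np.convolve(col, taps, mode='same'), 0, img)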

  1. Development of a filter to prevent infections with spore-forming bacteria in injecting drug users.

    PubMed

    Alhusein, Nour; Scott, Jenny; Kasprzyk-Hordern, Barbara; Bolhuis, Albert

    2016-12-01

    In heroin injectors, there have been a number of outbreaks caused by spore-forming bacteria, causing serious infections such as anthrax or botulism. These are, most likely, caused by injecting contaminated heroin, and our aim was to develop a filter that efficiently removes these bacteria and is also likely to be acceptable for use by people who inject drugs (i.e. quick, simple and not spoil the hit). A prototype filter was designed and different filter membranes were tested to assess the volume of liquid retained, filtration time and efficiency of the filter at removing bacterial spores. Binding of active ingredients of heroin to different types of membrane filters was determined using a highly sensitive analytical chemistry technique. Heroin samples that were tested contained up to 580 bacteria per gramme, with the majority being Bacillus spp., which are spore-forming soil bacteria. To remove these bacteria, a prototype filter was designed to fit insulin-type syringes, which are commonly used by people who inject drugs (PWIDs). Efficient filtration of heroin samples was achieved by combining a prefilter to remove particles and a 0.22 μm filter to remove bacterial spores. The most suitable membrane was polyethersulfone (PES). This membrane had the shortest filtration time while efficiently removing bacterial spores. No or negligible amounts of active ingredients in heroin were retained by the PES membrane. This study successfully produced a prototype filter designed to filter bacterial spores from heroin samples. Scaled up production could produce an effective harm reduction tool, especially during outbreaks such as occurred in Europe in 2009/10 and 2012.

  2. Sequential Blood Filtration for Extracorporeal Circulation: Initial Results from a Proof-of-Concept Prototype.

    PubMed

    Herbst, Daniel P

    2014-09-01

    Micropore filters are used during extracorporeal circulation to prevent gaseous and solid particles from entering the patient's systemic circulation. Although these devices improve patient safety, limitations in current designs have prompted the development of a new concept in micropore filtration. A prototype of the new design was made using 40-μm filter screens and compared against four commercially available filters for performance in pressure loss and gross air handling. Pre- and postfilter bubble counts for 5- and 10-mL bolus injections in an ex vivo test circuit were recorded using a Doppler ultrasound bubble counter. Statistical analysis of results for bubble volume reduction between test filters was performed with one-way repeated-measures analysis of variance using Bonferroni post hoc tests. Changes in filter performance with changes in microbubble load were also assessed with dependent t tests using the 5- and 10-mL bolus injections as the paired sample for each filter. Significance was set at p < .05. All filters in the test group were comparable in pressure loss performance, showing a range of 26-33 mmHg at a flow rate of 6 L/min. In gross air-handling studies, the prototype showed improved bubble volume reduction, reaching statistical significance with three of the four commercial filters. All test filters showed decreased performance in bubble volume reduction when the microbubble load was increased. Findings from this research support the underpinning theories of a sequential arterial-line filter design and suggest that improvements in microbubble filtration may be possible using this technique.

  3. Sequential Blood Filtration for Extracorporeal Circulation: Initial Results from a Proof-of-Concept Prototype

    PubMed Central

    Herbst, Daniel P.

    2014-01-01

    Abstract: Micropore filters are used during extracorporeal circulation to prevent gaseous and solid particles from entering the patient’s systemic circulation. Although these devices improve patient safety, limitations in current designs have prompted the development of a new concept in micropore filtration. A prototype of the new design was made using 40-μm filter screens and compared against four commercially available filters for performance in pressure loss and gross air handling. Pre- and postfilter bubble counts for 5- and 10-mL bolus injections in an ex vivo test circuit were recorded using a Doppler ultrasound bubble counter. Statistical analysis of results for bubble volume reduction between test filters was performed with one-way repeated-measures analysis of variance using Bonferroni post hoc tests. Changes in filter performance with changes in microbubble load were also assessed with dependent t tests using the 5- and 10-mL bolus injections as the paired sample for each filter. Significance was set at p < .05. All filters in the test group were comparable in pressure loss performance, showing a range of 26–33 mmHg at a flow rate of 6 L/min. In gross air-handling studies, the prototype showed improved bubble volume reduction, reaching statistical significance with three of the four commercial filters. All test filters showed decreased performance in bubble volume reduction when the microbubble load was increased. Findings from this research support the underpinning theories of a sequential arterial-line filter design and suggest that improvements in microbubble filtration may be possible using this technique. PMID:26357790

  4. Raman Laser Spectrometer internal Optical Head current status: opto-mechanical redesign to minimize the excitation laser trace

    NASA Astrophysics Data System (ADS)

    Sanz, Miguel; Ramos, Gonzalo; Moral, Andoni; Pérez, Carlos; Belenguer, Tomás; del Rosario Canchal, María; Zuluaga, Pablo; Rodriguez, Jose Antonio; Santiago, Amaia; Rull, Fernando; Instituto Nacional de Técnica Aeroespacial (INTA); Ingeniería de Sistemas para la Defesa de España S.A. (ISDEFE)

    2016-10-01

    Raman Laser Spectrometer (RLS) is one of the Pasteur Payload instruments of the ExoMars mission, within ESA's Aurora Exploration Programme, and will perform Raman spectroscopy for the first time on a planetary exploration mission. RLS is composed of the SPU (Spectrometer Unit), iOH (Internal Optical Head), and ICEU (Instrument Control and Excitation Unit). The iOH focuses the excitation laser on the samples (excitation path) and collects the Raman emission from the sample (collection path, composed of a collimation system and a filtering system). The original design let a strong laser trace reach the detector; although a certain level of laser trace is required for calibration purposes, this high level degraded the signal-to-noise ratio and obscured some Raman peaks. The investigation revealing that the laser trace was not properly filtered, as well as the resulting iOH opto-mechanical redesign, are reported. After a study of the long-pass filters' optical density (OD) as a function of the distance from the filtering stage to the detector, it was decided to evaluate a new set of filters (notch filters). Finally, in order to minimize the laser trace, a new collection-path design was required, mainly consisting of separating the collimation and filtering stages into two barrels and changing the kind of filters used. The distance between the filters and the first lens of the collimation stage was increased, increasing the OD. With this new design and using two notch filters, the laser trace was reduced to acceptable values, as can be observed in the functional test comparison also reported in this paper.

  5. The Application of the FDTD Method to Millimeter-Wave Filter Circuits Including the Design and Analysis of a Compact Coplanar

    NASA Technical Reports Server (NTRS)

    Oswald, J. E.; Siegel, P. H.

    1994-01-01

    The finite difference time domain (FDTD) method is applied to the analysis of microwave, millimeter-wave and submillimeter-wave filter circuits. In each case, the validity of this method is confirmed by comparison with measured data. In addition, the FDTD calculations are used to design a new ultra-thin coplanar-strip filter for feeding a THz planar-antenna mixer.

  6. Low power adder based auditory filter architecture.

    PubMed

    Rahiman, P F Khaleelur; Jayanthi, V S

    2014-01-01

    Cochlear devices are powered by batteries and should have a long working life to avoid the replacement of devices at regular intervals of years. Hence, devices with low power consumption are required. Cochlear devices contain numerous filters, each responsible for a different frequency band, which helps in identifying speech signals across the audible range. In this paper, a multiplierless lookup table (LUT) based auditory filter is implemented. Power-aware adder architectures are utilized to add the output samples of the LUT, available at every clock cycle. The design is developed and modeled using Verilog HDL, simulated using the Mentor Graphics ModelSim simulator, and synthesized using the Synopsys Design Compiler tool. The design was mapped to the TSMC 65 nm technology node. The standard ASIC design methodology has been adopted to carry out the power analysis. The proposed FIR filter architecture reduces the leakage power by 15% and increases performance by 2.76%.

  7. GaN-based metamaterial terahertz bandpass filter design: tunability and ultra-broad passband attainment.

    PubMed

    Khodaee, M; Banakermani, M; Baghban, H

    2015-10-10

    Engineering metamaterial-based devices such as terahertz bandpass filters (BPFs) play a definitive role in advancement of terahertz technology. In this article, we propose a design procedure to obtain a considerably broadband terahertz BPF at a normal incidence; it shows promising filtering characteristics, including a wide passband of ∼1.34  THz at a central frequency of 1.17 THz, a flat top in a broad band, and high transmission, compared to previous reports. Then, exploiting the voltage-dependent carrier density control in an AlGaN/GaN heterostructure with a Schottky gate configuration, we investigate the tuning of the transmission properties in a narrow-band terahertz filter. A combination of the ultra-wide, flat-top BPF in series with the tunable, narrow band filter designed in the current study offers the ability to tune the desired resonance frequency along with high out-of-band rejection and the suppression of unwanted resonances in a large spectral range. The proposed structure exhibits a frequency tunability of 103 GHz for a voltage change between -8 and 2 V, and a transmission amplitude change of ∼0.51. This scheme may open up a route for the improved design of terahertz filters and modulators.

  8. Evolution of an Interfacial Crack on the Concrete Embankment Boundary

    NASA Astrophysics Data System (ADS)

    Smith, J.; Ezzedine, S. M.; Lomov, I.; Kanarska, Y.; Antoun, T.; Glascoe, L. G.; Hall, R. L.; Woodson, S. C.

    2013-12-01

    Failure of a dam can have subtle beginnings: a small crack or dislocation at the interface of the concrete dam and the surrounding embankment soil, initiated by a seismic event, for example, can: a) result in creating gaps between the concrete dam and the lateral embankments; b) initiate internal erosion of the embankment; and c) lead to a catastrophic failure of the dam. The dam may 'self-rehabilitate' if a properly designed granular filter is engineered around the embankment. Currently, the design criteria for such filters have been based only on experimental studies. We demonstrate the numerical prediction of filter effectiveness at the soil grain scale and relate it to the larger dam scale. Validated computer predictions highlight that a resilient (or durable) filter is consistent with the current design specifications for dam filters. These predictive simulations, unlike the design specifications, can be used to assess filter success or failure under different soil or loading conditions and can lead to meaningful estimates of the timing and nature of full-scale dam failure. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and was sponsored by the Department of Homeland Security (DHS), Science and Technology Directorate, Homeland Security Advanced Research Projects Agency (HSARPA).

  9. Compressive spectral testbed imaging system based on thin-film color-patterned filter arrays.

    PubMed

    Rueda, Hoover; Arguello, Henry; Arce, Gonzalo R

    2016-11-20

    Compressive spectral imaging systems can reliably capture multispectral data using far fewer measurements than traditional scanning techniques. In this paper, a thin-film patterned filter array-based compressive spectral imager is demonstrated, including its optical design and implementation. The use of a patterned filter array entails a single-step three-dimensional spatial-spectral coding on the input data cube, which provides higher flexibility on the selection of voxels being multiplexed on the sensor. The patterned filter array is designed and fabricated with micrometer pitch size thin films, referred to as pixelated filters, with three different wavelengths. The performance of the system is evaluated in terms of references measured by a commercially available spectrometer and the visual quality of the reconstructed images. Different distributions of the pixelated filters, including random and optimized structures, are explored.

  10. System-level analysis and design for RGB-NIR CMOS camera

    NASA Astrophysics Data System (ADS)

    Geelen, Bert; Spooren, Nick; Tack, Klaas; Lambrechts, Andy; Jayapala, Murali

    2017-02-01

    This paper presents a system-level analysis of a sensor capable of simultaneously acquiring both standard absorption-based RGB color channels (400-700 nm, 75 nm FWHM) and an additional NIR channel (central wavelength: 808 nm, FWHM: 30 nm, collimated light). Parallel acquisition of RGB and NIR information on the same CMOS image sensor is enabled by monolithic pixel-level integration of both a NIR-pass thin film filter and NIR-blocking filters for the RGB channels. This overcomes the need for a standard camera-level NIR-blocking filter to remove the NIR leakage present in standard RGB absorption filters from 700-1000 nm. Such a camera-level NIR-blocking filter would inhibit the acquisition of the NIR channel on the same sensor. Thin film filters do not operate in isolation. Rather, their performance is influenced by the system context in which they operate. The spectral distribution of light arriving at the photodiode is shaped by, among other factors, the illumination spectral profile, optical component transmission characteristics and sensor quantum efficiency. For example, knowledge of a low quantum efficiency (QE) of the CMOS image sensor above 800 nm may reduce the filter's blocking requirements and simplify the filter structure. Similarly, knowledge of the incoming light angularity as set by the objective lens' F/# and exit pupil location may be taken into account during the thin film's optimization. This paper demonstrates how knowledge of the application context can facilitate filter design and relax design trade-offs, and presents experimental results.

  11. A microprocessor based anti-aliasing filter for a PCM system

    NASA Technical Reports Server (NTRS)

    Morrow, D. C.; Sandlin, D. R.

    1984-01-01

    Described is the design and evaluation of a microprocessor based digital filter. The filter was made to investigate the feasibility of a digital replacement for the analog pre-sampling filters used in telemetry systems at the NASA Ames-Dryden Flight Research Facility (DFRF). The digital filter will utilize an Intel 2920 Analog Signal Processor (ASP) chip. Testing includes measurements of: (1) the filter frequency response and, (2) the filter signal resolution. The evaluation of the digital filter was made on the basis of circuit size, projected environmental stability and filter resolution. The 2920 based digital filter was found to meet or exceed the pre-sampling filter specifications for limited signal resolution applications.

  12. A VHDL Core for Intrinsic Evolution of Discrete Time Filters with Signal Feedback

    NASA Technical Reports Server (NTRS)

    Gwaltney, David A.; Dutton, Kenneth

    2005-01-01

    The design of an Evolvable Machine VHDL Core is presented, representing a discrete-time processing structure capable of supporting control system applications. This VHDL Core is implemented in an FPGA and is interfaced with an evolutionary algorithm implemented in firmware on a Digital Signal Processor (DSP) to create an evolvable system platform. The salient features of this architecture are presented. The capability to implement IIR filter structures is presented along with the results of the intrinsic evolution of a filter. The robustness of the evolved filter design is tested and its unique characteristics are described.

  13. Analysis and design of planar waveguide elements for use in filters and sensors

    NASA Astrophysics Data System (ADS)

    Chen, Guangzhou

    In this dissertation we present both theoretical analysis and practical design considerations for planar optical waveguide devices. The analysis takes into account both transverse dimensions of the waveguides and is based on supermode theory combined with the resonance method for the determination of the propagation constants and field profiles of the supermodes. An improved accuracy has been achieved by including corrections due to the fields in the corner regions of the waveguides using perturbation theory. We analyze in detail two particular devices, an optical filter/combiner and an optical sensor. An optical wavelength filter/combiner is a common element in an integrated optical circuit. A new "bend free" filter/combiner is proposed and analyzed. The new wavelength filter consists of only straight parallel channels, which considerably simplify both the analysis and fabrication of the device. We show in detail how the operation of the device depends upon each of the design parameters. The intrinsic power loss in the proposed filter/combiner is minimized. The optical sensor is another important device and the sensitivity of measurement is an important issue in its design. Two operating mechanisms used in prior optical sensors are evanescent wave sensing or surface plasmon excitation. In this dissertation, we present a sensor with a directional coupler structure in which a measurand to be detected is interfaced with one side of the cladding. The analysis shows that it is possible to make a high resolution device by adjusting the design parameters. The dimensions and materials used in an optimized design are presented.

  14. Designing Birefringent Filters For Solid-State Lasers

    NASA Technical Reports Server (NTRS)

    Monosmith, Bryan

    1992-01-01

    Mathematical model enables design of filter assembly of birefringent plates as integral part of resonator cavity of tunable solid-state laser. Proper design treats polarization eigenstate of entire resonator as function of wavelength. Program includes software modules for variety of optical elements including Pockels cell, laser rod, quarter- and half-wave plates, Faraday rotator, and polarizers.
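
    For the simplest case, the transmission of an idealized Lyot-type birefringent plate between parallel polarizers, and of a stack with thickness-doubled plates, can be sketched as below. This is a simplification of the full polarization-eigenstate treatment described above; the plate thicknesses and birefringence value are assumptions.

        # Idealized Lyot-type birefringent filter transmission (simplified sketch,
        # not the program's polarization-eigenstate model). Each plate of thickness d
        # between parallel polarizers transmits T = cos^2(pi * dn * d / lambda);
        # plate thicknesses double through the stack. Values are assumptions.
        import numpy as np

        def lyot_transmission(lambda_nm, d_thinnest_mm, n_plates, delta_n=0.009):
            lam_mm = lambda_nm * 1e-6
            T = np.ones_like(lam_mm)
            for k in range(n_plates):
                d = d_thinnest_mm * 2 ** k               # plate thicknesses double
                T *= np.cos(np.pi * delta_n * d / lam_mm) ** 2
            return T

        lam = np.linspace(900.0, 1200.0, 3001)           # wavelength grid in nm
        T = lyot_transmission(lam, d_thinnest_mm=0.5, n_plates=3)
        print(f"peak transmission {T.max():.3f} at {lam[np.argmax(T)]:.1f} nm")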

  15. Remainder Wheels and Group Theory

    ERIC Educational Resources Information Center

    Brenton, Lawrence

    2008-01-01

    Why should prospective elementary and high school teachers study group theory in college? This paper examines applications of abstract algebra to the familiar algorithm for converting fractions to repeating decimals, revealing ideas of surprising substance beneath an innocent facade.
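
    The algorithm in question is ordinary long division, in which the decimal expansion repeats as soon as a remainder recurs; the cycle of remainders is the "remainder wheel". A short sketch (function name and output format are illustrative):

        # Long-division algorithm behind "remainder wheels": digits repeat once a
        # remainder recurs; the cycle of remainders is what the group theory studies.
        def repeating_decimal(a, b):
            """Return (integer part, fixed digits, repeating digits) of a/b."""
            integer, r = divmod(a, b)
            digits, seen = [], {}
            while r and r not in seen:
                seen[r] = len(digits)          # where this remainder first appeared
                r *= 10
                digits.append(str(r // b))
                r %= b
            if r == 0:
                return str(integer), "".join(digits), ""
            start = seen[r]
            return str(integer), "".join(digits[:start]), "".join(digits[start:])

        for a, b in [(1, 7), (1, 6), (22, 7)]:
            whole, fixed, cycle = repeating_decimal(a, b)
            print(f"{a}/{b} = {whole}.{fixed}({cycle})" if cycle else f"{a}/{b} = {whole}.{fixed}")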

  16. Technical Mathematics.

    ERIC Educational Resources Information Center

    Flannery, Carol A.

    This manuscript provides information and problems for teaching mathematics to vocational education students. Problems reflect applications of mathematical concepts to specific technical areas. The materials are organized into six chapters. Chapter 1 covers basic arithmetic, including fractions, decimals, ratio and proportions, percentages, and…

  17. The finite scaling for S = 1 XXZ chains with uniaxial single-ion-type anisotropy

    NASA Astrophysics Data System (ADS)

    Wang, Honglei; Xiong, Xingliang

    2014-03-01

    The scaling behavior of criticality for spin-1 XXZ chains with uniaxial single-ion-type anisotropy is investigated by employing the infinite matrix product state representation with the infinite time-evolving block decimation method. At criticality, the accuracy of the ground state of a system is limited by the truncation dimension χ of the local Hilbert space. We present four pieces of evidence for the scaling of the entanglement entropy, the largest eigenvalue of the Schmidt decomposition, the correlation length, and the connection between the actual correlation length ξ and the energy. The result shows that the finite scalings are governed by the central charge of the critical system. It also demonstrates that the infinite time-evolving block decimation algorithm with the infinite matrix product state representation can be a quite accurate method for simulating critical properties at criticality.

  18. Pectin methyl esterase and natural microflora of fresh mixed orange and carrot juice treated with pulsed electric fields.

    PubMed

    Rodrigo, D; Barbosa-Cánovas, G V; Martínez, A; Rodrigo, M

    2003-12-01

    The effects of pulsed electric fields (PEFs) on pectin methyl esterase (PME), molds and yeast, and total flora in fresh (nonpasteurized) mixed orange and carrot juice were studied. The PEF effect was more extensive when juices with high levels of initial PME activity were subjected to treatment and when PEF treatment (at 25 kV/cm for 340 μs) was combined with a moderate temperature (63 °C), with the maximum level of PME inactivation being 81.4%. These conditions produced 3.7 decimal reductions in molds and yeast and 2.4 decimal reductions in total flora. Experimental inactivation data for PME, molds and yeast, and total flora were fitted to Bigelow, Hülsheger, and Weibull inactivation models by nonlinear regression. The best fit (lowest mean square error) was obtained with the Weibull model.

  19. On-chip copper-dielectric interference filters for manufacturing of ambient light and proximity CMOS sensors.

    PubMed

    Frey, Laurent; Masarotto, Lilian; D'Aillon, Patrick Gros; Pellé, Catherine; Armand, Marilyn; Marty, Michel; Jamin-Mornet, Clémence; Lhostis, Sandrine; Le Briz, Olivier

    2014-07-10

    Filter technologies implemented on CMOS image sensors for spectrally selective applications often use a combination of on-chip organic resists and an external substrate with multilayer dielectric coatings. The photopic-like and near-infrared bandpass filtering functions respectively required by ambient light sensing and user proximity detection through time-of-flight can be fully integrated on chip with multilayer metal-dielectric filters. Copper, silicon nitride, and silicon oxide are the materials selected for a technological proof-of-concept on functional wafers, due to their immediate availability in front-end semiconductor fabs. Filter optical designs are optimized with respect to specific performance criteria, and the robustness of the designs regarding process errors is evaluated for industrialization purposes.

  20. Laser designator protection filter for see-spot thermal imaging systems

    NASA Astrophysics Data System (ADS)

    Donval, Ariela; Fisher, Tali; Lipman, Ofir; Oron, Moshe

    2012-06-01

    In some cases the FLIR has an open window in the 1.06 micrometer wavelength range; this capability is called 'see spot' and allows seeing a laser designator spot using the FLIR. A problem arises when the returned laser energy is too high for the camera sensitivity and can therefore damage the sensor. We propose a non-linear, solid-state dynamic filter solution that protects against damage passively. Our filter blocks transmission only if the power exceeds a certain threshold, as opposed to spectral filters that block a certain wavelength permanently. In this paper we introduce the Wideband Laser Protection Filter (WPF) solution for thermal imaging systems possessing the ability to see the laser spot.

  1. Intermediate-Band Photometric Luminosity Discrimination for M Stars

    NASA Astrophysics Data System (ADS)

    Robertson, T. H.; Furiak, N. M.

    1995-12-01

    Synthetic photometry has been used to design an intermediate-band filter to be used with CCD cameras to facilitate the luminosity classification of M stars. Spectrophotometric data published by Gunn & Stryker (1983) were used to test various bandwidths and centers. Based on these calculations an intermediate-band filter has been purchased. This filter is being used in conjunction with standard BVRI filters to test its effectiveness in luminosity classification of M stars having a wide range of temperatures and different chemical compositions. The results of the theoretical calculations, filter design specifications and preliminary results of the testing program are presented. This research is supported in part by funds provided by Ball State University, The Fund for Astrophysical Research and the Indiana Academy of Science.

  2. Optimal design of active EMC filters

    NASA Astrophysics Data System (ADS)

    Chand, B.; Kut, T.; Dickmann, S.

    2013-07-01

    A recent trend in the automotive industry is adding electrical drive systems to conventional drives. The electrification allows an expansion of energy sources and provides great opportunities for environmentally friendly mobility. The electrical powertrain and its components can also cause disturbances which couple into nearby electronic control units and communication cables. Therefore the communication can be degraded or even permanently disrupted. To minimize these interferences, different approaches are possible. One possibility is to use EMC filters. However, the diversity of filters is very large and the determination of an appropriate filter for each application is time-consuming. Therefore, the filter design is determined by using a simulation tool including an effective optimization algorithm. This method leads to improvements in terms of weight, volume and cost.

  3. Adaptive clutter rejection filters for airborne Doppler weather radar applied to the detection of low altitude windshear

    NASA Technical Reports Server (NTRS)

    Keel, Byron M.

    1989-01-01

    An optimum adaptive clutter rejection filter for use with airborne Doppler weather radar is presented. The radar system is being designed to operate at low-altitudes for the detection of windshear in an airport terminal area where ground clutter returns may mask the weather return. The coefficients of the adaptive clutter rejection filter are obtained using a complex form of a square root normalized recursive least squares lattice estimation algorithm which models the clutter return data as an autoregressive process. The normalized lattice structure implementation of the adaptive modeling process for determining the filter coefficients assures that the resulting coefficients will yield a stable filter and offers possible fixed point implementation. A 10th order FIR clutter rejection filter indexed by geographical location is designed through autoregressive modeling of simulated clutter data. Filtered data, containing simulated dry microburst and clutter return, are analyzed using pulse-pair estimation techniques. To measure the ability of the clutter rejection filters to remove the clutter, results are compared to pulse-pair estimates of windspeed within a simulated dry microburst without clutter. In the filter evaluation process, post-filtered pulse-pair width estimates and power levels are also used to measure the effectiveness of the filters. The results support the use of an adaptive clutter rejection filter for reducing the clutter induced bias in pulse-pair estimates of windspeed.
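
    For background, the pulse-pair technique referred to above estimates the mean Doppler velocity from the lag-one autocorrelation of the (clutter-filtered) I/Q samples. A minimal generic sketch, assuming a wavelength lam and pulse repetition interval T (the sign convention and the paper's actual processing chain may differ):

        # Generic pulse-pair mean-velocity estimate from complex I/Q radar samples:
        # v_hat = lam / (4*pi*T) * arg(R1), with R1 the lag-one autocorrelation.
        import numpy as np

        def pulse_pair_velocity(z, lam, T):
            r1 = np.mean(z[1:] * np.conj(z[:-1]))        # lag-one autocorrelation
            return lam / (4.0 * np.pi * T) * np.angle(r1)

        # Synthetic single-target example: 5 m/s, lam = 3.2 cm, PRI = 1 ms.
        lam, T, v_true = 0.032, 1e-3, 5.0
        n = np.arange(64)
        z = np.exp(1j * 4 * np.pi * v_true * T * n / lam)  # ideal Doppler phasor
        print(pulse_pair_velocity(z, lam, T))              # ~5.0, within the +/-8 m/s Nyquist span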

  4. Quantum image median filtering in the spatial domain

    NASA Astrophysics Data System (ADS)

    Li, Panchi; Liu, Xiande; Xiao, Hong

    2018-03-01

    Spatial filtering is one principal tool used in image processing for a broad spectrum of applications. Median filtering has become a prominent representation of spatial filtering because its performance in noise reduction is excellent. Although filtering of quantum images in the frequency domain has been described in the literature, and there is a one-to-one correspondence between linear spatial filters and filters in the frequency domain, median filtering is a nonlinear process that cannot be achieved in the frequency domain. We therefore investigated the spatial filtering of quantum images, focusing on the design method of the quantum median filter and applications in image de-noising. To this end, first, we presented the quantum circuits for three basic modules (i.e., Cycle Shift, Comparator, and Swap), and then we designed two composite modules (i.e., Sort and Median Calculation). We next constructed a complete quantum circuit that implements the median filtering task and presented the results of several simulation experiments on some grayscale images with different noise patterns. Although experimental results show that the proposed scheme has almost the same noise suppression capacity as its classical counterpart, the complexity analysis shows that the proposed scheme can reduce the computational complexity of the classical median filter from an exponential function of the image size n to a second-order polynomial function of n, thus offering a speedup over the classical method.

  5. Broadband notch filter design for millimeter-wave plasma diagnostics.

    PubMed

    Furtula, V; Michelsen, P K; Leipold, F; Salewski, M; Korsholm, S B; Meo, F; Nielsen, S K; Stejner, M; Moseev, D; Johansen, T

    2010-10-01

    Notch filters are integrated in plasma diagnostic systems to protect millimeter-wave receivers from intensive stray radiation. Here we present a design of a notch filter with a center frequency of 140 GHz, a rejection bandwidth of ∼900 MHz, and a typical insertion loss below 2 dB in the passband of ±9 GHz. The design is based on a fundamental rectangular waveguide with eight cylindrical cavities coupled by T-junction apertures formed as thin slits. Parameters that affect the notch performance such as physical lengths and conductor materials are discussed. The excited resonance mode in the cylindrical cavities is the fundamental TE(11). The performance of the constructed filter is measured using a vector network analyzer monitoring a total bandwidth of 30 GHz. We compare the measurements with numerical simulations.

  6. Design and performance of a high-Tc superconductor coplanar waveguide filter

    NASA Technical Reports Server (NTRS)

    Chew, Wilbert; Riley, A. L.; Rascoe, Daniel L.; Hunt, Brian D.; Foote, Marc C.; Cooley, Thomas W.; Bajuk, Louis J.

    1991-01-01

    The design of a coplanar waveguide low-pass filter made of YBa2Cu3O(7-delta) (YBCO) on an LaAlO3 substrate is described. Measurements were incorporated into simple models for microwave CAD analysis to develop a final design. The patterned and packaged coplanar waveguide low-pass filter of YBCO, with dimensions suited for integrated circuits, exhibited measured insertion losses when cooled in liquid nitrogen superior to those of a similarly cooled thin-film copper filter throughout the 0 to 9.5 GHz passband. Coplanar waveguide models for use with thin-film normal metal (with thickness either greater or less than the skin depth) and YBCO are discussed and used to compare the losses of the measured YBCO and copper circuits.

  7. Compact triple band-stop filter using novel epsilon-shaped metamaterial with lumped capacitor

    NASA Astrophysics Data System (ADS)

    Ali, W. A. E.; Hamdalla, M. Z. M.

    2018-04-01

    This paper presents the design of a novel epsilon-shaped metamaterial unit cell structure that is applicable for single-band and multi-band applications. Closed-form formulas to control the resonance frequencies of the proposed design are included. The proposed unit cell, which exhibits negative permeability at its frequency bands, is etched from the ground plane to form a band-stop filter. The filter design is constructed to validate the band-notched characteristics of the proposed unit cell. A lumped capacitor is inserted for size reduction purposes in addition to multi-resonance generation. The fundamental resonance frequency is translated from 3.62 GHz to 2.45 GHz, which means that the filter size will be more compact (more than 32% size reduction). The overall size of the proposed filter is 13 × 6 × 1.524 mm3, where the electrical size is 0.221λg × 0.102λg × 0.026λg at the lower frequency band (2.45 GHz). Two other resonance frequencies are generated at 5.3 GHz and 9.2 GHz, which confirm the multi-band behavior of the proposed filter. Good agreement between simulated and measured characteristics of the fabricated filter prototype is achieved.

  8. Blocking Filters with Enhanced Throughput for X-Ray Microcalorimetry

    NASA Technical Reports Server (NTRS)

    Grove, David; Betcher, Jacob; Hagen, Mark

    2012-01-01

    New and improved blocking filters have been developed for microcalorimeters on several mission payloads, made of high-transmission polyimide support mesh that can replace the nickel mesh used in previous blocking-filter flight designs. To realize the resolution and signal sensitivity of today's x-ray microcalorimeters, significant improvements in the blocking filter stack are needed. Using high-transmission polyimide support mesh, it is possible to improve overall throughput on a typical microcalorimeter such as Suzaku's X-ray Spectrometer by 11%, compared to previous flight designs. Using polyimide to replace standard metal mesh means the mesh will be transparent to energies of 3 keV and higher. Incorporating polyimide's advantageous strength-to-weight ratio, thermal stability, and transmission characteristics permits thinner filter materials, significantly enhancing throughput. A prototype contamination blocking filter for ASTRO-H has passed QT-level acoustic testing. Resistive traces can also be incorporated to provide decontamination capability to actively restore filter performance in orbit.

  9. Reduced size dual band pass filters for RFID applications with excellent bandpass/bandstop characteristics

    NASA Astrophysics Data System (ADS)

    Abdalla, M. A.; Choudhary, D. Kumar; Chaudhary, R. Kumar

    2018-02-01

    This paper presents the design of two reduced-size dual-band metamaterial bandpass filters, together with simulations and measurements of the proposed filters. These filters support different frequency bands and could primarily be utilized in radio frequency identification (RFID) applications. Each filter includes three cells, two of which are symmetrical and both inductively coupled with the third cell located between them. In the proposed designs, three different metamaterial composite right/left-handed (CRLH) cell resonators have been analysed for compactness. The CRLH cell consists of an interdigital capacitor, a stub/meander-line/spiral inductor, and a via connecting the top of the structure to the ground plane. Finally, the proposed dual-band bandpass filters (using the meander-line and spiral inductors) show size reductions of 65% and 50% (with a 25% operating frequency reduction), respectively, in comparison with the reference filter using the stub inductor. More than 30 dB of attenuation has been achieved between the two passbands.

  10. Correlation of Electric Field and Critical Design Parameters for Ferroelectric Tunable Microwave Filters

    NASA Technical Reports Server (NTRS)

    Subramanyam, Guru; VanKeuls, Fred W.; Miranda, Felix A.; Canedy, Chadwick L.; Aggarwal, Sanjeev; Venkatesan, Thirumalai; Ramesh, Ramamoorthy

    2000-01-01

    The correlation of electric field and critical design parameters such as the insertion loss, frequency tunability, return loss, and bandwidth of conductor/ferroelectric/dielectric microstrip tunable K-band microwave filters is discussed in this work. This work is based primarily on barium strontium titanate (BSTO) ferroelectric thin-film-based tunable microstrip filters for room-temperature applications. Two new parameters, which we believe will simplify the evaluation of ferroelectric thin films for tunable microwave filters, are defined. The first of these, called the sensitivity parameter, is defined as the incremental change in center frequency with incremental change in maximum applied electric field (EPEAK) in the filter. The other, the loss parameter, is defined as the incremental or decremental change in insertion loss of the filter with incremental change in maximum applied electric field. At room temperature, the Au/BSTO/LAO microstrip filters exhibited a sensitivity parameter value between 15 and 5 MHz/cm/kV. The loss parameter varied for different bias configurations used for electrically tuning the filter. The loss parameter varied from 0.05 to 0.01 dB/cm/kV at room temperature.
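
    Written out, the two figures of merit defined above are simply incremental ratios (the symbols are ours, the units are those quoted in the abstract):

        S = \frac{\Delta f_{c}}{\Delta E_{\mathrm{peak}}} \quad [\mathrm{MHz\ per\ kV/cm}],
        \qquad
        L = \frac{\Delta \mathrm{IL}}{\Delta E_{\mathrm{peak}}} \quad [\mathrm{dB\ per\ kV/cm}],

    where f_c is the filter center frequency, IL the insertion loss, and E_peak the maximum applied electric field.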

  11. Dense grid sibling frames with linear phase filters

    NASA Astrophysics Data System (ADS)

    Abdelnour, Farras

    2013-09-01

    We introduce new 5-band dyadic sibling frames with a dense time-frequency grid. Given a lowpass filter satisfying certain conditions, the remaining filters are obtained using spectral factorization. The analysis and synthesis filterbanks share the same lowpass and bandpass filters but have different and oversampled highpass filters. This leads to wavelets approximating shift-invariance. The filters are FIR, have linear phase, and the resulting wavelets have vanishing moments. The filters are designed using the spectral factorization method. The proposed method leads to smooth limit functions with higher approximation order and computationally stable filterbanks.

  12. Control optimization, stabilization and computer algorithms for aircraft applications

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Research related to reliable aircraft design is summarized. Topics discussed include systems reliability optimization, failure detection algorithms, analysis of nonlinear filters, design of compensators incorporating time delays, digital compensator design, estimation for systems with echoes, low-order compensator design, descent-phase controller for 4-D navigation, infinite dimensional mathematical programming problems and optimal control problems with constraints, robust compensator design, numerical methods for the Lyapunov equations, and perturbation methods in linear filtering and control.

  13. Waveguide bandpass filter with easily adjustable transmission zeros and 3-dB bandwidth

    NASA Astrophysics Data System (ADS)

    Bage, Amit; Das, Sushrut; Murmu, Lakhindar; Pattapu, Udayabhaskar; Biswal, Sonika

    2018-07-01

    This paper presents a compact waveguide bandpass filter with adjustable transmission zeros (TZs) and bandwidth. The design provides the flexibility to place the TZs at the desired locations for better interference rejection. To demonstrate, initially a three-pole bandpass filter has been designed by placing three single slot resonator structures inside a WR-90 waveguide. Next, two additional asymmetrical slot structures have been used with each of the above resonators to generate two TZs, one on each side of the passband. Since three resonators were used, this process results in six asymmetric slot structures, which produce six TZs. The final filter operates at 9.98 GHz with a 3-dB bandwidth of 1.02 GHz and TZs at 8.23/8.70/9.16/10.9/11.6 and 13.115 GHz. Equivalent circuits and necessary design equations have been provided. To validate the simulation, the proposed filter has been fabricated and measured. The measured data show good agreement with simulated data.

  14. Very long stripe-filters for a multispectral detector

    NASA Astrophysics Data System (ADS)

    Laubier, D.; Mercier Ythier, Renaud

    2017-11-01

    In order to simplify instrument design, a new linear area CCD sensor has been developed under CNES responsibility. This detector has four lines, each 6000 13-μm square pixels long, with four stripe filters, one in front of each line. The detector itself was manufactured and mounted by ATMEL, and the filters were made by SAGEM/REOSC. Assembly was done in two ways, one by ATMEL, the other by SESO. CNES was responsible for the overall design and mechanical/optical interfaces. This paper reports the optical part of this work, including filter placement strategy and line spacing. It will be shown how these two features are closely linked to stray-light performance. First, a trade-off study was conducted between several concepts: the results of this study will be presented, as well as the filter design and manufacturing results. They show good transmission and excellent rejection. Final performance of the complete prototypes has been measured, and it will be compared to theoretical models.

  15. Fourier Spectral Filter Array for Optimal Multispectral Imaging.

    PubMed

    Jia, Jie; Barnard, Kenneth J; Hirakawa, Keigo

    2016-04-01

    Limitations to existing multispectral imaging modalities include speed, cost, range, spatial resolution, and application-specific system designs that lack versatility of the hyperspectral imaging modalities. In this paper, we propose a novel general-purpose single-shot passive multispectral imaging modality. Central to this design is a new type of spectral filter array (SFA) based not on the notion of spatially multiplexing narrowband filters, but instead aimed at enabling single-shot Fourier transform spectroscopy. We refer to this new SFA pattern as Fourier SFA, and we prove that this design solves the problem of optimally sampling the hyperspectral image data.

  16. An acoustic filter based on layered structure

    PubMed Central

    Steer, Michael B.

    2015-01-01

    Acoustic filters (AFs) are key components to control wave propagation in multi-frequency systems. We present a design which selectively achieves acoustic filtering with a stop band and passive amplification at high and low frequencies, respectively. Measurement results from the prototypes closely match the design predictions. The AF suppresses the high-frequency aliasing echo by 14.5 dB and amplifies the low-frequency transmission by 8.0 dB, improving the axial resolution from 416 to 86 μm in imaging. The AF design approach proves effective in multi-frequency systems. PMID:25829548

  17. Design and fabrication of far ultraviolet filters based on π-multilayer technology in high-k materials

    PubMed Central

    Wang, Xiao-Dong; Chen, Bo; Wang, Hai-Feng; He, Fei; Zheng, Xin; He, Ling-Ping; Chen, Bin; Liu, Shi-Jie; Cui, Zhong-Xu; Yang, Xiao-Hu; Li, Yun-Peng

    2015-01-01

    Application of π-multilayer technology is extended to high-extinction-coefficient materials and introduced into metal-dielectric filter design. Metal materials often have high extinction coefficients in the far ultraviolet (FUV) region, so the optical thickness of the metal layers should be smaller than that of the dielectric material. A broadband FUV filter, a 9-layer non-periodic Al/MgF2 multilayer, was successfully designed and fabricated; it shows high reflectance in 140–180 nm and suppressed reflectance in 120–137 nm and 181–220 nm. PMID:25687255

  18. Design of a broadband band-pass filter with notch-band using new models of coupled transmission lines.

    PubMed

    Daryasafar, Navid; Baghbani, Somaye; Moghaddasi, Mohammad Naser; Sadeghzade, Ramezanali

    2014-01-01

    We intend to design a broadband band-pass filter with a notch band, whose structure uses coupled transmission lines, based on new models of coupled transmission lines. In order to realize and present the new model, previous models are first simulated in the ADS program. Then, by modifying their equations and, consequently, the basic parameters of these models, the optimization of and dependencies among these parameters, as well as their frequency responses, are studied, and the results of these changes are brought together to design a new filter.

  19. Nonlinear multilayers as optical limiters

    NASA Astrophysics Data System (ADS)

    Turner-Valle, Jennifer Anne

    1998-10-01

    In this work we present a non-iterative technique for computing the steady-state optical properties of nonlinear multilayers and we examine nonlinear multilayer designs for optical limiters. Optical limiters are filters with intensity-dependent transmission designed to curtail the transmission of incident light above a threshold irradiance value in order to protect optical sensors from damage due to intense light. Thin film multilayers composed of nonlinear materials exhibiting an intensity-dependent refractive index are used as the basis for optical limiter designs in order to enhance the nonlinear filter response by magnifying the electric field in the nonlinear materials through interference effects. The nonlinear multilayer designs considered in this work are based on linear optical interference filter designs which are selected for their spectral properties and electric field distributions. Quarter wave stacks and cavity filters are examined for their suitability as sensor protectors and their manufacturability. The underlying non-iterative technique used to calculate the optical response of these filters derives from recognizing that the multi-valued calculation of output irradiance as a function of incident irradiance may be turned into a single-valued calculation of incident irradiance as a function of output irradiance. Finally, the benefits and drawbacks of using nonlinear multilayer for optical limiting are examined and future research directions are proposed.

  20. Design and implementation of a packet filter module based on an embedded firewall

    NASA Astrophysics Data System (ADS)

    Tian, Libo; Wang, Chen; Yang, Shunbo

    2011-10-01

    In traditional security solutions, a software firewall cannot intercept and respond to an intrusion before the host is attacked, and because of its high cost a hardware firewall is not suited to the security strategy of end nodes; we have therefore designed an embedded firewall solution combining hardware and software. With an ARM processor running an embedded Linux operating system, we have designed a packet filter module and an intrusion detection module to implement the basic functions of a firewall. Experiments and results show that the firewall has the advantages of low cost, high processing speed, and high security, and is suitable for computer terminals. This paper focuses on the design and implementation of the packet filtering module.

  1. Metal removal efficiency, operational life and secondary environmental impacts of a stormwater filter developed from iron-oxide-amended bottom ash.

    PubMed

    Ilyas, Aamir; Muthanna, Tone M

    2017-12-06

    The aim of this paper was to conduct pilot-scale column tests on an alternative treatment filter designed for the treatment of highway stormwater in cold climates. The study evaluated adsorption performance of the filter with regard to the four most commonly found metals (Cu, Ni, Pb, and Zn) in highway stormwater. An alternative method was used to estimate the operational life of the filter from the adsorption test data without a breakthrough under high hydraulic loads. The potential environmental impact of the filter was assessed by comparing desorption test data with four different environmental quality standards. The proposed filter achieved high adsorption (over 90%) of the target metals. The comparisons of desorption and leaching data with the environmental standards indicated that iron-oxide/bottom ash was non-hazardous, reusable and without serious environmental risks. The operational life and filter dimensions were highly dependent on rainfall depth, which indicated that the filter design would have to be adapted to suit the climate. To fully appreciate the performance and environmental aspects, the filter unit should be tested in the field and the testing should explicitly include ecotoxicological and life cycle impacts.

  2. [Design Method Analysis and Performance Comparison of Wall Filter for Ultrasound Color Flow Imaging].

    PubMed

    Wang, Lutao; Xiao, Jun; Chai, Hua

    2015-08-01

    The successful suppression of clutter arising from stationary or slowly moving tissue is one of the key issues in medical ultrasound color blood imaging. Remaining clutter may cause bias in the mean blood frequency estimation and result in a potentially misleading description of blood flow. In this paper, based on the principle of the general wall filter, the design processes of three classes of filters, namely infinite impulse response with projection initialization (Prj-IIR), polynomial regression (Pol-Reg), and eigen-based filters, are reviewed and analyzed. The performance of the filters was assessed by calculating the bias and variance of the mean blood velocity using a standard autocorrelation estimator. Simulation results show that the performance of the Pol-Reg filter is similar to that of Prj-IIR filters. Both can offer accurate estimation of the mean blood flow speed under steady clutter conditions, and the clutter rejection ability can be enhanced by increasing the ensemble size of the Doppler vector. Eigen-based filters can effectively remove the non-stationary clutter component and further improve the estimation accuracy for low-speed blood flow signals. There is also no significant increase in computational complexity for eigen-based filters when the ensemble size is less than 10.
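
    As a concrete illustration of the polynomial-regression (Pol-Reg) class discussed above, tissue clutter can be removed by projecting each slow-time Doppler ensemble onto a low-order polynomial basis and subtracting the projection. A hedged sketch (generic, not the paper's implementation; the parameter values are illustrative):

        # Polynomial-regression wall filter: subtract the component of a slow-time
        # ensemble that lies in the span of low-order Legendre polynomials, which
        # models stationary or slowly moving tissue clutter.
        import numpy as np

        def polyreg_wall_filter(x, order=2):
            n = np.linspace(-1.0, 1.0, len(x))
            basis = np.polynomial.legendre.legvander(n, order)   # len(x) x (order+1)
            q, _ = np.linalg.qr(basis)                           # orthonormal columns
            return x - q @ (q.conj().T @ x)                      # remove clutter subspace

        # Example: strong near-DC clutter plus a weaker blood component at 0.3 cycles/sample.
        t = np.arange(12)
        ensemble = 50 * np.exp(1j * 2 * np.pi * 0.01 * t) + np.exp(1j * 2 * np.pi * 0.3 * t)
        filtered = polyreg_wall_filter(ensemble, order=2)
        # The blood component survives while the near-DC clutter is largely suppressed.
        print(np.round(np.abs(np.fft.fft(filtered)), 1))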

  3. An Unscented Kalman-Particle Hybrid Filter for Space Object Tracking

    NASA Astrophysics Data System (ADS)

    Raihan A. V, Dilshad; Chakravorty, Suman

    2018-03-01

    Optimal and consistent estimation of the state of space objects is pivotal to surveillance and tracking applications. However, probabilistic estimation of space objects is made difficult by the non-Gaussianity and nonlinearity associated with orbital mechanics. In this paper, we present an unscented Kalman-particle hybrid filtering framework for recursive Bayesian estimation of space objects. The hybrid filtering scheme is designed to provide accurate and consistent estimates when measurements are sparse without incurring a large computational cost. It employs an unscented Kalman filter (UKF) for estimation when measurements are available. When the target is outside the field of view (FOV) of the sensor, it updates the state probability density function (PDF) via a sequential Monte Carlo method. The hybrid filter addresses the problem of particle depletion through a suitably designed filter transition scheme. To assess the performance of the hybrid filtering approach, we consider two test cases of space objects that are assumed to undergo full three-dimensional orbital motion under the effects of J2 and atmospheric drag perturbations. It is demonstrated that the hybrid filters can furnish fast, accurate and consistent estimates outperforming standard UKF and particle filter (PF) implementations.

  4. Evidence of Temporal Postdischarge Decontamination of Bacteria by Gliding Electric Discharges: Application to Hafnia alvei▿

    PubMed Central

    Kamgang-Youbi, Georges; Herry, Jean-Marie; Bellon-Fontaine, Marie-Noëlle; Brisset, Jean-Louis; Doubla, Avaly; Naïtali, Murielle

    2007-01-01

    This study aimed to characterize the bacterium-destroying properties of a gliding arc plasma device during electric discharges and also under temporal postdischarge conditions (i.e., when the discharge was switched off). This phenomenon was reported for the first time in the literature in the case of the plasma destruction of microorganisms. When cells of a model bacterium, Hafnia alvei, were exposed to electric discharges, followed or not followed by temporal postdischarges, the survival curves exhibited a shoulder and then log-linear decay. These destruction kinetics were modeled using GinaFiT, a freeware tool to assess microbial survival curves, and adjustment parameters were determined. The efficiency of postdischarge treatments was clearly affected by the discharge time (t*); both the shoulder length and the inactivation rate kmax were linearly modified as a function of t*. Nevertheless, all conditions tested (t* ranging from 2 to 5 min) made it possible to achieve an abatement of at least 7 decimal logarithm units. Postdischarge treatment was also efficient against bacteria not subjected to direct discharge, and the disinfecting properties of “plasma-activated water” were dependent on the treatment time for the solution. Water treated with plasma for 2 min achieved a 3.7-decimal-logarithm-unit reduction in 20 min after application to cells, and abatement greater than 7 decimal logarithm units resulted from the same contact time with water activated with plasma for 10 min. These disinfecting properties were maintained during storage of activated water for 30 min. After that, they declined as the storage time increased. PMID:17557841

  5. Robust Fuzzy Controllers Using FPGAs

    NASA Technical Reports Server (NTRS)

    Monroe, Gene S., Jr.

    2007-01-01

    Electro-mechanical device controllers typically come in one of three forms: proportional (P), Proportional Derivative (PD), and Proportional Integral Derivative (PID). Two methods of control are discussed in this paper; they are (1) the classical technique, which requires an in-depth mathematical use of poles and zeros, and (2) the fuzzy logic (FL) technique, which is similar to the way humans think and make decisions. FL controllers are used in multiple industries; examples include control engineering, computer vision, pattern recognition, statistics, and data analysis. Presented is a study on the development of a PD motor controller written in VHSIC hardware description language (VHDL) and implemented in FL. Four distinct abstractions compose the FL controller: the fuzzifier, the rule base, the fuzzy inference system (FIS), and the defuzzifier. FL is similar to, but different from, Boolean logic: the output value may be equal to 0 or 1, but it can also take any decimal value between them. This controller is unique because of its VHDL implementation, which uses integer mathematics. To compensate for VHDL's inability to synthesize floating-point numbers, a scale factor equal to 10^(N/4) is utilized, where N is equal to the data word size. The scaling factor shifts the decimal digits to the left of the decimal point for increased precision. PD controllers are ideal for use with servo motors, where position control is effective. This paper discusses control methods for motion-base platforms where a constant velocity equivalent to a spectral resolution of 0.25 cm^-1 is required; however, the control capability of this controller extends to various other platforms.
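
    The integer-scaling trick described above can be illustrated outside VHDL as well. A minimal sketch of a PD update carried out entirely in scaled integers with the 10^(N/4) factor (the gains and signal values are hypothetical, not those of the controller in the paper):

        # Fixed-point PD update using the 10**(N/4) scale factor described above,
        # where N is the data word size in bits. All arithmetic stays in integers.
        N = 16
        SCALE = 10 ** (N // 4)              # 10**4 = 10000 for a 16-bit word

        def to_fixed(x):
            return int(round(x * SCALE))    # quantize onto the scaled-integer grid

        kp, kd = to_fixed(1.25), to_fixed(0.075)            # hypothetical PD gains
        error, error_rate = to_fixed(0.3), to_fixed(-0.02)  # hypothetical inputs

        # u = Kp*e + Kd*de/dt; rescale after each multiply to stay in range.
        u_fixed = (kp * error) // SCALE + (kd * error_rate) // SCALE
        print(u_fixed / SCALE)              # ~0.3735, i.e. 1.25*0.3 + 0.075*(-0.02)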

  6. A Model for Determining the Effect of the Wind Velocity on 100 m Sprinting Performance.

    PubMed

    Janjic, Natasa; Kapor, Darko; Doder, Dragan; Petrovic, Aleksandar; Doder, Radoslava

    2017-06-01

    This paper introduces an equation for determining instantaneous and final velocity of a sprinter in a 100 m run completed with a wind resistance ranging from 0.1 to 4.5 m/s. The validity of the equation was verified using the data of three world class sprinters: Carl Lewis, Maurice Green, and Usain Bolt. For the given constant wind velocity with the values + 0.9 and + 1.1 m/s, the wind contribution to the change of sprinter velocity was the same for the maximum as well as for the final velocity. This study assessed how the effect of the wind velocity influenced the change of sprinting velocity. The analysis led to the conclusion that the official limit of safely neglecting the wind influence could be chosen as 1 m/s instead of 2 m/s, if the velocity were presented using three, instead of two decimal digits. This implies that wind velocity should be rounded off to two decimal places instead of the present practice of one decimal place. In particular, the results indicated that the influence of wind on the change of sprinting velocity, for wind velocities of up to 2 m/s, was of the order of magnitude of 10^-3 m/s. This proves that the IAAF Competition Rules correctly neglect the influence of the wind with regard to such velocities. However, for wind velocities over 2 m/s, the wind influence is of order 10^-2 m/s and cannot be neglected.

  7. A Model for Determining the Effect of the Wind Velocity on 100 m Sprinting Performance

    PubMed Central

    Janjic, Natasa; Kapor, Darko; Doder, Dragan; Petrovic, Aleksandar; Doder, Radoslava

    2017-01-01

    Abstract This paper introduces an equation for determining instantaneous and final velocity of a sprinter in a 100 m run completed with a wind resistance ranging from 0.1 to 4.5 m/s. The validity of the equation was verified using the data of three world class sprinters: Carl Lewis, Maurice Green, and Usain Bolt. For the given constant wind velocity with the values + 0.9 and + 1.1 m/s, the wind contribution to the change of sprinter velocity was the same for the maximum as well as for the final velocity. This study assessed how the effect of the wind velocity influenced the change of sprinting velocity. The analysis led to the conclusion that the official limit of safely neglecting the wind influence could be chosen as 1 m/s instead of 2 m/s, if the velocity were presented using three, instead of two decimal digits. This implies that wind velocity should be rounded off to two decimal places instead of the present practice of one decimal place. In particular, the results indicated that the influence of wind on the change of sprinting velocity, for wind velocities of up to 2 m/s, was of the order of magnitude of 10^-3 m/s. This proves that the IAAF Competition Rules correctly neglect the influence of the wind with regard to such velocities. However, for wind velocities over 2 m/s, the wind influence is of order 10^-2 m/s and cannot be neglected. PMID:28713468

  8. Media filter drain : modified design evaluation.

    DOT National Transportation Integrated Search

    2014-09-01

    The media filter drain (MFD), a stormwater water quality treatment best management practice, consists of media made up of : aggregate, perlite, gypsum and dolomite in a trench located along roadway shoulders with gravel and vegetative pre-filtering f...

  9. Design of multi-wavelength tunable filter based on Lithium Niobate

    NASA Astrophysics Data System (ADS)

    Zhang, Ailing; Yao, Yuan; Zhang, Yue; Song, Hongyun

    2018-05-01

    A multi-wavelength tunable filter is designed. It consists of multiple waveguides among multiple waveguide gratings. A pair of electrodes was placed on both sides of each waveguide. The tunable filter uses the electro-optic effect of Lithium Niobate to tune the phase introduced by each waveguide. Consequently, the wavelength and wavelength spacing of the filter are tuned by changing the external voltages applied to the electrode pairs. The tunable property of the filter is analyzed by the phase-matching condition and the transfer-matrix method. Numerical results show that not only are multiple wavelengths with narrow bandwidth tuned with nearly equal spacing by synchronously changing the voltages applied to all electrode pairs, but also the number of wavelengths is determined by the number of phase shifts introduced by the electrode pairs. Furthermore, due to the electro-optic effect of Lithium Niobate, the tuning speed of the filter can reach the order of nanoseconds.

  10. Design and analysis of planar spiral resonator bandstop filter for microwave frequency

    NASA Astrophysics Data System (ADS)

    Motakabber, S. M. A.; Shaifudin Suharsono, Muhammad

    2017-11-01

    At microwave frequencies, a spiral resonator can act as either a frequency-rejecting or a frequency-accepting circuit. A planar logarithmic spiral resonator bandstop filter has been developed based on this property. This project focuses on the rejection property of the spiral resonator. The performance of the proposed filter circuit has been analyzed using the scattering-parameter (S-parameter) technique over the ultra-wideband microwave frequency range. The proposed filter is built and simulated, and the S-parameter analysis has been accomplished, using the electromagnetic simulation software CST Microwave Studio. The commercial microwave substrate Taconic TLX-8 has been used to build this filter. Experimental results showed that the -10 dB rejection bandwidth of the filter is 2.32 GHz and the central frequency is 5.72 GHz, which is suitable for ultra-wideband applications. The simulated and experimental results are in good agreement.

  11. Survey of digital filtering

    NASA Technical Reports Server (NTRS)

    Nagle, H. T., Jr.

    1972-01-01

    A three part survey is made of the state-of-the-art in digital filtering. Part one presents background material including sampled data transformations and the discrete Fourier transform. Part two, digital filter theory, gives an in-depth coverage of filter categories, transfer function synthesis, quantization and other nonlinear errors, filter structures and computer aided design. Part three presents hardware mechanization techniques. Implementations by general purpose, mini-, and special-purpose computers are presented.

  12. Microprocessor realizations of range rate filters

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The performance of five digital range rate filters is evaluated. A range rate filter receives an input of range data from a radar unit and produces an output of smoothed range data and its estimated derivative range rate. The filters are compared through simulation on an IBM 370. Two of the filter designs are implemented on a 6800 microprocessor-based system. Comparisons are made on the bases of noise variance reduction ratios and convergence times of the filters in response to simulated range signals.
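
    The report does not list the five filter structures here, but a classical example of this kind of range/range-rate smoother is the alpha-beta tracker; the sketch below is purely illustrative and is not claimed to be one of the filters evaluated:

        # Illustrative alpha-beta tracker: smooths noisy range samples and estimates
        # range rate (the derivative of range). The gains alpha and beta trade noise
        # reduction against convergence time.
        def alpha_beta_track(ranges, dt, alpha=0.5, beta=0.1):
            r_est, rdot_est = ranges[0], 0.0
            history = []
            for z in ranges[1:]:
                r_pred = r_est + rdot_est * dt             # predict
                resid = z - r_pred                         # innovation
                r_est = r_pred + alpha * resid             # corrected range
                rdot_est = rdot_est + (beta / dt) * resid  # corrected range rate
                history.append((r_est, rdot_est))
            return history

        # Noiseless target closing at -30 m/s, sampled every 0.1 s.
        meas = [10000.0 - 30.0 * 0.1 * k for k in range(50)]
        print(alpha_beta_track(meas, dt=0.1)[-1])          # range rate approaches -30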

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arndt, T.E., Fluor Daniel Hanford

    A previous evaluation documented in report WHC-SD-GN-RPT-30005, Rev. 0, titled "Evaluation on Self-Contained High Efficiency Particulate Filters," revealed that the SCHEPA filters do not have the required documentation to be in compliance with the design, testing, and fabrication standards required in ASME N-509, ASME N-510, and MIL-F-51068. These standards are required by DOE Order 6430.1A. Without this documentation, filter adequacy cannot be verified. The existing SCHEPA filters can be removed and replaced with new filters and filter housings which meet current codes and standards.

  14. An Intrinsically Switchable Ladder-Type Ferroelectric BST-on-Si Composite FBAR Filter.

    PubMed

    Lee, Seungku; Mortazawi, Amir

    2016-03-01

    This paper presents a ladder-type bulk acoustic wave (BAW) intrinsically switchable filter based on ferroelectric thin-film bulk acoustic resonators (FBARs). The switchable filter can be turned on and off by the application of an external bias voltage due to the electrostrictive effect in thin-film ferroelectrics. In this paper, Barium Strontium Titanate (BST) is used as the ferroelectric material. A systematic design approach for switchable ladder-type ferroelectric filters is provided based on required filter specifications. A switchable filter is implemented in the form of a BST-on-Si composite structure to control the effective electromechanical coupling coefficient of FBARs. As an experimental verification, a 2.5-stage intrinsically switchable BST-on-Si composite FBAR filter is designed, fabricated, and measured. Measurement results for a typical BST-on-Si composite FBAR show a resonator mechanical quality factor (Q(m)) of 971, as well as a (Q(m)) × f of 2423 GHz. The filter presented here provides a measured insertion loss of 7.8 dB, out-of-band rejection of 26 dB, and fractional bandwidth of 0.33% at 2.5827 GHz when the filter is in the on state at a dc bias of 40 V. In its off state, the filter exhibits an isolation of 31 dB.

  15. Drafting: Current Trends and Future Practices

    ERIC Educational Resources Information Center

    Jensen, C.

    1976-01-01

    Various research findings are reported on drafting trends which the author feels should be incorporated into teaching drafting: (1) true position and geometric tolerancing, (2) decimal and metric dimensioning, (3) functional drafting, (4) automated drafting, and (5) drawing reproductions. (BP)

  16. Mystery #5 Answer

    Atmospheric Science Data Center

    2013-04-22

    ... the questions are provided. 1.   There are no endemic species of cactus on any of the islands. Answer: FALSE. Endemic ... on this island. 6.   Several plant species are endangered due to decimation by goats and competition with non-native ...

  17. 40 CFR 60.424 - Test methods and procedures.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... to conduct the run, liter/min. B=acid density (a function of acid strength and temperature), g/cc. C=acid strength, decimal fraction. K1/4=conversion factor, 0.0808 (Mg-min-cc)/(g-hr-liter) [0.0891 (ton...

  18. 40 CFR 60.424 - Test methods and procedures.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... to conduct the run, liter/min. B=acid density (a function of acid strength and temperature), g/cc. C=acid strength, decimal fraction. K1/4=conversion factor, 0.0808 (Mg-min-cc)/(g-hr-liter) [0.0891 (ton...

  19. 40 CFR 60.424 - Test methods and procedures.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... to conduct the run, liter/min. B=acid density (a function of acid strength and temperature), g/cc. C=acid strength, decimal fraction. K1/4=conversion factor, 0.0808 (Mg-min-cc)/(g-hr-liter) [0.0891 (ton...

  20. Refractive Outcomes of 20 Eyes Undergoing ICL Implantation for Correction of Hyperopic Astigmatism.

    PubMed

    Coskunseven, Efekan; Kavadarli, Isilay; Sahin, Onurcan; Kayhan, Belma; Pallikaris, Ioannis

    2017-09-01

    To analyze 1-week, 1-month, and 12-month postoperative refractive outcomes of eyes that underwent ICL implantation to correct hyperopic astigmatism. The study enrolled 20 eyes of patients with an average age of 32 years (range: 21 to 40 years). The outcomes of spherical and cylindrical refraction, uncorrected distance visual acuity (UDVA), corrected distance visual acuity (CDVA), vault, and angle parameters were evaluated 1 week, 1 month, and 12 months postoperatively. The preoperative mean UDVA was 0.15 ± 0.11 (decimal) (20/133 Snellen) and increased to 0.74 ± 0.25 (20/27 Snellen) postoperatively, with a change of 0.59 (decimal) (20/33.9 Snellen) (P < .0001), which was statistically significant. The preoperative mean CDVA was 0.74 ± 0.25 (decimal) (20/27 Snellen) and increased to 0.78 ± 0.21 (20/25 Snellen), with a change of 0.03 (decimal) (20/666 Snellen) (P < .052), which was not statistically significant. The mean preoperative sphere was 6.86 ± 1.77 diopters (D) and the mean preoperative cylinder was -1.44 ± 0.88 D. The mean 12-month postoperative sphere decreased to 0.46 ± 0.89 D (P < .001) and cylinder decreased to -0.61 ± 0.46 D (P < .001), with a change of 6.40 D, both of which were statistically significant. The mean 1-month postoperative vault was 0.65 ± 0.13 mm and decreased to 0.613 ± 0.10 mm at 1 year postoperatively, with a change of 0.44 mm (P < .003). The preoperative/12-month and 1-month/12-month trabecular-iris angle (TIA), trabecular-iris space area 500 μm from the scleral spur (TISA500), and angle opening distance 500 μm from the scleral spur (AOD500) values were analyzed nasally, temporally, and inferiorly. All differences were statistically significant between preoperative/12-month analysis. The only differences between 1- and 12-month analysis were on TISA500 inferior (P < .002) and AOD500 nasal (0.031) values. ICL hyperopic toric implantation is a safe method and provides stable refractive outcomes in patients with high hyperopia (up to 10.00 D) and astigmatism (up to 6.00 D). [J Refract Surg. 2017;33(9):604-609.]. Copyright 2017, SLACK Incorporated.
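
    The decimal acuities and Snellen fractions quoted above are related by a simple conversion (Snellen denominator = 20 divided by the decimal acuity, for a 20-ft chart); a quick arithmetic check of the quoted values:

        # Convert decimal visual acuity to an approximate 20-ft Snellen fraction.
        def snellen_denominator(decimal_acuity):
            return 20.0 / decimal_acuity

        for va in (0.15, 0.74, 0.78, 0.59, 0.03):
            print(f"decimal {va:.2f} -> 20/{snellen_denominator(va):.0f}")
        # 0.15 -> 20/133, 0.74 -> 20/27, 0.78 -> 20/26 (quoted as 20/25),
        # 0.59 -> 20/34 (quoted as 20/33.9), 0.03 -> 20/667 (quoted as 20/666)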

  1. The elimination of influence of disturbing bodies' coordinates and derivatives discontinuity on the accuracy of asteroid motion simulation

    NASA Astrophysics Data System (ADS)

    Baturin, A. P.; Votchel, I. A.

    2013-12-01

    The problem of asteroid motion simulation has been considered. At present, this simulation is performed by means of numerical integration, taking into account the perturbations from the planets and the Moon using one of the planetary ephemerides (DE405, DE422, etc.). All these ephemerides contain coefficients of Chebyshev polynomials for a large number of equal interpolation intervals. However, the ephemerides have been constructed to keep, at the junctions of adjacent intervals, continuity of only the coordinates and their first derivatives (and only in the 16-digit decimal format corresponding to 64-bit floating-point numbers). The second- and higher-order derivatives have breaks at these junctions. These breaks, if they fall within an integration step, decrease the accuracy of numerical integration. In the 34-digit format (128-bit floating-point numbers), the coordinates and their first derivatives also have breaks (at the 15th-16th decimal digit) at the junctions of the interpolation intervals. Two ways of eliminating the influence of such breaks have been considered. The first is a "smoothing" of the ephemerides so that the planets' coordinates and their derivatives up to some order are continuous at the junctions. The smoothing algorithm is based on conditional least-squares fitting of the coefficients of the Chebyshev polynomials, the conditions being equality of the coordinates and derivatives up to some order "from the left" and "from the right" at each junction. The algorithm has been applied to smoothing the DE430 ephemerides up to the first-order derivatives. The second way is a correction of the integration step so that junctions do not lie within a step and always coincide with its end. This way may be applied only at 16-digit decimal precision, because it assumes continuity of the planets' coordinates and their first derivatives. Both ways were applied in forward and backward numerical integration for the asteroids Apophis and 2012 DA14 by means of the 15th- and 31st-order Everhart methods at 16- and 34-digit decimal precision, respectively. The DE430 ephemerides (in original and smoothed form) were used for the calculation of perturbations. The results indicate that the integration-step correction increases the numerical integration accuracy by 3-4 orders of magnitude. If, in addition, the original ephemerides are replaced by the smoothed ones, the accuracy increases by approximately 10 orders of magnitude.

  2. Designing a web site for high school geoscience teaching in Iceland

    NASA Astrophysics Data System (ADS)

    Douglas, George R.

    1998-08-01

    The need to construct an earth science teaching site on the web prompted a survey of existing sites which, in spite of containing much of value, revealed many weaknesses in basic design, particularly as regards the organisation of links to information resources. Few web sites take into consideration the particular pedagogic needs of the high school science student and there has, as yet, been little serious attempt to exploit and organise the more outstanding advantages offered by the internet to science teaching, such as accessing real-time data. A web site has been constructed which, through basic design, enables students to access relevant information resources over a wide range of subjects and topics easily and rapidly, while at the same time performing an instructional role in how to handle both on-line and off-line resources. Key elements in the design are selection and monitoring by the teacher, task oriented pages and the use of the Dewey decimal classification system. The intention is to increase gradually the extent to which most teaching tasks are carried out via the web pages, in the belief that they can become an efficient central point for all the earth science curriculum.

  3. An Enhanced Non-Coherent Pre-Filter Design for Tracking Error Estimation in GNSS Receivers.

    PubMed

    Luo, Zhibin; Ding, Jicheng; Zhao, Lin; Wu, Mouyan

    2017-11-18

    Tracking error estimation is of great importance in global navigation satellite system (GNSS) receivers. Any inaccurate estimation for tracking error will decrease the signal tracking ability of signal tracking loops and the accuracies of position fixing, velocity determination, and timing. Tracking error estimation can be done by traditional discriminator, or Kalman filter-based pre-filter. The pre-filter can be divided into two categories: coherent and non-coherent. This paper focuses on the performance improvements of non-coherent pre-filter. Firstly, the signal characteristics of coherent and non-coherent integration-which are the basis of tracking error estimation-are analyzed in detail. After that, the probability distribution of estimation noise of four-quadrant arctangent (ATAN2) discriminator is derived according to the mathematical model of coherent integration. Secondly, the statistical property of observation noise of non-coherent pre-filter is studied through Monte Carlo simulation to set the observation noise variance matrix correctly. Thirdly, a simple fault detection and exclusion (FDE) structure is introduced to the non-coherent pre-filter design, and thus its effective working range for carrier phase error estimation extends from (-0.25 cycle, 0.25 cycle) to (-0.5 cycle, 0.5 cycle). Finally, the estimation accuracies of discriminator, coherent pre-filter, and the enhanced non-coherent pre-filter are evaluated comprehensively through the carefully designed experiment scenario. The pre-filter outperforms traditional discriminator in estimation accuracy. In a highly dynamic scenario, the enhanced non-coherent pre-filter provides accuracy improvements of 41.6%, 46.4%, and 50.36% for carrier phase error, carrier frequency error, and code phase error estimation, respectively, when compared with coherent pre-filter. The enhanced non-coherent pre-filter outperforms the coherent pre-filter in code phase error estimation when carrier-to-noise density ratio is less than 28.8 dB-Hz, in carrier frequency error estimation when carrier-to-noise density ratio is less than 20 dB-Hz, and in carrier phase error estimation when carrier-to-noise density belongs to (15, 23) dB-Hz ∪ (26, 50) dB-Hz.
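
    For context, the four-quadrant arctangent (ATAN2) discriminator named above recovers the carrier phase error directly from the prompt in-phase/quadrature correlator outputs, while the two-quadrant ATAN discriminator is limited to a quarter-cycle range. A generic textbook sketch (not the authors' pre-filter):

        # Generic GNSS carrier phase discriminators driven by the prompt correlator
        # outputs I_P and Q_P. ATAN2 spans (-0.5, 0.5] cycle; ATAN spans (-0.25, 0.25].
        import math

        def atan2_discriminator(i_p, q_p):
            return math.atan2(q_p, i_p)                       # radians, full-cycle range

        def atan_discriminator(i_p, q_p):
            if i_p == 0.0:
                return math.copysign(math.pi / 2.0, q_p)
            return math.atan(q_p / i_p)                       # radians, half range

        phase_err = 0.3 * 2.0 * math.pi                       # a 0.3-cycle phase error
        i_p, q_p = math.cos(phase_err), math.sin(phase_err)
        print(atan2_discriminator(i_p, q_p) / (2 * math.pi))  # ~0.3 cycle (correct)
        print(atan_discriminator(i_p, q_p) / (2 * math.pi))   # ~-0.2 cycle (aliased)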

  4. An Enhanced Non-Coherent Pre-Filter Design for Tracking Error Estimation in GNSS Receivers

    PubMed Central

    Luo, Zhibin; Ding, Jicheng; Zhao, Lin; Wu, Mouyan

    2017-01-01

    Tracking error estimation is of great importance in global navigation satellite system (GNSS) receivers. Any inaccurate estimation for tracking error will decrease the signal tracking ability of signal tracking loops and the accuracies of position fixing, velocity determination, and timing. Tracking error estimation can be done by traditional discriminator, or Kalman filter-based pre-filter. The pre-filter can be divided into two categories: coherent and non-coherent. This paper focuses on the performance improvements of non-coherent pre-filter. Firstly, the signal characteristics of coherent and non-coherent integration—which are the basis of tracking error estimation—are analyzed in detail. After that, the probability distribution of estimation noise of four-quadrant arctangent (ATAN2) discriminator is derived according to the mathematical model of coherent integration. Secondly, the statistical property of observation noise of non-coherent pre-filter is studied through Monte Carlo simulation to set the observation noise variance matrix correctly. Thirdly, a simple fault detection and exclusion (FDE) structure is introduced to the non-coherent pre-filter design, and thus its effective working range for carrier phase error estimation extends from (−0.25 cycle, 0.25 cycle) to (−0.5 cycle, 0.5 cycle). Finally, the estimation accuracies of discriminator, coherent pre-filter, and the enhanced non-coherent pre-filter are evaluated comprehensively through the carefully designed experiment scenario. The pre-filter outperforms traditional discriminator in estimation accuracy. In a highly dynamic scenario, the enhanced non-coherent pre-filter provides accuracy improvements of 41.6%, 46.4%, and 50.36% for carrier phase error, carrier frequency error, and code phase error estimation, respectively, when compared with coherent pre-filter. The enhanced non-coherent pre-filter outperforms the coherent pre-filter in code phase error estimation when carrier-to-noise density ratio is less than 28.8 dB-Hz, in carrier frequency error estimation when carrier-to-noise density ratio is less than 20 dB-Hz, and in carrier phase error estimation when carrier-to-noise density belongs to (15, 23) dB-Hz ∪ (26, 50) dB-Hz. PMID:29156581

  5. Flat-Passband 3 × 3 Interleaving Filter Designed With Optical Directional Couplers in Lattice Structure

    NASA Astrophysics Data System (ADS)

    Wang, Qi Jie; Zhang, Ying; Soh, Yeng Chai

    2005-12-01

    This paper presents a novel lattice optical delay-line circuit using 3 × 3 directional couplers to implement three-port optical interleaving filters. It is shown that the proposed circuit can deliver three channels of 2pi/3 phase-shifted interleaving transmission spectra if the coupling ratios of the last two directional couplers are selected appropriately. The other performance requirements of an optical interleaver can be achieved by designing the remaining part of the lattice circuit. A recursive synthesis design algorithm is developed to calculate the design parameters of the lattice circuit that will yield the desired filter response. As illustrative examples, interleavers with maximally flat-top passband transmission and with given transmission performance on passband ripples and passband bandwidth, respectively, are designed to verify the effectiveness of the proposed design scheme.

  6. Design and implementation of an audio indicator

    NASA Astrophysics Data System (ADS)

    Zheng, Shiyong; Li, Zhao; Li, Biqing

    2017-04-01

    This paper proposes an audio indicator designed around a C9014 transistor amplifier, an operational-amplifier LED level indicator, and a CD4017 decade counter/distributor. The circuit can control neon and holiday lights in response to an audio signal. The input audio signal is first amplified by the C9014 and operational-amplifier power stage; a potentiometer sets the amplified signal level fed to the CD4017, which is driven to count and in turn drives the LEDs that display the operating state of the circuit. This simple audio indicator uses only a single IC (U1) and produces a two-color LED chasing effect that follows the audio signal, so the LED display gives a general indication of the audio signal, the variation of its frequency, and the corresponding level. The lights can operate in four modes (jumping, fading, chasing, and steady lighting) and can be used in homes, hotels, discos, theaters, advertising, and many other settings.

  7. Linear variable narrow bandpass optical filters in the far infrared (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Rahmlow, Thomas D.

    2017-06-01

    We are currently developing linear variable filters (LVFs) with very high wavelength gradients. In the visible, these filters have a wavelength gradient of 50 to 100 nm/mm. In the infrared, the wavelength gradient covers the range of 500 to 900 microns/mm. Filter designs include band pass, long pass, and ultra-high-performance anti-reflection coatings. The active area of the filters is on the order of 5 to 30 mm along the wavelength gradient and up to 30 mm in the orthogonal, constant-wavelength direction. Variation in performance along the constant direction is less than 1%. Repeatable performance from filter to filter, absolute placement of the filter relative to a substrate fiducial, and high in-band transmission across the full spectral band are demonstrated. Applications include order sorting filters, direct replacement of the spectrometer, and hyper-spectral imaging. Off-band rejection with an optical density of greater than 3 allows use of the filter as an order sorting filter. The linear variable order sorting filter replaces other filter types such as block filters. The disadvantage of block filters is the loss of pixels due to the transition between filter blocks; the LVF is a continuous gradient without a discrete transition between filter wavelength regions. If the LVF is designed as a narrow band pass filter, it can be used in place of a spectrometer, thus reducing overall sensor weight and cost while improving the robustness of the sensor. By controlling the orthogonal performance (smile), the LVF can be sized to the dimensions of the detector. When imaging onto a two-dimensional array and operating the sensor in a push-broom configuration, the LVF spectrometer performs as a hyper-spectral imager. This paper presents the performance of LVFs fabricated in the far infrared on substrates sized to available detectors. The impact of spot size, F-number, and filter characterization is presented. Results are also compared to extended-visible LVFs.
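
    The defining property of an LVF is that its passband center wavelength varies linearly with position along the gradient axis, so a pixel's location on the detector maps directly to a wavelength. A minimal sketch with assumed, illustrative numbers (a visible-band gradient; none of the values are taken from the paper):

```python
# Map position along a linear variable filter's gradient axis to center wavelength.
# The 450 nm start and 75 nm/mm slope are assumptions for illustration only,
# chosen within the 50-100 nm/mm visible-band gradient range quoted above.
def lvf_center_wavelength_nm(x_mm, lambda_start_nm=450.0, gradient_nm_per_mm=75.0):
    """Center wavelength (nm) at position x_mm along the gradient axis."""
    return lambda_start_nm + gradient_nm_per_mm * x_mm

# A 10 mm active area at 75 nm/mm spans 750 nm of center wavelength:
for x in (0.0, 5.0, 10.0):
    print(f"x = {x:4.1f} mm -> {lvf_center_wavelength_nm(x):.0f} nm")
```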

  8. Effect of filter designs on hydraulic properties and well efficiency.

    PubMed

    Kim, Byung-Woo

    2014-09-01

    To analyze the effect of filter pack arrangement on the hydraulic properties and the well efficiency of a well design, a step-drawdown test was conducted in a sand-filled tank model. Prior to the test, a single filter pack (SFP), granule only, and two dual filter packs (DFPs), type A (granule-pebble) and type B (pebble-granule), were designed to surround the well screen. The hydraulic properties and well efficiencies related to the filter packs were evaluated using Hazen's, Eden-Hazel's, Jacob's, and Labadie-Helweg's methods. The results showed that the hydraulic properties and well efficiency of the DFPs were higher than those of the SFP, and the clogging effect and wellhead loss related to the aquifer material were the lowest owing to the grain size and the arrangement of the filter pack. The hydraulic conductivity of DFP types A and B was about 1.41 and 6.43 times that of the SFP, respectively. In addition, the well efficiency of DFP types A and B was about 1.38 and 1.60 times that of the SFP, respectively. In this study, changes in hydraulic properties and well efficiency were observed according to the filter pack used, and the results differed from the predictions of previous studies on the grain-size ratio. Proper pack-aquifer ratios and filter pack arrangements are primary factors in the construction of efficient water wells, as are the grain ratio, intrinsic permeability (k), and hydraulic conductivity (K) between the grains of the filter packs and the grains of the aquifer. © 2014, National Ground Water Association.

  9. 40 CFR 53.35 - Test procedure for Class II and Class III methods for PM2.5 and PM-2.5

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... reference method samplers shall be of single-filter design (not multi-filter, sequential sample design... and multiplicative bias (comparative slope and intercept). (1) For each test site, calculate the mean...

  10. 40 CFR 53.35 - Test procedure for Class II and Class III methods for PM2.5 and PM-2.5

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... reference method samplers shall be of single-filter design (not multi-filter, sequential sample design... and multiplicative bias (comparative slope and intercept). (1) For each test site, calculate the mean...

  11. 40 CFR 53.35 - Test procedure for Class II and Class III methods for PM2.5 and PM−2.5.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... reference method samplers shall be of single-filter design (not multi-filter, sequential sample design... and multiplicative bias (comparative slope and intercept). (1) For each test site, calculate the mean...

  12. 42 CFR 84.177 - Inhalation and exhalation valves; minimum requirements.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... air from adversely affecting filters, except where filters are specifically designed to resist... DEVICES Non-Powered Air-Purifying Particulate Respirators § 84.177 Inhalation and exhalation valves... external influence; and (3) Designed and constructed to prevent inward leakage of contaminated air. ...

  13. 42 CFR 84.177 - Inhalation and exhalation valves; minimum requirements.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... air from adversely affecting filters, except where filters are specifically designed to resist... DEVICES Non-Powered Air-Purifying Particulate Respirators § 84.177 Inhalation and exhalation valves... external influence; and (3) Designed and constructed to prevent inward leakage of contaminated air. ...

  14. AgBufferBuilder: A geographic information system (GIS) tool for precision design and performance assessment of filter strips

    Treesearch

    M. G. Dosskey; S. Neelakantan; T. G. Mueller; T. Kellerman; M. J. Helmers; E. Rienzi

    2015-01-01

    Spatially nonuniform runoff reduces the water quality performance of constant-width filter strips. A geographic information system (GIS)-based tool was developed and tested that employs terrain analysis to account for spatially nonuniform runoff and produce more effective filter strip designs. The computer program, AgBufferBuilder, runs with ArcGIS versions 10.0 and 10...

  15. Joint Transmit and Receive Filter Optimization for Sub-Nyquist Delay-Doppler Estimation

    NASA Astrophysics Data System (ADS)

    Lenz, Andreas; Stein, Manuel S.; Swindlehurst, A. Lee

    2018-05-01

    In this article, a framework is presented for the joint optimization of the analog transmit and receive filters with respect to a parameter estimation problem. At the receiver, conventional signal processing systems restrict the two-sided bandwidth of the analog pre-filter $B$ to the rate of the analog-to-digital converter $f_s$ to comply with the well-known Nyquist-Shannon sampling theorem. In contrast, here we consider a transceiver that by design violates the common paradigm $B \leq f_s$. To this end, at the receiver, we allow for a higher pre-filter bandwidth $B > f_s$ and study the achievable parameter estimation accuracy under a fixed sampling rate when the transmit and receive filters are jointly optimized with respect to the Bayesian Cramér-Rao lower bound. For the case of delay-Doppler estimation, we propose to approximate the required Fisher information matrix and solve the transceiver design problem by an alternating optimization algorithm. The presented approach allows us to explore the Pareto-optimal region spanned by transmit and receive filters which are favorable under a weighted mean squared error criterion. We also discuss the computational complexity of the obtained transceiver design by visualizing the resulting ambiguity function. Finally, we verify the performance of the optimized designs by Monte-Carlo simulations of a likelihood-based estimator.
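
    Alternating optimization of this kind fixes one filter while solving for the other, then swaps roles and repeats until the objective stops improving. A minimal, generic sketch of the pattern on a toy coupled least-squares cost (not the paper's Fisher-information or Bayesian Cramér-Rao objective; all matrices and dimensions are illustrative):

```python
# Alternating optimization on a toy coupled least-squares objective: x plays the
# role of the transmit-filter parameters and y the receive-filter parameters.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 4))   # couples the "transmit" parameters to the data
B = rng.standard_normal((8, 4))   # couples the "receive" parameters to the data
c = rng.standard_normal(8)        # target vector

def cost(x, y):
    """Toy weighted-MSE stand-in: || A x + B y - c ||^2."""
    return float(np.sum((A @ x + B @ y - c) ** 2))

x = np.zeros(4)
y = np.zeros(4)
for it in range(10):
    # Step 1: best x for the current y (a small linear least-squares problem)
    x, *_ = np.linalg.lstsq(A, c - B @ y, rcond=None)
    # Step 2: best y for the updated x
    y, *_ = np.linalg.lstsq(B, c - A @ x, rcond=None)
    print(f"iteration {it + 1:2d}: cost = {cost(x, y):.6f}")  # non-increasing
```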

  16. Development of a New Arterial-Line Filter Design Using Computational Fluid Dynamics Analysis

    PubMed Central

    Herbst, Daniel P.; Najm, Hani K.

    2012-01-01

    Abstract: Arterial-line filters used during extracorporeal circulation continue to rely on the physical properties of a wetted micropore and reductions in blood flow velocity to effect air separation from the circulating blood volume. Although problems associated with air embolism during cardiac surgery persist, a number of investigators have concluded that further improvements in filtration are needed to enhance air removal during cardiopulmonary bypass procedures. This article reviews theoretical principles of micropore filter technology and outlines the development of a new arterial-line filter concept using computational fluid dynamics analysis. Manufacturer-supplied data of a micropore screen and experimental results taken from an ex vivo test circuit were used to define the inputs needed for numerical modeling of a new filter design. Flow patterns, pressure distributions, and velocity profiles predicted with computational fluid dynamics software were used to inform decisions on model refinements and how to achieve initial design goals of ≤225 mL prime volume and ≤500 cm2 of screen surface area. Predictions for optimal model geometry included a screen angle of 56° from the horizontal plane with a total surface area of 293.9 cm2 and a priming volume of 192.4 mL. This article describes in brief the developmental process used to advance a new filter design and supports the value of numerical modeling in this undertaking. PMID:23198394

  17. Development of a new arterial-line filter design using computational fluid dynamics analysis.

    PubMed

    Herbst, Daniel P; Najm, Hani K

    2012-09-01

    Arterial-line filters used during extracorporeal circulation continue to rely on the physical properties of a wetted micropore and reductions in blood flow velocity to effect air separation from the circulating blood volume. Although problems associated with air embolism during cardiac surgery persist, a number of investigators have concluded that further improvements in filtration are needed to enhance air removal during cardiopulmonary bypass procedures. This article reviews theoretical principles of micropore filter technology and outlines the development of a new arterial-line filter concept using computational fluid dynamics analysis. Manufacturer-supplied data of a micropore screen and experimental results taken from an ex vivo test circuit were used to define the inputs needed for numerical modeling of a new filter design. Flow patterns, pressure distributions, and velocity profiles predicted with computational fluid dynamics software were used to inform decisions on model refinements and how to achieve initial design goals of < or = 225 mL prime volume and < or = 500 cm2 of screen surface area. Predictions for optimal model geometry included a screen angle of 56 degrees from the horizontal plane with a total surface area of 293.9 cm2 and a priming volume of 192.4 mL. This article describes in brief the developmental process used to advance a new filter design and supports the value of numerical modeling in this undertaking.

  18. An estimator-predictor approach to PLL loop filter design

    NASA Technical Reports Server (NTRS)

    Statman, J. I.; Hurd, W. J.

    1986-01-01

    An approach to the design of digital phase locked loops (DPLLs), using estimation theory concepts in the selection of a loop filter, is presented. The key concept is that the DPLL closed-loop transfer function is decomposed into an estimator and a predictor. The estimator provides recursive estimates of phase, frequency, and higher order derivatives, while the predictor compensates for the transport lag inherent in the loop. This decomposition results in a straightforward loop filter design procedure, enabling use of techniques from optimal and sub-optimal estimation theory. A design example for a particular choice of estimator is presented, followed by analysis of the associated bandwidth, gain margin, and steady state errors caused by unmodeled dynamics. This approach is under consideration for the design of the Deep Space Network (DSN) Advanced Receiver Carrier DPLL.
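
    To make the decomposition concrete, here is a minimal sketch (not the paper's DSN design) of a second-order digital PLL whose loop update is split into an estimator, an alpha-beta tracker of phase and frequency driven by the phase residual, and a predictor that extrapolates the estimate one update interval ahead to compensate for the loop's transport lag. All gains and signal parameters are illustrative:

```python
# Estimator-predictor decomposition of a digital PLL loop filter (illustrative).
import numpy as np

def run_dpll(phase_in, dt=1e-3, alpha=0.3, beta=0.05):
    """Track an input phase sequence (in cycles); returns the NCO phase history."""
    nco_phase = 0.0   # numerically controlled oscillator phase (cycles)
    freq_est = 0.0    # estimated frequency (cycles/s)
    history = []
    for theta in phase_in:
        # Phase detector: residual between input and NCO phase, wrapped to +/- 0.5 cycle
        err = theta - nco_phase
        err -= np.round(err)
        # Estimator: alpha-beta update of phase and frequency from the residual
        phase_est = nco_phase + alpha * err
        freq_est += beta * err / dt
        # Predictor: extrapolate one update interval ahead to offset the transport lag
        nco_phase = phase_est + freq_est * dt
        history.append(nco_phase)
    return np.array(history)

# Example: track a phase with a constant frequency plus a frequency ramp (Doppler-like)
t = np.arange(0.0, 1.0, 1e-3)
true_phase = 5.0 * t + 0.5 * 2.0 * t**2   # 5 cycles/s plus a 2 cycles/s^2 ramp
tracked = run_dpll(true_phase)
print("final tracking error (cycles):", true_phase[-1] - tracked[-1])
```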

  19. Technological development of spectral filters for Sentinel-2

    NASA Astrophysics Data System (ADS)

    Schröter, Karin; Schallenberg, Uwe; Mohaupt, Matthias

    2017-11-01

    In the frame of the initiative for Global Monitoring for Environment and Security (GMES), jointly undertaken by the European Commission and the European Space Agency, a technological development of two filter assemblies was performed for the Multi-Spectral Instrument (MSI) of Sentinel-2. The multispectral pushbroom imaging of the Earth will be performed in 10 VNIR bands (from 443 nm to 945 nm) and 3 SWIR bands (from 1375 nm to 2190 nm). Possible filter coating techniques and masking concepts were considered in the frame of trade-off studies. The selected deposition concept is based on self-blocked all-dielectric multilayer band pass filters. The band pass and blocking characteristics are deposited on the space side of a single filter substrate, whereas the detector side of the substrate carries an anti-reflective coating. The space-side and detector-side masking design is realized by blades integrated in the mechanical parts, including the mechanical interface to the filter assembly support on the MSI focal plane. The feasibility and required performance of the VNIR Filter Assembly and SWIR Filter Assembly were successfully demonstrated by breadboarding. Extensive performance tests of spectral and optical parameters and environmental tests (radiation, vibration, shock, thermal vacuum cycling, humidity) were performed at the filter stripe and filter assembly levels. The presentation will contain a detailed description of the filter assembly design and the results of the performance and environmental tests.

  20. Multilevel filtering elliptic preconditioners

    NASA Technical Reports Server (NTRS)

    Kuo, C. C. Jay; Chan, Tony F.; Tong, Charles

    1989-01-01

    A class of preconditioners is presented for elliptic problems, built on ideas borrowed from digital filtering theory and implemented on a multilevel grid structure. They are designed to be both rapidly convergent and highly parallelizable. The digital filtering viewpoint allows the use of filter design techniques for constructing elliptic preconditioners and also provides an alternative framework for understanding several other recently proposed multilevel preconditioners. Numerical results are presented to assess the convergence behavior of the new methods and to compare them with other preconditioners of multilevel type, including the usual multigrid method as a preconditioner, the hierarchical basis method, and a recent method proposed by Bramble, Pasciak, and Xu.
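
    As a rough illustration of the idea (a two-level additive sketch, not the authors' multilevel filtering preconditioner): the residual is split into a high-frequency part handled by a damped-Jacobi sweep on the fine grid and a low-frequency part handled by a direct solve of the restricted problem on a coarser grid, and the combined operator is used inside preconditioned conjugate gradients. Problem size and damping factor are illustrative:

```python
# Two-level additive preconditioner for the 1D Laplacian, used inside PCG.
import numpy as np

def laplacian(n):
    """Standard 1D Poisson matrix with Dirichlet boundaries (n interior points)."""
    A = np.zeros((n, n))
    np.fill_diagonal(A, 2.0)
    np.fill_diagonal(A[1:], -1.0)
    np.fill_diagonal(A[:, 1:], -1.0)
    return A

def two_level_preconditioner(A):
    n = A.shape[0]
    nc = (n - 1) // 2
    # Linear-interpolation prolongation P and full-weighting restriction R
    P = np.zeros((n, nc))
    for j in range(nc):
        i = 2 * j + 1
        P[i - 1, j], P[i, j], P[i + 1, j] = 0.5, 1.0, 0.5
    R = 0.5 * P.T
    Ac_inv = np.linalg.inv(R @ A @ P)   # Galerkin coarse operator, solved directly
    D_inv = 1.0 / np.diag(A)
    def M_inv(r):
        # High-frequency part: one damped-Jacobi sweep on the fine grid
        z = 0.8 * D_inv * r
        # Low-frequency part: restrict, solve on the coarse grid, prolongate
        z += P @ (Ac_inv @ (R @ r))
        return z
    return M_inv

def pcg(A, b, M_inv, tol=1e-8, maxit=200):
    """Preconditioned conjugate gradients with preconditioner action M_inv."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    for k in range(maxit):
        Ap = A @ p
        step = (r @ z) / (p @ Ap)
        x += step * p
        r_new = r - step * Ap
        if np.linalg.norm(r_new) < tol:
            return x, k + 1
        z_new = M_inv(r_new)
        p = z_new + ((r_new @ z_new) / (r @ z)) * p
        r, z = r_new, z_new
    return x, maxit

A = laplacian(63)
b = np.ones(63)
x, iters = pcg(A, b, two_level_preconditioner(A))
print("PCG converged in", iters, "iterations")
```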
