A negentropy minimization approach to adaptive equalization for digital communication systems.
Choi, Sooyong; Lee, Te-Won
2004-07-01
In this paper, we introduce and investigate a new adaptive equalization method based on minimizing the approximate negentropy of the estimation error for a finite-length equalizer. We consider an approximate negentropy using nonpolynomial expansions of the estimation error as a new performance criterion to improve on a linear equalizer based on the minimum mean squared error (MMSE) criterion. Negentropy includes higher-order statistical information, and its minimization provides improved convergence and accuracy compared to traditional methods such as MMSE in terms of bit error rate (BER). The proposed negentropy minimization (NEGMIN) equalizer has two kinds of solutions, the MMSE solution and another one, depending on the ratio of the normalization parameters. The NEGMIN equalizer has the best BER performance when the ratio of the normalization parameters is adjusted to maximize the output power (variance) of the NEGMIN equalizer. Simulation experiments show that the BER performance of the NEGMIN equalizer with the non-MMSE solution has similar characteristics to the adaptive minimum bit error rate (AMBER) equalizer. The main advantage of the proposed equalizer is that it needs significantly fewer training symbols than the AMBER equalizer. Furthermore, the proposed equalizer is more robust to nonlinear distortions than the MMSE equalizer.
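The abstract does not spell out the nonpolynomial expansion it uses; one widely used surrogate is Hyvärinen's approximation J(e) ≈ (E[G(e)] − E[G(ν)])² with G(u) = log cosh(u) and ν standard normal. The sketch below is illustrative only — the tap count, step size, block size, and finite-difference gradient are our assumptions, not the authors' NEGMIN update:

```python
import numpy as np

def negentropy_cost(e):
    # Hyvarinen-style approximation J(e) ~ (E[G(e)] - E[G(nu)])^2,
    # G(u) = log cosh(u), nu ~ N(0,1); e is normalized to unit variance.
    e = e / (np.std(e) + 1e-12)
    g_gauss = 0.3746  # numerical value of E[log cosh(nu)] for standard normal nu
    return (np.mean(np.log(np.cosh(e))) - g_gauss) ** 2

def train_negentropy_equalizer(x, d, n_taps=11, mu=0.05, n_iter=300, block=256):
    """Adapt FIR equalizer taps w by (numerical) gradient descent on the
    negentropy of the estimation error e = d - w^T x, driving e Gaussian."""
    w = np.zeros(n_taps)
    w[n_taps // 2] = 1.0                       # center-spike initialization
    X = np.array([x[i:i + n_taps] for i in range(block)])
    for _ in range(n_iter):
        grad = np.zeros(n_taps)
        for k in range(n_taps):                # finite-difference gradient
            dw = np.zeros(n_taps); dw[k] = 1e-5
            grad[k] = (negentropy_cost(d[:block] - X @ (w + dw))
                       - negentropy_cost(d[:block] - X @ (w - dw))) / 2e-5
        w -= mu * grad / (np.linalg.norm(grad) + 1e-12)
    return w
```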
Blind Channel Equalization with Colored Source Based on Constrained Optimization Methods
NASA Astrophysics Data System (ADS)
Wang, Yunhua; DeBrunner, Linda; DeBrunner, Victor; Zhou, Dayong
2008-12-01
Tsatsanis and Xu have applied the constrained minimum output variance (CMOV) principle to directly blind equalize a linear channel—a technique that has proven effective with white inputs. It is generally assumed in the literature that their CMOV method can also effectively equalize a linear channel with a colored source. In this paper, we prove that colored inputs will cause the equalizer to converge incorrectly due to inadequate constraints. We also introduce a new blind channel equalizer algorithm that is based on the CMOV principle but uses a different constraint that correctly handles colored sources. Our proposed algorithm works for channels with either white or colored inputs and performs equivalently to the trained minimum mean-square error (MMSE) equalizer under high SNR. Thus, our proposed algorithm may be regarded as an extension of the CMOV algorithm proposed by Tsatsanis and Xu. We also introduce several methods to improve the performance of the proposed algorithm under low-SNR conditions. Simulation results show the superior performance of our proposed methods.
A complex valued radial basis function network for equalization of fast time varying channels.
Gan, Q; Saratchandran, P; Sundararajan, N; Subramanian, K R
1999-01-01
This paper presents a complex valued radial basis function (RBF) network for equalization of fast time varying channels. A new method for calculating the centers of the RBF network is given. The method allows the number of RBF centers to be fixed even as the equalizer order is increased, so that good performance is obtained by a high-order RBF equalizer with a small number of centers. Simulations are performed on time varying channels using a Rayleigh fading channel model to compare the performance of our RBF equalizer with an adaptive maximum-likelihood sequence estimator (MLSE) consisting of a channel estimator and an MLSE implemented by the Viterbi algorithm. The results show that the RBF equalizer produces superior performance with less computational complexity.
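A minimal sketch of the general structure — Gaussian kernels on complex received vectors with complex output weights fitted by least squares. The paper's specific center-calculation method is not reproduced, and all names and sizes here are assumptions:

```python
import numpy as np

def rbf_equalize(X, d, centers, width):
    """Complex-valued RBF equalizer: real Gaussian activations of the
    complex received vectors, complex weights fitted by least squares.
    X: (N, m) complex received vectors; d: (N,) complex training symbols;
    centers: (K, m) complex RBF centers."""
    dist2 = (np.abs(X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    Phi = np.exp(-dist2 / width ** 2)          # (N, K) design matrix
    w, *_ = np.linalg.lstsq(Phi.astype(complex), d, rcond=None)
    return Phi @ w, w                          # soft outputs and weights
```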
Histogram equalization with Bayesian estimation for noise robust speech recognition.
Suh, Youngjoo; Kim, Hoirin
2018-02-01
The histogram equalization approach is an efficient feature normalization technique for noise robust automatic speech recognition. However, it suffers from performance degradation when some fundamental conditions are not satisfied in the test environment. To remedy these limitations of the original histogram equalization methods, a class-based histogram equalization approach has been proposed. Although this approach showed substantial performance improvement under noise environments, it still suffers from performance degradation due to the overfitting problem when test data are insufficient. To address this issue, the proposed histogram equalization technique employs the Bayesian estimation method in estimating the test cumulative distribution function. A previous study on the Aurora-4 task reported that the proposed approach provided substantial performance gains in speech recognition systems based on acoustic modeling with the Gaussian mixture model-hidden Markov model. In this work, the proposed approach was examined in speech recognition systems with the deep neural network-hidden Markov model (DNN-HMM), the current mainstream speech recognition approach, where it also showed meaningful performance improvement over the conventional maximum likelihood estimation-based method. The fusion of the proposed features with the mel-frequency cepstral coefficients provided additional performance gains in DNN-HMM systems, which otherwise suffer from performance degradation in the clean test condition.
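For orientation, a sketch of the conventional (non-Bayesian) histogram equalization baseline that the proposed method refines: each test feature value is mapped through the empirical test CDF into the corresponding reference quantile. The Bayesian refinement of the test CDF estimate itself is not reproduced; function names are ours:

```python
import numpy as np

def histogram_equalize_features(test_feat, ref_feat, n_pts=1000):
    """Map x -> F_ref^{-1}(F_test(x)) using empirical CDFs, so the
    test feature distribution matches the reference distribution."""
    grid = np.linspace(0.0, 1.0, n_pts)
    ref_quantiles = np.quantile(ref_feat, grid)
    # empirical CDF value (rank / N) of each test sample
    ranks = np.searchsorted(np.sort(test_feat), test_feat, side='right')
    cdf_vals = ranks / len(test_feat)
    return np.interp(cdf_vals, grid, ref_quantiles)
```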
ERIC Educational Resources Information Center
Glazerman, Steven; Max, Jeffrey
2011-01-01
This appendix describes the methods and provides further detail to support the evaluation brief, "Do Low-Income Students Have Equal Access to the Highest-Performing Teachers?" (Contains 8 figures, 6 tables and 5 footnotes.) [For the main report, "Do Low-Income Students Have Equal Access to the Highest-Performing Teachers? NCEE…
Mitigation of intra-channel nonlinearities using a frequency-domain Volterra series equalizer.
Guiomar, Fernando P; Reis, Jacklyn D; Teixeira, António L; Pinto, Armando N
2012-01-16
We address the issue of intra-channel nonlinear compensation using a Volterra series nonlinear equalizer based on an analytical closed-form solution for the 3rd-order Volterra kernel in the frequency domain. The performance of the method is investigated through numerical simulations for a single-channel optical system using a 20 Gbaud NRZ-QPSK test signal propagated over 1600 km of both standard single-mode fiber and non-zero dispersion shifted fiber. We carry out performance and computational-effort comparisons with the well-known backward-propagation split-step Fourier (BP-SSF) method. The alias-free frequency-domain implementation of the Volterra series nonlinear equalizer makes it an attractive approach at low sampling rates, enabling it to surpass the maximum performance of BP-SSF at 2× oversampling. Linear and nonlinear equalization can be treated independently, providing more flexibility to the equalization subsystem. The parallel structure of the algorithm is also a key advantage in terms of real-time implementation.
Ribic, C.A.; Miller, T.W.
1998-01-01
We investigated CART performance with a unimodal response curve for one continuous response and four continuous explanatory variables, where two variables were important (i.e., directly related to the response) and the other two were not. We explored performance under three relationship strengths and two explanatory variable conditions: equal importance, and one variable four times as important as the other. We compared CART variable selection performance using three tree-selection rules ('minimum risk', 'minimum risk complexity', 'one standard error') to stepwise polynomial ordinary least squares (OLS) under four sample size conditions. The one-standard-error and minimum-risk-complexity methods performed about as well as stepwise OLS with large sample sizes when the relationship was strong. With weaker relationships, equally important explanatory variables and larger sample sizes, the one-standard-error and minimum-risk-complexity rules performed better than stepwise OLS. With weaker relationships and explanatory variables of unequal importance, tree-structured methods did not perform as well as stepwise OLS. Comparing performance within tree-structured methods, the one-standard-error rule was more likely than the other tree-selection rules to choose the correct model: with a strong relationship and equally important explanatory variables; with weaker relationships and equally important explanatory variables; and under all relationship strengths when explanatory variables were of unequal importance and sample sizes were lower.
Hue-preserving and saturation-improved color histogram equalization algorithm.
Song, Ki Sun; Kang, Hee; Kang, Moon Gi
2016-06-01
In this paper, an algorithm is proposed to improve contrast and saturation without color degradation. The local histogram equalization (HE) method offers better performance than the global HE method, but the local HE method sometimes produces undesirable results due to its block-based processing. The proposed contrast-enhancement (CE) algorithm reflects the characteristics of the global HE method within the local HE method to avoid such artifacts while enhancing both global and local contrast. There are two ways to apply the proposed CE algorithm to color images: processing only the luminance component, or processing each color channel independently. However, these approaches incur excessive or reduced saturation and color degradation problems. The proposed algorithm solves these problems by using channel-adaptive equalization and the similarity of ratios between the channels. Experimental results show that the proposed algorithm enhances contrast and saturation while preserving hue, and produces better performance than existing methods in terms of objective evaluation metrics.
2013-01-01
Background: The high variation of background luminance, low contrast, and excessively enhanced contrast of hand bone radiographs often impede bone age assessment rating systems in evaluating the development of epiphyseal plates and ossification centers. Global histogram equalization (GHE) has been the most frequently adopted image contrast enhancement technique, but its performance is not satisfying. A brightness- and detail-preserving histogram equalization method with good contrast enhancement has been a goal of much recent research in histogram equalization. Nevertheless, producing a histogram-equalized radiograph that is well balanced in terms of brightness preservation, detail preservation and contrast enhancement is a daunting task. Method: In this paper, we propose a novel histogram equalization framework that takes several desirable properties into account, namely the Multipurpose Beta Optimized Bi-Histogram Equalization (MBOBHE). This method performs histogram optimization separately in the two sub-histograms obtained by segmenting the histogram at an optimized separating point, determined from a regularization function constituted of three components. The result is then assessed by qualitative and quantitative analyses of the essential aspects of the equalized images, using a total of 160 hand radiographs acquired from an online hand bone database. Result: From the qualitative analysis, we found that basic bi-histogram equalizations are not capable of displaying small features in the image because the separating point is selected using only a single metric, without considering contrast enhancement and detail preservation. From the quantitative analysis, we found that MBOBHE correlates well with human visual perception, and this improvement shortens the evaluation time taken by an inspector in assessing bone age. Conclusions: The proposed MBOBHE outperforms other existing methods in the comprehensive performance of histogram equalization. All features pertinent to bone age assessment are more prominent than with other methods, which shortens the evaluation time required in manual bone age assessment using the TW method, while accuracy remains unaffected or slightly better than with the unprocessed original image. Brightness preservation, detail preservation and contrast enhancement are taken into consideration simultaneously, so the visual effect is conducive to manual inspection. PMID:23565999
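For orientation, a sketch of the underlying bi-histogram idea (a BBHE-style split at the mean; MBOBHE instead optimizes the separating point with its three-component regularization function, which is not reproduced here):

```python
import numpy as np

def bi_histogram_equalize(img, sep=None):
    """Split the grayscale histogram at a separating point and equalize
    each sub-histogram into its own output range, which preserves mean
    brightness better than global HE."""
    img = img.astype(np.float64)
    sep = img.mean() if sep is None else float(sep)
    out = np.empty_like(img)
    for mask, (a, b) in (((img <= sep), (0.0, sep)), ((img > sep), (sep, 255.0))):
        vals = img[mask]
        if vals.size:
            cdf = (vals.argsort().argsort() + 1) / vals.size  # empirical CDF
            out[mask] = a + cdf * (b - a)     # stretch sub-histogram over [a, b]
    return out.astype(np.uint8)
```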
Time Reversal Acoustic Communication Using Filtered Multitone Modulation
Sun, Lin; Chen, Baowei; Li, Haisen; Zhou, Tian; Li, Ruo
2015-01-01
The multipath spread in underwater acoustic channels is severe; therefore, when the symbol rate of time reversal (TR) acoustic communication using single-carrier (SC) modulation is high, the large intersymbol interference (ISI) span caused by multipath reduces the performance of the TR process and must be removed using a long adaptive equalizer as the post-processor. In this paper, a TR acoustic communication method using filtered multitone (FMT) modulation is proposed in order to reduce the residual ISI in the TR-processed signal. In the proposed method, FMT modulation is exploited to modulate information symbols onto separate subcarriers with high spectral containment, and the TR technique, together with adaptive equalization, is adopted at the receiver to suppress ISI and noise. The performance of the proposed method is assessed through simulation and real data from a trial in an experimental pool. The proposed method was compared with TR acoustic communication using SC modulation at the same spectral efficiency. Results demonstrate that the proposed method can improve the performance of the TR process and reduce the computational complexity of the adaptive equalization used for post-processing. PMID:26393586
Equalizing secondary path effects using the periodicity of fMRI acoustic noise.
Kannan, Govind; Milani, Ali A; Panahi, Issa; Briggs, Richard
2008-01-01
A non-minimum-phase secondary path has a direct effect on achieving a desired noise attenuation level in active noise control (ANC) systems. The adaptive noise canceling filter is often a causal FIR filter, which may not be able to sufficiently equalize the effect of a non-minimum-phase secondary path, since in theory only a non-causal filter can equalize it. However, a stable non-causal filter can be found to equalize the non-minimum-phase effect of the secondary path. Realization of non-causal stable filters requires knowledge of future values of the input signal. In this paper we develop methods for equalizing the non-minimum-phase property of the secondary path and improving the performance of an ANC system by exploiting the periodicity of fMRI acoustic noise. It has been shown that the scanner noise component is highly periodic and hence predictable, which enables easy realization of non-causal filtering. Improvement in performance due to the proposed methods (with and without the equalizer) is shown for periodic fMRI acoustic noise.
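The enabling observation can be shown in a few lines: for a P-periodic signal, the "future" sample x[n+k] equals x[n+k−P], so a non-causal tap becomes realizable with at most one period of delay. A sketch under that assumption (tap indexing and names are ours; the period must exceed the anticipation length K):

```python
import numpy as np

def noncausal_filter_periodic(x, h_nc, period):
    """Apply a non-causal FIR filter with taps h_nc indexed -K..K to a
    P-periodic signal: each future sample x[n+k] is replaced by the
    causally available x[n+k-P] from one period earlier."""
    K = (len(h_nc) - 1) // 2
    y = np.zeros(len(x))
    for k, h in zip(range(-K, K + 1), h_nc):
        delay = -k if k <= 0 else period - k   # future tap -> borrow last period
        y[delay:] += h * x[:len(x) - delay]
    return y
```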
Differential phase-shift keying and channel equalization in free space optical communication system
NASA Astrophysics Data System (ADS)
Zhang, Dai; Hao, Shiqi; Zhao, Qingsong; Wan, Xiongfeng; Xu, Chenlu
2018-01-01
We present the performance benefits of differential phase-shift keying (DPSK) modulation in eliminating the influence of atmospheric turbulence, especially for coherent free space optical (FSO) communication at a high communication rate. An analytic expression for the detected signal is derived, based on which the homodyne detection efficiency is calculated to indicate the performance of wavefront compensation. Considering that laser pulses always suffer from atmospheric scattering by clouds, intersymbol interference (ISI) in a high-speed FSO communication link is analyzed. Correspondingly, a channel equalization method using a binormalized modified constant modulus algorithm based on set-membership filtering (SM-BNMCMA) is proposed to solve the ISI problem. Finally, through comparison with existing channel equalization methods, its performance benefits in both ISI elimination and convergence speed are verified. The research findings have theoretical significance for high-speed FSO communication systems.
Decision feedback equalizer for holographic data storage.
Kim, Kyuhwan; Kim, Seung Hun; Koo, Gyogwon; Seo, Min Seok; Kim, Sang Woo
2018-05-20
Holographic data storage (HDS) has attracted much attention as a next-generation storage medium. Because HDS suffers from two-dimensional (2D) inter-symbol interference (ISI), the partial-response maximum-likelihood (PRML) method has been studied to reduce 2D ISI. However, the PRML method has various drawbacks. To address these problems, we propose a modified decision feedback equalizer (DFE) for HDS. To prevent error propagation, a typical problem in DFEs, we also propose a reliability factor for HDS. Various simulations were executed to analyze the performance of the proposed methods. The proposed methods showed fast processing speed after training, superior bit error rate performance, and consistency.
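A minimal sketch of the generic LMS-adapted DFE being modified (one-dimensional binary case; the proposed reliability factor, which would gate the feedback of low-confidence decisions, is only indicated in a comment, and all parameters are our assumptions):

```python
import numpy as np

def dfe_lms(x, train, n_ff=9, n_fb=4, mu=0.01):
    """Decision feedback equalizer with LMS adaptation: feedforward taps
    on received samples, feedback taps on past decisions (binary +/-1).
    An HDS reliability factor would skip feedback when |y| is too small."""
    ff = np.zeros(n_ff); fb = np.zeros(n_fb)
    past = np.zeros(n_fb)                      # past decisions, newest first
    decisions = []
    for n in range(n_ff - 1, len(x)):
        u = x[n - n_ff + 1:n + 1][::-1]        # feedforward window, newest first
        y = ff @ u - fb @ past                 # soft output
        d = train[n] if n < len(train) else (1.0 if y >= 0 else -1.0)
        e = d - y
        ff += mu * e * u                       # LMS tap updates
        fb -= mu * e * past
        past = np.concatenate(([d], past[:-1]))
        decisions.append(d)
    return np.array(decisions)
```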
Digital Image Restoration under a Regression Model: The Unconstrained, Linear Equality and Inequality Constrained Approaches (Report 520)
Mascarenhas, Nelson Delfino d'Avila
1974-01-01
A two-dimensional form adequately describes the linear model. A discretization is performed by using quadrature methods. By trans…
Code of Federal Regulations, 2013 CFR
2013-07-01
... displacement of greater than or equal to 30 liters per cylinder? 60.4213 Section 60.4213 Protection of... displacement of greater than or equal to 30 liters per cylinder? Owners and operators of stationary CI ICE with a displacement of greater than or equal to 30 liters per cylinder must conduct performance tests...
Code of Federal Regulations, 2011 CFR
2011-07-01
... displacement of greater than or equal to 30 liters per cylinder? 60.4213 Section 60.4213 Protection of... displacement of greater than or equal to 30 liters per cylinder? Owners and operators of stationary CI ICE with a displacement of greater than or equal to 30 liters per cylinder must conduct performance tests...
Code of Federal Regulations, 2012 CFR
2012-07-01
... displacement of greater than or equal to 30 liters per cylinder? 60.4213 Section 60.4213 Protection of... displacement of greater than or equal to 30 liters per cylinder? Owners and operators of stationary CI ICE with a displacement of greater than or equal to 30 liters per cylinder must conduct performance tests...
Code of Federal Regulations, 2014 CFR
2014-07-01
... displacement of greater than or equal to 30 liters per cylinder? 60.4213 Section 60.4213 Protection of... displacement of greater than or equal to 30 liters per cylinder? Owners and operators of stationary CI ICE with a displacement of greater than or equal to 30 liters per cylinder must conduct performance tests...
Code of Federal Regulations, 2010 CFR
2010-07-01
... displacement of greater than or equal to 30 liters per cylinder? 60.4213 Section 60.4213 Protection of... displacement of greater than or equal to 30 liters per cylinder? Owners and operators of stationary CI ICE with a displacement of greater than or equal to 30 liters per cylinder must conduct performance tests...
NASA Astrophysics Data System (ADS)
Zhao, Liang; Ge, Jian-Hua
2012-12-01
Single-carrier (SC) transmission with frequency-domain equalization (FDE) is today recognized as an attractive alternative to orthogonal frequency-division multiplexing (OFDM) for communication applications subject to the inter-symbol interference (ISI) caused by multi-path propagation, especially in shallow water channels. In this paper, we investigate an iterative receiver based on a minimum mean square error (MMSE) decision feedback equalizer (DFE) with symbol-rate and fractional-rate sampling in the frequency domain (FD), combined with a serially concatenated trellis coded modulation (SCTCM) decoder. Based on sound speed profiles (SSP) measured in the lake and the finite-element ray tracing (Bellhop) method, a shallow water channel is constructed to evaluate the performance of the proposed iterative receiver. Performance results show that the proposed iterative receiver can significantly improve performance and achieve better data transmission than FD linear and adaptive decision feedback equalizers, especially when adopting fractional-rate sampling.
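The linear MMSE frequency-domain step at the core of such a receiver fits in a few lines: one complex tap per FFT bin on a cyclic-prefixed block, with H the channel frequency response on the same FFT grid and snr_lin the linear (not dB) SNR. The DFE feedback section, fractional-rate sampling, and SCTCM decoding are omitted from this sketch:

```python
import numpy as np

def mmse_fde(y_block, H, snr_lin):
    """One-tap-per-bin MMSE frequency-domain equalizer:
    W(f) = H*(f) / (|H(f)|^2 + 1/SNR), applied after the FFT."""
    Y = np.fft.fft(y_block)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr_lin)
    return np.fft.ifft(W * Y)                 # equalized time-domain block
```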
Image Enhancement via Subimage Histogram Equalization Based on Mean and Variance
2017-01-01
This paper puts forward a novel image enhancement method via Mean and Variance based Subimage Histogram Equalization (MVSIHE), which effectively increases the contrast of the input image while preserving brightness and details better than some other methods based on histogram equalization (HE). Firstly, the histogram of the input image is divided into four segments based on the mean and variance of the luminance component, and the histogram bins of each segment are modified and equalized, respectively. Secondly, the result is obtained via concatenation of the processed subhistograms. Lastly, the normalization method is deployed on the intensity levels, and the processed image is integrated with the input image. 100 benchmark images from a public image database named CVG-UGR-Database are used for comparison with other state-of-the-art methods. The experimental results show that the algorithm can not only enhance image information effectively but also preserve the brightness and details of the original image well. PMID:29403529
Electronic Equalization of Multikilometer 10-Gb/s Multimode Fiber Links: Mode-Coupling Effects
NASA Astrophysics Data System (ADS)
Balemarthy, Kasyapa; Polley, Arup; Ralph, Stephen E.
2006-12-01
This paper investigates the ability of electronic equalization to compensate for modal dispersion in the presence of mode coupling in multimode fibers (MMFs) at 10 Gb/s. Using a new time-domain experimental method, mode coupling is quantified in MMF. These results, together with a comprehensive link model, allow us to determine the impact of mode coupling on the performance of MMF. The equalizer performance on links from 300 m to 8 km is quantified with and without modal coupling. It is shown that the mode-coupling effects are influenced by the specific index profile and increase the equalizer penalty by as much as 1 dBo for 1-km links and 2.3 dBo for 2-km links when using a standard model of fiber profiles at 1310 nm.
Power-output regularization in global sound equalization.
Stefanakis, Nick; Sarris, John; Cambourakis, George; Jacobsen, Finn
2008-01-01
The purpose of equalization in room acoustics is to compensate for the undesired modification that an enclosure introduces to signals such as audio or speech. In this work, equalization in a large part of the volume of a room is addressed. The multiple point method is employed with an acoustic power-output penalty term instead of the traditional quadratic source effort penalty term. Simulation results demonstrate that this technique gives a smoother decline of the reproduction performance away from the control points.
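In matrix form the multiple-point method with a quadratic penalty reduces to one regularized least-squares solve; a sketch, where G maps source signals to the control points, d holds the desired responses, and A is left abstract — A = I gives the traditional source-effort penalty, while the paper's variant takes q^H A q to be the acoustic power output:

```python
import numpy as np

def regularized_multipoint_equalizer(G, d, A, lam):
    """Minimize |G q - d|^2 + lam * q^H A q over the source signals q:
    the normal equations give (G^H G + lam A) q = G^H d."""
    K = G.conj().T @ G + lam * A
    return np.linalg.solve(K, G.conj().T @ d)
```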
Bai, Neng; Xia, Cen; Li, Guifang
2012-10-08
We propose and experimentally demonstrate single-carrier adaptive frequency-domain equalization (SC-FDE) to mitigate multipath interference (MPI) for the transmission of the fundamental mode in a few-mode fiber. The FDE approach reduces computational complexity significantly compared to the time-domain equalization (TDE) approach while maintaining the same performance. Both FDE and TDE methods are evaluated by simulating long-haul fundamental-mode transmission over a few-mode fiber. For fundamental-mode operation, the required tap length of the equalizer depends on the differential mode group delay (DMGD) of a single span rather than the DMGD of the entire link.
Equalization for a page-oriented optical memory system
NASA Astrophysics Data System (ADS)
Trelewicz, Jennifer Q.; Capone, Jeffrey
1999-11-01
In this work, a method of decision-feedback equalization is developed for a digital holographic channel that experiences moderate-to-severe imaging errors. Decision feedback is utilized, not only where the channel is well-behaved, but also near the edges of the camera grid that are subject to a high degree of imaging error. In addition to these effects, the channel is worsened by typical problems of holographic channels, including non-uniform illumination, dropouts, and stuck bits. The approach described in this paper builds on established methods for performing trained and blind equalization on time-varying channels. The approach is tested on experimental data sets. On most of these data sets, the method of equalization described in this work delivers at least an order of magnitude improvement in bit-error rate (BER) before error-correction coding (ECC). When ECC is introduced, the approach is able to recover stored data with no errors for many of the tested data sets. Furthermore, a low BER was maintained even over a range of small alignment perturbations in the system. It is believed that this equalization method can allow cost reductions to be made in page-memory systems, by allowing for a larger image area per page or less complex imaging components, without sacrificing the low BER required by data storage applications.
29 CFR 1620.17 - Jobs requiring equal responsibility in performance.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 29 Labor 4 2013-07-01 2013-07-01 false Jobs requiring equal responsibility in performance. 1620.17... THE EQUAL PAY ACT § 1620.17 Jobs requiring equal responsibility in performance. (a) In general. The equal pay standard applies to jobs the performance of which requires equal responsibility...
29 CFR 1620.17 - Jobs requiring equal responsibility in performance.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 29 Labor 4 2014-07-01 2014-07-01 false Jobs requiring equal responsibility in performance. 1620.17... THE EQUAL PAY ACT § 1620.17 Jobs requiring equal responsibility in performance. (a) In general. The equal pay standard applies to jobs the performance of which requires equal responsibility...
Joint polarization tracking and channel equalization based on radius-directed linear Kalman filter
NASA Astrophysics Data System (ADS)
Zhang, Qun; Yang, Yanfu; Zhong, Kangping; Liu, Jie; Wu, Xiong; Yao, Yong
2018-01-01
We propose a joint polarization tracking and channel equalization scheme based on a radius-directed linear Kalman filter (RD-LKF), obtained by introducing a butterfly finite-impulse-response (FIR) filter into our previously proposed RD-LKF method. Along with fast polarization tracking, it can simultaneously compensate inter-symbol interference (ISI) effects including residual chromatic dispersion and polarization mode dispersion. Compared with the conventional radius-directed equalizer (RDE) algorithm, it is demonstrated experimentally that three times faster convergence speed, one order of magnitude better tracking capability, and better BER performance are obtained in a polarization-division-multiplexed 16-quadrature-amplitude-modulation system. Besides, the influences of the algorithm parameters on the convergence and tracking performance are investigated by numerical simulation.
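For reference, one update of the conventional radius-directed equalizer (RDE) used as the baseline, written for 16-QAM with y = w^H u (a sketch; the step size, scaling, and single-polarization form are our simplifications — the paper's butterfly structure and Kalman formulation are not reproduced):

```python
import numpy as np

RADII2 = np.array([2.0, 10.0, 18.0])   # squared ring radii of 16-QAM (levels +/-1, +/-3)

def rde_step(w, u, mu=1e-3):
    """Radius-directed stochastic-gradient update: drive |y|^2 toward
    the nearest constellation ring, blind to carrier phase."""
    y = np.conj(w) @ u                          # equalizer output y = w^H u
    r2 = RADII2[np.argmin(np.abs(RADII2 - np.abs(y) ** 2))]
    e = r2 - np.abs(y) ** 2                     # radius error
    w = w + mu * e * np.conj(y) * u             # CMA-like tap update
    return w, y
```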
29 CFR 1620.15 - Jobs requiring equal skill in performance.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 29 Labor 4 2012-07-01 2012-07-01 false Jobs requiring equal skill in performance. 1620.15 Section... EQUAL PAY ACT § 1620.15 Jobs requiring equal skill in performance. (a) In general. The jobs to which the equal pay standard is applicable are jobs requiring equal skill in their performance. Where the amount...
29 CFR 1620.15 - Jobs requiring equal skill in performance.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 29 Labor 4 2013-07-01 2013-07-01 false Jobs requiring equal skill in performance. 1620.15 Section... EQUAL PAY ACT § 1620.15 Jobs requiring equal skill in performance. (a) In general. The jobs to which the equal pay standard is applicable are jobs requiring equal skill in their performance. Where the amount...
29 CFR 1620.15 - Jobs requiring equal skill in performance.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 29 Labor 4 2014-07-01 2014-07-01 false Jobs requiring equal skill in performance. 1620.15 Section... EQUAL PAY ACT § 1620.15 Jobs requiring equal skill in performance. (a) In general. The jobs to which the equal pay standard is applicable are jobs requiring equal skill in their performance. Where the amount...
29 CFR 1620.15 - Jobs requiring equal skill in performance.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 4 2010-07-01 2010-07-01 false Jobs requiring equal skill in performance. 1620.15 Section... EQUAL PAY ACT § 1620.15 Jobs requiring equal skill in performance. (a) In general. The jobs to which the equal pay standard is applicable are jobs requiring equal skill in their performance. Where the amount...
Robust stabilization of the Space Station in the presence of inertia matrix uncertainty
NASA Technical Reports Server (NTRS)
Wie, Bong; Liu, Qiang; Sunkel, John
1993-01-01
This paper presents a robust H-infinity full-state feedback control synthesis method for uncertain systems with D11 not equal to 0. The method is applied to the robust stabilization problem of the Space Station in the face of inertia matrix uncertainty. The control design objective is to find a robust controller that yields the largest stable hypercube in uncertain parameter space, while satisfying the nominal performance requirements. The significance of employing an uncertain plant model with D11 not equal to 0 is demonstrated.
Liu, Boquan; Polce, Evan; Sprott, Julien C; Jiang, Jack J
2018-05-17
The purpose of this study is to introduce a chaos level test to evaluate linear and nonlinear voice type classification method performances under varying signal chaos conditions without subjective impression. Voice signals were constructed with differing degrees of noise to model signal chaos. Within each noise power, 100 Monte Carlo experiments were applied to analyze the output of jitter, shimmer, correlation dimension, and spectrum convergence ratio. The computational output of the 4 classifiers was then plotted against signal chaos level to investigate the performance of these acoustic analysis methods under varying degrees of signal chaos. A diffusive behavior detection-based chaos level test was used to investigate the performances of different voice classification methods. Voice signals were constructed by varying the signal-to-noise ratio to establish differing signal chaos conditions. Chaos level increased sigmoidally with increasing noise power. Jitter and shimmer performed optimally when the chaos level was less than or equal to 0.01, whereas correlation dimension was capable of analyzing signals with chaos levels of less than or equal to 0.0179. Spectrum convergence ratio demonstrated proficiency in analyzing voice signals with all chaos levels investigated in this study. The results of this study corroborate the performance relationships observed in previous studies and, therefore, demonstrate the validity of the validation test method. The presented chaos level validation test could be broadly utilized to evaluate acoustic analysis methods and establish the most appropriate methodology for objective voice analysis in clinical practice.
Aronson, A. R.; Bodenreider, O.; Chang, H. F.; Humphrey, S. M.; Mork, J. G.; Nelson, S. J.; Rindflesch, T. C.; Wilbur, W. J.
2000-01-01
The objective of NLM's Indexing Initiative (IND) is to investigate methods whereby automated indexing methods partially or completely substitute for current indexing practices. The project will be considered a success if methods can be designed and implemented that result in retrieval performance that is equal to or better than the retrieval performance of systems based principally on humanly assigned index terms. We describe the current state of the project and discuss our plans for the future. PMID:11079836
Comparison of variance estimators for meta-analysis of instrumental variable estimates
Schmidt, AF; Hingorani, AD; Jefferis, BJ; White, J; Groenwold, RHH; Dudbridge, F
2016-01-01
Background: Mendelian randomization studies perform instrumental variable (IV) analysis using genetic IVs. Results of individual Mendelian randomization studies can be pooled through meta-analysis. We explored how different variance estimators influence the meta-analysed IV estimate. Methods: Two versions of the delta method (IV before or after pooling), four bootstrap estimators, a jack-knife estimator and a heteroscedasticity-consistent (HC) variance estimator were compared using simulation. Two types of meta-analyses were compared: a two-stage meta-analysis pooling results, and a one-stage meta-analysis pooling datasets. Results: Using a two-stage meta-analysis, coverage of the point estimate using bootstrapped estimators deviated from nominal levels at weak instrument settings and/or outcome probabilities ≤ 0.10. The jack-knife estimator was the least biased resampling method, the HC estimator often failed at outcome probabilities ≤ 0.50, and overall the delta method estimators were the least biased. In the presence of between-study heterogeneity, the delta method before meta-analysis performed best. Using a one-stage meta-analysis, all methods performed equally well and better than a two-stage meta-analysis of greater or equal size. Conclusions: In the presence of between-study heterogeneity, two-stage meta-analyses should preferentially use the delta method before meta-analysis. Weak instrument bias can be reduced by performing a one-stage meta-analysis. PMID:27591262
Game of Life on the Equal Degree Random Lattice
NASA Astrophysics Data System (ADS)
Shao, Zhi-Gang; Chen, Tao
2010-12-01
An effective matrix method is used to build the equal degree random (EDR) lattice, and then the cellular automaton Game of Life on the EDR lattice is studied by Monte Carlo (MC) simulation. The standard mean field approximation (MFA) is applied, giving a density of live cells ρ = 0.37017, which is consistent with the result ρ = 0.37 ± 0.003 from the MC simulation.
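The MFA figure can be reproduced by iterating the standard mean-field map for Conway's rules on a degree-8 lattice (a sketch; the EDR lattice construction and the MC simulation themselves are not reproduced):

```python
from math import comb

def life_mfa_density(rho=0.3, n_iter=200):
    """Mean-field map: a dead cell is born with exactly 3 of its 8
    neighbours alive; a live cell survives with 2 or 3."""
    def p_k(k, rho):                 # P(exactly k of 8 neighbours alive)
        return comb(8, k) * rho ** k * (1 - rho) ** (8 - k)
    for _ in range(n_iter):
        rho = (1 - rho) * p_k(3, rho) + rho * (p_k(2, rho) + p_k(3, rho))
    return rho

print(life_mfa_density())            # converges to ~0.370, matching rho = 0.37017
```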
NASA Astrophysics Data System (ADS)
Fang, G. H.; Yang, J.; Chen, Y. N.; Zammit, C.
2015-06-01
Water resources are essential to the ecosystem and social economy in the desert and oasis of the arid Tarim River basin, northwestern China, and expected to be vulnerable to climate change. It has been demonstrated that regional climate models (RCMs) provide more reliable results for a regional impact study of climate change (e.g., on water resources) than general circulation models (GCMs). However, due to their considerable bias it is still necessary to apply bias correction before they are used for water resources research. In this paper, after a sensitivity analysis on input meteorological variables based on the Sobol' method, we compared five precipitation correction methods and three temperature correction methods in downscaling RCM simulations applied over the Kaidu River basin, one of the headwaters of the Tarim River basin. Precipitation correction methods applied include linear scaling (LS), local intensity scaling (LOCI), power transformation (PT), distribution mapping (DM) and quantile mapping (QM), while temperature correction methods are LS, variance scaling (VARI) and DM. The corrected precipitation and temperature were compared to the observed meteorological data, prior to being used as meteorological inputs of a distributed hydrologic model to study their impacts on streamflow. The results show (1) streamflows are sensitive to precipitation, temperature and solar radiation but not to relative humidity and wind speed; (2) raw RCM simulations are heavily biased from observed meteorological data, and their use for streamflow simulations results in large biases from observed streamflow, and all bias correction methods effectively improved these simulations; (3) for precipitation, PT and QM methods performed equally best in correcting the frequency-based indices (e.g., standard deviation, percentile values) while the LOCI method performed best in terms of the time-series-based indices (e.g., Nash-Sutcliffe coefficient, R2); (4) for temperature, all correction methods performed equally well in correcting raw temperature; and (5) for simulated streamflow, precipitation correction methods have more significant influence than temperature correction methods and the performances of streamflow simulations are consistent with those of corrected precipitation; i.e., the PT and QM methods performed equally best in correcting flow duration curve and peak flow while the LOCI method performed best in terms of the time-series-based indices. The case study is for an arid area in China based on a specific RCM and hydrologic model, but the methodology and some results can be applied to other areas and models.
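Two of the compared corrections fit in a few lines; a sketch of linear scaling and empirical quantile mapping (function names and the quantile count are our choices; LOCI, power transformation, variance scaling, and distribution mapping are omitted):

```python
import numpy as np

def linear_scaling_precip(p_rcm, p_obs):
    # LS for precipitation: multiplicative factor matching the mean
    return p_rcm * (p_obs.mean() / p_rcm.mean())

def linear_scaling_temp(t_rcm, t_obs):
    # LS for temperature: additive shift by the mean bias
    return t_rcm + (t_obs.mean() - t_rcm.mean())

def quantile_mapping(x_rcm, x_obs, n_q=100):
    # QM: map each simulated value through the empirical quantile relation
    q = np.linspace(0.0, 1.0, n_q)
    return np.interp(x_rcm, np.quantile(x_rcm, q), np.quantile(x_obs, q))
```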
ERIC Educational Resources Information Center
Jones, Paul R.
2011-01-01
Introduction: Two studies examined whether stereotype threat impairs women's math performance and whether concurrent threat reduction strategies can be used to offset this effect. Method: In Study 1, collegiate men and women (N = 100) watched a video purporting that males and females performed equally well ("gender-fair") or males outperformed…
Women Share in Science and Technology Education and Their Job Performance in Nigeria
NASA Astrophysics Data System (ADS)
Osezuah, Simon; Nwadiani, C. O.
2012-10-01
This investigation focused on women's share in Science and Technology education and their job performance in Nigeria. The investigation was guided by two research questions. A sample of 4,886 respondents was drawn using the questionnaire method. The data were analyzed using frequency counts. The findings indicated that there was disparity between male and female access to Science and Technology education in Nigeria, and that there were no differences between women and men scientists and technologists in job performance. It was therefore concluded that women do not have an equal share with men in Science and Technology education, even though female and male scientists and technologists perform their jobs equally well in Nigeria. A recommendation was made accordingly.
Indoor visible light communication with smart lighting technology
NASA Astrophysics Data System (ADS)
Das Barman, Abhirup; Halder, Alak
2017-02-01
The performance of indoor visible-light communication is investigated utilizing energy-efficient white light from 2D LED arrays. Enabled by recent advances in LED technology, IEEE 802.15.7 standardizes high-data-rate visible light communication and advocates colour shift keying (CSK) modulation to overcome flicker and to support dimming. Voronoi segmentation is employed for decoding the N-CSK constellation, which has superior performance compared to other existing decoding methods. The two chief performance-degrading effects, inter-symbol interference and LED nonlinearity, are jointly mitigated using LMS post-equalization at the receiver, which improves the symbol error rate (SER) performance and increases the field of view of the receiver. It is found that LMS post-equalization at a 250 MHz symbol rate offers a 7 dB SNR improvement at an SER of 10^-6.
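A minimal sketch of a training-directed LMS post-equalizer of the kind employed (scalar real-valued case; the paper applies equalization to the CSK receiver chain, and the tap count and step size here are assumptions):

```python
import numpy as np

def lms_post_equalizer(rx, train, n_taps=7, mu=0.005):
    """FIR equalizer adapted by LMS against known training symbols;
    jointly flattens channel ISI and, to first order, the memoryless
    LED nonlinearity around the operating point."""
    w = np.zeros(n_taps); w[0] = 1.0
    y = np.zeros(len(train))
    for n in range(n_taps - 1, len(train)):
        u = rx[n - n_taps + 1:n + 1][::-1]     # newest sample first
        y[n] = w @ u
        w += mu * (train[n] - y[n]) * u        # LMS tap update
    return w, y
```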
NASA Technical Reports Server (NTRS)
Hintze, Paul E.
2016-01-01
NASA's Kennedy Space Center has developed two solvent-free precision cleaning techniques, plasma cleaning and supercritical carbon dioxide (SCCO2), that have performance equal to existing solvent cleaning methods, with cost parity and no environmental liability.
Epidemiologic research using probabilistic outcome definitions.
Cai, Bing; Hennessy, Sean; Lo Re, Vincent; Small, Dylan S
2015-01-01
Epidemiologic studies using electronic healthcare data often define the presence or absence of binary clinical outcomes by using algorithms with imperfect specificity, sensitivity, and positive predictive value. This results in misclassification and bias in study results. We describe and evaluate a new method called probabilistic outcome definition (POD) that uses logistic regression to estimate the probability of a clinical outcome using multiple potential algorithms and then uses multiple imputation to make valid inferences about the risk ratio or other epidemiologic parameters of interest. We conducted a simulation to evaluate the performance of the POD method with two variables that can predict the true outcome and compared the POD method with the conventional method. The simulation results showed that when the true risk ratio is equal to 1.0 (null), the conventional method based on a binary outcome provides unbiased estimates. However, when the risk ratio is not equal to 1.0, the traditional method, whether using one predictive variable or both to define the outcome, is biased when the positive predictive value is <100%, and the bias is very severe when the sensitivity or positive predictive value is poor (less than 0.75 in our simulation). In contrast, the POD method provides unbiased estimates of the risk ratio both when this measure of effect is equal to 1.0 and when it is not. Even when the sensitivity and positive predictive value are low, the POD method continues to provide unbiased estimates of the risk ratio. The POD method provides an improved way to define outcomes in database research. It has a major advantage over the conventional method in that it provides unbiased estimates of risk ratios, and it is easy to use. Copyright © 2014 John Wiley & Sons, Ltd.
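A sketch of the imputation-and-pooling half of the POD idea, assuming each subject's outcome probability has already been estimated (e.g., by logistic regression over several candidate algorithms). The variance formula is the standard delta-method expression for a log risk ratio, and the pooling follows Rubin's rules; all names are ours:

```python
import numpy as np

def pod_risk_ratio(p_outcome, exposed, n_imp=50, seed=0):
    """Impute binary outcomes from estimated probabilities n_imp times,
    compute a log risk ratio per imputation, pool with Rubin's rules."""
    rng = np.random.default_rng(seed)
    log_rr, var_rr = [], []
    n1, n0 = exposed.sum(), (~exposed).sum()
    for _ in range(n_imp):
        y = rng.random(len(p_outcome)) < p_outcome   # one imputed outcome vector
        r1, r0 = y[exposed].sum(), y[~exposed].sum()
        log_rr.append(np.log((r1 / n1) / (r0 / n0)))
        var_rr.append(1/r1 - 1/n1 + 1/r0 - 1/n0)     # delta-method variance of log RR
    within, between = np.mean(var_rr), np.var(log_rr, ddof=1)
    total_var = within + (1 + 1/n_imp) * between     # Rubin's total variance
    return np.exp(np.mean(log_rr)), total_var
```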
Detection of bacteriuria and pyuria by URISCREEN a rapid enzymatic screening test.
Pezzlo, M T; Amsterdam, D; Anhalt, J P; Lawrence, T; Stratton, N J; Vetter, E A; Peterson, E M; de la Maza, L M
1992-01-01
A multicenter study was performed to evaluate the ability of the URISCREEN (Analytab Products, Plainview, N.Y.), a 2-min catalase tube test, to detect bacteriuria and pyuria. This test was compared with the Chemstrip LN (BioDynamics, Division of Boehringer Mannheim Diagnostics, Indianapolis, Ind.), a 2-min enzyme dipstick test; a semiquantitative plate culture method was used as the reference test for bacteriuria, and the Gram stain or a quantitative chamber count method was used as the reference test for pyuria. Each test was evaluated for its ability to detect probable pathogens at ≥10^2 CFU/ml and/or ≥1 leukocyte per oil immersion field, as determined by the Gram stain method, or >10 leukocytes per microliter, as determined by the quantitative count method. A total of 1,500 urine specimens were included in this evaluation. There were 298 specimens with ≥10^2 CFU/ml and 451 specimens with pyuria. Of the 298 specimens with probable pathogens isolated at various colony counts, 219 specimens had colony counts of ≥10^5 CFU/ml, 51 specimens had between 10^4 and 10^5 CFU/ml, and 28 specimens had between 10^2 and <10^4 CFU/ml. Both the URISCREEN and the Chemstrip LN detected 93% (204 of 219) of the specimens with probable pathogens at ≥10^5 CFU/ml. For the specimens with probable pathogens at ≥10^2 CFU/ml, the sensitivities of the URISCREEN and the Chemstrip LN were 86% (256 of 298) and 81% (241 of 298), respectively. Of the 451 specimens with pyuria, the URISCREEN detected 88% (398 of 451) and the Chemstrip LN detected 78% (350 of 451). There were 204 specimens with both ≥10^2 CFU/ml and pyuria; the sensitivities of both methods were 95% (193 of 204) for these specimens. Overall, there were 545 specimens with probable pathogens at ≥10^2 CFU/ml and/or pyuria. The URISCREEN detected 85% (461 of 545), and the Chemstrip LN detected 73% (398 of 545). A majority (76%) of the false-negative results obtained with either method were for specimens without leukocytes in the urine. There were 955 specimens with no probable pathogens or leukocytes. Of these, 28% (270 of 955) were found positive by the URISCREEN and 13% (122 of 955) by the Chemstrip LN. A majority of the false-positive results were probably due, in part, to the detection by each test system of enzymes present in both bacterial and somatic cells. Overall, the URISCREEN is a rapid, manual, easy-to-perform enzymatic test that yields findings similar to those of the Chemstrip LN for specimens with both ≥10^2 CFU/ml and pyuria, or for specimens with ≥10^5 CFU/ml with or without pyuria. However, when the data were analyzed for either probable pathogens at <10^5 CFU/ml or pyuria, the sensitivity of the URISCREEN was higher (P < 0.05). PMID:1551986
Vanderlinden, Karen; Van de Putte, Bart
2017-04-01
Even though breastfeeding is typically considered the preferred feeding method for infants worldwide, in Belgium breastfeeding rates remain low across native and migrant groups, while the underlying determinants are unclear. Furthermore, research examining contextual effects, especially regarding gender (in)equality and ideology, has not been conducted. We hypothesized that greater gender equality scores in the country of origin result in higher breastfeeding chances. Because gender equality does not operate only at the contextual level but can be mediated through individual-level resources, we hypothesized the following for maternal education: higher maternal education will be an important positive predictor of exclusive breastfeeding chances in Belgium, but its effects will differ across origin countries. Based on IKAROS data (Geïntegreerd Kind Activiteiten en Regio Ondersteunings Systeem), we perform multilevel analyses on 27,936 newborns. Feeding method is indicated by exclusive breastfeeding 3 months after childbirth. We measure gender (in)equality using Global Gender Gap scores from the mother's origin country. Maternal education is a metric variable based on International Standard Classification of Education indicators. Results show that 3.6% of the variation in breastfeeding can be explained by differences between the migrant mothers' countries of origin. However, the effect of gender (in)equality appears to be non-significant. After adding maternal education, the effect for origin countries scoring low on gender equality turns significant. Maternal education on its own shows a strong positive association with exclusive breastfeeding and, furthermore, has different effects for different origin countries. Possible explanations are discussed in depth, setting a direction for further research on the different pathways through which gender (in)equality and maternal education affect breastfeeding. © 2016 John Wiley & Sons Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu Xinming; Lai Chaojen; Whitman, Gary J.
Purpose: The scan equalization digital mammography (SEDM) technique combines slot scanning and exposure equalization to improve the low-contrast performance of digital mammography in dense tissue areas. In this study, full-field digital mammography (FFDM) images of an anthropomorphic breast phantom acquired with an anti-scatter grid at various exposure levels were superimposed to simulate SEDM images and investigate the improvement of low-contrast performance as quantified by primary signal-to-noise ratios (PSNRs). Methods: We imaged an anthropomorphic breast phantom (Gammex 169 "Rachel," Gammex RMI, Middleton, WI) at various exposure levels using a FFDM system (Senographe 2000D, GE Medical Systems, Milwaukee, WI). The exposure equalization factors were computed based on a standard FFDM image acquired in the automatic exposure control (AEC) mode. The equalized image was simulated and constructed by superimposing a selected set of FFDM images acquired at 2, 1, 1/2, 1/4, 1/8, 1/16, and 1/32 times the exposure level of the standard AEC-timed technique (125 mAs), using the equalization factors computed for each region. Finally, the equalized image was renormalized regionally with the exposure equalization factors to result in an appearance similar to that of standard digital mammography. Two sets of FFDM images were acquired to allow two identically but independently formed equalized images to be subtracted from each other to estimate the noise levels. Similarly, two identically but independently acquired standard FFDM images were subtracted to estimate the noise levels. Corrections were applied to remove the excess system noise accumulated during image superimposition in forming the equalized image. PSNRs over the compressed area of the breast phantom were computed and used to quantitatively study the effects of exposure equalization on low-contrast performance in digital mammography. Results: We found that the highest achievable PSNR improvement factor was 1.89 for the anthropomorphic breast phantom used in this study. The overall PSNRs were measured to be 79.6 for FFDM imaging and 107.6 for the simulated SEDM imaging on average in the compressed area of the breast phantom, resulting in an average improvement of PSNR by approximately 35% with exposure equalization. We also found that the PSNRs appeared to be largely uniform with exposure equalization; the standard deviations of PSNRs were estimated to be 10.3 and 7.9 for FFDM imaging and the simulated SEDM imaging, respectively. The average glandular dose for SEDM was estimated to be 212.5 mrad, approximately 34% lower than that of standard AEC-timed FFDM (323.8 mrad) as a result of exposure equalization over the entire breast phantom. Conclusions: Exposure equalization was found to substantially improve image PSNRs in dense tissue regions and result in more uniform image PSNRs. This improvement may lead to better low-contrast performance in detecting and visualizing soft tissue masses and microcalcifications in dense tissue areas for breast imaging tasks.
ERIC Educational Resources Information Center
Potter, Penny F.; Graham-Moore, Brian E.
Most organizations planning to assess adverse impact or perform a stock analysis for affirmative action planning must correctly classify their jobs into appropriate occupational categories. Two methods of job classification were assessed in a combination archival and field study. Classification results from expert judgment of functional job…
NASA Astrophysics Data System (ADS)
Zaouche, Abdelouahib; Dayoub, Iyad; Rouvaen, Jean Michel; Tatkeu, Charles
2008-12-01
We propose a globally convergent baud-spaced blind equalization method in this paper. The method is based on the application of both generalized pattern optimization and channel surfing reinitialization. The unimodal cost function used relies on higher-order statistics, and its optimization is achieved using a pattern search algorithm. Since convergence to the global minimum is not unconditionally guaranteed, we make use of a channel surfing reinitialization (CSR) strategy to find the right global minimum. The proposed algorithm is analyzed, and simulation results using a severely frequency-selective propagation channel are given. Detailed comparisons with the constant modulus algorithm (CMA) are highlighted. The proposed algorithm's performance is evaluated in terms of intersymbol interference, normalized received signal constellations, and root mean square error vector magnitude. For nonconstant-modulus input signals, our algorithm significantly outperforms the CMA algorithm with a full channel surfing reinitialization strategy; for constant-modulus signals, comparable performance is obtained.
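The optimization core named in the abstract — a generalized pattern (compass) search — fits in a few lines; a sketch in which the blind cost function (e.g., a higher-order-statistics criterion of the equalizer output) is passed in, and CSR is only indicated in a comment:

```python
import numpy as np

def compass_search(cost, w0, step=0.5, tol=1e-6, max_iter=10000):
    """Poll the 2n coordinate directions; move on improvement, otherwise
    halve the mesh.  Channel surfing reinitialization would restart from
    a time-shifted tap vector when a spurious minimum is suspected."""
    w = np.asarray(w0, dtype=float)
    directions = np.vstack((np.eye(len(w)), -np.eye(len(w))))
    for _ in range(max_iter):
        if step < tol:
            break
        f_w = cost(w)
        for d in directions:
            if cost(w + step * d) < f_w:       # improving poll point found
                w = w + step * d
                break
        else:
            step *= 0.5                        # no direction improved: refine mesh
    return w
```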
Optimized Signaling Method for High-Speed Transmission Channels with Higher Order Transfer Function
NASA Astrophysics Data System (ADS)
Ševčík, Břetislav; Brančík, Lubomír; Kubíček, Michal
2017-08-01
In this paper, selected results from testing an optimized CMOS-friendly signaling method for high-speed communications over cables and printed circuit boards (PCBs) are presented and discussed. The proposed signaling scheme uses a modified concept of pulse-width-modulated (PWM) signaling, which makes it possible to better equalize significant channel losses during high-speed data transmission. This very effective signaling method for overcoming losses in transmission channels with a higher-order transfer function, typical of long cables and multilayer PCBs, is analyzed in the time and frequency domains. Experimental measurement results include a performance comparison with the conventional PWM scheme and clearly show the great potential of the modified signaling method for use in low-power CMOS-friendly equalization circuits, commonly considered in modern communication standards such as PCI Express and SATA and in multi-gigabit SerDes interconnects.
Performance of DBS-Radio using concatenated coding and equalization
NASA Technical Reports Server (NTRS)
Gevargiz, J.; Bell, D.; Truong, L.; Vaisnys, A.; Suwitra, K.; Henson, P.
1995-01-01
The Direct Broadcast Satellite-Radio (DBS-R) receiver is being developed for operation in a multipath Rayleigh channel. This receiver uses equalization and concatenated coding, in addition to open-loop and closed-loop architectures for carrier demodulation and symbol synchronization. Performance test results of this receiver are presented in both AWGN and multipath Rayleigh channels. Simulation results show that the performance of the receiver operating in a multipath Rayleigh channel is significantly improved by using equalization. These results show that fractional-symbol equalization offers a performance advantage over full-symbol equalization. Also presented is the baseline performance of the DBS-R receiver using concatenated coding and interleaving.
Novel Estimation of Pilot Performance Characteristics
NASA Technical Reports Server (NTRS)
Bachelder, Edward N.; Aponso, Bimal
2017-01-01
Two mechanisms internal to the pilot that affect performance during a tracking task are: 1) pilot equalization (i.e., lead/lag); and 2) pilot gain (i.e., sensitivity to the error signal). For some applications McRuer's Crossover Model can be used to anticipate what equalization will be employed to control a vehicle's dynamics. McRuer also established approximate time delays associated with different types of equalization: the more cognitive processing that is required due to equalization difficulty, the larger the time delay. However, the Crossover Model does not predict what the pilot gain will be. A nonlinear pilot control technique, observed and coined by the authors as 'amplitude clipping', is shown to improve stability and performance and to reduce workload when employed with vehicle dynamics that require high lead compensation by the pilot. Combining linear and nonlinear methods, a novel approach is used to measure the pilot control parameters when amplitude clipping is present, allowing precise real-time measurement of key pilot control parameters. Based on the results of an experiment designed to probe the primary drivers of workload, a method is developed that estimates pilot spare capacity from readily observable measures; it is tested for generality using multi-axis flight data. This paper documents the initial steps in developing a novel, simple objective metric for assessing pilot workload and its variation over time across a wide variety of tasks. Additionally, it offers a tangible, easily implementable methodology for anticipating a pilot's operating parameters and workload, and an effective design tool. The model shows promise in precisely predicting actual pilot settings, workload, and the observed tolerance of pilot parameter variation over the course of operation. Finally, an approach is proposed for generating Cooper-Harper ratings based on the workload and parameter estimation methodology.
Mitigating component performance variation
Gara, Alan G.; Sylvester, Steve S.; Eastep, Jonathan M.; Nagappan, Ramkumar; Cantalupo, Christopher M.
2018-01-09
Apparatus and methods may provide for characterizing a plurality of similar components of a distributed computing system based on a maximum safe operation level associated with each component, storing the characterization data in a database, and allocating non-uniform power to each similar component, based at least in part on the characterization data in the database, to substantially equalize the performance of the components.
Kuiper, Rebecca M; Nederhoff, Tim; Klugkist, Irene
2015-05-01
In this paper, the performance of six types of techniques for comparisons of means is examined. These six emerge from the distinction between the method employed (hypothesis testing, model selection using information criteria, or Bayesian model selection) and the set of hypotheses that is investigated (a classical, exploration-based set of hypotheses containing equality constraints on the means, or a theory-based limited set of hypotheses with equality and/or order restrictions). A simulation study is conducted to examine the performance of these techniques. We demonstrate that, if one has specific, a priori specified hypotheses, confirmation (i.e., investigating theory-based hypotheses) has advantages over exploration (i.e., examining all possible equality-constrained hypotheses). Furthermore, examining reasonable order-restricted hypotheses has more power to detect the true effect/non-null hypothesis than evaluating only equality restrictions. Additionally, when investigating more than one theory-based hypothesis, model selection is preferred over hypothesis testing. Because of the first two results, we further examine the techniques that are able to evaluate order restrictions in a confirmatory fashion by examining their performance when the homogeneity of variance assumption is violated. Results show that the techniques are robust to heterogeneity when the sample sizes are equal. When the sample sizes are unequal, the performance is affected by heterogeneity. The size and direction of the deviations from the baseline, where there is no heterogeneity, depend on the effect size (of the means) and on the trend in the group variances with respect to the ordering of the group sizes. Importantly, the deviations are less pronounced when the group variances and sizes exhibit the same trend (e.g., are both increasing with group number). © 2014 The British Psychological Society.
Preisig, James C
2005-07-01
Equations are derived for analyzing the performance of channel estimate based equalizers. The performance is characterized in terms of the mean squared soft decision error (σ_s²) of each equalizer. This error is decomposed into two components. These are the minimum achievable error (σ_0²) and the excess error (σ_e²). The former is the soft decision error that would be realized by the equalizer if the filter coefficient calculation were based upon perfect knowledge of the channel impulse response and statistics of the interfering noise field. The latter is the additional soft decision error that is realized due to errors in the estimates of these channel parameters. These expressions accurately predict the equalizer errors observed in the processing of experimental data by a channel estimate based decision feedback equalizer (DFE) and a passive time-reversal equalizer. Further expressions are presented that allow equalizer performance to be predicted given the scattering function of the acoustic channel. The analysis using these expressions yields insights into the features of surface scattering that most significantly impact equalizer performance in shallow water environments and motivates the implementation of a DFE that is robust with respect to channel estimation errors.
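As a concrete numerical illustration of this decomposition, the sketch below designs a linear MMSE equalizer from an imperfect channel estimate and evaluates it against the true channel statistics, recovering σ_s² = σ_0² + σ_e². The channel taps, equalizer length, decision delay, and noise variance are invented for illustration; this is not Preisig's derivation, only a minimal instance of the decomposition.

```python
import numpy as np

def channel_stats(h, n_taps, delay, noise_var):
    """Autocorrelation matrix R and cross-correlation vector p of the
    received signal for an FIR channel h, white unit-power symbols, and
    an n_taps-long linear equalizer targeting the given decision delay."""
    H = np.zeros((n_taps, n_taps + len(h) - 1))
    for i in range(n_taps):
        H[i, i:i + len(h)] = h
    return H @ H.T + noise_var * np.eye(n_taps), H[:, delay]

def soft_decision_mse(w, R, p):
    """E|s - w^T r|^2 = 1 - 2 w^T p + w^T R w for unit-power symbols."""
    return 1.0 - 2.0 * (w @ p) + w @ R @ w

n_taps, delay, noise_var = 15, 8, 0.01
h_true = np.array([1.0, 0.6, 0.3])                    # invented channel
rng = np.random.default_rng(1)
h_est = h_true + 0.05 * rng.normal(size=h_true.size)  # imperfect estimate

R_true, p_true = channel_stats(h_true, n_taps, delay, noise_var)
R_est,  p_est  = channel_stats(h_est,  n_taps, delay, noise_var)

w0 = np.linalg.solve(R_true, p_true)  # taps from perfect channel knowledge
ws = np.linalg.solve(R_est,  p_est)   # taps from the noisy channel estimate

sigma2_0 = soft_decision_mse(w0, R_true, p_true)  # minimum achievable error
sigma2_s = soft_decision_mse(ws, R_true, p_true)  # realized error
sigma2_e = sigma2_s - sigma2_0                    # excess error
print(sigma2_0, sigma2_e, sigma2_s)
```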
NASA Technical Reports Server (NTRS)
Gutierrez, Alberto, Jr.
1995-01-01
This dissertation evaluates receiver-based methods for mitigating the effects of nonlinear bandlimited signal distortion present in high-data-rate satellite channels. The effects of nonlinear bandlimited distortion are illustrated for digitally modulated signals. A lucid development of the low-pass Volterra discrete-time model for a nonlinear communication channel is presented. In addition, finite-state machine models are explicitly developed for a nonlinear bandlimited satellite channel. A nonlinear fixed equalizer based on Volterra series has previously been studied for compensation of noiseless signal distortion due to a nonlinear satellite channel. This dissertation studies adaptive Volterra equalizers on a downlink-limited nonlinear bandlimited satellite channel, employing performance in the mean-square-error and probability-of-error senses as figures of merit. In addition, a receiver consisting of a fractionally spaced equalizer (FSE) followed by a Volterra equalizer (FSE-Volterra) is found to give improvement beyond that gained by the Volterra equalizer alone. Significant probability-of-error improvement is found for multilevel modulation schemes. Also, the probability-of-error improvement is found to be more significant for modulation schemes, constant-amplitude and multilevel, that require higher signal-to-noise ratios (i.e., higher modulation orders) for reliable operation. The maximum-likelihood sequence detection (MLSD) receiver for a nonlinear satellite channel, a bank of matched filters followed by a Viterbi detector, serves as a probability-of-error lower bound for the Volterra and FSE-Volterra equalizers. However, this receiver had not been evaluated for a specific satellite channel. In this work, an MLSD receiver is evaluated for a specific downlink-limited satellite channel. Because of the bank of matched filters, the MLSD receiver may be high in complexity. Consequently, the probability-of-error performance of a more practical suboptimal MLSD receiver, requiring only a single receive filter, is also evaluated.
Multi-scale Morphological Image Enhancement of Chest Radiographs by a Hybrid Scheme.
Alavijeh, Fatemeh Shahsavari; Mahdavi-Nasab, Homayoun
2015-01-01
Chest radiography is a common diagnostic imaging test, which contains an enormous amount of information about a patient. However, its interpretation is highly challenging. The accuracy of the diagnostic process is greatly influenced by image processing algorithms; hence enhancement of the images is indispensable in order to improve visibility of the details. This paper aims at improving radiograph parameters such as contrast, sharpness, noise level, and brightness to enhance chest radiographs, making use of a triangulation method. Here, contrast limited adaptive histogram equalization technique and noise suppression are simultaneously performed in wavelet domain in a new scheme, followed by morphological top-hat and bottom-hat filtering. A unique implementation of morphological filters allows for adjustment of the image brightness and significant enhancement of the contrast. The proposed method is tested on chest radiographs from Japanese Society of Radiological Technology database. The results are compared with conventional enhancement techniques such as histogram equalization, contrast limited adaptive histogram equalization, Retinex, and some recently proposed methods to show its strengths. The experimental results reveal that the proposed method can remarkably improve the image contrast while keeping the sensitive chest tissue information so that radiologists might have a more precise interpretation.
Mauda, R.; Pinchas, M.
2014-01-01
Recently, a new blind equalization method was proposed for the 16QAM constellation input, inspired by the maximum entropy density approximation technique, with improved equalization performance compared to the maximum entropy approach, Godard's algorithm, and others. In addition, an approximated expression for the minimum mean square error (MSE) was obtained. The idea was to find those Lagrange multipliers that bring the approximated MSE to a minimum. Since differentiating the obtained MSE with respect to the Lagrange multipliers leads to a nonlinear equation for the Lagrange multipliers, the part of the MSE expression that caused the nonlinearity in the equation for the Lagrange multipliers was ignored. Thus, the obtained Lagrange multipliers were not those that bring the approximated MSE to a minimum. In this paper, we derive a new set of Lagrange multipliers based on the nonlinear expression obtained from minimizing the approximated MSE with respect to the Lagrange multipliers. Simulation results indicate that, for the high signal-to-noise ratio (SNR) case, a faster convergence rate is obtained for a channel causing high initial intersymbol interference (ISI), while the same equalization performance is obtained for an easy channel (low initial ISI).
75 FR 63472 - SES Performance Review Board-Appointment of Members
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-15
... EQUAL EMPLOYMENT OPPORTUNITY COMMISSION SES Performance Review Board--Appointment of Members AGENCY: Equal Employment Opportunity Commission. ACTION: Notice. SUMMARY: Notice is hereby given of the appointment of members to the Performance Review Board of the Equal Employment Opportunity Commission. FOR...
Finger vein recognition based on finger crease location
NASA Astrophysics Data System (ADS)
Lu, Zhiying; Ding, Shumeng; Yin, Jing
2016-07-01
Finger vein recognition technology has significant advantages over other methods in terms of accuracy, uniqueness, and stability, and it has wide promising applications in the field of biometric recognition. We propose using finger creases to locate and extract an object region. Then we use linear fitting to overcome the problem of finger rotation in the plane. The method of modular adaptive histogram equalization (MAHE) is presented to enhance image contrast and reduce computational cost. To extract the finger vein features, we use a fusion method, which can obtain clear and distinguishable vein patterns under different conditions. We used the Hausdorff average distance algorithm to examine the recognition performance of the system. The experimental results demonstrate that MAHE can better balance the recognition accuracy and the expenditure of time compared with three other methods. Our resulting equal error rate throughout the total procedure was 3.268% in a database of 153 finger vein images.
Yousefi, Siavash; Qin, Jia; Zhi, Zhongwei; Wang, Ruikang K
2013-02-01
Optical microangiography is an imaging technology that is capable of providing detailed functional blood flow maps within microcirculatory tissue beds in vivo. Some practical issues however exist when displaying and quantifying the microcirculation that perfuses the scanned tissue volume. These issues include: (I) Probing light is subject to specular reflection when it shines onto sample. The unevenness of the tissue surface makes the light energy entering the tissue not uniform over the entire scanned tissue volume. (II) The biological tissue is heterogeneous in nature, meaning the scattering and absorption properties of tissue would attenuate the probe beam. These physical limitations can result in local contrast degradation and non-uniform micro-angiogram images. In this paper, we propose a post-processing method that uses Rayleigh contrast-limited adaptive histogram equalization to increase the contrast and improve the overall appearance and uniformity of optical micro-angiograms without saturating the vessel intensity and changing the physical meaning of the micro-angiograms. The qualitative and quantitative performance of the proposed method is compared with those of common histogram equalization and contrast enhancement methods. We demonstrate that the proposed method outperforms other existing approaches. The proposed method is not limited to optical microangiography and can be used in other image modalities such as photo-acoustic tomography and scanning laser confocal microscopy.
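For experimentation with the general idea, standard contrast-limited adaptive histogram equalization is available off the shelf; the minimal sketch below uses scikit-image's equalize_adapthist as a stand-in. The paper's variant additionally shapes each local histogram toward a Rayleigh target, which this library call does not do, and the image, kernel size, and clip limit here are placeholders.

```python
import numpy as np
from skimage import exposure

# Placeholder for an en-face optical micro-angiogram, values in [0, 1].
angiogram = np.random.default_rng(0).random((256, 256))

# Standard CLAHE: each local histogram is equalized subject to a clip
# limit that caps contrast amplification, improving uniformity without
# saturating vessel intensities.
enhanced = exposure.equalize_adapthist(angiogram, kernel_size=32,
                                       clip_limit=0.02)
```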
An Adaptive Niching Genetic Algorithm using a niche size equalization mechanism
NASA Astrophysics Data System (ADS)
Nagata, Yuichi
Niching GAs have been widely investigated as a way to apply genetic algorithms (GAs) to multimodal function optimization problems. In this paper, we propose a new niching GA that attempts to form niches, each consisting of an equal number of individuals. The proposed GA can also be applied to combinatorial optimization problems by defining a distance metric in the search space. We apply the proposed GA to the job-shop scheduling problem (JSP) and demonstrate that the proposed niching method enhances the ability to maintain niches and improves the performance of GAs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arroyo, F.; Fernandez-Pereira, C.; Olivares, J.
2009-04-15
In this article, a hydrometallurgical method for the selective recovery of germanium from fly ash (FA) has been tested at pilot plant scale. The pilot plant flowsheet comprised a first stage of water leaching of FA and a subsequent selective recovery of the germanium from the leachate by a solvent extraction method. The solvent extraction method was based on Ge complexation with catechol in an aqueous solution, followed by the extraction of the Ge-catechol complex (Ge(C₆H₄O₂)₃²⁻) with an extracting organic reagent (trioctylamine) diluted in an organic solvent (kerosene), and the subsequent stripping of the organic extract. The process has been tested on a FA generated in an integrated gasification combined cycle (IGCC) process. The paper describes the designed 5 kg/h pilot plant and the tests performed on it. Under the operational conditions tested, approximately 50% of the germanium could be recovered from FA after a water extraction at room temperature. Regarding the solvent extraction method, the best operational conditions for obtaining a concentrated germanium-bearing solution practically free of impurities were as follows: extraction time equal to 20 min; aqueous phase/organic phase volumetric ratio equal to 5; stripping with 1 M NaOH; stripping time equal to 30 min; and stripping phase/organic phase volumetric ratio equal to 5. 95% of the germanium was recovered from water leachates using those conditions.
Ryu, Ehri; Cheong, Jeewon
2017-01-01
In this article, we evaluated the performance of statistical methods in single-group and multi-group analysis approaches for testing group difference in indirect effects and for testing simple indirect effects in each group. We also investigated whether the performance of the methods in the single-group approach was affected when the assumption of equal variance was not satisfied. The assumption was critical for the performance of the two methods in the single-group analysis: the method using a product term for testing the group difference in a single path coefficient, and the Wald test for testing the group difference in the indirect effect. Bootstrap confidence intervals in the single-group approach and all methods in the multi-group approach were not affected by the violation of the assumption. We compared the performance of the methods and provided recommendations.
Thresholding histogram equalization.
Chuang, K S; Chen, S; Hwang, I M
2001-12-01
The drawbacks of adaptive histogram equalization techniques are the loss of definition on the edges of the object and overenhancement of noise in the images. These drawbacks can be avoided if the noise is excluded in the equalization transformation function computation. A method has been developed to separate the histogram into zones, each with its own equalization transformation. This method can be used to suppress the nonanatomic noise and enhance only certain parts of the object. This method can be combined with other adaptive histogram equalization techniques. Preliminary results indicate that this method can produce images with superior contrast.
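A minimal sketch of the zone idea, under assumed fixed grey-level thresholds: the histogram is split at the thresholds and each zone receives its own equalization transform, so a zone known to contain only noise can be left unenhanced. The threshold values and the zone-local remapping rule are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def thresholded_equalization(img, thresholds=(64, 192)):
    """Split the grey-level histogram of an 8-bit image at the given
    thresholds and equalize each zone independently, remapping it onto
    its own grey-level range. A zone deemed to contain only noise could
    simply be skipped instead of equalized."""
    edges = (0, *thresholds, 256)
    out = img.astype(np.float64)  # working copy
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (img >= lo) & (img < hi)
        if not mask.any():
            continue
        vals = img[mask].astype(np.int64)
        hist, _ = np.histogram(vals, bins=hi - lo, range=(lo, hi))
        cdf = hist.cumsum() / hist.sum()
        out[mask] = lo + cdf[vals - lo] * (hi - lo - 1)  # zone-local remap
    return out.astype(img.dtype)
```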
A transformation method for constrained-function minimization
NASA Technical Reports Server (NTRS)
Park, S. K.
1975-01-01
A direct method for constrained-function minimization is discussed. The method involves the construction of an appropriate function mapping all of one finite dimensional space onto the region defined by the constraints. Functions which produce such a transformation are constructed for a variety of constraint regions including, for example, those arising from linear and quadratic inequalities and equalities. In addition, the computational performance of this method is studied in the situation where the Davidon-Fletcher-Powell algorithm is used to solve the resulting unconstrained problem. Good performance is demonstrated for 19 test problems by achieving rapid convergence to a solution from several widely separated starting points.
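For the simplest constraint region, a box a ≤ x ≤ b, the transformation idea can be sketched in a few lines: map all of Rⁿ onto the box via x(u) = a + (b - a) sin²(u) and minimize the composed function without constraints. BFGS stands in here for the Davidon-Fletcher-Powell algorithm used in the report, and the objective and bounds are invented test data rather than one of the 19 reported problems.

```python
import numpy as np
from scipy.optimize import minimize

# Box constraints a <= x <= b; x(u) maps every real u onto [a, b].
a, b = np.array([0.0, 0.0]), np.array([2.0, 3.0])

def x_of_u(u):
    return a + (b - a) * np.sin(u) ** 2

def f(x):  # invented test objective (Rosenbrock-type)
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

# Unconstrained minimization of the composed function; any quasi-Newton
# method works here.
res = minimize(lambda u: f(x_of_u(u)), x0=np.array([0.5, 0.5]),
               method="BFGS")
print(x_of_u(res.x))  # feasible minimizer inside the box
```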
Automated retina identification based on multiscale elastic registration.
Figueiredo, Isabel N; Moura, Susana; Neves, Júlio S; Pinto, Luís; Kumar, Sunil; Oliveira, Carlos M; Ramos, João D
2016-12-01
In this work we propose a novel method for identifying individuals based on retinal fundus image matching. The method is based on the image registration of retina blood vessels, since it is known that the retina vasculature of an individual is a signature, i.e., a distinctive pattern of the individual. The proposed image registration consists of a multiscale affine registration followed by a multiscale elastic registration. The major advantage of this particular two-step image registration procedure is that it is able to account for both rigid and non-rigid deformations either inherent to the retina tissues or as a result of the imaging process itself. Afterwards a decision identification measure, relying on a suitable normalized function, is defined to decide whether or not the pair of images belongs to the same individual. The method is tested on a data set of 21721 real pairs generated from a total of 946 retinal fundus images of 339 different individuals, consisting of patients followed in the context of different retinal diseases and also healthy patients. The evaluation of its performance reveals that it achieves a very low false rejection rate (FRR) at zero FAR (the false acceptance rate), equal to 0.084, as well as a low equal error rate (EER), equal to 0.053. Moreover, the tests performed by using only the multiscale affine registration, and discarding the multiscale elastic registration, clearly show the advantage of the proposed approach. The outcome of this study also indicates that the proposed method is reliable and competitive with other existing retinal identification methods, and forecasts its future appropriateness and applicability in real-life applications. Copyright © 2016 Elsevier Ltd. All rights reserved.
Interior noise reduction by alternate resonance tuning
NASA Technical Reports Server (NTRS)
Bliss, Donald B.; Gottwald, James A.; Bryce, Jeffrey W.
1987-01-01
Existing interior noise reduction techniques for aircraft fuselages perform reasonably well at higher frequencies, but are inadequate at low frequencies, particularly with respect to the low blade passage harmonics with high forcing levels found in propeller aircraft. A method is studied which considers aircraft fuselages lined with panels alternately tuned to frequencies above and below the frequency that must be attenuated. Adjacent panels would oscillate at equal amplitude, giving equal acoustic source strength, but with opposite phase. Provided these adjacent panels are acoustically compact, the resulting cancellation causes the interior acoustic modes to be cut off, and therefore to be nonpropagating and evanescent. This interior noise reduction method, called Alternate Resonance Tuning (ART), is being investigated theoretically and experimentally. Progress to date is discussed.
NASA Astrophysics Data System (ADS)
Lu, Dianchen; Seadawy, Aly R.; Ali, Asghar
2018-06-01
The Equal-Width and Modified Equal-Width equations are used as models in partial differential equations for the simulation of one-dimensional wave transmission in nonlinear media with dispersion processes. In this article we have employed the extended simple equation method and the exp(−φ(ξ))-expansion method to construct exact traveling wave solutions of the Equal-Width and Modified Equal-Width equations. The obtained results are novel and have numerous applications in current areas of research in mathematical physics. It is shown that our method, with the help of symbolic computation, provides an effective and powerful mathematical tool for solving different kinds of nonlinear wave problems.
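For reference, representative forms of the two model equations are shown below; conventions for the nonlinear coefficient vary across the literature, so these should be read as one common choice rather than necessarily the forms used in the article:

```latex
\begin{align}
  u_t + u\,u_x - \mu\,u_{xxt} &= 0  &&\text{(Equal-Width equation)} \\
  u_t + 3u^{2}u_x - \mu\,u_{xxt} &= 0  &&\text{(Modified Equal-Width equation)}
\end{align}
% In this convention the EW equation admits the solitary-wave solution
%   u(x,t) = 3c \,\mathrm{sech}^{2}\!\left[(x - ct - x_0)/(2\sqrt{\mu})\right].
```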
Modal ring method for the scattering of sound
NASA Technical Reports Server (NTRS)
Baumeister, Kenneth J.; Kreider, Kevin L.
1993-01-01
The modal element method for acoustic scattering can be simplified when the scattering body is rigid. In this simplified method, called the modal ring method, the scattering body is represented by a ring of triangular finite elements forming the outer surface. The acoustic pressure is calculated at the element nodes. The pressure in the infinite computational region surrounding the body is represented analytically by an eigenfunction expansion. The two solution forms are coupled by the continuity of pressure and velocity on the body surface. The modal ring method effectively reduces the two-dimensional scattering problem to a one-dimensional problem capable of handling very high frequency scattering. In contrast to the boundary element method or the method of moments, which perform a similar reduction in problem dimension, the modal ring method has the added advantage of a highly banded solution matrix requiring considerably less computer storage. The method shows excellent agreement with analytic results for scattering from rigid circular cylinders over a wide frequency range (1 ≤ ka ≤ 100) in both the near and far fields.
The Impact of Hospital Size on CMS Hospital Profiling.
Sosunov, Eugene A; Egorova, Natalia N; Lin, Hung-Mo; McCardle, Ken; Sharma, Vansh; Gelijns, Annetine C; Moskowitz, Alan J
2016-04-01
The Centers for Medicare & Medicaid Services (CMS) profile hospitals using a set of 30-day risk-standardized mortality and readmission rates as a basis for public reporting. These measures are affected by hospital patient volume, raising concerns about uniformity of standards applied to providers with different volumes. To quantitatively determine whether CMS uniformly profile hospitals that have equal performance levels but different volumes. Retrospective analysis of patient-level and hospital-level data using hierarchical logistic regression models with hospital random effects. Simulation of samples including a subset of hospitals with different volumes but equal poor performance (hospital effects=+3 SD in random-effect logistic model). A total of 1,085,568 Medicare fee-for-service patients undergoing 1,494,993 heart failure admissions in 4930 hospitals between July 1, 2005 and June 30, 2008. CMS methodology was used to determine the rank and proportion (by volume) of hospitals reported to perform "Worse than US National Rate." Percent of hospitals performing "Worse than US National Rate" was ∼40 times higher in the largest (fifth quintile by volume) compared with the smallest hospitals (first quintile). A similar gradient was seen in a cohort of 100 hospitals with simulated equal poor performance (0%, 0%, 5%, 20%, and 85% in quintiles 1 to 5) effectively leaving 78% of poor performers undetected. Our results illustrate the disparity of impact that the current CMS method of hospital profiling has on hospitals with higher volumes, translating into lower thresholds for detection and reporting of poor performance.
Lui, Kung-Jong; Chang, Kuang-Chao
2016-10-01
When the frequency of event occurrences follows a Poisson distribution, we develop procedures for testing equality of treatments and interval estimators for the ratio of mean frequencies between treatments under a three-treatment three-period crossover design. Using Monte Carlo simulations, we evaluate the performance of these test procedures and interval estimators in various situations. We note that all test procedures developed here can perform well with respect to Type I error even when the number of patients per group is moderate. We further note that the two weighted-least-squares (WLS) test procedures derived here are generally preferable to the other two commonly used test procedures in the contingency table analysis. We also demonstrate that both interval estimators based on the WLS method and interval estimators based on Mantel-Haenszel (MH) approach can perform well, and are essentially of equal precision with respect to the average length. We use a double-blind randomized three-treatment three-period crossover trial comparing salbutamol and salmeterol with a placebo with respect to the number of exacerbations of asthma to illustrate the use of these test procedures and estimators. © The Author(s) 2014.
Shah, S N R; Sulong, N H Ramli; Shariati, Mahdi; Jumaat, M Z
2015-01-01
Steel pallet rack (SPR) beam-to-column connections (BCCs) are largely responsible for preventing sway failure of frames in the down-aisle direction. The overall geometry of the beam end connectors commercially used in SPR BCCs varies and does not allow a generalized analytical approach for all types of beam end connectors; however, identifying the effects of the configuration, profile, and sizes of the connection components could be a suitable approach for practical design engineers to predict the generalized behavior of any SPR BCC. This paper describes the experimental behavior of SPR BCCs tested using a double cantilever test set-up. Eight sets of specimens were identified based on variation in column thickness, beam depth, and number of tabs in the beam end connector, in order to investigate the factors most influencing connection performance. Four tests were performed for each set to bring uniformity to the results, taking the total number of tests to thirty-two. The moment-rotation (M-θ) behavior, load-strain relationship, major failure modes, and the influence of the selected parameters on connection performance were investigated. A comparative study to calculate the connection stiffness was carried out using the initial stiffness method, the slope to half-ultimate moment method, and the equal area method. To determine the most appropriate method, the mean stiffness of all tested connections and the variance in mean stiffness values according to all three methods were calculated. The initial stiffness method is considered to overestimate the stiffness values compared to the other two methods. The equal area method provided the most consistent stiffness values and the lowest variance in the data set.
NASA Astrophysics Data System (ADS)
Fan, Y. Z.; Zuo, Z. G.; Liu, S. H.; Wu, Y. L.; Sha, Y. J.
2012-11-01
Primary formula derivation indicates that the casing dimension of one existing centrifugal boiler circulation pump is too large. As considerable manufacturing cost can be saved by reducing this dimension, a numerical simulation study of dimension reduction is presented in this paper for the annular casing of this pump, which has a specific speed equal to 189; the aim is to find an appropriately smaller casing dimension while leaving hydraulic performance and strength performance essentially unchanged, in accordance with the requirements of the cooperating company. The research object is an existing centrifugal pump with a diffuser and a semi-spherical annular casing, working as the boiler circulation pump for (ultra) supercritical units in power plants. The dimension reduction is achieved by decreasing the existing casing's internal radius (denoted "Ri0") while keeping the wall thickness. The analysis is based on primary formula derivation, CFD (Computational Fluid Dynamics) simulation, and FEM (Finite Element Method) simulation. The formula derivation estimates that the redesigned casing's internal radius should be less than 0.75 Ri0. CFD analysis indicates that the smaller casing with 0.75 Ri0 has worse hydraulic performance at large flow rates and better hydraulic performance at small flow rates. Considering both hydraulic performance and dimension reduction, an appropriate casing internal radius equal to 0.875 Ri0 is determined. FEM analysis then confirms that the modified pump casing has nearly the same strength performance as the existing one. It is concluded that dimension reduction can be an economical as well as practical method for large pumps in engineering applications.
Software for computerised analysis of cardiotocographic traces.
Romano, M; Bifulco, P; Ruffo, M; Improta, G; Clemente, F; Cesarelli, M
2016-02-01
Despite the widespread use of cardiotocography in foetal monitoring, the evaluation of foetal status suffers from considerable inter- and intra-observer variability. In order to overcome the main limitations of visual cardiotocographic assessment, computerised methods to analyse cardiotocographic recordings have recently been developed. In this study, a new software package for automated analysis of foetal heart rate is presented. It provides an automatic procedure for measuring the most relevant parameters derivable from cardiotocographic traces. Simulated and real cardiotocographic traces were analysed to test software reliability. In artificial traces, we simulated a set number of events (accelerations, decelerations, and contractions) to be recognised. In the case of real signals, results of the computerised analysis were compared with the visual assessment performed by 18 expert clinicians, and three performance indexes were computed to characterise the performance of the proposed software. The software showed preliminary performance that we judged satisfactory, in that the results fully matched the requirements: in the tests on artificial signals, all simulated events were detected by the software. Performance indexes computed against the obstetricians' evaluations are, on the contrary, less satisfactory, yielding a sensitivity of 93%, a positive predictive value of 82%, and an accuracy of 77%. Very probably this arises from the high variability of trace annotation carried out by clinicians. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Optimal digital filtering for tremor suppression.
Gonzalez, J G; Heredia, E A; Rahman, T; Barner, K E; Arce, G R
2000-05-01
Remote manually operated tasks such as those found in teleoperation, virtual reality, or joystick-based computer access, require the generation of an intermediate electrical signal which is transmitted to the controlled subsystem (robot arm, virtual environment, or a cursor in a computer screen). When human movements are distorted, for instance, by tremor, performance can be improved by digitally filtering the intermediate signal before it reaches the controlled device. This paper introduces a novel tremor filtering framework in which digital equalizers are optimally designed through pursuit tracking task experiments. Due to inherent properties of the man-machine system, the design of tremor suppression equalizers presents two serious problems: 1) performance criteria leading to optimizations that minimize mean-squared error are not efficient for tremor elimination and 2) movement signals show ill-conditioned autocorrelation matrices, which often result in useless or unstable solutions. To address these problems, a new performance indicator in the context of tremor is introduced, and the optimal equalizer according to this new criterion is developed. Ill-conditioning of the autocorrelation matrix is overcome using a novel method which we call pulled-optimization. Experiments performed with artificially induced vibrations and a subject with Parkinson's disease show significant improvement in performance. Additional results, along with MATLAB source code of the algorithms, and a customizable demo for PC joysticks, are available on the Internet at http://tremor-suppression.com.
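The abstract does not spell out "pulled-optimization", so the sketch below substitutes plain diagonal loading, a standard remedy for ill-conditioned autocorrelation matrices, and is not the authors' method: the taps are the regularized Wiener solution w = (R + δI)⁻¹p estimated from an input/desired signal pair. The tap count and loading factor are assumptions.

```python
import numpy as np

def loaded_wiener(R, p, delta=1e-3):
    """Regularized Wiener taps w = (R + delta*I)^(-1) p; diagonal loading
    stabilizes the solve when R is ill-conditioned (used here only as a
    stand-in for the paper's pulled-optimization)."""
    return np.linalg.solve(R + delta * np.eye(R.shape[0]), p)

def design_equalizer(x, d, n_taps=16, delta=1e-3):
    """Estimate the autocorrelation matrix R and cross-correlation vector
    p from a tremor-corrupted input x and a desired signal d, then solve
    for the FIR equalizer taps."""
    X = np.lib.stride_tricks.sliding_window_view(np.asarray(x), n_taps)
    d = np.asarray(d)[n_taps - 1:]
    R = X.T @ X / len(X)
    p = X.T @ d / len(X)
    return loaded_wiener(R, p, delta)
```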
A feasible DY conjugate gradient method for linear equality constraints
NASA Astrophysics Data System (ADS)
LI, Can
2017-09-01
In this paper, we propose a feasible conjugate gradient method for solving linear equality constrained optimization problems. The method is an extension to linear equality constrained problems of the Dai-Yuan conjugate gradient method. It can be applied to large linear equality constrained problems owing to its low storage requirement. An attractive property of the method is that the generated direction is always a feasible descent direction. Under mild conditions, the global convergence of the proposed method with exact line search is established. Numerical experiments are also given which show the efficiency of the method.
TG (Tri-Goniometry) technique: Obtaining perfect angles in Z-plasty planning with a simple ruler.
Görgülü, Tahsin; Olgun, Abdulkerim
2016-03-01
The Z-plasty is used frequently in hand surgery to release post-burn scar contractures. Correct angles and equal limb lengths are the most important elements of the Z-plasty technique. A simple ruler is enough to equalize the limb lengths, but a goniometer is normally needed for accurate and equal angles. Classically, angles of 30°, 45°, 60°, 75°, and 90° are used. These angles are important when elongating a contracture line or decreasing tension. Our method uses only trigonometric coefficients and a simple ruler, which is easily obtained and sterilized, enabling surgeons to perform all types of Z-plasty accurately without measuring angles with a goniometer. Copyright © 2015 Elsevier Ltd and ISBI. All rights reserved.
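The trigonometric idea reduces to a chord-length computation: two equal limbs of length L meeting at angle θ have free ends separated by 2L sin(θ/2), so measuring that chord with a ruler fixes the angle. The sketch below tabulates the chord for the classical angles; it illustrates the principle only and does not reproduce the paper's coefficient table.

```python
import math

def chord_for_angle(limb_length_mm, angle_deg):
    """Distance between the free ends of two equal Z-plasty limbs meeting
    at a given angle: chord = 2 * L * sin(theta / 2). Marking this chord
    with a ruler fixes the angle without a goniometer."""
    return 2 * limb_length_mm * math.sin(math.radians(angle_deg) / 2)

for angle in (30, 45, 60, 75, 90):
    print(angle, round(chord_for_angle(10.0, angle), 2))  # 10 mm limbs
```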
29 CFR 1620.15 - Jobs requiring equal skill in performance.
Code of Federal Regulations, 2011 CFR
2011-07-01
.... As a simple illustration of the principle of equal skill, suppose that a man and a woman have jobs... majority of their work, whether or not these jobs require equal skill in performance will depend upon the nature of the work performed during the latter period to meet the requirements of the jobs. ...
Methods of increasing the harshness of texture of old concrete pavements--acid etching.
DOT National Transportation Integrated Search
1975-01-01
Of the four acids tested in the laboratory, the nitric and hydrochloric types were selected for field experiments. These two acids performed about equally well, the choice as to which to use is dictated by price and availability. In the field experim...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Xinhua; Zhang, Da; Liu, Bob, E-mail: bliu7@mgh.harvard.edu
2014-11-01
Purpose: The approach to equilibrium function has been used previously to calculate the radiation dose to a shift-invariant medium undergoing CT scans with constant tube current [Li, Zhang, and Liu, Med. Phys. 39, 5347–5352 (2012)]. The authors have adapted this method to CT scans with tube current modulation (TCM). Methods: For a scan with variable tube current, the scan range was divided into multiple subscan ranges, each with a nearly constant tube current. Then the dose calculation algorithm presented previously was applied. For a clinical CT scan series that presented tube current per slice, the authors adopted an efficient approach that computed the longitudinal dose distribution for one scan length equal to the slice thickness, whose center was at z = 0. The cumulative dose at a specific point was a summation of the contributions from all slices and the overscan. Results: The dose calculations performed for a total of four constant and variable tube current distributions agreed with the published results of Dixon and Boone [Med. Phys. 40, 111920 (14pp.) (2013)]. For an abdomen/pelvis scan of an anthropomorphic phantom (model ATOM 701-B, CIRS, Inc., VA) on a GE Lightspeed Pro 16 scanner with 120 kV, N × T = 20 mm, pitch = 1.375, z-axis current modulation (auto mA), and angular current modulation (smart mA), dose measurements were performed using two lines of optically stimulated luminescence dosimeters, one placed near the phantom center and the other on the surface. Dose calculations were performed on the central and peripheral axes of a cylinder containing water, whose cross-sectional mass was about equal to that of the ATOM phantom in its abdominal region, and the results agreed with the measurements within 28.4%. Conclusions: The described method provides an effective approach that takes into account subject size, scan length, and constant or variable tube current to evaluate CT dose to a shift-invariant medium. For a clinical CT scan, dose calculations may be performed with a water-containing cylinder whose cross-sectional mass is equal to that of the subject. This method has the potential to substantially improve evaluations of patient dose from clinical CT scans, compared to CTDIvol, size-specific dose estimate (SSDE), or the dose evaluated for a TCM scan with a constant tube current equal to the average tube current of the TCM scan.
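A minimal sketch of the superposition step described in the Methods: the cumulative dose is D(z) = Σ_k i_k f(z − z_k), where f is the longitudinal dose profile of a single scan one slice thickness long, centered at z = 0. The Gaussian profile, slice spacing, and mA values below are invented placeholders, not the measured quantities from the paper.

```python
import numpy as np

def cumulative_dose(z, slice_centers, tube_currents, dose_profile):
    """Cumulative dose D(z) = sum_k i_k * f(z - z_k): superposition of
    per-slice contributions weighted by the per-slice tube current."""
    D = np.zeros_like(z, dtype=float)
    for z_k, i_k in zip(slice_centers, tube_currents):
        D += i_k * dose_profile(z - z_k)
    return D

profile = lambda z: np.exp(-0.5 * (z / 20.0) ** 2)  # placeholder f(z), mm
z = np.linspace(-200.0, 200.0, 401)
centers = np.arange(-100.0, 101.0, 20.0)            # slice centers (mm)
currents = 100.0 + 50.0 * np.sin(centers / 40.0)    # toy TCM mA profile
dose = cumulative_dose(z, centers, currents, profile)
```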
29 CFR 1620.14 - Testing equality of jobs.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 29 Labor 4 2013-07-01 2013-07-01 false Testing equality of jobs. 1620.14 Section 1620.14 Labor... Testing equality of jobs. (a) In general. What constitutes equal skill, equal effort, or equal..., or responsibility required for the performance of jobs will not render the equal pay standard...
29 CFR 1620.14 - Testing equality of jobs.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 29 Labor 4 2014-07-01 2014-07-01 false Testing equality of jobs. 1620.14 Section 1620.14 Labor... Testing equality of jobs. (a) In general. What constitutes equal skill, equal effort, or equal..., or responsibility required for the performance of jobs will not render the equal pay standard...
29 CFR 1620.14 - Testing equality of jobs.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 4 2010-07-01 2010-07-01 false Testing equality of jobs. 1620.14 Section 1620.14 Labor... Testing equality of jobs. (a) In general. What constitutes equal skill, equal effort, or equal..., or responsibility required for the performance of jobs will not render the equal pay standard...
29 CFR 1620.14 - Testing equality of jobs.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 29 Labor 4 2012-07-01 2012-07-01 false Testing equality of jobs. 1620.14 Section 1620.14 Labor... Testing equality of jobs. (a) In general. What constitutes equal skill, equal effort, or equal..., or responsibility required for the performance of jobs will not render the equal pay standard...
29 CFR 1620.14 - Testing equality of jobs.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 29 Labor 4 2011-07-01 2011-07-01 false Testing equality of jobs. 1620.14 Section 1620.14 Labor... Testing equality of jobs. (a) In general. What constitutes equal skill, equal effort, or equal..., or responsibility required for the performance of jobs will not render the equal pay standard...
Hybrid acousto-optic and digital equalization for microwave digital radio channels
NASA Astrophysics Data System (ADS)
Anderson, C. S.; Vanderlugt, A.
1990-11-01
Digital radio transmission systems use complex modulation schemes that require powerful signal-processing techniques to correct channel distortions and to minimize BERs. This paper proposes combining the computational power of acousto-optic processing with the accuracy of digital processing to produce a hybrid channel equalizer that exceeds the performance of digital equalization alone. Analysis shows that a hybrid equalizer for 256-level quadrature amplitude modulation (QAM) performs better than a digital equalizer for 64-level QAM.
Robust Blind Learning Algorithm for Nonlinear Equalization Using Input Decision Information.
Xu, Lu; Huang, Defeng David; Guo, Yingjie Jay
2015-12-01
In this paper, we propose a new blind learning algorithm, namely, the Benveniste-Goursat input-output decision (BG-IOD), to enhance the convergence performance of neural-network-based equalizers for nonlinear channel equalization. In contrast to conventional blind learning algorithms, where only the output of the equalizer is employed for updating system parameters, the BG-IOD exploits a new type of extra information, the input decision information obtained from the input of the equalizer, to mitigate the influence of the nonlinear equalizer structure on parameter learning, thereby leading to improved convergence performance. We prove that, with the input decision information, a desirable convergence property can be achieved: the output symbol error rate (SER) is always less than the input SER, provided the input SER is below a threshold. Then, the BG soft-switching technique is employed to combine the merits of both input and output decision information, where the former is used to guarantee SER convergence and the latter to improve SER performance. Simulation results show that the proposed algorithm outperforms conventional blind learning algorithms, such as stochastic quadratic distance and the dual-mode constant modulus algorithm, in terms of both convergence performance and SER performance for nonlinear equalization.
NASA Astrophysics Data System (ADS)
Maghrabi, Mahmoud M. T.; Kumar, Shiva; Bakr, Mohamed H.
2018-02-01
This work introduces a powerful digital nonlinear feed-forward equalizer (NFFE) exploiting a multilayer artificial neural network (ANN). It mitigates impairments of optical communication systems arising from the nonlinearity introduced by direct photo-detection. In a direct detection system, the detection process is nonlinear because the photo-current is proportional to the absolute square of the electric field intensity. The proposed equalizer provides the most efficient computational cost with high equalization performance; its performance is comparable to the benchmark compensation performance achieved by a maximum-likelihood sequence estimator. The equalizer trains an ANN to act as a nonlinear filter whose impulse response removes the intersymbol interference (ISI) distortions of the optical channel. Owing to the proposed extensive training of the equalizer, it achieves the ultimate performance limit of any feed-forward equalizer (FFE). The performance and efficiency of the equalizer are investigated by applying it to various practical short-reach fiber-optic communication system scenarios, extracted from practical metro/media access networks and data center applications. The obtained results show that the ANN-NFFE compensates for the received BER degradation and significantly increases the tolerance to chromatic dispersion distortion.
A study of the tolerance block approach to special stratification. [winter wheat in Kansas
NASA Technical Reports Server (NTRS)
Richardson, W. (Principal Investigator)
1979-01-01
The author has identified the following significant results. Twelve winter wheat LACIE segments in Kansas were used to compare the performance of three clustering methods: (1) BCLUST, which uses a spectral distance function to accumulate clusters; (2) blocks-alone, which divides spectral space into equally populated blocks; and (3) block-seeds, which uses the spectral means of blocks-alone as seeds for accumulating distance-type clusters. BCLUST and block-seeds performed equally well and significantly outperformed blocks-alone. Their average variance ratio of about 0.5 showed imperfect separation of wheat from non-wheat. This result points to the need to explore the achievable crop separability in the spectral/temporal domain, and suggests evaluating derived features rather than raw data channels as a means to achieve purer spectral strata.
Design optimization of sinusoidal glass honeycomb for flat plate solar collectors
NASA Technical Reports Server (NTRS)
Mcmurrin, J. C.; Buchberg, H.
1980-01-01
The design of honeycomb made of sinusoidally corrugated glass strips was optimized for use in water-cooled, single-glazed flat plate solar collectors with non-selective black absorbers. Cell diameter (d), cell height (L), and pitch/diameter ratio (P/d) maximizing solar collector performance and cost effectiveness for given cell wall thickness (t_w) and optical properties of glass were determined from radiative and convective honeycomb characteristics and collector performance, all calculated with experimentally validated algorithms. Relative lifetime values were estimated from present materials costs and postulated production methods for corrugated glass honeycomb cover assemblies. A honeycomb with P/d = 1.05, d = 17.4 mm, L = 146 mm, and t_w = 0.15 mm would provide near-optimal performance over the range 0 °C ≤ ΔT_C ≤ 80 °C and be superior in performance and cost effectiveness to a non-honeycomb collector with a 0.92/0.12 selective black absorber.
Method and apparatus for holding two separate metal pieces together for welding
NASA Technical Reports Server (NTRS)
Mcclure, S. R. (Inventor)
1980-01-01
A method of holding two separate metal pieces together for welding is described including the steps of overlapping a portion of one of the metal pieces on a portion of the other metal piece, encasing the overlapping metal piece in a compressible device, drawing the compressible device into an enclosure, and compressing a portion of the compressible device around the overlapping portions of the metal pieces for holding the metal pieces under constant and equal pressure during welding. The preferred apparatus for performing the method utilizes a support mechanism to support the two separate metal pieces in an overlapping configuration; a compressible device surrounding the support mechanism and at least one of the metal pieces, and a compressing device surrounding the compressible device for compressing the compressible device around the overlapping portions of the metal pieces, thus providing constant and equal pressure at all points on the overlapping portions of the metal pieces.
One-way ANOVA based on interval information
NASA Astrophysics Data System (ADS)
Hesamian, Gholamreza
2016-08-01
This paper deals with extending one-way analysis of variance (ANOVA) to the case where the observed data are represented by closed intervals rather than real numbers. In this approach, a notion of interval random variable is first introduced. In particular, a normal distribution with interval parameters is introduced to investigate hypotheses about the equality of interval means or to test the homogeneity-of-interval-variances assumption. Moreover, the least significant difference (LSD) method for multiple comparison of interval means is developed for when the null hypothesis about the equality of means is rejected. Then, at a given interval significance level, an index is applied to compare the interval test statistic and the related interval critical value as a criterion to accept or reject the null interval hypothesis of interest. Finally, the decision-making method leads to degrees of acceptance or rejection of the interval hypotheses. An applied example is used to show the performance of this method.
Method for evaluating wind turbine wake effects on wind farm performance
NASA Technical Reports Server (NTRS)
Neustadter, H. E.; Spera, D. A.
1985-01-01
A method of testing the performance of a cluster of wind turbine units and data analysis equations are presented, which together form a simple and direct procedure for determining the reduction in energy output caused by the wake of an upwind turbine. This method appears to solve the problems presented by data scatter and wind variability. Test data from the three-unit Mod-2 wind turbine cluster at Goldendale, Washington, are analyzed to illustrate the application of the proposed method. In this sample case, the reduction in energy was found to be about 10 percent when the Mod-2 units were separated by a distance equal to seven diameters and winds were below rated.
A Fourier Method for Sidelobe Reduction in Equally Spaced Linear Arrays
NASA Astrophysics Data System (ADS)
Safaai-Jazi, Ahmad; Stutzman, Warren L.
2018-04-01
Uniformly excited, equally spaced linear arrays have a sidelobe level larger than -13.3 dB, which is too high for many applications. This limitation can be remedied by nonuniform excitation of array elements. We present an efficient method for sidelobe reduction in equally spaced linear arrays with low penalty on the directivity. The method involves the following steps: construction of a periodic function containing only the sidelobes of the uniformly excited array, calculation of the Fourier series of this periodic function, subtracting the series from the array factor of the original uniformly excited array after it is truncated, and finally mitigating the truncation effects which yields significant increase in sidelobe level reduction. A sidelobe reduction factor is incorporated into element currents that makes much larger sidelobe reductions possible and also allows varying the sidelobe level incrementally. It is shown that such newly formed arrays can provide sidelobe levels that are at least 22.7 dB below those of the uniformly excited arrays with the same size and number of elements. Analytical expressions for element currents are presented. Radiation characteristics of the sidelobe-reduced arrays introduced here are examined, and numerical results for directivity, sidelobe level, and half-power beam width are presented for example cases. Performance improvements over popular conventional array synthesis methods, such as Chebyshev and linear current tapered arrays, are obtained with the new method.
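As a baseline for what the synthesis improves upon, the sketch below evaluates the array factor of a uniformly excited, equally spaced line array, AF(θ) = Σ_n I_n exp(j 2π n d cosθ) with d in wavelengths; its highest sidelobe sits near the -13.3 dB figure quoted above. The element count and spacing are arbitrary, and the Fourier-series synthesis itself is not reproduced here.

```python
import numpy as np

def array_factor_db(currents, d_lambda, theta):
    """Normalized array factor (dB) of an equally spaced linear array:
    AF(theta) = sum_n I_n * exp(j * 2*pi * n * d * cos(theta)),
    with element spacing d in wavelengths."""
    n = np.arange(len(currents))
    psi = 2.0 * np.pi * d_lambda * np.cos(theta)
    af = np.abs(currents @ np.exp(1j * np.outer(n, psi)))
    return 20.0 * np.log10(np.maximum(af / af.max(), 1e-12))

theta = np.linspace(0.0, np.pi, 2001)
af_db = array_factor_db(np.ones(16), 0.5, theta)
# The highest sidelobe of this uniform 16-element array is near -13 dB,
# the level that nonuniform excitation is designed to push down.
```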
Yoganandan, Narayan; Arun, Mike W J; Humm, John; Pintar, Frank A
2014-10-01
The first objective of the study was to determine thorax and abdomen deflection-time corridors using the equal stress equal velocity approach, from oblique side impact sled tests with postmortem human surrogates fitted with chestbands. The second objective was to generate deflection-time corridors using impulse momentum methods and to determine which of these methods best suits the data. An anthropometry-specific load wall was used. Individual surrogate responses were normalized to standard midsize male anthropometry. Corridors from the equal stress equal velocity approach were very similar to those from the impulse momentum methods, so either method can be used for these data. The present mean and plus/minus one standard deviation abdomen and thorax deflection-time corridors can be used to evaluate dummies and validate complex human body finite element models.
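A sketch of the equal-stress/equal-velocity scaling referenced above, assuming the commonly used 76 kg midsize-male reference mass: with λ = (m_std/m_subj)^(1/3), time and deflection scale by λ and force by λ². The paper's exact normalization constants may differ.

```python
def equal_stress_equal_velocity(time_s, deflection_m, force_n,
                                subject_mass_kg, standard_mass_kg=76.0):
    """Equal-stress/equal-velocity normalization to a standard mass:
    lambda = (m_std / m_subj)**(1/3); time and deflection scale by
    lambda, force scales by lambda**2."""
    lam = (standard_mass_kg / subject_mass_kg) ** (1.0 / 3.0)
    return time_s * lam, deflection_m * lam, force_n * lam ** 2

# e.g., scaling an 82 kg surrogate's response channels to midsize male:
t, d, f = equal_stress_equal_velocity(0.010, 0.042, 3100.0, 82.0)
```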
Using recurrent neural networks for adaptive communication channel equalization.
Kechriotis, G; Zervas, E; Manolakos, E S
1994-01-01
Nonlinear adaptive filters based on a variety of neural network models have been used successfully for system identification and noise cancellation in a wide class of applications. An important problem in data communications is that of channel equalization, i.e., the removal of interferences introduced by linear or nonlinear message-corrupting mechanisms, so that the originally transmitted symbols can be recovered correctly at the receiver. In this paper we introduce an adaptive recurrent neural network (RNN) based equalizer whose small size and high performance make it suitable for high-speed channel equalization. We propose RNN-based structures for both trained adaptation and blind equalization, and we evaluate their performance via extensive simulations for a variety of signal modulations and communication channel models. It is shown that the RNN equalizers have performance comparable to traditional linear-filter-based equalizers when the channel interferences are relatively mild, and that they outperform them by several orders of magnitude when either the channel's transfer function has spectral nulls or severe nonlinear distortion is present. In addition, the small-size RNN equalizers, being essentially generalized IIR filters, are shown to outperform multilayer perceptron equalizers of larger computational complexity in both linear and nonlinear channel equalization cases.
A novel load balanced energy conservation approach in WSN using biogeography based optimization
NASA Astrophysics Data System (ADS)
Kaushik, Ajay; Indu, S.; Gupta, Daya
2017-09-01
Clustering sensor nodes is an effective technique to reduce the energy consumption of the sensor nodes and maximize the lifetime of wireless sensor networks. Balancing the load of the cluster heads is an important factor in the long-term operation of WSNs. In this paper we propose a novel load balancing approach using biogeography-based optimization (LB-BBO). LB-BBO uses two separate fitness functions to perform load balancing of equal and unequal loads, respectively. The proposed method is simulated using MATLAB and compared with existing methods, showing better performance than all previous works on energy conservation in WSN.
Abbaszadeh, Abbas; Sabeghi, Hakimeh; Borhani, Fariba; Heydari, Abbas
2011-01-01
BACKGROUND: Accurate recording of nursing care indicates the care performance and its quality, such that any failure in documentation can be a reason for inadequate patient care. Therefore, improving nurses' skills in this field using effective educational methods is of high importance. Since traditional teaching methods are not suitable for communities with rapid knowledge expansion and constant change, e-learning methods can be a viable alternative. To show the importance of e-learning methods for nurses' care reporting skills, this study was performed to compare e-learning with traditional instructor-led methods. METHODS: This was a quasi-experimental study aimed at comparing the effect of two teaching methods (e-learning and lecture) on nursing documentation and examining the differences in acquiring documentation competency between nurses who participated in e-learning (n = 30) and nurses in a lecture group (n = 31). RESULTS: The results of the present study indicated that statistically there was no significant difference between the two groups. The findings also revealed no statistically significant correlation between the two groups with respect to demographic variables. However, given the benefits of e-learning over the traditional instructor-led method, and their equal effect on nurses' documentation competency, e-learning can be a qualified substitute for the traditional instructor-led method. CONCLUSIONS: E-learning as a student-centered method, as well as the lecture method, equally promote nurses' competency in documentation. Therefore, e-learning can be used to facilitate the implementation of nursing educational programs.
Vařeková, Radka Svobodová; Jiroušková, Zuzana; Vaněk, Jakub; Suchomel, Šimon; Koča, Jaroslav
2007-01-01
The Electronegativity Equalization Method (EEM) is a fast approach for charge calculation. A challenging part of the EEM is the parameterization, which is performed using ab initio charges obtained for a set of molecules. The goal of our work was to perform the EEM parameterization for selected sets of organic, organohalogen and organometal molecules. We have performed the most robust parameterization published so far. The EEM parameterization was based on 12 training sets selected from a database of predicted 3D structures (NCI DIS) and from a database of crystallographic structures (CSD). Each set contained from 2000 to 6000 molecules. We have shown that the number of molecules in the training set is very important for the quality of the parameters. We have improved EEM parameters (STO-3G MPA charges) for elements that were already parameterized, specifically C, O, N, H, S, F and Cl. The new parameters provide more accurate charges than those published previously. We have also developed new parameters for elements that had not been parameterized before, specifically Br, I, Fe and Zn. Finally, we performed crossover validation of all obtained parameters using all training sets that included the relevant elements and confirmed that the calculated parameters provide accurate charges.
Aghababaeian, Hamidreza; Sedaghat, Soheila; Tahery, Noorallah; Moghaddam, Ali Sadeghi; Maniei, Mohammad; Bahrami, Nosrat; Ahvazi, Ladan Araghi
2013-12-01
Educating emergency medical staff in triage skills is an important aspect of disaster preparedness. The aim of the study was to compare the effect of role-playing and educational video presentation on the learning and performance of emergency medical service staff in Khozestan, Iran. A total of 144 emergency technicians were randomly classified into two groups. A researcher trained the first group using an educational video method and the second group with a role-playing method. Data were collected before, immediately after, and 15 days after training using a questionnaire covering three domains: demographic information, triage knowledge, and triage performance. The data were analyzed using defined knowledge and performance parameters. There was no significant difference between the two training methods in immediate knowledge (P = .2), lasting knowledge (P = .05), and immediate performance (P = .35), but there was a statistical advantage for the role-playing method in lasting performance (P = .02). The two educational methods equally increase knowledge and performance, but the role-playing method may have a more desirable and lasting effect on performance.
Equalizer reduces SNP bias in Affymetrix microarrays.
Quigley, David
2015-07-30
Gene expression microarrays measure the levels of messenger ribonucleic acid (mRNA) in a sample using probe sequences that hybridize with transcribed regions. These probe sequences are designed using a reference genome for the relevant species. However, most model organisms and all humans have genomes that deviate from their reference. These variations, which include single nucleotide polymorphisms, insertions of additional nucleotides, and nucleotide deletions, can affect the microarray's performance. Genetic experiments comparing individuals bearing different population-associated single nucleotide polymorphisms that intersect microarray probes are therefore subject to systemic bias, as the reduction in binding efficiency due to a technical artifact is confounded with genetic differences between parental strains. This problem has been recognized for some time, and earlier methods of compensation have attempted to identify probes affected by genome variants using statistical models. These methods may require replicate microarray measurement of gene expression in the relevant tissue in inbred parental samples, which are not always available in model organisms and are never available in humans. By using sequence information for the genomes of organisms under investigation, potentially problematic probes can now be identified a priori. However, there is no published software tool that makes it easy to eliminate these probes from an annotation. I present equalizer, a software package that uses genome variant data to modify annotation files for the commonly used Affymetrix IVT and Gene/Exon platforms. These files can be used by any microarray normalization method for subsequent analysis. I demonstrate how use of equalizer on experiments mapping germline influence on gene expression in a genetic cross between two divergent mouse species and in human samples significantly reduces probe hybridization-induced bias, reducing false positive and false negative findings. The equalizer package reduces probe hybridization bias from experiments performed on the Affymetrix microarray platform, allowing accurate assessment of germline influence on gene expression.
Simulation of 3-D viscous compressible flow in multistage turbomachinery by finite element methods
NASA Astrophysics Data System (ADS)
Sleiman, Mohamad
1999-11-01
The flow in a multistage turbomachinery blade row is compressible, viscous, and unsteady. Complex flow features such as boundary layers, wake migration from upstream blade rows, shocks, tip leakage jets, and vortices interact as the flow convects through the stages. These interactions contribute significantly to the aerodynamic losses of the system and degrade the performance of the machine. The unsteadiness also leads to blade vibration and a shortening of blade life. It is therefore difficult to optimize the design of a blade row, whether aerodynamically or structurally, in isolation, without accounting for the effects of the upstream and downstream rows. The effects of axial spacing, blade count, clocking (relative position of follow-up rotors with respect to wakes shed by upstream ones), and levels of unsteadiness may have a significant impact on performance and durability. In this thesis, finite element formulations for the simulation of multistage turbomachinery are presented in terms of the Reynolds-averaged Navier-Stokes equations for three-dimensional steady or unsteady, viscous, compressible, turbulent flows. Three methodologies are presented and compared. First, a steady multistage analysis using a mixing-plane model has been implemented and validated against engine data. For axial machines, the mixing-plane simulations match the experimental data very well. However, the results for a centrifugal stage, consisting of an impeller followed by a vaned diffuser of equal pitch, show flagrant inconsistency with engine performance data, indicating that the mixing-plane method is inappropriate for centrifugal machines. Following these findings, a more complete unsteady multistage model has been devised for a configuration with equal numbers of rotor and stator blades (equal pitches). Non-matching grids are used at the rotor-stator interface, and an implicit interpolation procedure is devised to ensure continuity of fluxes across it. This permits the rotor and stator equations to be solved in a fully-coupled manner, allowing larger time steps in attaining a time-periodic solution. This equal-pitch approach has been validated on the complex geometry of a centrifugal stage. Finally, for a stage configuration with unequal pitches, the time-inclined method, developed by Giles (1991) for 2-D viscous compressible flow, has been extended to 3-D and formulated in terms of the physical solution vector U, rather than Q, a non-physical one. The method has been evaluated for unsteady flow through a rotor blade passage of the power turbine of a turboprop.
Evolving cell models for systems and synthetic biology.
Cao, Hongqing; Romero-Campero, Francisco J; Heeb, Stephan; Cámara, Miguel; Krasnogor, Natalio
2010-03-01
This paper proposes a new methodology for the automated design of cell models for systems and synthetic biology. Our modelling framework is based on P systems, a discrete, stochastic and modular formal modelling language. The automated design of biological models comprising the optimization of the model structure and its stochastic kinetic constants is performed using an evolutionary algorithm. The evolutionary algorithm evolves model structures by combining different modules taken from a predefined module library and then it fine-tunes the associated stochastic kinetic constants. We investigate four alternative objective functions for the fitness calculation within the evolutionary algorithm: (1) equally weighted sum method, (2) normalization method, (3) randomly weighted sum method, and (4) equally weighted product method. The effectiveness of the methodology is tested on four case studies of increasing complexity including negative and positive autoregulation as well as two gene networks implementing a pulse generator and a bandwidth detector. We provide a systematic analysis of the evolutionary algorithm's results as well as of the resulting evolved cell models.
A psychophysical comparison of two methods for adaptive histogram equalization.
Zimmerman, J B; Cousins, S B; Hartzell, K M; Frisse, M E; Kahn, M G
1989-05-01
Adaptive histogram equalization (AHE) is a method for adaptive contrast enhancement of digital images. It is an automatic, reproducible method for the simultaneous viewing of contrast within a digital image with a large dynamic range. Recent experiments have shown that in specific cases, there is no significant difference in the ability of AHE and linear intensity windowing to display gray-scale contrast. More recently, a variant of AHE which limits the allowed contrast enhancement of the image has been proposed. This contrast-limited adaptive histogram equalization (CLAHE) produces images in which the noise content is not excessively enhanced, but in which sufficient contrast is provided for the visualization of structures within the image. Images processed with CLAHE have a more natural appearance and facilitate the comparison of different areas of an image. However, the reduced contrast enhancement of CLAHE may hinder the ability of an observer to detect the presence of some significant gray-scale contrast. In this report, a psychophysical observer experiment was performed to determine if there is a significant difference in the ability of AHE and CLAHE to depict gray-scale contrast. Observers were presented with computed tomography (CT) images of the chest processed with AHE and CLAHE. Subtle artificial lesions were introduced into some images. The observers were asked to rate their confidence regarding the presence of the lesions; these rating-scale data were analyzed using receiver operating characteristic (ROC) curve techniques. The ROC curves were compared for significant differences in the observers' performances. No difference was found in the abilities of AHE and CLAHE to depict contrast information.
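For readers wanting to experiment, a minimal sketch of the two enhancements compared above, using scikit-image's equalize_adapthist as a modern stand-in (not the study's implementation; kernel size and clip limits are illustrative):

```python
# AHE vs. CLAHE sketch: equalize_adapthist with a high clip limit behaves like
# unconstrained AHE, while a small clip limit gives the contrast-limited form.
import numpy as np
from skimage import exposure

def enhance(image, clip_limit):
    # image is expected as floats in [0, 1]; kernel_size sets the local region
    return exposure.equalize_adapthist(image, kernel_size=64, clip_limit=clip_limit)

ct_slice = np.random.rand(512, 512)          # stand-in for a chest CT slice
ahe = enhance(ct_slice, clip_limit=1.0)      # effectively unlimited enhancement
clahe = enhance(ct_slice, clip_limit=0.01)   # noise-limited enhancement
```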
Structural Analysis of Women’s Heptathlon
Gassmann, Freya; Fröhlich, Michael; Emrich, Eike
2016-01-01
The heptathlon combines the results of seven individual disciplines, nominally assuming an equal influence of each discipline on the overall result via the measured performances. Data analysis was based on the recorded individual performances of the 10 winning heptathletes in the World Athletics Championships from 1987 to 2013 and the Olympic Games from 1988 to 2012. In addition to descriptive analysis methods, correlations, bivariate and multivariate linear regressions, and panel data regressions were used. The transformation of the performances from seconds, centimeters, and meters into points showed that the individual disciplines do not equally affect the overall competition result. The currently valid conversion formula for the run, jump, and throw disciplines favors the sprint and jump disciplines but penalizes athletes in the 800 m run, javelin throw, and shot put. Furthermore, 21% to 48% of the variance of the sum of points can be attributed to the performances in the long jump, 200 m sprint, 100 m hurdles, and high jump. To balance the effects of the single disciplines in the heptathlon, the formula used to calculate points should be reevaluated. PMID:29910260
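For context, the conversion discussed above is the IAAF combined-events scoring: track events score a(b - t)^c points and field events a(m - b)^c. A minimal sketch follows; the 200 m coefficients are the commonly published IAAF values, but treat them as assumptions to check against the current tables.

```python
# IAAF combined-events scoring sketch; points are truncated to integers.
def track_points(t, a, b, c):
    return int(a * (b - t) ** c)      # t: time in seconds (faster -> more points)

def field_points(m, a, b, c):
    return int(a * (m - b) ** c)      # m: distance/height in the event's unit

# 200 m sprint with the commonly published coefficients (assumed here):
print(track_points(23.5, a=4.99087, b=42.5, c=1.81))   # ~1029 points
```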
Pareto Tracer: a predictor-corrector method for multi-objective optimization problems
NASA Astrophysics Data System (ADS)
Martín, Adanay; Schütze, Oliver
2018-03-01
This article proposes a novel predictor-corrector (PC) method for the numerical treatment of multi-objective optimization problems (MOPs). The algorithm, Pareto Tracer (PT), is capable of performing a continuation along the set of (local) solutions of a given MOP with k objectives, and can cope with equality and box constraints. Additionally, the first steps towards a method that manages general inequality constraints are also introduced. The properties of PT are first discussed theoretically and later numerically on several examples.
[Interlaboratory Study on Evaporation Residue Test for Food Contact Products (Report 1)].
Ohno, Hiroyuki; Mutsuga, Motoh; Abe, Tomoyuki; Abe, Yutaka; Amano, Homare; Ishihara, Kinuyo; Ohsaka, Ikue; Ohno, Haruka; Ohno, Yuichiro; Ozaki, Asako; Kakihara, Yoshiteru; Kobayashi, Hisashi; Sakuragi, Hiroshi; Shibata, Hiroshi; Shirono, Katsuhiro; Sekido, Haruko; Takasaka, Noriko; Takenaka, Yu; Tajima, Yoshiyasu; Tanaka, Aoi; Tanaka, Hideyuki; Tonooka, Hiroyuki; Nakanishi, Toru; Nomura, Chie; Haneishi, Nahoko; Hayakawa, Masato; Miura, Toshihiko; Yamaguchi, Miku; Watanabe, Kazunari; Sato, Kyoko
2018-01-01
An interlaboratory study was performed to evaluate the equivalence between an official method and a modified method of evaporation residue test using three food-simulating solvents (water, 4% acetic acid and 20% ethanol), based on the Japanese Food Sanitation Law for food contact products. Twenty-three laboratories participated, and tested the evaporation residues of nine test solutions as blind duplicates. For evaporation, a water bath was used in the official method, and a hot plate in the modified method. In most laboratories, the test solutions were heated until just prior to evaporation to dryness, and then allowed to dry under residual heat. Statistical analysis revealed that there was no significant difference between the two methods, regardless of the heating equipment used. Accordingly, the modified method provides performance equal to the official method, and is available as an alternative method.
NASA Astrophysics Data System (ADS)
Yu, Yali; Wang, Mengxia; Lima, Dimas
2018-04-01
In order to develop a novel alcoholism detection method, we proposed a magnetic resonance imaging (MRI)-based computer vision approach. We first use contrast equalization to increase the contrast of brain slices. Then, we perform Haar wavelet transform and principal component analysis. Finally, we use back propagation neural network (BPNN) as the classification tool. Our method yields a sensitivity of 81.71±4.51%, a specificity of 81.43±4.52%, and an accuracy of 81.57±2.18%. The Haar wavelet gives better performance than db4 wavelet and sym3 wavelet.
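A minimal sketch of the described pipeline after contrast equalization, with PyWavelets and scikit-learn as assumed stand-ins for the authors' tooling; the data shapes, component count, and network size are illustrative.

```python
# Haar wavelet features -> PCA reduction -> neural-network classifier.
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

def haar_features(slice2d):
    cA, (cH, cV, cD) = pywt.dwt2(slice2d, 'haar')    # single-level Haar transform
    return np.concatenate([c.ravel() for c in (cA, cH, cV, cD)])

slices = np.random.rand(40, 64, 64)                  # stand-in for brain MRI slices
labels = np.random.randint(0, 2, size=40)            # 1 = alcoholism, 0 = control

X = PCA(n_components=10).fit_transform(np.array([haar_features(s) for s in slices]))
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, labels)
```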
40 CFR 63.2262 - How do I conduct performance tests and establish operating requirements?
Code of Federal Regulations, 2014 CFR
2014-07-01
... method detection limit is less than or equal to 1 parts per million by volume, dry basis (ppmvd..., percent (determined for reconstituted wood product presses and board coolers as required in Table 4 to... = capture efficiency, percent (determined for reconstituted wood product presses and board coolers as...
40 CFR 63.2262 - How do I conduct performance tests and establish operating requirements?
Code of Federal Regulations, 2013 CFR
2013-07-01
... method detection limit is less than or equal to 1 parts per million by volume, dry basis (ppmvd..., percent (determined for reconstituted wood product presses and board coolers as required in Table 4 to... = capture efficiency, percent (determined for reconstituted wood product presses and board coolers as...
40 CFR 63.2262 - How do I conduct performance tests and establish operating requirements?
Code of Federal Regulations, 2012 CFR
2012-07-01
... method detection limit is less than or equal to 1 parts per million by volume, dry basis (ppmvd..., percent (determined for reconstituted wood product presses and board coolers as required in Table 4 to... = capture efficiency, percent (determined for reconstituted wood product presses and board coolers as...
Out-of-equilibrium protocol for Rényi entropies via the Jarzynski equality.
Alba, Vincenzo
2017-06-01
In recent years entanglement measures, such as the von Neumann and the Rényi entropies, provided a unique opportunity to access elusive features of quantum many-body systems. However, extracting entanglement properties analytically, experimentally, or in numerical simulations can be a formidable task. Here, by combining the replica trick and the Jarzynski equality we devise an alternative effective out-of-equilibrium protocol for measuring the equilibrium Rényi entropies. The key idea is to perform a quench in the geometry of the replicas. The Rényi entropies are obtained as the exponential average of the work performed during the quench. We illustrate an application of the method in classical Monte Carlo simulations, although it could be useful in different contexts, such as in quantum Monte Carlo, or experimentally in cold-atom systems. The method is most effective in the quasistatic regime, i.e., for a slow quench. As a benchmark, we compute the Rényi entropies in the Ising universality class in 1+1 dimensions. We find perfect agreement with the well-known conformal field theory predictions.
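In symbols, the two ingredients the protocol combines are the replica representation of the Rényi entropy and the Jarzynski equality (a sketch in assumed notation, not the paper's):

```latex
% Renyi entropy of subsystem A via replica partition functions, and the
% Jarzynski work identity used to estimate the free-energy difference.
\begin{align}
  S_n &= \frac{1}{1-n}\,\ln\operatorname{Tr}\rho_A^{\,n}
       = \frac{1}{1-n}\,\ln\frac{Z_n(A)}{Z^{n}}, \\
  \langle e^{-\beta W}\rangle &= e^{-\beta\,\Delta F}.
\end{align}
```

Averaging exp(-βW) over realizations of the quench in replica geometry then estimates the free-energy difference between the replicated and independent geometries, and with it Tr ρ_A^n and S_n.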
Information granules in image histogram analysis.
Wieclawek, Wojciech
2018-04-01
A concept of granular computing employed in intensity-based image enhancement is discussed. First, a weighted granular computing idea is introduced. Then, the implementation of this term in the image processing area is presented. Finally, multidimensional granular histogram analysis is introduced. The proposed approach is dedicated to digital images, especially medical images acquired by Computed Tomography (CT). Like the histogram equalization approach, this method is based on image histogram analysis. Unlike histogram equalization, however, it works on a selected range of the pixel intensities and is controlled by two parameters. Performance is tested on anonymized clinical CT series. Copyright © 2017 Elsevier Ltd. All rights reserved.
Parallel fast multipole boundary element method applied to computational homogenization
NASA Astrophysics Data System (ADS)
Ptaszny, Jacek
2018-01-01
In the present work, a fast multipole boundary element method (FMBEM) and a parallel computer code for the 3D elasticity problem are developed and applied to the computational homogenization of a solid containing spherical voids. The system of equations is solved using the GMRES iterative solver. The boundary of the body is discretized using quadrilateral serendipity elements with adaptive numerical integration. Operations related to a single GMRES iteration, performed by traversing the corresponding tree structure upwards and downwards, are parallelized using the OpenMP standard. The assignment of tasks to threads is based on the assumption that the tree nodes at which the moment transformations are initialized can be partitioned into disjoint sets of equal or approximately equal size and assigned to the threads. The achieved speedup as a function of the number of threads is examined.
Least-squares finite element method for fluid dynamics
NASA Technical Reports Server (NTRS)
Jiang, Bo-Nan; Povinelli, Louis A.
1989-01-01
An overview is given of new developments of the least squares finite element method (LSFEM) in fluid dynamics. Special emphasis is placed on the universality of LSFEM; the symmetry and positiveness of the algebraic systems obtained from LSFEM; the accommodation of LSFEM to equal order interpolations for incompressible viscous flows; and the natural numerical dissipation of LSFEM for convective transport problems and high speed compressible flows. The performance of LSFEM is illustrated by numerical examples.
Novel Dynamic Framed-Slotted ALOHA Using Litmus Slots in RFID Systems
NASA Astrophysics Data System (ADS)
Yim, Soon-Bin; Park, Jongho; Lee, Tae-Jin
Dynamic Framed Slotted ALOHA (DFSA) is one of the most popular protocols to resolve tag collisions in RFID systems. In DFSA, it is widely known that the optimal performance is achieved when the frame size is equal to the number of tags. So, a reader dynamically adjusts the next frame size according to the current number of tags. Thus it is important to estimate the number of tags exactly. In this paper, we propose a novel tag estimation and identification method using litmus (test) slots for DFSA. We compare the performance of the proposed method with those of existing methods by analysis. We conduct simulations and show that our scheme improves the speed of tag identification.
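A sketch of the conventional DFSA frame-sizing loop the proposal builds on (not the litmus-slot estimator itself): estimate the backlog from the previous frame's collision count and set the next frame size equal to it. The 2.39 multiplier is Schoute's classic estimate of tags per collided slot; treat it as an assumption.

```python
# Conventional DFSA frame sizing: Schoute's backlog estimate from the number
# of collided slots in the last frame; the next frame is sized to match it.
def next_frame_size(collided_slots):
    backlog = round(2.39 * collided_slots)   # ~2.39 tags expected per collision
    return max(1, backlog)                   # frame size = estimated tag count

print(next_frame_size(34))   # e.g. 34 collided slots -> frame of 81 slots
```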
NASA Technical Reports Server (NTRS)
Kwon, Youngwoo; Pavlidis, Dimitris; Tutt, Marcel N.
1991-01-01
A large-signal analysis method based on a harmonic balance technique and a 2-D cubic spline interpolation function has been developed and applied to the prediction of InP-based HEMT oscillator performance for frequencies extending up to the submillimeter-wave range. The large-signal analysis method uses a limited number of DC and small-signal S-parameter data and allows the accurate characterization of HEMT large-signal behavior. The method has been validated experimentally using load-pull measurements. Oscillation frequency, power performance, and load requirements are discussed, with an operation capability of 300 GHz predicted using state-of-the-art devices (fmax approximately equal to 450 GHz).
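A minimal sketch of the 2-D cubic-spline device model implied above, using SciPy's RectBivariateSpline as an assumed stand-in (the original implementation predates SciPy); the bias grid and current values are placeholders, not HEMT data.

```python
# Fit DC I-V samples on a (Vgs, Vds) bias grid with a bicubic spline,
# then evaluate the model at arbitrary off-grid bias points.
import numpy as np
from scipy.interpolate import RectBivariateSpline

vgs = np.linspace(-1.0, 0.0, 11)                  # gate bias grid (V)
vds = np.linspace(0.0, 2.0, 21)                   # drain bias grid (V)
ids = np.outer(vgs + 1.0, np.tanh(3.0 * vds))     # placeholder "measured" current
model = RectBivariateSpline(vgs, vds, ids, kx=3, ky=3)   # cubic in both axes
print(model(-0.5, 1.2))                           # current at an off-grid bias
```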
Direct Optimal Control of Duffing Dynamics
NASA Technical Reports Server (NTRS)
Oz, Hayrani; Ramsey, John K.
2002-01-01
The "direct control method" is a novel concept that is an attractive alternative and competitor to the differential-equation-based methods. The direct method is equally well applicable to nonlinear, linear, time-varying, and time-invariant systems. For all such systems, the method yields explicit closed-form control laws based on minimization of a quadratic control performance measure. We present an application of the direct method to the dynamics and optimal control of the Duffing system where the control performance measure is not restricted to a quadratic form and hence may include a quartic energy term. The results we present in this report also constitute further generalizations of our earlier work in "direct optimal control methodology." The approach is demonstrated for the optimal control of the Duffing equation with a softening nonlinear stiffness.
Tůma, Petr; Gojda, Jan
2015-08-01
A CE method with contactless conductivity detection has been developed for the clinical determination of the branched chain amino acids (BCAAs) valine, isoleucine and leucine in human blood plasma. The CE separation was performed in an optimised BGE with composition of 3.2 M acetic acid in 20% v/v methanol, pH 2.0. The achieved separation time was 125 s when using a capillary with an effective length of 14.7 cm, electric field intensity of 0.96 kV/cm and simultaneous application of a hydrodynamic pressure of 50 mbar. The separation efficiency in blood plasma equalled 461 000 theoretical plates/m for valine and isoleucine, and 455 000 theoretical plates/m for leucine; the detection limits are equal to 0.4 μM for all three amino acids. The RSD values for repeatability of the migration time equalled 0.1% for measurements during a single day and 0.3% for measurements on different days; the RSD values for repeatability of the peak areas equalled 2.3-2.6% for measurements during a single day and 2.7-4.6% for measurements on different days. It followed from the performed tests that the plasmatic levels of BCAAs attain a maximum 60 min after intravenous application of an infusion of BCAAs. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Beyond one-size-fits-all: Tailoring diversity approaches to the representation of social groups.
Apfelbaum, Evan P; Stephens, Nicole M; Reagans, Ray E
2016-10-01
When and why do organizational diversity approaches that highlight the importance of social group differences (vs. equality) help stigmatized groups succeed? We theorize that social group members' numerical representation in an organization, compared with the majority group, influences concerns about their distinctiveness, and consequently, whether diversity approaches are effective. We combine laboratory and field methods to evaluate this theory in a professional setting, in which White women are moderately represented and Black individuals are represented in very small numbers. We expect that focusing on differences (vs. equality) will lead to greater performance and persistence among White women, yet less among Black individuals. First, we demonstrate that Black individuals report greater representation-based concerns than White women (Study 1). Next, we observe that tailoring diversity approaches to these concerns yields greater performance and persistence (Studies 2 and 3). We then manipulate social groups' perceived representation and find that highlighting differences (vs. equality) is more effective when groups' representation is moderate, but less effective when groups' representation is very low (Study 4). Finally, we content-code the diversity statements of 151 major U.S. law firms and find that firms that emphasize differences have lower attrition rates among White women, whereas firms that emphasize equality have lower attrition rates among racial minorities (Study 5). (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Dereuddre, Rozemarijn; Van de Velde, Sarah; Bracke, Piet
2016-07-01
Despite generally low fertility rates in Europe, contraceptive behavior varies to a substantial extent. The dichotomy between Western, and Central and Eastern European countries is particularly relevant. Whereas the former are characterized by the widespread use of modern contraception, the latter show a high prevalence of traditional methods to control fertility. The current study aims to examine whether these differences can be attributed to differences in women's individual status, and in gender inequality at the couple and the country level. We combine data from the Generations and Gender Survey (2004-2011) and the Demographic Health Survey (2005-2009), covering seventeen European countries, to perform multinomial multilevel analyses. The results confirm that higher educated and employed women, and women who have an equal occupational status relative to their partner are more likely to use modern reversible contraception instead of no, traditional, or permanent methods. Absolute and relative employment are also positively related to using female instead of male methods. Furthermore, it is shown that higher levels of country-level gender equality are associated with a higher likelihood of using modern reversible and female methods, but not sterilization. Particularly country levels of gender equality are linked to the East-West divide in type of contraceptive method used. Our findings underscore that women's higher status is closely related to their use of effective, female contraception. Copyright © 2016 Elsevier Ltd. All rights reserved.
A method for normalizing pathology images to improve feature extraction for quantitative pathology.
Tam, Allison; Barker, Jocelyn; Rubin, Daniel
2016-01-01
With the advent of digital slide scanning technologies and the potential proliferation of large repositories of digital pathology images, many research studies can leverage these data for biomedical discovery and to develop clinical applications. However, quantitative analysis of digital pathology images is impeded by batch effects generated by varied staining protocols and staining conditions of pathological slides. To overcome this problem, this paper proposes a novel, fully automated stain normalization method to reduce batch effects and thus aid research in digital pathology applications. Their method, intensity centering and histogram equalization (ICHE), normalizes a diverse set of pathology images by first scaling the centroids of the intensity histograms to a common point and then applying a modified version of contrast-limited adaptive histogram equalization. Normalization was performed on two datasets of digitized hematoxylin and eosin (H&E) slides of different tissue slices from the same lung tumor, and one immunohistochemistry dataset of digitized slides created by restaining one of the H&E datasets. The ICHE method was evaluated based on image intensity values, quantitative features, and the effect on downstream applications, such as a computer aided diagnosis. For comparison, three methods from the literature were reimplemented and evaluated using the same criteria. The authors found that ICHE not only improved performance compared with un-normalized images, but in most cases showed improvement compared with previous methods for correcting batch effects in the literature. ICHE may be a useful preprocessing step in a digital pathology image processing pipeline.
Evaluating CMA equalization of SOQPSK-TG data for aeronautical telemetry
NASA Astrophysics Data System (ADS)
Cole-Rhodes, Arlene; KoneDossongui, Serge; Umuolo, Henry; Rice, Michael
2015-05-01
This paper presents the results of using a constant modulus algorithm (CMA) to recover shaped offset quadrature-phase shift keying (SOQPSK)-TG modulated data, which has been transmitted using the iNET data packet structure. This standard is defined and used for aeronautical telemetry. Based on the iNET-packet structure, the adaptive block processing CMA equalizer can be initialized using the minimum mean square error (MMSE) equalizer [3]. This CMA equalizer is being evaluated for use on iNET structured data, with initial tests being conducted on measured data which has been received in a controlled laboratory environment. Thus the CMA equalizer is applied at the receiver to data packets which have been experimentally generated in order to determine the feasibility of our equalization approach, and its performance is compared to that of the MMSE equalizer. Performance evaluation is based on computed bit error rate (BER) counts for these equalizers.
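A minimal NumPy sketch of the CMA tap update at the core of the equalizer described above; the step size and modulus target are illustrative, the center-spike initialization is a stand-in for the MMSE initialization the paper uses, and SOQPSK is treated here as approximately constant-modulus.

```python
# One stochastic-gradient CMA tap update (y = w.T @ x convention).
import numpy as np

def cma_step(w, x, mu=1e-3, R2=1.0):
    y = np.dot(w, x)                       # equalizer output for this sample
    e = y * (np.abs(y) ** 2 - R2)          # constant-modulus error term
    return w - mu * e * np.conj(x)         # gradient step on the CM cost

taps = np.zeros(11, dtype=complex)
taps[5] = 1.0    # center-spike stand-in for the paper's MMSE initialization
```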
Risk modelling in portfolio optimization
NASA Astrophysics Data System (ADS)
Lam, W. H.; Jaaman, Saiful Hafizah Hj.; Isa, Zaidi
2013-09-01
Risk management is very important in portfolio optimization. The mean-variance model has been used in portfolio optimization to minimize investment risk. The objective of the mean-variance model is to minimize the portfolio risk while achieving the target rate of return, with variance used as the risk measure. The purpose of this study is to compare the portfolio composition as well as performance between the optimal portfolio of the mean-variance model and an equally weighted portfolio, in which equal proportions are invested in each asset. The results show that the compositions of the mean-variance optimal portfolio and the equally weighted portfolio are different. Moreover, the mean-variance optimal portfolio performs better, giving a higher performance ratio than the equally weighted portfolio.
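A minimal sketch of the comparison: closed-form minimum-variance weights for a target return, obtained by solving the KKT system of the constrained quadratic program, against equal weights. All numbers are toy values, not the study's data.

```python
# Mean-variance weights: min w'Σw  s.t.  w'mu = target, w'1 = 1 (KKT solve).
import numpy as np

mu = np.array([0.08, 0.12, 0.10])            # expected returns (toy values)
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.06]])       # return covariance (toy values)
target, n = 0.10, 3

K = np.zeros((n + 2, n + 2))                 # KKT system of the constrained QP
K[:n, :n] = 2 * Sigma
K[:n, n] = K[n, :n] = mu                     # return-target constraint
K[:n, n + 1] = K[n + 1, :n] = 1.0            # budget constraint
w_mv = np.linalg.solve(K, np.r_[np.zeros(n), target, 1.0])[:n]
w_eq = np.full(n, 1.0 / n)                   # equally weighted benchmark
print(w_mv, w_mv @ Sigma @ w_mv, w_eq @ Sigma @ w_eq)   # weights and variances
```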
Shah, S. N. R.; Sulong, N. H. Ramli; Shariati, Mahdi; Jumaat, M. Z.
2015-01-01
Steel pallet rack (SPR) beam-to-column connections (BCCs) are largely responsible for preventing the sway failure of frames in the down-aisle direction. The overall geometry of beam end connectors commercially used in SPR BCCs varies and does not allow a generalized analytic approach for all types of beam end connectors; however, identifying the effects of the configuration, profile and sizes of the connection components could be a suitable approach for practical design engineers to predict the generalized behavior of any SPR BCC. This paper describes the experimental behavior of SPR BCCs tested using a double cantilever test set-up. Eight sets of specimens were identified based on variation in column thickness, beam depth and the number of tabs in the beam end connector in order to investigate the most influential factors affecting connection performance. Four tests were performed for each set to bring uniformity to the results, taking the total number of tests to thirty-two. The moment-rotation (M-θ) behavior, load-strain relationship, major failure modes and the influence of the selected parameters on connection performance were investigated. A comparative study to calculate the connection stiffness was carried out using the initial stiffness method, the slope to half-ultimate moment method and the equal area method. To determine the most appropriate method, the mean stiffness of all the tested connections and the variance in mean stiffness values across the three methods were calculated. The initial stiffness method is considered to overestimate the stiffness values when compared to the other two methods. The equal area method provided more consistent stiffness values and the lowest variance in the data set as compared to the other two methods. PMID:26452047
ERIC Educational Resources Information Center
Wheeler, Gregory D.
2010-01-01
Research indicates that many elementary students do not comprehend that the equal sign is an indication that an equality relation exists between two structures. Instead, they perceive the equal sign as an indication that a particular procedure is to be performed. As students mature, and as their exposure to the equal sign and equality relations in…
The report describes the implementation, theory of operation, and performance of an adjustable, 48 tap, surface wave transversal equalizer designed for the Rome Air Development Center, Floyd Site Radar. The transversal equalizer achieves equalization of system distortion by an array of fixed taps which provide leading and lagging echoes of the main signal. Equalization is achieved by the introduction of an equal but oppositely phased echo of
Edelstein, P H; Pasiecznik, K A; Yasui, V K; Meyer, R D
1982-01-01
Thirty-three strains of Legionella spp., 29 of which were L. pneumophila, were tested for their susceptibilities to erythromycin (EM), rosaramicin, tylosin, mycinamicin I (Sch-27897), and mycinamicin II (Sch-27896). Testing was performed using an agar dilution method with two different types of media: buffered charcoal yeast extract medium supplemented with 0.1% alpha-ketoglutarate (BCYE alpha) and filter-sterilized yeast extract medium with 0.1% alpha-ketoglutarate (BYE alpha). The minimal inhibitory concentrations (MICs) of the drugs tested relative to the MICs of erythromycin were: rosaramicin, MIC approximately equal to 0.2 EM MIC; tylosin, MIC approximately equal to 2 EM MIC; mycinamicin I, MIC approximately equal to 0.5 EM MIC; and mycinamicin II, MIC approximately equal to EM MIC. Both types of media caused equivalent partial inactivation of the macrolides which was apparently due entirely to pH effect. MICs on BCYE alpha were one to five times more than those observed on BYE alpha; this may be due to poorer growth on BYE alpha. PMID:7125633
EQUALS Investigations: Growth Patterns.
ERIC Educational Resources Information Center
Mayfield, Karen; Whitlow, Robert
EQUALS is a teacher education program that helps elementary and secondary educators acquire methods and materials to attract minority and female students to mathematics. The EQUALS program supports a problem-solving approach to mathematics which has students working in groups, uses active assessment methods, and incorporates a broad mathematics…
Bruner, L H; Carr, G J; Harbell, J W; Curren, R D
2002-06-01
An approach commonly used to measure new toxicity test method (NTM) performance in validation studies is to divide toxicity results into positive and negative classifications, and then identify true positive (TP), true negative (TN), false positive (FP) and false negative (FN) results. After this step is completed, the contingent probability statistics (CPS), namely sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV), are calculated. Although these statistics are widely used and often the only statistics used to assess the performance of toxicity test methods, there is little specific guidance in the validation literature on what values for these statistics indicate adequate performance. The purpose of this study was to begin developing data-based answers to this question by characterizing the CPS obtained from an NTM whose data have a completely random association with a reference test method (RTM). Determining the CPS of this worst-case scenario is useful because it provides a lower baseline from which the performance of an NTM can be judged in future validation studies. It also provides an indication of relationships in the CPS that help identify random or near-random relationships in the data. The results from this study of randomly associated tests show that the values obtained for the statistics vary significantly depending on the cut-offs chosen, that high values can be obtained for individual statistics, and that the different measures cannot be considered independently when evaluating the performance of an NTM. When the association between results of an NTM and RTM is random, the sum of the complementary pairs of statistics (sensitivity + specificity, NPV + PPV) is approximately 1, and the prevalence (i.e., the proportion of toxic chemicals in the population of chemicals) and PPV are equal. Given that combinations of high sensitivity-low specificity or low sensitivity-high specificity (i.e., a sum of sensitivity and specificity approximately equal to 1) indicate a lack of predictive capacity, an NTM having these performance characteristics should be considered no better at predicting toxicity than chance alone.
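For quick reference, the CPS computed from a 2x2 table; the counts below are made up to illustrate the random-association signature the abstract describes (sensitivity + specificity near 1, PPV equal to prevalence).

```python
def cps(tp, tn, fp, fn):
    """Contingent probability statistics from a 2x2 classification table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "prevalence": (tp + fn) / (tp + tn + fp + fn),
    }

# A random-like NTM: sensitivity 0.6 + specificity 0.4 = 1.0; PPV = prevalence = 0.5
print(cps(tp=30, tn=20, fp=30, fn=20))
```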
Gain equalization in cascaded optical amplifiers using short-period Bragg gratings
NASA Astrophysics Data System (ADS)
Rochette, Martin; Cortes, Pierre-Yves; Guy, Martin; LaRochelle, Sophie; Trepanier, Francois; Lauzon, Jocelyn
2000-12-01
Gain equalization of an amplifier is performed by introducing spectrally designed Bragg gratings in the mid-stage of a dual-stage erbium-doped fiber amplifier. The long-haul performance of the amplifier is evaluated using a 50 km recirculating loop. The results show a clear improvement in transmission quality when the gain is equalized.
Multigrid contact detection method
NASA Astrophysics Data System (ADS)
He, Kejing; Dong, Shoubin; Zhou, Zhaoyao
2007-03-01
Contact detection is a general problem in many physical simulations. This work presents an O(N) multigrid method for general contact detection problems (MGCD). The multigrid idea is integrated with contact detection problems. Both the time complexity and the memory consumption of the MGCD are O(N). Unlike other methods, whose efficiencies are strongly influenced by the object size distribution, the performance of the MGCD is insensitive to the object size distribution. We compare the MGCD with the no binary search (NBS) method and the multilevel boxing method in three dimensions for both time complexity and memory consumption. For objects of similar size, the MGCD is as good as the NBS method, and both outperform the multilevel boxing method regarding memory consumption. For objects of diverse size, the MGCD outperforms both the NBS method and the multilevel boxing method. We use the MGCD to solve the contact detection problem for a granular simulation system based on the discrete element method. From this granular simulation, we obtain the packing density of monosize packing and of binary packing with size ratio equal to 10. The packing density for monosize particles is 0.636. For binary packing with size ratio equal to 10, when the number of small particles is 300 times the number of big particles, a maximal packing density of 0.824 is achieved.
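A sketch of the single-level uniform-grid broad phase that grid-based methods like MGCD generalize across several resolutions (the multilevel machinery is omitted); the 2-D geometry and cell sizing are simplifying assumptions.

```python
# Uniform-grid broad phase: bin particle centers into cells of one diameter,
# then generate candidate pairs only from neighbouring cells.
from collections import defaultdict
from itertools import product

def candidate_pairs(centers, radius):
    cell = 2.0 * radius                          # cell edge = particle diameter
    grid = defaultdict(list)
    for i, (x, y) in enumerate(centers):
        grid[(int(x // cell), int(y // cell))].append(i)
    pairs = set()
    for (cx, cy), members in grid.items():
        for dx, dy in product((-1, 0, 1), repeat=2):
            for i in members:
                for j in grid.get((cx + dx, cy + dy), ()):
                    if i < j:
                        pairs.add((i, j))
    return pairs

print(candidate_pairs([(0.1, 0.1), (0.3, 0.1), (5.0, 5.0)], radius=0.25))  # {(0, 1)}
```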
Assessment of metal artifact reduction methods in pelvic CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdoli, Mehrsima; Mehranian, Abolfazl; Ailianou, Angeliki
2016-04-15
Purpose: Metal artifact reduction (MAR) produces images with improved quality potentially leading to confident and reliable clinical diagnosis and therapy planning. In this work, the authors evaluate the performance of five MAR techniques for the assessment of computed tomography images of patients with hip prostheses. Methods: Five MAR algorithms were evaluated using simulation and clinical studies. The algorithms included one-dimensional linear interpolation (LI) of the corrupted projection bins in the sinogram, two-dimensional interpolation (2D), a normalized metal artifact reduction (NMAR) technique, a metal deletion technique, and a maximum a posteriori completion (MAPC) approach. The algorithms were applied to ten simulated datasets as well as 30 clinical studies of patients with metallic hip implants. Qualitative evaluations were performed by two blinded experienced radiologists who ranked overall artifact severity and pelvic organ recognition for each algorithm by assigning scores from zero to five (zero indicating totally obscured organs with no structures identifiable and five indicating recognition with high confidence). Results: Simulation studies revealed that 2D, NMAR, and MAPC techniques performed almost equally well in all regions. LI falls behind the other approaches in terms of reducing dark streaking artifacts as well as preserving unaffected regions (p < 0.05). Visual assessment of clinical datasets revealed the superiority of NMAR and MAPC in the evaluated pelvic organs and in terms of overall image quality. Conclusions: Overall, all methods, except LI, performed equally well in artifact-free regions. Considering both clinical and simulation studies, 2D, NMAR, and MAPC seem to outperform the other techniques.
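A sketch of the simplest of the five algorithms, LI: linearly interpolating the metal-corrupted bins of each projection row. The boolean metal mask is assumed to be given (e.g., from forward-projecting a thresholded metal segmentation).

```python
# Row-wise linear interpolation across metal-corrupted sinogram bins.
import numpy as np

def li_mar(sinogram, metal_mask):
    out = sinogram.copy()
    cols = np.arange(sinogram.shape[1])
    for i in range(sinogram.shape[0]):           # one projection angle per row
        bad = metal_mask[i]
        if bad.any() and (~bad).any():
            out[i, bad] = np.interp(cols[bad], cols[~bad], sinogram[i, ~bad])
    return out
```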
Orthogonal Multi-Carrier DS-CDMA with Frequency-Domain Equalization
NASA Astrophysics Data System (ADS)
Tanaka, Ken; Tomeba, Hiromichi; Adachi, Fumiyuki
Orthogonal multi-carrier direct sequence code division multiple access (orthogonal MC DS-CDMA) is a combination of orthogonal frequency division multiplexing (OFDM) and time-domain spreading, while multi-carrier code division multiple access (MC-CDMA) is a combination of OFDM and frequency-domain spreading. In MC-CDMA, a good bit error rate (BER) performance can be achieved by using frequency-domain equalization (FDE), since the frequency diversity gain is obtained. On the other hand, the conventional orthogonal MC DS-CDMA fails to achieve any frequency diversity gain. In this paper, we propose a new orthogonal MC DS-CDMA that can obtain the frequency diversity gain by applying FDE. The conditional BER analysis is presented. The theoretical average BER performance in a frequency-selective Rayleigh fading channel is evaluated by the Monte-Carlo numerical computation method using the derived conditional BER and is confirmed by computer simulation of the orthogonal MC DS-CDMA signal transmission.
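A one-tap frequency-domain equalizer of the kind FDE refers to, in MMSE form (a sketch: the per-subcarrier channel response H and the noise variance are assumed known from estimation, and the despreading stages of the receiver are omitted).

```python
# One-tap MMSE frequency-domain equalization of a received block.
import numpy as np

def mmse_fde(rx_block, H, noise_var):
    Y = np.fft.fft(rx_block)
    W = np.conj(H) / (np.abs(H) ** 2 + noise_var)   # per-subcarrier MMSE weight
    return np.fft.ifft(W * Y)
```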
NASA Technical Reports Server (NTRS)
An, S. H.; Yao, K.
1986-01-01
The lattice algorithm has been employed in numerous adaptive filtering applications such as speech analysis/synthesis, noise canceling, spectral analysis, and channel equalization. In this paper its application to adaptive-array processing is discussed. The advantages are a fast convergence rate as well as computational accuracy independent of the noise and interference conditions. The results produced by this technique are compared to those obtained by the direct matrix inverse method.
Method for enhancing signals transmitted over optical fibers
Ogle, James W.; Lyons, Peter B.
1983-01-01
A method for spectral equalization of high frequency, spectrally broadband signals transmitted through an optical fiber. The broadband signal input is first dispersed by a grating. Narrow spectral components are collected into an array of equalizing fibers. The fibers serve as optical delay lines compensating for the material dispersion of each spectral component during transmission. The relative lengths of the individual equalizing fibers are selected to compensate for such prior dispersion. The outputs of the equalizing fibers couple the spectrally equalized light onto a suitable detector for subsequent electronic processing of the enhanced broadband signal.
Pharyngeal Pressure Generation during Tongue-Hold Swallows across Age Groups
ERIC Educational Resources Information Center
Doeltgen, Sebastian H.; Macrae, Phoebe; Huckabee, Maggie-Lee
2011-01-01
Purpose: To compare the effects of the tongue-hold swallowing maneuver on pharyngeal pressure generation in healthy young and elderly research volunteers. Method: Sixty-eight healthy research volunteers (young, n = 34, mean age = 26.8 years, SD = 5.5; elderly, n = 34, mean age = 72.6 years, SD = 4.8; sex equally represented) performed 5…
Nonlinear channel equalization for QAM signal constellation using artificial neural networks.
Patra, J C; Pal, R N; Baliarsingh, R; Panda, G
1999-01-01
Application of artificial neural networks (ANN's) to adaptive channel equalization in a digital communication system with 4-QAM signal constellation is reported in this paper. A novel computationally efficient single layer functional link ANN (FLANN) is proposed for this purpose. This network has a simple structure in which the nonlinearity is introduced by functional expansion of the input pattern by trigonometric polynomials. Because of input pattern enhancement, the FLANN is capable of forming arbitrarily nonlinear decision boundaries and can perform complex pattern classification tasks. Considering channel equalization as a nonlinear classification problem, the FLANN has been utilized for nonlinear channel equalization. The performance of the FLANN is compared with two other ANN structures [a multilayer perceptron (MLP) and a polynomial perceptron network (PPN)] along with a conventional linear LMS-based equalizer for different linear and nonlinear channel models. The effect of eigenvalue ratio (EVR) of input correlation matrix on the equalizer performance has been studied. The comparison of computational complexity involved for the three ANN structures is also provided.
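A minimal real-valued sketch of the FLANN idea: a trigonometric expansion of the input taps followed by a single LMS-trained linear layer. The paper works with complex 4-QAM signals and a classification output; the expansion order and step size here are illustrative.

```python
# Trigonometric functional expansion, then one LMS update of the linear layer.
import numpy as np

def expand(x):
    return np.concatenate([x, np.sin(np.pi * x), np.cos(np.pi * x),
                           np.sin(2 * np.pi * x), np.cos(2 * np.pi * x)])

def lms_step(w, x, desired, mu=0.01):
    phi = expand(x)                 # enhanced input pattern
    err = desired - w @ phi         # output error of the linear layer
    return w + mu * err * phi, err

w = np.zeros(5 * 5)                 # weights for a 5-tap input after expansion
```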
A method for normalizing pathology images to improve feature extraction for quantitative pathology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tam, Allison; Barker, Jocelyn; Rubin, Daniel
Purpose: With the advent of digital slide scanning technologies and the potential proliferation of large repositories of digital pathology images, many research studies can leverage these data for biomedical discovery and to develop clinical applications. However, quantitative analysis of digital pathology images is impeded by batch effects generated by varied staining protocols and staining conditions of pathological slides. Methods: To overcome this problem, this paper proposes a novel, fully automated stain normalization method to reduce batch effects and thus aid research in digital pathology applications. Their method, intensity centering and histogram equalization (ICHE), normalizes a diverse set of pathology images by first scaling the centroids of the intensity histograms to a common point and then applying a modified version of contrast-limited adaptive histogram equalization. Normalization was performed on two datasets of digitized hematoxylin and eosin (H&E) slides of different tissue slices from the same lung tumor, and one immunohistochemistry dataset of digitized slides created by restaining one of the H&E datasets. Results: The ICHE method was evaluated based on image intensity values, quantitative features, and the effect on downstream applications, such as a computer aided diagnosis. For comparison, three methods from the literature were reimplemented and evaluated using the same criteria. The authors found that ICHE not only improved performance compared with un-normalized images, but in most cases showed improvement compared with previous methods for correcting batch effects in the literature. Conclusions: ICHE may be a useful preprocessing step in a digital pathology image processing pipeline.
Model-based RSA of a femoral hip stem using surface and geometrical shape models.
Kaptein, Bart L; Valstar, Edward R; Spoor, Cees W; Stoel, Berend C; Rozing, Piet M
2006-07-01
Roentgen stereophotogrammetry (RSA) is a highly accurate three-dimensional measuring technique for assessing micromotion of orthopaedic implants. A drawback is that markers have to be attached to the implant. Model-based techniques have been developed to avoid the need for specially marked implants. We compared two model-based RSA methods with standard marker-based RSA techniques. The first model-based RSA method used surface models, and the second used elementary geometrical shape (EGS) models. We used a commercially available stem to perform experiments with a phantom as well as reanalysis of patient RSA radiographs. The data from the phantom experiment indicated that the accuracy and precision of the elementary geometrical shape model-based RSA method are equal to those of marker-based RSA. For model-based RSA using surface models, the accuracy is equal to that of marker-based RSA, but its precision is worse. We found no difference in accuracy and precision between the two model-based RSA techniques in clinical data. For this particular hip stem, EGS model-based RSA is a good alternative to marker-based RSA.
Fast convergent frequency-domain MIMO equalizer for few-mode fiber communication systems
NASA Astrophysics Data System (ADS)
He, Xuan; Weng, Yi; Wang, Junyi; Pan, Z.
2018-02-01
Space division multiplexing using few-mode fibers has been extensively explored to sustain continuous traffic growth. In few-mode fiber optical systems, both spatial and polarization modes are exploited to transmit parallel channels, thus increasing the overall capacity. However, signals on spatial channels inevitably suffer from intrinsic inter-modal coupling and large accumulated differential mode group delay (DMGD), which makes the spatial modes even harder to demultiplex. Many research articles have demonstrated that a frequency-domain adaptive multi-input multi-output (MIMO) equalizer can effectively compensate the DMGD and demultiplex the spatial channels with digital signal processing (DSP). However, the large accumulated DMGD usually requires a large number of training blocks for the initial convergence of adaptive MIMO equalizers, which decreases the overall system efficiency and can even degrade equalizer performance in fast-changing optical channels. The least mean square (LMS) algorithm is commonly used in MIMO equalization to dynamically demultiplex the spatial signals. We have proposed a signal power spectral density (PSD) dependent method and a noise PSD directed method to improve the convergence speed of the adaptive frequency-domain LMS algorithm. We have also proposed a frequency-domain recursive least squares (RLS) algorithm to further increase the convergence speed of the MIMO equalizer, at the cost of greater hardware complexity. In this paper, we compare the hardware complexity and convergence speed of the signal PSD dependent and noise PSD directed algorithms against the conventional frequency-domain LMS algorithm. In our numerical study of a three-mode 112 Gbit/s PDM-QPSK optical system with 3000 km transmission, the noise PSD directed and signal PSD dependent methods improved the convergence speed by 48.3% and 36.1%, respectively, at the cost of 17.2% and 10.7% higher hardware complexity. We also compare the frequency-domain RLS algorithm against the conventional frequency-domain LMS algorithm. Our numerical study shows that, in a three-mode 224 Gbit/s PDM-16-QAM system with 3000 km transmission, the RLS algorithm improved the convergence speed by 53.7% over the conventional frequency-domain LMS algorithm.
Aircraft interior noise reduction by alternate resonance tuning
NASA Technical Reports Server (NTRS)
Bliss, Donald B.; Gottwald, James A.; Srinivasan, Ramakrishna; Gustaveson, Mark B.
1990-01-01
Existing interior noise reduction techniques for aircraft fuselages perform reasonably well at higher frequencies, but are inadequate at lower frequencies, particularly with respect to the low blade passage harmonics with high forcing levels found in propeller aircraft. A method is being studied which considers an aircraft fuselage lined with panels alternately tuned to frequencies above and below the frequency that must be attenuated. Adjacent panels would oscillate at equal amplitude, to give equal source strength, but with opposite phase. Provided these adjacent panels are acoustically compact, the resulting cancellation causes the interior acoustic modes to become cut off, and therefore non-propagating and evanescent. This interior noise reduction method, called Alternate Resonance Tuning (ART), is currently being investigated both theoretically and experimentally. This new concept has potential application to reducing interior noise due to the propellers in advanced turboprop aircraft as well as in existing aircraft configurations.
101 Short Problems from EQUALS = 101 Problemas Cortos del programma EQUALS.
ERIC Educational Resources Information Center
Stenmark, Jean Kerr, Ed.
EQUALS is a teacher advisory program that helps elementary and secondary educators acquire methods and materials to attract minority and female students to mathematics. The program supports a problem-solving approach to mathematics, including having students working in groups, using active assessment methods, and incorporating a broad mathematics…
Fisher, James; Steele, James; Campos, Mario H.; Silva, Marcelo H.; Paoli, Antonio; Giessing, Jurgen; Bottaro, Martim
2018-01-01
Background The objective of the present study was to compare the effects of equal-volume resistance training (RT) performed with different training frequencies on muscle size and strength in trained young men. Methods Sixteen men with at least one year of RT experience were divided into two groups, G1 and G2, that trained each muscle group once and twice a week, respectively, for 10 weeks. Elbow flexor muscle thickness (MT) was measured using B-mode ultrasound, and concentric peak torque (PT) of the elbow extensors and flexors was assessed by an isokinetic dynamometer. Results ANOVA did not reveal group-by-time interactions for any variable, indicating no difference between groups in the changes in MT or PT of the elbow flexors and extensors. Notwithstanding, MT of the elbow flexors increased significantly (3.1%, P < 0.05) only in G1. PT of the elbow flexors and extensors did not increase significantly in either group. Discussion The present study suggests that there were no differences in the results promoted by equal-volume resistance training performed once or twice a week on upper body muscle strength in trained men. Only the group performing one session per week significantly increased the MT of their elbow flexors. However, with either once or twice a week training, adaptations appear largely minimal in previously trained males.
NASA Astrophysics Data System (ADS)
Wang, Fumin; Shi, Meng; Chi, Nan
2016-10-01
Visible light communication (VLC) is one of the most active research directions in wireless communication: it is safe, fast, and free of electromagnetic interference. We carried out visible light communication using DFTS-OFDM modulation through the headset port of a smartphone and applied equalization techniques to compensate for the channel. In this paper, we first test the feasibility of the DFTS-OFDM modulated VLC system by analyzing the constellation and the transmission error rate via the headset interface of the smartphone. We then vary the peak value of the signal generated by the AWG as well as the static current to find the best operating point. We tested the effect of changing the up-sampling factor on the BER performance of the system, and compared the BER performance of 16QAM and 8QAM modulation under different equalization methods. We also performed experiments to determine how distance affects communication performance and the maximum communication rate that can be achieved. We successfully demonstrated a visible light communication system, detected by the headset port of a smartphone, for a 32QAM DFTS-OFDM modulated signal at 27.5 kb/s over a 3-meter free-space transmission. The light source is a traditional phosphorescent white LED. This result is, as far as we know, the highest data rate of a VLC system using headset-port detection.
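For readers unfamiliar with DFTS-OFDM, the minimal transmit-side sketch below shows the idea: the QAM symbols are DFT-precoded before the usual subcarrier mapping and IFFT, which lowers the peak-to-average power ratio and suits LED drivers. Function and parameter names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dfts_ofdm_modulate(symbols, n_fft):
    """DFT-spread OFDM transmit sketch: DFT-precode the QAM symbols,
    map them onto the first len(symbols) subcarriers, then IFFT."""
    spread = np.fft.fft(symbols) / np.sqrt(len(symbols))   # DFT precoding
    grid = np.zeros(n_fft, dtype=complex)
    grid[:len(symbols)] = spread                           # subcarrier mapping
    return np.fft.ifft(grid) * np.sqrt(n_fft)              # time-domain block
```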
Etard, Christelle; Joshi, Swarnima; Stegmaier, Johannes; Mikut, Ralf; Strähle, Uwe
2017-12-01
A bottleneck in CRISPR/Cas9 genome editing is variable efficiencies of in silico-designed gRNAs. We evaluated the sensitivity of the TIDE method (Tracking of Indels by DEcomposition) introduced by Brinkman et al. in 2014 for assessing the cutting efficiencies of gRNAs in zebrafish. We show that this simple method, which involves bulk polymerase chain reaction amplification and Sanger sequencing, is highly effective in tracking well-performing gRNAs in pools of genomic DNA derived from injected embryos. The method is equally effective for tracing INDELs in heterozygotes.
Method for solvent extraction with near-equal density solutions
Birdwell, Joseph F.; Randolph, John D.; Singh, S. Paul
2001-01-01
Disclosed is a modified centrifugal contactor for separating solutions of near equal density. The modified contactor has a pressure differential establishing means that allows the application of a pressure differential across fluid in the rotor of the contactor. The pressure differential is such that it causes the boundary between solutions of near-equal density to shift, thereby facilitating separation of the phases. Also disclosed is a method of separating solutions of near-equal density.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zawisza, I; Ren, L; Yin, F
Purpose: Respiratory-gated radiotherapy and dynamic tracking employ real-time imaging and surrogate motion-monitoring methods with tumor motion prediction in advance of real time. This study investigated the effect of respiratory motion data length on the prediction accuracy of tumor motion. Methods: Predictions generated from the algorithm are validated against a one-dimensional surrogate signal of amplitude versus time. Prediction consists of three major components: extracting top-ranked subcomponents from training data matching the last respiratory cycle; calculating weighting factors from the best-matched subcomponents; and fusing the data following the best-matched subcomponents with the respective weighting factors to form predictions. Predictions for one respiratory cycle (∼3-6 seconds) were assessed using 351 patient datasets from the respiratory management device. Performance was evaluated via the correlation coefficient and root mean square error (RMSE) between the prediction and the final respiratory cycle. Results: Respiratory prediction results fell into two classes: for 70 cycles or less, the best predictions were obtained using relative weighting, while data with more than 70 cycles were predicted similarly well using relative and derivative relative weighting. For 70 respiratory cycles or less, the average correlation between prediction and final respiratory cycle was 0.9999±0.0001, 0.9999±0.0001, 0.9988±0.0003, 0.9985±0.0023, and 0.9981±0.0023, with RMSE values of 0.0091±0.0030, 0.0091±0.0030, 0.0305±0.0051, 0.0299±0.0259, and 0.0299±0.0259 for the equal, relative, pattern, derivative equal, and derivative relative weighting methods, respectively. The total number of best predictions for each method was 37, 65, 20, 22, and 22, respectively. For data with more than 70 cycles, the average correlation was 0.9999±0.0001, 0.9999±0.0001, 0.9988±0.0004, 0.9988±0.0020, and 0.9988±0.0020, with RMSE values of 0.0081±0.0031, 0.0082±0.0033, 0.0306±0.0056, 0.0218±0.0222, and 0.0218±0.0222, respectively. The total number of best predictions for each method was 24, 44, 42, 30, and 45, respectively. Conclusion: The prediction algorithms are effective in estimating surrogate motion in advance. These results indicate an advantage in using relative prediction for shorter data and either relative or derivative relative prediction for longer data.
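A minimal sketch of the template-matching prediction idea described above, assuming a simple inverse-RMSE ("relative"-style) weighting; the equal, pattern, and derivative variants compared in the study would change only how the weights are computed. All names are illustrative.

```python
import numpy as np

def predict_next_cycle(signal, cycle_len, horizon, n_best=5):
    """Fuse the futures of the n_best history segments that best match
    the last cycle, weighted by inverse matching error."""
    query = signal[-cycle_len:]
    scores, futures = [], []
    for start in range(len(signal) - cycle_len - horizon):
        seg = signal[start:start + cycle_len]
        scores.append(np.sqrt(np.mean((seg - query) ** 2)))   # match quality (RMSE)
        futures.append(signal[start + cycle_len:start + cycle_len + horizon])
    idx = np.argsort(scores)[:n_best]                 # top-ranked subcomponents
    w = 1.0 / (np.asarray(scores)[idx] + 1e-9)        # inverse-RMSE weights
    return np.average(np.asarray(futures)[idx], axis=0, weights=w)
```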
14 CFR 1253.515 - Compensation.
Code of Federal Regulations, 2010 CFR
2010-01-01
... rate less than that paid to employees of the opposite sex for equal work on jobs the performance of which requires equal skill, effort, and responsibility, and that are performed under similar working...
Code of Federal Regulations, 2010 CFR
2010-01-01
... less than that paid to employees of the opposite sex for equal work on jobs the performance of which requires equal skill, effort, and responsibility, and that are performed under similar working conditions. ...
Code of Federal Regulations, 2010 CFR
2010-07-01
... less than that paid to employees of the opposite sex for equal work on jobs the performance of which requires equal skill, effort, and responsibility, and that are performed under similar working conditions. ...
Code of Federal Regulations, 2010 CFR
2010-10-01
... less than that paid to employees of the opposite sex for equal work on jobs the performance of which requires equal skill, effort, and responsibility, and that are performed under similar working conditions. ...
41 CFR 101-4.515 - Compensation.
Code of Federal Regulations, 2010 CFR
2010-07-01
... than that paid to employees of the opposite sex for equal work on jobs the performance of which requires equal skill, effort, and responsibility, and that are performed under similar working conditions. ...
Code of Federal Regulations, 2010 CFR
2010-10-01
... less than that paid to employees of the opposite sex for equal work on jobs the performance of which requires equal skill, effort, and responsibility, and which are performed under similar working conditions...
China's rural public health system performance: a cross-sectional study.
Tian, Miaomiao; Feng, Da; Chen, Xi; Chen, Yingchun; Sun, Xi; Xiang, Yuanxi; Yuan, Fang; Feng, Zhanchun
2013-01-01
In the past three years, the Government of China has initiated health reform with rural public health system construction to achieve equal access to public health services for rural residents. The study assessed trends in public health services accessibility in rural China from 2008 to 2010, as well as the current performance of China's rural public health system. The data were collected from a cross-sectional survey conducted in 2011, which used a multistage stratified random sampling method to select 12 counties and 118 villages across China. Three sets of indicators were chosen to measure the trends in access to coverage, equality, and effectiveness of rural public health services. Data were disaggregated by province and by participant group: hypertension patients, children, the elderly, and women. We examined changes in equality across and within regions. China's rural public health system did well in safe drinking water, childhood vaccinations, and hospital delivery for women. However, hypertension patients with low income were less likely to receive regular healthcare from primary health institutions than those with middle or high income. In 2010, the hypertension treatment rate in Qinghai, in Western China, was just 53.22%, much lower than that in Zhejiang, in Eastern China (97.27%). Meanwhile, performance was low on the effectiveness of rural public health services: the rate of effective treatment controlling blood pressure within the normal range was just 39.7%. The implementation of health reform since 2009 has led public health development in the right direction. Physical access to public health services increased from 2008 to 2010. However, inter- and intra-regional inequalities in public health system coverage still exist. Strategies to improve the quality and equality of public health services in rural China need to be considered.
Omorczyk, Jarosław; Nosiadek, Leszek; Ambroży, Tadeusz; Nosiadek, Andrzej
2015-01-01
The main aim of this study was to verify the usefulness of selected simple methods of recording and fast biomechanical analysis, performed by artistic gymnastics judges, in assessing a gymnast's movement technique. The participants comprised six artistic gymnastics judges, who assessed back handsprings using two methods: real-time observation and frame-by-frame video analysis. They also determined flexion angles of the knee and hip joints using a computer program. With the real-time observation method, the judges gave a total of 5.8 error points, with an arithmetic mean of 0.16 points, for flexion of the knee joints; with the frame-by-frame video analysis method, the total amounted to 8.6 error points and the mean to 0.24 error points. For excessive flexion of the hip joints, the sum of the error values was 2.2 error points and the arithmetic mean 0.06 error points during real-time observation, while with the frame-by-frame analysis method the sum equaled 10.8 and the mean 0.30 error points. Error values obtained through frame-by-frame video analysis of movement technique were thus higher than those obtained through real-time observation. The judges were able to indicate, with good accuracy, the frame in which maximal joint flexion occurred. Both real-time observation and high-speed video analysis performed without determining the exact joint angle were found to be insufficient tools for improving the quality of judging.
Vareková, R Svobodová; Koca, J
2006-02-01
The most common way to calculate charge distribution in a molecule is ab initio quantum mechanics (QM). Some faster alternatives to QM have also been developed, the so-called equalization methods EEM and ABEEM, which are based on DFT. We have implemented and optimized the EEM and ABEEM methods and created the EEM SOLVER and ABEEM SOLVER programs. The most time-consuming part of the equalization methods was found to be the reduction of the matrix of the equation system generated by the method. Therefore, for both methods this part was replaced by the parallel algorithm WIRS and implemented within the PVM environment. The parallelized versions of the EEM SOLVER and ABEEM SOLVER programs showed promising results, especially on a single computer with several processors (compact PVM). The implemented programs are available through the Web page http://ncbr.chemi.muni.cz/~n19n/eem_abeem.
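For orientation, the core of EEM is the solution of one linear system per molecule. The schematic sketch below builds the standard EEM matrix (hardness terms on the diagonal, inverse interatomic distances off-diagonal, and a Lagrange row enforcing the total charge) and solves it with a dense solver; parameter names and atomic units are assumptions. It is this solve/reduction step that the WIRS parallelization described above targets.

```python
import numpy as np

def eem_charges(chi, eta, coords, q_total=0.0):
    """Solve the EEM linear system for atomic charges (schematic, a.u.).

    The (N+1)x(N+1) matrix holds hardness terms on the diagonal,
    inverse interatomic distances off-diagonal, and a final row/column
    enforcing sum(q) = q_total through a Lagrange multiplier."""
    n = len(chi)
    A = np.zeros((n + 1, n + 1))
    for i in range(n):
        A[i, i] = 2.0 * eta[i]
        for j in range(n):
            if j != i:
                A[i, j] = 1.0 / np.linalg.norm(coords[i] - coords[j])
    A[:n, n] = 1.0                      # Lagrange multiplier column
    A[n, :n] = 1.0                      # total-charge constraint row
    b = np.append(-np.asarray(chi), q_total)
    return np.linalg.solve(A, b)[:n]    # drop the multiplier
```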
29 CFR 1620.18 - Jobs performed under similar working conditions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 29 Labor 4 2012-07-01 2012-07-01 false Jobs performed under similar working conditions. 1620.18... THE EQUAL PAY ACT § 1620.18 Jobs performed under similar working conditions. (a) In general. In order for the equal pay standard to apply, the jobs are required to be performed under similar working...
29 CFR 1620.18 - Jobs performed under similar working conditions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 29 Labor 4 2014-07-01 2014-07-01 false Jobs performed under similar working conditions. 1620.18... THE EQUAL PAY ACT § 1620.18 Jobs performed under similar working conditions. (a) In general. In order for the equal pay standard to apply, the jobs are required to be performed under similar working...
29 CFR 1620.18 - Jobs performed under similar working conditions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 29 Labor 4 2011-07-01 2011-07-01 false Jobs performed under similar working conditions. 1620.18... THE EQUAL PAY ACT § 1620.18 Jobs performed under similar working conditions. (a) In general. In order for the equal pay standard to apply, the jobs are required to be performed under similar working...
29 CFR 1620.18 - Jobs performed under similar working conditions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 29 Labor 4 2013-07-01 2013-07-01 false Jobs performed under similar working conditions. 1620.18... THE EQUAL PAY ACT § 1620.18 Jobs performed under similar working conditions. (a) In general. In order for the equal pay standard to apply, the jobs are required to be performed under similar working...
29 CFR 1620.18 - Jobs performed under similar working conditions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 4 2010-07-01 2010-07-01 false Jobs performed under similar working conditions. 1620.18... THE EQUAL PAY ACT § 1620.18 Jobs performed under similar working conditions. (a) In general. In order for the equal pay standard to apply, the jobs are required to be performed under similar working...
Code of Federal Regulations, 2010 CFR
2010-01-01
... less than that paid to employees of the opposite sex for equal work on jobs the performance of which requires equal skill, effort, and responsibility, and that are performed under similar working conditions. ...
Method for enhancing signals transmitted over optical fibers
Ogle, J.W.; Lyons, P.B.
1981-02-11
A method for spectral equalization of high frequency spectrally broadband signals transmitted through an optical fiber is disclosed. The broadband signal input is first dispersed by a grating. Narrow spectral components are collected into an array of equalizing fibers. The fibers serve as optical delay lines compensating for material dispersion of each spectral component during transmission. The relative lengths of the individual equalizing fibers are selected to compensate for such prior dispersion. The outputs of the equalizing fibers couple the spectrally equalized light onto a suitable detector for subsequent electronic processing of the enhanced broadband signal.
Holman, Per Arne; Grepperud, Sverre; Tanum, Lars
2011-03-01
An important objective of many health care systems is to ensure equal access to health care services. One way of achieving this is by having universal coverage (low or absent out-of-pocket payments) combined with tax-financed transfers (block grants) to providers with a catchment area responsibility. However, a precondition for equal access in such systems is that providers have similar capacities -- meaning that budgets must be perfectly adjusted for variations in treatment costs that are not under the control of providers (risk adjustment). This study presents a method that can be applied to adjust global budgets for variation in health risks. The method is flexible in the sense that it takes into account the possibility that variation in needs may depend on the degree of rationing in supplying health care services. The information available from referrals is used to risk-adjust budgets. An expert panel ranks each individual on the basis of need. The ranking is performed according to priority-setting criteria for health care services. In addition, the panel suggests an adequate treatment profile (treatment category and treatment intensity) for each referral reviewed. By coupling the treatment profiles with cost information, risk-adjusted budgets are derived. Only individuals found to have a sufficiently high ranking (degree of need) affect the derived risk-adjustment formula. The method is applied to four Regional Psychiatric Centers (RPCs) supplying (i) outpatient services, (ii) day-patient care, and (iii) inpatient treatment for adults. The budget reallocations needed (positive and negative) to achieve equal capacity across providers range between 10% and 42% of the current budgets. Our method can identify variations across providers in actual capacity and suggests budget reallocations that equalize capacities across providers. For the RPCs considered in this analysis, significant deviations in capacities were identified across providers and catchment areas. Thus, significant social gains can be achieved, in terms of improved equal access, if our methodology is applied to risk-adjust global budgets.
Statistical efficiency of adaptive algorithms.
Widrow, Bernard; Kamenetsky, Max
2003-01-01
The statistical efficiency of a learning algorithm applied to the adaptation of a given set of variable weights is defined as the ratio of the quality of the converged solution to the amount of data used in training the weights. Statistical efficiency is computed by averaging over an ensemble of learning experiences. A high quality solution is very close to optimal, while a low quality solution corresponds to noisy weights and less than optimal performance. In this work, two gradient descent adaptive algorithms are compared, the LMS algorithm and the LMS/Newton algorithm. LMS is simple and practical, and is used in many applications worldwide. LMS/Newton is based on Newton's method and the LMS algorithm. LMS/Newton is optimal in the least squares sense. It maximizes the quality of its adaptive solution while minimizing the use of training data. Many least squares adaptive algorithms have been devised over the years, but no other least squares algorithm can give better performance, on average, than LMS/Newton. LMS is easily implemented, but LMS/Newton, although of great mathematical interest, cannot be implemented in most practical applications. Because of its optimality, LMS/Newton serves as a benchmark for all least squares adaptive algorithms. The performances of LMS and LMS/Newton are compared, and it is found that under many circumstances, both algorithms provide equal performance. For example, when both algorithms are tested with statistically nonstationary input signals, their average performances are equal. When adapting with stationary input signals and with random initial conditions, their respective learning times are on average equal. However, under worst-case initial conditions, the learning time of LMS can be much greater than that of LMS/Newton, and this is the principal disadvantage of the LMS algorithm. But the strong points of LMS are ease of implementation and optimal performance under important practical conditions. For these reasons, the LMS algorithm has enjoyed very widespread application. It is used in almost every modem for channel equalization and echo cancelling. Furthermore, it is related to the famous backpropagation algorithm used for training neural networks.
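A minimal sketch of the plain LMS update discussed above; the LMS/Newton variant would premultiply the gradient step by the inverse input autocorrelation matrix R⁻¹, which is known in analysis and simulation but rarely available in practice. Names and shapes are illustrative.

```python
import numpy as np

def lms(x, d, n_taps, mu):
    """Plain LMS: w <- w + mu * e[n] * x_vec (steepest descent on MSE)."""
    w = np.zeros(n_taps)
    e = np.zeros(len(x))
    for n in range(n_taps, len(x)):
        x_vec = x[n - n_taps:n][::-1]     # most recent sample first
        e[n] = d[n] - w @ x_vec           # error against the desired signal
        w += mu * e[n] * x_vec
        # LMS/Newton would instead apply: w += mu * R_inv @ (e[n] * x_vec),
        # with R_inv the inverse input autocorrelation matrix.
    return w, e
```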
Gender (in)equality among employees in elder care: implications for health
2012-01-01
Introduction Gendered practices of working life create gender inequalities through horizontal and vertical gender segregation in work, which may lead to inequalities in health between women and men. Gender equality could therefore be a key element of health equity in working life. Our aim was to analyze what gender (in)equality means for the employees at a woman-dominated workplace and discuss possible implications for health experiences. Methods All caregiving staff at two workplaces in elder care within a municipality in the north of Sweden were invited to participate in the study. Forty-five employees participated, 38 women and 7 men. Seven focus group discussions were performed and led by a moderator. Qualitative content analysis was used to analyze the focus groups. Results We identified two themes. "Advocating gender equality in principle" showed how gender (in)equality was seen as a structural issue not connected to the individual health experiences. "Justifying inequality with individualism" showed how the caregivers focused on personalities and interests as a justification of gender inequalities in work division. The justification of gender inequality resulted in a gendered work division which may be related to health inequalities between women and men. Gender inequalities in work division were primarily understood in terms of personality and interests and not in terms of gender. Conclusion The health experience of the participants was affected by gender (in)equality in terms of a gendered work division. However, the participants did not see the gendered work division as a gender equality issue. Gender perspectives are needed to improve the health of the employees at the workplaces through shifting from individual to structural solutions. A healthy-setting approach considering gender relations is needed to achieve gender equality and fairness in health status between women and men. PMID:22217427
Research on Aircraft Target Detection Algorithm Based on Improved Radial Gradient Transformation
NASA Astrophysics Data System (ADS)
Zhao, Z. M.; Gao, X. M.; Jiang, D. N.; Zhang, Y. Q.
2018-04-01
Aiming at the problem that targets may appear in different orientations in unmanned aerial vehicle (UAV) images, this paper studies a target detection algorithm based on rotation-invariant features and proposes a RIFF (Rotation-Invariant Fast Features) method, accelerated by a look-up table and polar coordinates, for aircraft target detection. Experiments show that the detection performance of this method is essentially equal to that of the original RIFF, while the operating efficiency is greatly improved.
An anti-barotrauma system for preventing barotrauma during hyperbaric oxygen therapy.
Song, Moon; Hoon, Se Jeon; Shin, Tae Min
2018-01-01
In the present study, a tympanometry-based anti-barotrauma (ABT) device was designed using eardrum admittance measurements to develop an objective method of preventing barotrauma that occurs during hyperbaric oxygen (HBO₂) therapy. The middle ear space requires active equalization, and barotrauma of these tissues during HBO₂ therapy constitutes the most common treatment-associated injury. Decongestant nasal sprays and nasal steroids are used, but their efficacy in preventing middle ear barotrauma (MEB) during HBO₂ treatment is questionable. Accordingly, a tympanometry-based ABT device was designed using eardrum admittance measurements to develop an objective method for preventing MEB, which causes pain and injury, and represents one of the principal reasons for patients to stop treatment. This study was conducted to test a novel technology that can be used to measure transmembrane pressures, and provide chamber attendants with real-time feedback regarding the patient's equalization status prior to the onset of pain or injury. Eardrum admittance values were measured according to pressure changes inside a hyperbaric oxygen chamber while the system was fitted to the subject. When the pressure increased to above 200 daPa, eardrum admittance decreased to 16.255% of prepressurization levels. After pressure equalization was achieved, eardrum admittance recovered to 95.595% of prepressurization levels. A one-way repeated measures analysis of variance contrast test was performed on eardrum admittance before pressurization versus during pressurization, and before pressurization versus after pressure equalization. The analysis revealed significant differences at all points during pressurization (P<0.001), but no significant difference after pressure equalization was achieved. This ABT device can provide objective feedback reflecting eardrum condition to the patient and the chamber operator during HBO₂ therapy. Copyright© Undersea and Hyperbaric Medical Society.
The Application of FIA-based Data to Wildlife Habitat Modeling: A Comparative Study
Edwards, Thomas C., Jr.; Moisen, Gretchen G.; Frescino, Tracey S.; Schultz, Randall J.
2005-01-01
We evaluated the capability of two types of models, one based on spatially explicit variables derived from FIA data and one using so-called traditional habitat evaluation methods, for predicting the presence of cavity-nesting bird habitat in Fishlake National Forest, Utah. Both models performed equally well, in measures of predictive accuracy, with the FIA-based model...
ERIC Educational Resources Information Center
Robinson-Cimpian, Joseph P.; Lubienski, Sarah Theule; Ganley, Colleen M.; Copur-Gencturk, Yasemin
2014-01-01
Our target article (Robinson-Cimpian, Lubienski, Ganley, & Copur-Gencturk, 2014) used nationally representative data to examine the development of gender gaps in math achievement. We found that when boys and girls demonstrate equivalent math test performance and are perceived by their teachers to be equally well behaved and engaged with the…
ERIC Educational Resources Information Center
Shah, Sonali; Wallis, Mick; Conor, Fiona; Kiszely, Phillip
2015-01-01
The transfer of disability history research to new generation audiences is crucial to allow lessons from the past to impact the future inclusion and equality agenda. As today's children are the policy makers and the legislators of tomorrow, it is important for them to have opportunities to engage with disability life story narratives to understand…
Ahmadi, Koorosh; Sedaghat, Mohammad; Safdarian, Mahdi; Hashemian, Amir-Masoud; Nezamdoust, Zahra; Vaseie, Mohammad; Rahimi-Movaghar, Vafa
2013-01-01
Since appropriate and timely methods in trauma care have an important impact on patients' outcomes, we evaluated the effect of the Advanced Trauma Life Support (ATLS) program on medical interns' performance in simulated trauma patient management. A descriptive and analytical before-and-after study was conducted on 24 randomly selected undergraduate medical interns from Imam Reza Hospital in Mashhad, Iran. On the first day, we assessed the interns' clinical knowledge and practical skill performance in confronting simulated trauma patients. After 2 days of ATLS training, we repeated the assessment and evaluated their scores again on the fourth day. The pre- and post-ATLS findings were compared using SPSS version 15.0 software; P values less than 0.05 were considered statistically significant. Our findings showed that the interns' ability in all three tasks improved after the training course. On the fourth day after training, there was a statistically significant increase in the interns' clinical knowledge of ATLS procedures, the sequence of procedures, and skill performance in trauma situations (P < 0.001, P = 0.016, and P = 0.01, respectively). The ATLS course has an important role in increasing clinical knowledge and practical skill performance of trauma care in medical interns.
Aranda, A; Bonizzi, P; Karel, J; Peeters, R
2015-08-01
This study compares Dower's inverse transform and the Frank lead system for myocardial infarction (MI) identification. We selected a set of relevant features for MI detection from the vectorcardiogram and then used the lasso method to build one model for Dower's inverse transform and one for the Frank lead system, after which we compared the performance of both models on MI detection. The proposed methods were tested using the PhysioNet PTB database, which contains 550 records, of which 368 are MIs. Two main conclusions follow from this study. The first is that Dower's inverse transform performs as well as the Frank leads in identification of MI patients. The second is that lead positions have a large influence on the accuracy of MI patient identification.
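A hedged sketch of the model-building step on synthetic stand-in data: the study uses the lasso for building detection models from VCG features; here an L1-penalized logistic model plays that role, which is one plausible reading of the pipeline, not the authors' exact method. All data and names are fabricated placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(550, 20))        # stand-in for VCG-derived features
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=550) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(penalty='l1', solver='liblinear', C=0.5)  # L1 = lasso-style selection
clf.fit(X_tr, y_tr)
print('selected features:', np.flatnonzero(clf.coef_[0]))
print('test accuracy:', clf.score(X_te, y_te))
```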
Vertical-cavity surface-emitting lasers come of age
NASA Astrophysics Data System (ADS)
Morgan, Robert A.; Lehman, John A.; Hibbs-Brenner, Mary K.
1996-04-01
This manuscript reviews our efforts in demonstrating state-of-the-art planar, batch-fabricable, high-performance vertical-cavity surface-emitting lasers (VCSELs). All performance requirements for short-haul data communication applications are clearly established. We concentrate on the flexibility of the established proton-implanted AlGaAs-based (emitting near 850 nm) technology platform, focusing on a standard device design. This structure is shown to meet or exceed performance and producibility requirements. These include > 99% device yield across 3-in-dia. metal-organic vapor phase epitaxy (MOVPE)-grown wafers and wavelength operation across a > 100-nm range. Recent progress in device performance [low threshold voltage (Vth = 1.53 V); threshold current (Ith = 0.68 mA); continuous wave (CW) power (Pcw = 59 mW); maximum and minimum CW lasing temperature (T = 200 °C, 10 K); and wall-plug efficiency (ηwp = 28%)] should enable great advances in VCSEL-based technologies. We also discuss the viability of VCSELs in cryogenic and avionic/military environments. Also reviewed is a novel technique, modifying this established platform, to engineer low-threshold, high-speed, single-mode VCSELs.
Xu, Tong; Shikhaliev, Polad M; Berenji, Gholam R; Tehranzadeh, Jamshid; Saremi, Farhood; Molloi, Sabee
2004-04-01
To evaluate the feasibility and performance of an x-ray beam equalization system for chest radiography using anthropomorphic phantoms. Area beam equalization involves the process of the initial unequalized image acquisition, attenuator thickness calculation, mask generation using a 16 x 16 piston array, and final equalized image acquisition. Chest radiographs of three different anthropomorphic phantoms were acquired with no beam equalization and equalization levels of 4.8, 11.3, and 21. Six radiologists evaluated the images by scoring them from 1-5 using 13 different criteria. The dose was calculated using the known attenuator material thickness and the mAs of the x-ray tube. The visibility of anatomic structures in the under-penetrated regions of the chest radiographs was shown to be significantly (P < .01) improved after beam equalization. An equalization level of 4.8 provided most of the improvements with moderate increases in patient dose and tube loading. Higher levels of beam equalization did not show much improvement in the visibility of anatomic structures in the under-penetrated regions. A moderate level of x-ray beam equalization in chest radiography is superior to both conventional radiographs and radiographs with high levels of beam equalization. X-ray beam equalization can significantly improve the visibility of anatomic structures in the under-penetrated regions while maintaining good image quality in the lung region.
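The attenuator-thickness step above can be made concrete with a toy Beer-Lambert calculation; the sketch below is purely illustrative (function name, grid handling, and the single-energy attenuation model are our assumptions, not the authors' system).

```python
import numpy as np

def attenuator_thickness(unequalized, mu, target, grid=(16, 16)):
    """Per-piston attenuator thicknesses from an unequalized image.

    Beer-Lambert attenuation I = I0 * exp(-mu * t) gives
    t = ln(I / target) / mu for regions brighter than the target level."""
    h, w = unequalized.shape
    gy, gx = grid
    t = np.zeros(grid)
    for i in range(gy):
        for j in range(gx):
            block = unequalized[i*h//gy:(i+1)*h//gy, j*w//gx:(j+1)*w//gx]
            ratio = max(block.mean() / target, 1.0)   # attenuate bright areas only
            t[i, j] = np.log(ratio) / mu
    return t
```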
Equal Protection and Due Process: Contrasting Methods of Review under Fourteenth Amendment Doctrine.
ERIC Educational Resources Information Center
Hughes, James A.
1979-01-01
Argues that the Court has, at times, confused equal protection and due process methods of review, primarily by employing interest balancing in certain equal protection cases that should have been subjected to due process analysis. Available from Harvard Civil Rights-Civil Liberties Law Review, Harvard Law School, Cambridge, MA 02138; sc $4.00.…
Improving Passive Time Reversal Underwater Acoustic Communications Using Subarray Processing.
He, Chengbing; Jing, Lianyou; Xi, Rui; Li, Qinyuan; Zhang, Qunfei
2017-04-24
Multichannel receivers are usually employed in high-rate underwater acoustic communication to achieve spatial diversity. In this context, passive time reversal (TR) combined with a single-channel adaptive decision feedback equalizer (TR-DFE) is a low-complexity solution that achieves both spatial and temporal focusing. In this paper, we present a novel receiver structure that combines passive time reversal with a low-order multichannel adaptive decision feedback equalizer (TR-MC-DFE) to improve the performance of the conventional TR-DFE. First, the proposed method divides the whole receiving array into several subarrays. Second, we conduct passive time reversal processing in each subarray. Third, the multiple subarray outputs are equalized with a low-order multichannel DFE. We also investigated different channel estimation methods, including least squares (LS), orthogonal matching pursuit (OMP), and improved proportionate normalized least mean squares (IPNLMS). The bit error rate (BER) and output signal-to-noise ratio (SNR) performances of the receiver algorithms are evaluated using simulation and real data collected in a lake experiment. The source-receiver range is 7.4 km, and the data rate with a quadrature phase shift keying (QPSK) signal is 8 kbit/s. The uncoded BER of the single-input multiple-output (SIMO) system varies between 1×10⁻¹ and 2×10⁻² for the conventional TR-DFE, and between 1×10⁻² and 1×10⁻³ for the proposed TR-MC-DFE when eight hydrophones are utilized. Compared to the conventional TR-DFE, the average output SNR of the experimental data is enhanced by 3 dB.
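A minimal sketch of the passive time-reversal combining performed within each subarray: each hydrophone output is matched-filtered with its conjugated, time-reversed channel estimate (obtained via LS, OMP, or IPNLMS as above) and the results are summed; the subarray outputs would then feed the low-order multichannel DFE. Function and variable names are illustrative.

```python
import numpy as np

def subarray_time_reversal(received, channel_estimates):
    """Passive TR combining for one subarray: matched-filter each element
    with its conjugated, time-reversed channel estimate and sum.
    `received` and `channel_estimates` are equal-length lists of arrays."""
    out = None
    for r, h in zip(received, channel_estimates):
        mf = np.convolve(r, np.conj(h[::-1]))    # matched filter = TR operation
        out = mf if out is None else out + mf    # spatial focusing by summation
    return out
```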
Crichton, Gamal; Guo, Yufan; Pyysalo, Sampo; Korhonen, Anna
2018-05-21
Link prediction in biomedical graphs has several important applications, including Drug-Target Interaction (DTI) prediction, Protein-Protein Interaction (PPI) prediction, and Literature-Based Discovery (LBD). It can be done using a classifier that outputs the probability of link formation between nodes. Recently, several works have used neural networks to create node representations, which allow rich inputs to neural classifiers. Preliminary works report promising results, but they did not use realistic settings such as time-slicing, evaluate performance with comprehensive metrics, or explain when or why neural network methods outperform. We investigated how inputs from four node representation algorithms affect the performance of a neural link predictor on random- and time-sliced biomedical graphs of real-world sizes (∼6 million edges) containing information relevant to DTI, PPI, and LBD. We compared the performance of the neural link predictor to those of established baselines and report performance across five metrics. In random- and time-sliced experiments, when the neural network methods were able to learn good node representations and there was a negligible number of disconnected nodes, those approaches outperformed the baselines. In the smallest graph (∼15,000 edges) and in larger graphs with approximately 14% disconnected nodes, baselines such as Common Neighbours proved a justifiable choice for link prediction. At low recall levels (∼0.3) the approaches were mostly equal, but at higher recall levels across all nodes, and in average performance at individual nodes, neural network approaches were superior. Analysis showed that neural network methods performed well on links between nodes with no previous common neighbours, potentially the most interesting links. Additionally, while neural network methods benefit from large amounts of data, they require considerable computational resources to utilise them. Our results indicate that when there is enough data for the neural network methods to use and a negligible number of disconnected nodes, those approaches outperform the baselines; performance at nodes without common neighbours, which indicate more unexpected and perhaps more useful links, accounts for this.
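For reference, the Common Neighbours baseline mentioned above scores a candidate pair by the number of neighbours the two nodes share. A minimal dense-matrix sketch (illustrative only, not the paper's code; a sparse representation would be needed at the graph sizes studied):

```python
import numpy as np

def common_neighbour_scores(adj, pairs):
    """Score candidate links (u, v) by |N(u) & N(v)| on a boolean
    adjacency matrix -- the Common Neighbours baseline."""
    adj = adj.astype(bool)
    return [int(np.sum(adj[u] & adj[v])) for u, v in pairs]
```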
Patel, Jayshree; Mulhall, Brian; Wolf, Heinz; Klohr, Steven; Guazzo, Dana Morton
2011-01-01
A leak test performed according to ASTM F2338-09, Standard Test Method for Nondestructive Detection of Leaks in Packages by Vacuum Decay Method, was developed and validated for container-closure integrity verification of a lyophilized product in a parenteral vial package system. This nondestructive leak test method is intended for use in manufacturing as an in-process package integrity check, and for testing product stored on stability in lieu of sterility tests. Method development and optimization challenge studies incorporated artificially defective packages representing a range of glass vial wall and sealing surface defects, as well as various elastomeric stopper defects. Method validation required 3 days of random-order replicate testing of a test sample population of negative-control, no-defect packages and positive-control, with-defect packages. Positive-control packages were prepared using vials each with a single hole laser-drilled through the glass vial wall; hole creation and hole size certification were performed by Lenox Laser. Validation study results successfully demonstrated the vacuum decay leak test method's ability to accurately and reliably detect packages with laser-drilled holes greater than or equal to approximately 5 μm in nominal diameter. Total test time is less than 1 min per package. All development and validation studies were performed at Whitehouse Analytical Laboratories in Whitehouse, NJ, under the direction of consultant Dana Guazzo of RxPax, LLC, using a VeriPac 455 Micro Leak Test System by Packaging Technologies & Inspection (Tuckahoe, NY). Bristol Myers Squibb (New Brunswick, NJ) fully subsidized all work.
Viterbi equalization for long-distance, high-speed underwater laser communication
NASA Astrophysics Data System (ADS)
Hu, Siqi; Mi, Le; Zhou, Tianhua; Chen, Weibiao
2017-07-01
In long-distance, high-speed underwater laser communication, strong absorption and scattering stretch the laser pulse as the communication distance increases and the water clarity decreases. The maximum communication bandwidth is limited by this laser-pulse stretching, and raising the communication rate increases the intersymbol interference (ISI). To reduce the effect of ISI, the Viterbi equalization (VE) algorithm is used to estimate the maximum-likelihood received sequence. The Monte Carlo method is used to simulate the stretching of the received laser pulse and the maximum communication rate at a wavelength of 532 nm in Jerlov IB and Jerlov II water channels at communication distances of 80, 100, and 130 m, respectively. The high-data-rate communication performance of the VE and hard-decision algorithms is compared. The simulation results show that the VE algorithm reduces the ISI by selecting the minimum-error path. The trade-off between high-data-rate communication performance and a minor bit-error-rate penalty makes VE a promising option for long-distance, high-speed underwater laser communication systems.
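A minimal sketch of the VE idea: maximum-likelihood sequence estimation with the Viterbi algorithm over a known stretched-pulse ISI channel, using binary on-off symbols and a squared-error (AWGN) branch metric. The channel taps and all names are illustrative assumptions, not the paper's simulation.

```python
import numpy as np
from itertools import product

def viterbi_mlse(r, h):
    """MLSE over a known ISI channel: binary on-off symbols, AWGN
    squared-error branch metric. State = the last len(h)-1 bits."""
    L = len(h)
    states = list(product((0, 1), repeat=L - 1))
    cost = {s: 0.0 for s in states}            # unknown start: flat prior
    paths = {s: [] for s in states}
    for rn in r:
        new_cost = {s: np.inf for s in states}
        new_paths = {}
        for s in states:
            for b in (0, 1):
                seq = (b,) + s                 # current bit, then history
                y = sum(h[k] * seq[k] for k in range(L))
                m = cost[s] + (rn - y) ** 2    # branch metric
                ns = seq[:L - 1]               # next state
                if m < new_cost[ns]:
                    new_cost[ns], new_paths[ns] = m, paths[s] + [b]
        cost, paths = new_cost, new_paths
    return paths[min(cost, key=cost.get)]      # bits on the minimum-error path
```

For example, viterbi_mlse(r, h=[1.0, 0.6, 0.3]) would decode a stream whose pulse is stretched over three symbol periods; the exponential growth of the state set with the stretch length is the usual price of MLSE.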
A New Continuous-Time Equality-Constrained Optimization to Avoid Singularity.
Quan, Quan; Cai, Kai-Yuan
2016-02-01
In equality-constrained optimization, a standard regularity assumption is often associated with feasible point methods, namely, that the gradients of constraints are linearly independent. In practice, the regularity assumption may be violated. In order to avoid such a singularity, a new projection matrix is proposed based on which a feasible point method to continuous-time, equality-constrained optimization is developed. First, the equality constraint is transformed into a continuous-time dynamical system with solutions that always satisfy the equality constraint. Second, a new projection matrix without singularity is proposed to realize the transformation. An update (or say a controller) is subsequently designed to decrease the objective function along the solutions of the transformed continuous-time dynamical system. The invariance principle is then applied to analyze the behavior of the solution. Furthermore, the proposed method is modified to address cases in which solutions do not satisfy the equality constraint. Finally, the proposed optimization approach is applied to three examples to demonstrate its effectiveness.
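For context, the classical feasible-direction flow projects the negative gradient onto the null space of the constraint Jacobian. The sketch below uses the standard pseudoinverse-based projector, whose behavior when the constraint gradients become linearly dependent is exactly what motivates the paper's new projection matrix; this is the baseline construction, not the authors' method.

```python
import numpy as np

def projected_gradient_step(x, grad_f, jac_c, dt):
    """Euler step of the feasible-direction flow dx/dt = -P(x) grad f(x),
    with P projecting onto the null space of the constraint Jacobian."""
    J = np.atleast_2d(jac_c(x))                   # (m, n) Jacobian of c(x)
    P = np.eye(len(x)) - np.linalg.pinv(J) @ J    # projector onto null(J)
    return x - dt * P @ grad_f(x)

# Example: min x0^2 + x1^2 subject to x0 + x1 - 1 = 0, from a feasible start.
x = np.array([1.0, 0.0])
for _ in range(200):
    x = projected_gradient_step(x, lambda v: 2 * v,
                                lambda v: np.array([[1.0, 1.0]]), 0.05)
print(x)  # approaches [0.5, 0.5] while staying on the constraint
```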
Inquiry to Action: Diagnosing and Addressing Students' Relational Thinking About the Equal Sign
ERIC Educational Resources Information Center
Harbour, Kristin E.; Karp, Karen S.; Lingo, Amy S.
2016-01-01
One area of algebraic thinking essential for students' success is a relational understanding of the equal sign. Research has indicated a positive correlation between students' relational understanding of the equal sign and their equation-solving performance, suggesting that students' early conception of the equal sign may affect their learning and…
48 CFR 11.104 - Use of brand name or equal purchase descriptions.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Use of brand name or equal....104 Use of brand name or equal purchase descriptions. (a) While the use of performance specifications is preferred to encourage offerors to propose innovative solutions, the use of brand name or equal...
Code of Federal Regulations, 2010 CFR
2010-07-01
... Relating to Labor (Continued) EQUAL EMPLOYMENT OPPORTUNITY COMMISSION THE EQUAL PAY ACT § 1620.12 Wage... gender than the other for the performance of equal work, the higher rate serves as a wage standard. When a violation of the Act is established, the higher rate paid for equal work is the standard to which...
Bastviken, David; Tranvik, Lars
2001-01-01
Bacterial biomass production is often estimated from incorporation of radioactively labeled leucine into protein, in both oxic and anoxic waters and sediments. However, the validity of the method in anoxic environments has so far not been tested. We compared the leucine incorporation of bacterial assemblages growing in oxic and anoxic waters from three lakes differing in nutrient and humic contents. The method was modified to avoid O2 contamination by performing the incubation in syringes. Isotope saturation levels in oxic and anoxic waters were determined, and leucine incorporation rates were compared to microscopically observed bacterial growth. Finally, we evaluated the effects of O2 contamination during incubation with leucine, as well as the potential effects of a headspace in the incubation vessel. Isotope saturation occurred at a leucine concentration of above about 50 nM in both oxic and anoxic waters from all three lakes. Leucine incorporation rates were linearly correlated to observed growth, and there was no significant difference between oxic and anoxic conditions. O2 contamination of anoxic water during 1-h incubations with leucine had no detectable impact on the incorporation rate, while a headspace in the incubation vessel caused leucine incorporation to increase in both anoxic and O2-contaminated samples. The results indicate that the leucine incorporation method relates equally to bacterial growth rates under oxic and anoxic conditions and that incubation should be performed without a headspace. PMID:11425702
Optimal quantum networks and one-shot entropies
NASA Astrophysics Data System (ADS)
Chiribella, Giulio; Ebler, Daniel
2016-09-01
We develop a semidefinite programming method for the optimization of quantum networks, including both causal networks and networks with indefinite causal structure. Our method applies to a broad class of performance measures, defined operationally in terms of interactive tests set up by a verifier. We show that the optimal performance is equal to a max relative entropy, which quantifies the informativeness of the test. Building on this result, we extend the notion of conditional min-entropy from quantum states to quantum causal networks. The optimization method is illustrated in a number of applications, including the inversion, charge conjugation, and controlization of an unknown unitary dynamics. In the non-causal setting, we show a proof-of-principle application to the maximization of the winning probability in a non-causal quantum game.
Comparison of patient simulation methods used in a physical assessment course.
Grice, Gloria R; Wenger, Philip; Brooks, Natalie; Berry, Tricia M
2013-05-13
To determine whether there is a difference in student pharmacists' learning or satisfaction when standardized patients or manikins are used to teach physical assessment. Third-year student pharmacists were randomized to learn physical assessment (cardiac and pulmonary examinations) using either a standardized patient or a manikin. Performance scores on the final examination and satisfaction with the learning method were compared between groups. Eighty and 74 student pharmacists completed the cardiac and pulmonary examinations, respectively. There was no difference in performance scores between student pharmacists who were trained using manikins vs standardized patients (93.8% vs. 93.5%, p=0.81). Student pharmacists who were trained using manikins indicated that they would have probably learned to perform cardiac and pulmonary examinations better had they been taught using standardized patients (p<0.001) and that they were less satisfied with their method of learning (p=0.04). Training using standardized patients and manikins are equally effective methods of learning physical assessment, but student pharmacists preferred using standardized patients.
Finger vein verification system based on sparse representation.
Xin, Yang; Liu, Zhi; Zhang, Haixia; Zhang, Hong
2012-09-01
Finger vein verification is a promising biometric pattern for personal identification in terms of security and convenience. The recognition performance of this technology heavily relies on the quality of finger vein images and on the recognition algorithm. To achieve efficient recognition performance, a special finger vein imaging device is developed, and a finger vein recognition method based on sparse representation is proposed. The motivation for the proposed method is that finger vein images exhibit a sparse property. In the proposed system, the regions of interest (ROIs) in the finger vein images are segmented and enhanced. Sparse representation and sparsity preserving projection on ROIs are performed to obtain the features. Finally, the features are measured for recognition. An equal error rate of 0.017% was achieved based on the finger vein image database, which contains images that were captured by using the near-IR imaging device that was developed in this study. The experimental results demonstrate that the proposed method is faster and more robust than previous methods.
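A schematic of sparse-representation classification as commonly formulated: a probe feature vector is coded sparsely over a dictionary whose columns are training features, and the class whose coefficients reconstruct the probe with smallest residual wins. The paper's exact formulation may differ, and OMP here is just one convenient sparse coder.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_classify(y, D, labels, n_nonzero=10):
    """Sparse-representation classification: code the probe y over the
    dictionary D (columns = training feature vectors), then assign the
    class whose coefficients reconstruct y with smallest residual."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                    fit_intercept=False)
    omp.fit(D, y)                             # solves y ~ D @ x, x sparse
    x = omp.coef_
    best, best_res = None, np.inf
    for c in np.unique(labels):
        xc = np.where(labels == c, x, 0.0)    # keep only class-c coefficients
        res = np.linalg.norm(y - D @ xc)
        if res < best_res:
            best, best_res = c, res
    return best
```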
Application of Response Surface Methods To Determine Conditions for Optimal Genomic Prediction
Howard, Réka; Carriquiry, Alicia L.; Beavis, William D.
2017-01-01
An epistatic genetic architecture can have a significant impact on prediction accuracies of genomic prediction (GP) methods. Machine learning methods predict traits comprised of epistatic genetic architectures more accurately than statistical methods based on additive mixed linear models. The differences between these types of GP methods suggest a diagnostic for revealing genetic architectures underlying traits of interest. In addition to genetic architecture, the performance of GP methods may be influenced by the sample size of the training population, the number of QTL, and the proportion of phenotypic variability due to genotypic variability (heritability). Possible values for these factors and the number of combinations of the factor levels that influence the performance of GP methods can be large. Thus, efficient methods for identifying combinations of factor levels that produce the most accurate GPs are needed. Herein, we employ response surface methods (RSMs) to find the experimental conditions that produce the most accurate GPs. We illustrate RSM with an example of simulated doubled haploid populations and identify the combination of factors that maximize the difference between prediction accuracies of the best linear unbiased prediction (BLUP) and support vector machine (SVM) GP methods. The greatest impact on the response is due to the genetic architecture of the population, the heritability of the trait, and the sample size. When epistasis is responsible for all of the genotypic variance, heritability is equal to one, and the sample size of the training population is large, the advantage of using the SVM method over the BLUP method is greatest. However, except for values close to the maximum, most of the response surface shows little difference between the methods. We also determined that the conditions resulting in the greatest prediction accuracy for BLUP occurred when the genetic architecture consists solely of additive effects and heritability is equal to one. PMID:28720710
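A toy sketch of the BLUP-vs-SVM contrast underlying the response surface: genomic BLUP is equivalent, under standard assumptions, to ridge regression on markers, so the sketch compares a linear-kernel ridge model with an RBF SVR on simulated additive and epistatic traits. Marker counts, effect sizes, and noise levels are arbitrary illustrations, not the paper's simulation design.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
M = rng.integers(0, 2, size=(300, 200)).astype(float)   # toy marker matrix
y_add = M[:, :10].sum(axis=1) + rng.normal(size=300)    # purely additive trait
y_epi = M[:, 0] * M[:, 1] + M[:, 2] * M[:, 3] + rng.normal(scale=0.1, size=300)

for name, y in (('additive', y_add), ('epistatic', y_epi)):
    for model in (KernelRidge(alpha=1.0, kernel='linear'), SVR(kernel='rbf')):
        r2 = cross_val_score(model, M, y, cv=5, scoring='r2').mean()
        print(name, type(model).__name__, round(r2, 3))
```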
Video-based teleradiology for intraosseous lesions. A receiver operating characteristic analysis.
Tyndall, D A; Boyd, K S; Matteson, S R; Dove, S B
1995-11-01
Private dental practitioners often lack immediate access to off-site expert diagnostic consultants regarding unusual radiographic findings or radiographic quality assurance issues. Teleradiology, a system for transmitting radiographic images, offers a potential solution to this problem. Although much research has been done to evaluate the feasibility and utilization of teleradiology systems in medical imaging, little research on dental applications has been performed. In this investigation, 47 panoramic films with an equal distribution of images with intraosseous jaw lesions and with no disease were viewed by a panel of observers using teleradiology and conventional viewing methods. The teleradiology system consisted of an analog video-based system simulating remote radiographic consultation between a general dentist and a dental imaging specialist. Conventional viewing consisted of traditional viewbox methods. Observers were asked to identify the presence or absence of 24 intraosseous lesions and to determine their locations. No statistically significant differences between modalities or observers were identified at the 0.05 level. The results indicate that viewing intraosseous lesions on video-based panoramic images is equivalent to conventional light-box viewing.
Students’ misconception on equal sign
NASA Astrophysics Data System (ADS)
Kusuma, N. F.; Subanti, S.; Usodo, B.
2018-04-01
Equivalence is a very general relation in mathematics. The focus of this article is narrowed specifically to the equal sign in the context of equations. The equal sign is a symbol of mathematical equivalence. Studies have found that many students do not have a deep understanding of equivalence and often misinterpret the equal sign as an operational symbol rather than a symbol of mathematical equivalence. This misinterpretation of the equal sign is labeled a misconception. It is important to discuss and resolve it immediately because it can lead to problems in students' understanding. The purpose of this research is to describe students' misconception about the meaning of the equal sign in equal matrices. A descriptive method was used in this study, involving five senior high school students in Boyolali who were taking an Equal Matrices course. The results show that all of the students held the misconception about the meaning of the equal sign: they interpret it as an operational symbol rather than as a symbol of mathematical equivalence. Students solved the problems in only a single, computational way, so they were stuck in a monotonous way of thinking and unable to develop their creativity.
Lower-upper-threshold correlation for underwater range-gated imaging self-adaptive enhancement.
Sun, Liang; Wang, Xinwei; Liu, Xiaoquan; Ren, Pengdao; Lei, Pingshun; He, Jun; Fan, Songtao; Zhou, Yan; Liu, Yuliang
2016-10-10
In underwater range-gated imaging (URGI), enhancement of low-brightness, low-contrast images is critical for human observation. Traditional histogram equalization over-enhances images, with the result that details are lost. To suppress over-enhancement, a lower-upper-threshold correlation method is proposed for URGI self-adaptive enhancement based on double-plateau histogram equalization. The lower threshold determines image details and suppresses over-enhancement; it is correlated with the upper threshold. First, the upper threshold is updated by searching for the local maximum in real time, and then the lower threshold is calculated from the upper threshold and the number of nonzero units selected from a filtered histogram. With this method, the backgrounds of underwater images are constrained while details are enhanced. Finally, validation experiments are performed. Peak signal-to-noise ratio, variance, contrast, and human visual properties are used to evaluate the objective quality of the global images and regions of interest. The evaluation results demonstrate that the proposed method adaptively selects proper upper and lower thresholds under different conditions and contributes to URGI with effective image enhancement for human observation.
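A schematic of double-plateau histogram equalization with given thresholds; the paper's contribution is the adaptive, correlated choice of t_up (local-maximum search) and t_low (from the count of nonzero bins of a filtered histogram), which this sketch simply takes as inputs. An 8-bit grayscale image is assumed.

```python
import numpy as np

def double_plateau_equalize(img, t_low, t_up):
    """Double-plateau histogram equalization of an 8-bit image.

    Bins above t_up are clipped to t_up (limits background blow-up);
    nonzero bins below t_low are raised to t_low (protects details).
    The modified histogram then drives the usual CDF mapping."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    hist[hist > t_up] = t_up
    hist[(hist > 0) & (hist < t_low)] = t_low
    cdf = np.cumsum(hist)
    lut = np.round(255 * cdf / cdf[-1]).astype(np.uint8)
    return lut[img]
```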
Computer-Aided Evaluation of Blood Vessel Geometry From Acoustic Images.
Lindström, Stefan B; Uhlin, Fredrik; Bjarnegård, Niclas; Gylling, Micael; Nilsson, Kamilla; Svensson, Christina; Yngman-Uhlin, Pia; Länne, Toste
2018-04-01
A method for computer-aided assessment of blood vessel geometries based on shape-fitting algorithms from metric vision was evaluated. Acoustic images of cross sections of the radial artery and cephalic vein were acquired, and medical practitioners used a computer application to measure the wall thickness and nominal diameter of these blood vessels with a caliper method and the shape-fitting method. The methods performed equally well for wall thickness measurements. The shape-fitting method was preferable for measuring the diameter, since it reduced systematic errors by up to 63% in the case of the cephalic vein because of its eccentricity. © 2017 by the American Institute of Ultrasound in Medicine.
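As a flavor of the shape-fitting approach (as opposed to the manual caliper method), the sketch below fits a circle to vessel cross-section contour points with the algebraic Kåsa least-squares method; for an eccentric vessel such as the cephalic vein, an ellipse fit would be the natural extension. This is an illustration, not the authors' algorithm.

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit: rewrite
    x^2 + y^2 = 2ax + 2by + c and solve linearly for (a, b, c);
    the radius is r = sqrt(c + a^2 + b^2)."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, x ** 2 + y ** 2, rcond=None)
    return a, b, np.sqrt(c + a ** 2 + b ** 2)
```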
ERIC Educational Resources Information Center
Geadelmann, Patricia L.; And Others
Essays concerning multiple aspects of integrating the concept of professional equality between the sexes into the field of sport are presented. The abstract idea of sexual equality is examined, and methods for determining the degree of equality present in given working situations are set forth. A discussion of the laws, enforcing agencies, and…
Gupta, Deepak K; Khandker, Namir; Stacy, Kristin; Tatsuoka, Curtis M; Preston, David C
2017-10-01
Fundoscopic examination is an essential component of the neurologic examination. Competence in its performance is mandated as a required clinical skill for neurology residents by the Accreditation Council for Graduate Medical Education. Government and private insurance agencies require its performance and documentation for moderate- and high-level neurologic evaluations. Traditionally, assessment and teaching of this key clinical examination technique have been difficult in neurology residency training. The objective was to evaluate the utility of a simulation-based method and the traditional lecture-based method for assessment and teaching of fundoscopy to neurology residents. This study was a prospective, single-blinded, education research study of 48 neurology residents recruited from July 1, 2015, through June 30, 2016, at a large neurology residency training program. Participants were equally divided into control and intervention groups after stratification by training year. Baseline and postintervention assessments were performed using a questionnaire, a survey, and fundoscopy simulators. After baseline assessment, both groups initially received lecture-based training, which covered fundamental knowledge of the components of fundoscopy and key neurologic findings observed on fundoscopic examination. The intervention group additionally received simulation-based training, which consisted of an instructor-led, hands-on workshop that covered the practical skills of performing fundoscopic examination and identifying neurologically relevant findings on another fundoscopy simulator. The primary outcome measures were the postintervention changes in fundoscopy knowledge, skills, and total scores. A total of 30 men and 18 women were equally distributed between the 2 groups. The intervention group had significantly higher mean (SD) increases in skills (2.5 [2.3] vs 0.8 [1.8], P = .01) and total (9.3 [4.3] vs 5.3 [5.8], P = .02) scores compared with the control group. Knowledge scores (6.8 [3.3] vs 4.5 [4.9], P = .11) increased nonsignificantly in both groups. This study supports the use of a simulation-based method as a supplementary tool to the lecture-based method in the assessment and teaching of fundoscopic examination in neurology residency.
Liquid chromatographic determination of florfenicol in the plasma of multiple species of fish
Vue, C.; Schmidt, L.J.; Stehly, G.R.; Gingerich, W.H.
2002-01-01
A simple method was developed for determining florfenicol concentration in a small volume (250 μL) of plasma from five phylogenetically diverse species of freshwater fish. Florfenicol was isolated from the plasma matrix through C-18 solid-phase extraction and quantified by reversed-phase high-performance liquid chromatography with UV detection. The accuracy (84-104%), precision (%RSD ≤ 8), and sensitivity (quantitation limit < 30 ng/mL) of the method indicate its usefulness for conducting pharmacokinetic studies on a variety of freshwater fish. Published by Elsevier Science B.V.
[Interlaboratory Study on Evaporation Residue Test for Food Contact Products (Report 2)].
Ohno, Hiroyuki; Mutsuga, Motoh; Abe, Tomoyuki; Abe, Yutaka; Amano, Homare; Ishihara, Kinuyo; Ohsaka, Ikue; Ohno, Haruka; Ohno, Yuichiro; Ozaki, Asako; Kakihara, Yoshiteru; Kobayashi, Hisashi; Sakuragi, Hiroshi; Shibata, Hiroshi; Shirono, Katsuhiro; Sekido, Haruko; Takasaka, Noriko; Takenaka, Yu; Tajima, Yoshiyasu; Tanaka, Aoi; Tanaka, Hideyuki; Nakanishi, Toru; Nomura, Chie; Haneishi, Nahoko; Hayakawa, Masato; Miura, Toshihiko; Yamaguchi, Miku; Yamada, Kyohei; Watanabe, Kazunari; Sato, Kyoko
2018-01-01
An interlaboratory study was performed to evaluate the equivalence between an official method and a modified method of evaporation residue test using heptane as a food-simulating solvent for oily or fatty foods, based on the Japanese Food Sanitation Law for food contact products. Twenty-three laboratories participated, and tested the evaporation residues of nine test solutions as blind duplicates. In the official method, heating for evaporation was done with a water bath. In the modified method, a hot plate was used for evaporation, and/or a vacuum concentration procedure was skipped. In most laboratories, the test solutions were heated until just prior to dryness, and then allowed to dry under residual heat. Statistical analysis revealed that there was no significant difference between the two methods. Accordingly, the modified method provides performance equal to the official method, and is available as an alternative method. Furthermore, an interlaboratory study was performed to evaluate and compare two leaching solutions (95% ethanol and isooctane) used as food-simulating solvents for oily or fatty foods in the EU. The results demonstrated that there was no significant difference between heptane and these two leaching solutions.
Nonlinear spline wavefront reconstruction through moment-based Shack-Hartmann sensor measurements.
Viegers, M; Brunner, E; Soloviev, O; de Visser, C C; Verhaegen, M
2017-05-15
We propose a spline-based aberration reconstruction method through moment measurements (SABRE-M). The method uses first and second moment information from the focal spots of the Shack-Hartmann (SH) sensor to reconstruct the wavefront with bivariate simplex B-spline basis functions. Since the proposed method provides higher-order local wavefront estimates with quadratic and cubic basis functions, it can provide the same accuracy for SH arrays with a reduced number of subapertures and, correspondingly, larger lenses, which can be beneficial for application in low-light conditions. In numerical experiments the performance of SABRE-M is compared to that of the first-moment method SABRE for aberrations of different spatial orders and for different sizes of the SH array. The results show that SABRE-M is superior to SABRE, in particular for the higher-order aberrations, and that SABRE-M can give performance equal to that of SABRE on a SH grid of halved sampling.
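For orientation, a minimal sketch of the moment computation on a single focal spot (a 2-D intensity array is assumed); how SABRE-M fits simplex B-spline coefficients to these moments is not reproduced here:

```python
import numpy as np

def spot_moments(spot):
    """First and second moments of one Shack-Hartmann focal spot (sketch).

    Returns the centroid (first moments, proportional to the local
    wavefront slope) and the second central moments (related to local
    wavefront curvature).
    """
    total = spot.sum()
    y, x = np.indices(spot.shape)
    cx, cy = (x * spot).sum() / total, (y * spot).sum() / total
    mxx = ((x - cx) ** 2 * spot).sum() / total
    myy = ((y - cy) ** 2 * spot).sum() / total
    mxy = ((x - cx) * (y - cy) * spot).sum() / total
    return (cx, cy), (mxx, myy, mxy)

spot = np.zeros((8, 8))
spot[3:5, 3:5] = 1.0          # toy square spot
print(spot_moments(spot))     # centroid (3.5, 3.5)
```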
Aircraft interior noise reduction by alternate resonance tuning
NASA Technical Reports Server (NTRS)
Bliss, Donald B.; Gottwald, James A.; Gustaveson, Mark B.; Burton, James R., III; Castellino, Craig
1989-01-01
Existing interior noise reduction techniques for aircraft fuselages perform reasonably well at higher frequencies but are inadequate at lower frequencies, particularly with respect to the low blade-passage harmonics with high forcing levels found in propeller aircraft. A method is being studied which considers aircraft fuselages lined with panels alternately tuned to frequencies above and below the frequency to be attenuated. Adjacent panels would oscillate at equal amplitude, to give equal source strength, but with opposite phase. Provided these adjacent panels are acoustically compact, the resulting cancellation causes the interior acoustic modes to become cut off and therefore non-propagating and evanescent. This interior noise reduction method, called Alternate Resonance Tuning (ART), is currently being investigated both theoretically and experimentally. The new concept has potential application to reducing interior noise due to the propellers in advanced turboprop aircraft as well as in existing aircraft configurations. This report summarizes the work carried out at Duke University during the third semester of a contract supported by the Structural Acoustics Branch at NASA Langley Research Center.
Hossain, Monir; Wright, Steven; Petersen, Laura A
2002-04-01
One way to monitor patient access to emergent health care services is to use patient characteristics to predict arrival time at the hospital after onset of symptoms. This predicted arrival time can then be compared with actual arrival time to allow monitoring of access to services. Predicted arrival time could also be used to estimate potential effects of changes in health care service availability, such as closure of an emergency department or an acute care hospital. Our goal was to determine the best statistical method for prediction of arrival intervals for patients with acute myocardial infarction (AMI) symptoms. We compared the performance of multinomial logistic regression (MLR) and discriminant analysis (DA) models. Models for MLR and DA were developed using a dataset of 3,566 male veterans hospitalized with AMI in 81 VA Medical Centers in 1994-1995 throughout the United States. The dataset was randomly divided into a training set (n = 1,846) and a test set (n = 1,720). Arrival times were grouped into three intervals on the basis of treatment considerations: <6 hours, 6-12 hours, and >12 hours. One model for MLR and two models for DA were developed using the training dataset. One DA model had equal prior probabilities, and one DA model had proportional prior probabilities. Predictive performance of the models was compared using the test (n = 1,720) dataset. Using the test dataset, the proportions of patients in the three arrival time groups were 60.9% for <6 hours, 10.3% for 6-12 hours, and 28.8% for >12 hours after symptom onset. Whereas the overall predictive performance by MLR and DA with proportional priors was higher, the DA models with equal priors performed much better in the smaller groups. Correct classifications were 62.6% by MLR, 62.4% by DA using proportional prior probabilities, and 48.1% using equal prior probabilities of the groups. The misclassifications by MLR for the three groups were 9.5%, 100.0%, 74.2% for each time interval, respectively. Misclassifications by DA models were 9.8%, 100.0%, and 74.4% for the model with proportional priors and 47.6%, 79.5%, and 51.0% for the model with equal priors. The choice of MLR or DA with proportional priors, or DA with equal priors for monitoring time intervals of predicted hospital arrival time for a population should depend on the consequences of misclassification errors.
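A hedged illustration of the model comparison on synthetic data (scikit-learn as a stand-in; the class split roughly mirrors the reported 60.9/10.3/28.8 percent groups, but nothing else of the VA dataset is reproduced):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Synthetic stand-in for the arrival-time data: three imbalanced classes
# for the <6 h, 6-12 h, and >12 h groups.
X, y = make_classification(n_samples=3566, n_features=8, n_informative=5,
                           n_classes=3, weights=[0.61, 0.10, 0.29],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.48, random_state=0)

models = {
    "MLR": LogisticRegression(max_iter=1000),
    "DA, proportional priors": LinearDiscriminantAnalysis(),
    "DA, equal priors": LinearDiscriminantAnalysis(priors=np.full(3, 1 / 3)),
}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    # Overall accuracy hides per-class behavior: equal priors trade overall
    # accuracy for better recall in the small middle class.
    print(name, round(m.score(X_te, y_te), 3))
```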
Chen, Shyi-Ming; Chen, Shen-Wen
2015-03-01
In this paper, we present a new method for fuzzy forecasting based on two-factors second-order fuzzy-trend logical relationship groups and the probabilities of trends of fuzzy-trend logical relationships. Firstly, the proposed method fuzzifies the historical training data of the main factor and the secondary factor into fuzzy sets, respectively, to form two-factors second-order fuzzy logical relationships. Then, it groups the obtained two-factors second-order fuzzy logical relationships into two-factors second-order fuzzy-trend logical relationship groups. Then, it calculates the probability of the "down-trend," the probability of the "equal-trend" and the probability of the "up-trend" of the two-factors second-order fuzzy-trend logical relationships in each two-factors second-order fuzzy-trend logical relationship group, respectively. Finally, it performs the forecasting based on the probabilities of the down-trend, the equal-trend, and the up-trend of the two-factors second-order fuzzy-trend logical relationships in each two-factors second-order fuzzy-trend logical relationship group. We also apply the proposed method to forecast the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) and the NTD/USD exchange rates. The experimental results show that the proposed method outperforms the existing methods.
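A toy sketch of the trend-probability ingredient alone, on a plain numeric series; the two-factor, second-order fuzzy-trend logical relationship groups of the actual method are not reproduced:

```python
from collections import Counter

def trend_probabilities(series, tol=0.0):
    """Estimate the probability of down-, equal-, and up-trends
    from consecutive changes in a series (toy illustration)."""
    trends = []
    for prev, cur in zip(series, series[1:]):
        if cur > prev + tol:
            trends.append("up")
        elif cur < prev - tol:
            trends.append("down")
        else:
            trends.append("equal")
    n = len(trends)
    counts = Counter(trends)
    return {t: counts[t] / n for t in ("down", "equal", "up")}

# Up is the most probable trend in this toy series.
print(trend_probabilities([5, 7, 7, 6, 8, 8, 9]))
```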
MRI volumetry of prefrontal cortex
NASA Astrophysics Data System (ADS)
Sheline, Yvette I.; Black, Kevin J.; Lin, Daniel Y.; Pimmel, Joseph; Wang, Po; Haller, John W.; Csernansky, John G.; Gado, Mokhtar; Walkup, Ronald K.; Brunsden, Barry S.; Vannier, Michael W.
1995-05-01
Prefrontal cortex volumetry by brain magnetic resonance (MR) imaging is required to estimate changes postulated to occur in certain psychiatric and neurologic disorders. A semiautomated method with quantitative characterization of its performance is sought to reliably distinguish small prefrontal cortex volume changes within individuals and between groups. Stereological methods were tested by a blinded comparison of measurements applied to 3D MR scans obtained using an MPRAGE protocol. Fixed-grid stereologic methods were used to estimate prefrontal cortex volumes on a graphics workstation, after the images were scaled from 16 to 8 bits using a histogram method. In addition, images were resliced into coronal sections perpendicular to the bicommissural plane. Prefrontal cortex volumes were defined as all sections of the frontal lobe anterior to the anterior commissure. Ventricular volumes were excluded. Stereological measurement yielded high repeatability and precision, and was time efficient for the raters. The coefficient of error was
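For orientation, fixed-grid point counting of this kind amounts to a Cavalieri volume estimate; a minimal sketch with illustrative numbers (assumed, not the study's):

```python
# Cavalieri point-counting volume estimate: V ~ t * a_p * sum(P_i), where
# t is the spacing between sections, a_p the tissue area represented by
# one grid point, and P_i the points hitting the structure on section i.
t = 1.5                      # mm between coronal sections (illustrative)
a_p = 4.0                    # mm^2 per grid point (illustrative)
points_per_section = [12, 18, 23, 21, 14, 7]

volume = t * a_p * sum(points_per_section)     # mm^3
print(f"estimated volume: {volume:.0f} mm^3")  # -> 570 mm^3
```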
40 CFR 141.715 - Microbial toolbox options for meeting Cryptosporidium treatment requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... filter performance 0.5-log credit for combined filter effluent turbidity less than or equal to 0.15 NTU...) Individual filter performance 0.5-log credit (in addition to 0.5-log combined filter performance credit) if individual filter effluent turbidity is less than or equal to 0.15 NTU in at least 95 percent of samples each...
40 CFR 141.715 - Microbial toolbox options for meeting Cryptosporidium treatment requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... filter performance 0.5-log credit for combined filter effluent turbidity less than or equal to 0.15 NTU...) Individual filter performance 0.5-log credit (in addition to 0.5-log combined filter performance credit) if individual filter effluent turbidity is less than or equal to 0.15 NTU in at least 95 percent of samples each...
40 CFR 141.715 - Microbial toolbox options for meeting Cryptosporidium treatment requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... filter performance 0.5-log credit for combined filter effluent turbidity less than or equal to 0.15 NTU...) Individual filter performance 0.5-log credit (in addition to 0.5-log combined filter performance credit) if individual filter effluent turbidity is less than or equal to 0.15 NTU in at least 95 percent of samples each...
DSP+FPGA-based real-time histogram equalization system of infrared image
NASA Astrophysics Data System (ADS)
Gu, Dongsheng; Yang, Nansheng; Pi, Defu; Hua, Min; Shen, Xiaoyan; Zhang, Ruolan
2001-10-01
Histogram modification is a simple but effective method to enhance an infrared image. There are several methods to equalize an infrared image's histogram, owing to the different characteristics of different infrared images, such as the traditional HE (histogram equalization) method and the improved HP (histogram projection) and PE (plateau equalization) methods. To realize all of these methods in a single system, the system must have a large amount of memory and extremely high speed. In our system, we introduce a DSP + FPGA based real-time processing architecture to perform these tasks together. The FPGA realizes the part common to these methods, while the DSP handles the parts that differ. The choice of method and its parameters can be entered from a keyboard or a computer. By this means, the system is powerful yet easy to operate and maintain. In this article, we present the system diagram and the software flow chart of the methods. At the end, we show an infrared image and its histogram before and after processing with the HE method.
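As a software illustration of one of the variants named above, a minimal sketch of histogram projection (HP), which gives every occupied gray level an equal share of the output range; the DSP+FPGA partitioning is of course not modeled:

```python
import numpy as np

def histogram_projection(img, levels=256):
    """HP (sketch): binarize the histogram, then equalize. Every occupied
    gray level gets an equal slice of the output range regardless of its
    pixel count, which boosts sparse detail over large backgrounds."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    occupied = (hist > 0).astype(float)
    cdf = np.cumsum(occupied)
    cdf /= cdf[-1]
    lut = np.round((levels - 1) * cdf).astype(np.uint8)
    return lut[img]
```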
Engineering of Droplet Manipulation in Tertiary Junction Microfluidic Channels
2017-06-30
We have carried out an experimental and in silico study of droplet manipulation in tertiary junction microfluidic channels, the numerical work using the lattice Boltzmann method (LBM). Both the experimental and numerical results showed good agreement and suggested that at a higher Re equal to 3, the flow was dominated by… Period of Performance: 06/01/2015 - 11/01/2016.
NASA Astrophysics Data System (ADS)
Quang Tran, Danh; Li, Jin; Xuan, Fuzhen; Xiao, Ting
2018-06-01
Dielectric elastomers (DEs) belong to a group of polymers that exhibit time-dependent deformation due to viscoelastic effects. In recent years, viscoelasticity has been accounted for in modeling in order to understand the complete electromechanical behavior of dielectric elastomer actuators (DEAs). In this paper, we investigate the actuation performance of a circular DEA under different equal and unequal biaxial pre-stretches, based on a nonlinear rheological model. The theoretical results are validated by experiments, which verify the electromechanical constitutive equation of the DEs. The viscoelastic mechanical characteristics are analyzed through modeling, simulation, and experiment to describe the influence of frequency, voltage, pre-stretch, and waveform on the actuation response of the actuator. Our study indicates that a DEA under different equal or unequal biaxial pre-stretches undergoes different actuation performance when subjected to high voltage. Under an unequal biaxial pre-stretch, the DEA deforms unequally and shows different deformation abilities in the two directions. The relative creep strain of the DEA due to viscoelasticity can be reduced by increasing the pre-stretch ratio. A higher equal biaxial pre-stretch yields larger deformation strain, improves actuation response time, and reduces the drifting of the equilibrium position in the dynamic response of the DEA when activated by step and periodic voltages, while increasing the frequency inhibits the output stretch amplitude. The results in this paper can provide theoretical guidance and application reference for the design and control of viscoelastic DEAs.
Optical study of Erbium-doped-porous silicon based planar waveguides
NASA Astrophysics Data System (ADS)
Najar, A.; Ajlani, H.; Charrier, J.; Lorrain, N.; Haesaert, S.; Oueslati, M.; Haji, L.
2007-06-01
Planar waveguides were formed from porous silicon layers obtained on P+ substrates. These waveguides were then doped with erbium using an electrochemical method. An erbium concentration in the range 2.2-2.5 at% was determined by energy dispersive X-ray (EDX) analysis performed on SEM cross sections. The refractive index of the layers was studied before and after doping and thermal treatments. The photoluminescence of Er3+ ions in the IR range and the decay curve of the 1.53 μm emission peak were studied as a function of the excitation power. The excited Er density was equal to 0.07%. Optical loss contributions were analyzed for these waveguides, and the losses were equal to 1.1 dB/cm at 1.55 μm after doping.
Split-brain reveals separate but equal self-recognition in the two cerebral hemispheres.
Uddin, Lucina Q; Rayman, Jan; Zaidel, Eran
2005-09-01
To assess the ability of the disconnected cerebral hemispheres to recognize images of the self, a split-brain patient (an individual who underwent complete cerebral commissurotomy to relieve intractable epilepsy) was tested using morphed self-face images presented to one visual hemifield (projecting to one hemisphere) at a time while making "self/other" judgments. The performance of the right and left hemispheres of this patient as assessed by a signal detection method was not significantly different, though a measure of bias did reveal hemispheric differences. The right and left hemispheres of this patient independently and equally possessed the ability to self-recognize, but only the right hemisphere could successfully recognize familiar others. This supports a modular concept of self-recognition and other-recognition, separately present in each cerebral hemisphere.
Chan, Lawrence Wc; Liu, Ying; Chan, Tao; Law, Helen Kw; Wong, S C Cesar; Yeung, Andy Ph; Lo, K F; Yeung, S W; Kwok, K Y; Chan, William Yl; Lau, Thomas Yh; Shyu, Chi-Ren
2015-06-02
Similarity-based retrieval of Electronic Health Records (EHRs) from large clinical information systems provides physicians with evidence support when making diagnoses or referring examinations for suspected cases. Clinical terms in EHRs represent high-level conceptual information, and a similarity measure established on these terms reflects the chance of inter-patient disease co-occurrence. The assumption that clinical terms are equally relevant to a disease is unrealistic and reduces prediction accuracy. Here we propose a term weighting approach supported by the PubMed search engine to address this issue. We collected and studied 112 abdominal computed tomography imaging examination reports from four hospitals in Hong Kong. Clinical terms, namely the image findings related to hepatocellular carcinoma (HCC), were extracted from the reports. Through two systematic PubMed search methods, generic and specific term weightings were established by estimating the conditional probabilities of clinical terms given HCC. Each report was characterized by an ontological feature vector, and there were in total 6216 vector pairs. We optimized the modified direction cosine (mDC) with respect to a regularization constant embedded in the feature vector. Equal, generic and specific term weighting approaches were applied to measure the similarity of each pair, and their performances for predicting inter-patient co-occurrence of HCC diagnoses were compared using receiver operating characteristic (ROC) analysis. The areas under the curves (AUROCs) of similarity scores based on equal, generic and specific term weighting were 0.735, 0.728 and 0.743, respectively (p < 0.01). In comparison with equal term weighting, performance was significantly improved by specific term weighting (p < 0.01) but not by generic term weighting. The clinical terms "dysplastic nodule", "nodule of liver" and "equal density (isodense) lesion" were found to be the top three image findings associated with HCC in PubMed. Our findings suggest that the optimized similarity measure with specific term weighting of EHRs can significantly improve the accuracy of predicting the inter-patient co-occurrence of diagnoses when compared with the equal and generic term weighting approaches.
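A minimal sketch of the term-weighting idea (assumed names and toy weights, not the authors' implementation; the paper's modified direction cosine with its regularization constant is not reproduced):

```python
import numpy as np

def weighted_cosine(u, v, w):
    """Cosine similarity of binary term vectors u, v under per-term weights w."""
    uw, vw = u * w, v * w
    denom = np.linalg.norm(uw) * np.linalg.norm(vw)
    return float(uw @ vw / denom) if denom else 0.0

terms = ["dysplastic nodule", "nodule of liver", "isodense lesion"]
w = np.array([0.8, 0.7, 0.5])      # hypothetical P(term | HCC) weights
report_a = np.array([1, 1, 0])     # which terms appear in each report
report_b = np.array([1, 0, 1])
print(weighted_cosine(report_a, report_b, w))
```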
NASA Astrophysics Data System (ADS)
Abedi, Maysam
2015-06-01
This reply discusses the results of two previously developed approaches to mineral prospectivity/potential mapping (MPM), ELECTRE III and PROMETHEE II, which are well-known methods for multi-criteria decision-making (MCDM) problems. Various geo-data sets are integrated to prepare the MPM, in which the generated maps show acceptable agreement with the drilled boreholes. Equal performance of the applied methods is indicated in the studied case. Complementary information on these methods is also provided to help interested readers implement them in the MPM process.
Methods of making alkyl esters
Elliott, Brian
2010-08-03
A method comprising contacting an alcohol, a feed comprising one or more glycerides and equal to or greater than 2 wt % of one or more free fatty acids, and a solid acid catalyst, a nanostructured polymer catalyst, or a sulfated zirconia catalyst in one or more reactors, and recovering from the one or more reactors an effluent comprising equal to or greater than about 75 wt % alkyl ester and equal to or less than about 5 wt % glyceride.
NASA Astrophysics Data System (ADS)
Mordon, Serge R.; Schoffs, Michel; Martinot, Veronique L.; Buys, Bruno; Patenotre, Philippe; Lesage, Jean C.; Dhelin, Guy
1998-01-01
The authors report an original 1.9 μm diode laser assisted microvascular anastomosis (LAMA) in humans. This technique has been applied in 12 patients during reconstructive surgery for digital replantations (n = 2), digital revascularizations (n = 3) and free flap transfers (n = 7). Fourteen end-to-end anastomoses (10 arteries, 4 veins) were performed. LAMA was always performed on vessels that would not impede the chance of success of the surgical procedure in case of thrombosis. LAMA was performed with a 1.9 μm diode laser after placement of 2 equidistant stitches. The diode spot was delivered through an optic fiber transmitting to the vessel wall via a pencil-sized hand piece. The parameters used were as follows: spot size = 400 μm, power = 70 to 220 mW, time = 0.7 to 2 s, mean fluence = 115 J/cm2. The mechanism involved is a thermal effect on the collagen of the adventitia and media, leading to a phenomenon the authors have termed 'heliofusion.' This preliminary trial has made it possible to define the modalities of its use in humans. The technique is simple, rapid and easily learned. The equipment is not cumbersome, is sterilizable and is very ergonomic. LAMA does not replace sutures but is complementary, thanks to a reduction in the number of stitches used and to access to surgical areas that are not easily reachable. This study must be completed by a larger scale study to confirm the technique and its reliability. Other uses could be performed on different tissues such as the biliary and urinary tracts, especially under laparoscopic conditions.
ERIC Educational Resources Information Center
Collins, Helen
This workbook, which is intended as a practical guide for human resource managers, trainers, and others concerned with developing and implementing equal opportunities training programs in British workplaces, examines issues in and methods for equal opportunities training. The introduction gives an overview of current training trends and issues.…
Promoting an Equality Agenda in Adult Literacy Practice Using Non-Text/Creative Methodologies
ERIC Educational Resources Information Center
Mark, Rob
2007-01-01
This paper examines the relationship between literacy, equality and creativity and the relevance for adult literacy practices. It looks in particular at how literacy tutors can use creative non-text methods to promote an understanding of equality in learners' lives. Through an examination of the findings from the Literacy and Equality in Irish…
29 CFR 1620.16 - Jobs requiring equal effort in performance.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 1620.16 Labor Regulations Relating to Labor (Continued) EQUAL EMPLOYMENT OPPORTUNITY COMMISSION THE... factors which cause mental fatigue and stress, as well as those which alleviate fatigue, are to be... portion of her time to performing fill-in work requiring greater dexterity—such as rearranging displays of...
29 CFR 1620.16 - Jobs requiring equal effort in performance.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 1620.16 Labor Regulations Relating to Labor (Continued) EQUAL EMPLOYMENT OPPORTUNITY COMMISSION THE... factors which cause mental fatigue and stress, as well as those which alleviate fatigue, are to be... portion of her time to performing fill-in work requiring greater dexterity—such as rearranging displays of...
29 CFR 1620.16 - Jobs requiring equal effort in performance.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 1620.16 Labor Regulations Relating to Labor (Continued) EQUAL EMPLOYMENT OPPORTUNITY COMMISSION THE... factors which cause mental fatigue and stress, as well as those which alleviate fatigue, are to be... portion of her time to performing fill-in work requiring greater dexterity—such as rearranging displays of...
29 CFR 1620.16 - Jobs requiring equal effort in performance.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 1620.16 Labor Regulations Relating to Labor (Continued) EQUAL EMPLOYMENT OPPORTUNITY COMMISSION THE... factors which cause mental fatigue and stress, as well as those which alleviate fatigue, are to be... portion of her time to performing fill-in work requiring greater dexterity—such as rearranging displays of...
29 CFR 1620.16 - Jobs requiring equal effort in performance.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 1620.16 Labor Regulations Relating to Labor (Continued) EQUAL EMPLOYMENT OPPORTUNITY COMMISSION THE... factors which cause mental fatigue and stress, as well as those which alleviate fatigue, are to be... portion of her time to performing fill-in work requiring greater dexterity—such as rearranging displays of...
Equity, Equal Opportunities, Gender and Organization Performance.
ERIC Educational Resources Information Center
Standing, Hilary; Baume, Elaine
The issues of equity, equal opportunities, gender, and organization performance in the health care sector worldwide was examined. Information was gathered from the available literature and from individuals in 17 countries. The analysis highlighted the facts that employment equity debates and policies refer largely to high-income countries and…
Vadapalli, Sriharsha Babu; Atluri, Kaleswararao; Putcha, Madhu Sudhan; Kondreddi, Sirisha; Kumar, N. Suman; Tadi, Durga Prasad
2016-01-01
Objectives: This in vitro study was designed to compare polyvinyl siloxane (PVS) monophase and polyether (PE) monophase materials under dry and moist conditions for surface detail reproduction, dimensional stability, and gypsum compatibility. Materials and Methods: Surface detail reproduction was evaluated using two criteria. Dimensional stability was evaluated according to American Dental Association (ADA) specification no. 19. Gypsum compatibility was assessed by two criteria. All samples were evaluated, and the data obtained were analyzed by two-way analysis of variance (ANOVA) and Pearson's chi-square tests. Results: When surface detail reproduction was evaluated with a modification of ADA specification no. 19, the two groups showed no statistically significant difference under either condition. When evaluated macroscopically, the groups showed a statistically significant difference. Results for dimensional stability showed that deviation from the standard differed significantly between the two groups, with the Aquasil group showing significantly more deviation than the Impregum group (P < 0.001). The two conditions also differed significantly, with moist conditions showing significantly more deviation than dry conditions (P < 0.001). Gypsum compatibility, evaluated with a modification of ADA specification no. 19 and by grading the casts, showed no statistically significant difference between the groups under either condition. Conclusion: Regarding dimensional stability, both Impregum and Aquasil performed better in the dry condition than in the moist condition, and Impregum performed better than Aquasil in both conditions. When tested for surface detail reproduction according to the ADA specification, the two materials performed almost equally under dry and moist conditions. Under macroscopic evaluation, Impregum and Aquasil performed significantly better in the dry condition than in the moist condition; in the dry condition the materials performed almost equally, while in the moist condition Aquasil performed significantly better than Impregum. Regarding gypsum compatibility according to the ADA specification, the materials performed almost equally in the dry condition, and Aquasil performed better than Impregum in the moist condition. Under macroscopic evaluation, Impregum performed better than Aquasil in both conditions. PMID:27583217
Angelis, G I; Reader, A J; Kotasidis, F A; Lionheart, W R; Matthews, J C
2011-07-07
Iterative expectation maximization (EM) techniques have been extensively used to solve maximum likelihood (ML) problems in positron emission tomography (PET) image reconstruction. Although EM methods offer a robust approach to solving ML problems, they usually suffer from slow convergence rates. The ordered subsets EM (OSEM) algorithm provides significant improvements in the convergence rate, but it can cycle between estimates converging towards the ML solution of each subset. In contrast, gradient-based methods, such as the recently proposed non-monotonic maximum likelihood (NMML) and the more established preconditioned conjugate gradient (PCG), offer a globally convergent, yet equally fast, alternative to OSEM. Reported results showed that NMML provides faster convergence compared to OSEM; however, it has never been compared to other fast gradient-based methods, like PCG. Therefore, in this work we evaluate the performance of two gradient-based methods (NMML and PCG) and investigate their potential as an alternative to the fast and widely used OSEM. All algorithms were evaluated using 2D simulations, as well as a single [¹¹C]DASB clinical brain dataset. Results on simulated 2D data show that both PCG and NMML achieve orders of magnitude faster convergence to the ML solution compared to MLEM and exhibit comparable performance to OSEM. Equally fast performance is observed between OSEM and PCG for clinical 3D data, but NMML seems to perform poorly. However, with the addition of a preconditioner term to the gradient direction, the convergence behaviour of NMML can be substantially improved. Although PCG is a fast convergent algorithm, the use of a (bent) line search increases the complexity of the implementation, as well as the computational time involved per iteration. Contrary to previous reports, NMML offers no clear advantage over OSEM or PCG, for noisy PET data. Therefore, we conclude that there is little evidence to replace OSEM as the algorithm of choice for many applications, especially given that in practice convergence is often not desired for algorithms seeking ML estimates.
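For orientation, a minimal MLEM sketch on synthetic data (not the paper's implementation); OSEM applies the same multiplicative update cyclically over subsets of the data y:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, size=(200, 50))  # toy system matrix
x_true = rng.uniform(0.5, 2.0, size=50)
y = rng.poisson(A @ x_true)                # Poisson-distributed counts

x = np.ones(50)                            # nonnegative initial estimate
sens = A.T @ np.ones(200)                  # sensitivity image, A^T 1
for _ in range(100):
    ratio = y / np.maximum(A @ x, 1e-12)   # guard against division by zero
    x *= (A.T @ ratio) / sens              # multiplicative MLEM update
# x now approximates x_true in the maximum-likelihood sense
```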
De Geyter, Deborah; Cnudde, Danny; Van der Beken, Mieke; Autaers, Dorien; Piérard, Denis
2018-04-01
The purpose of this study was to test a newly developed decontamination and fluidization kit for processing respiratory specimens for the detection of mycobacteria: the Myco-TB procedure (developed by Copan (Brescia, Italy)). This technique was compared with the Zephiran decontamination method in use in our hospital. Respiratory specimens (n = 387: 130 endotracheal/bronchial aspirates, 172 bronchoalveolar lavages and 55 sputa) submitted to the University Hospital of Brussels between January 2016 and March 2017 were included. All samples were divided into two aliquots: one was subjected to the Myco-TB method and one to the Zephiran technique prior to culture. The sensitivities for culture for the Zephiran technique on solid media, the Myco-TB method on solid media and Myco-TB combined with the MGIT™ system were respectively 67%, 87% and 89%. The contamination rates were 22% with both the Zephiran and Myco-TB method on solid media and only 4% with the Myco-TB kit combined with the MGIT™ system. For direct microscopy, the sensitivities of the Zephiran method and the Myco-TB method were equal (40%) when the centrifugation time was 20 min. The Myco-TB decontamination method is easy and rapid to perform. It is more sensitive for culture as compared to the Zephiran method and gives lower contamination levels when combined with the MGIT™ technique. When increasing the centrifugation step to 20 min, the sensitivity of direct microscopy is equal to the Zephiran method.
Performance of Koyna dam based on static and dynamic analysis
NASA Astrophysics Data System (ADS)
Azizan, Nik Zainab Nik; Majid, Taksiah A.; Nazri, Fadzli Mohamed; Maity, Damodar
2017-10-01
This paper discusses the performance of Koyna dam based on static pushover analysis (SPO) and incremental dynamic analysis (IDA). The SPO in this study considered two types of lateral load, an inertial load and a hydrodynamic load. The structure was analysed until damage appeared on the structure body. The IDA curves were developed from seven ground motions with the following characteristics: (i) the distance from the epicenter is less than 15 km, (ii) the magnitude is equal to or greater than 5.5, and (iii) the PGA is equal to or greater than 0.15 g. All ground motions were converted to response spectra and scaled to the developed elastic response spectrum in order to match the characteristics of the ground motion to the soil type. The elastic response spectrum was developed for soil type B using Eurocode 8. The SPO and IDA methods make it possible to determine the limit states of the dam. The limit states proposed in this study are the yielding and ultimate states, which are identified from the crack patterns formed on the structural model. The maximum crest displacements from both methods are compared to define the limit states of the dam. The yielding-state displacement for Koyna dam is 23.84 mm, and the ultimate-state displacement is 44.91 mm. The results can be used as a guideline for monitoring Koyna dam under seismic loading, considering both static and dynamic analyses.
NASA Astrophysics Data System (ADS)
Myint, L. M. M.; Warisarn, C.
2017-05-01
Two-dimensional (2-D) interference is one of the prominent challenges in ultra-high-density recording systems such as bit patterned media recording (BPMR). The multi-track joint 2-D detection technique, with the help of array-head reading, can tackle this problem effectively by jointly processing the multiple readback signals from adjacent tracks. Moreover, it can robustly alleviate the impairments due to track mis-registration (TMR) and media noise. However, the computational complexity of such detectors is normally too high to implement in practice, even for a few tracks. Therefore, in this paper we focus on reducing the complexity of the multi-track joint 2-D Viterbi detector without paying a large penalty in performance. We propose a simplified multi-track joint 2-D Viterbi detector with a manageable complexity level for the BPMR multi-track multi-head (MTMH) system. In the proposed method, the complexity of the detector's trellis is reduced with the help of a joint-track equalization method which employs 1-D equalizers and a 2-D generalized partial response (GPR) target. Moreover, we also examine the performance of a full-fledged multi-track joint 2-D detector and conventional 2-D detection. The results show that the simplified detector can perform close to the full-fledged detector, especially when the system faces high media noise, at significantly lower complexity.
2011-01-01
Background: Men and women have different patterns of health. These differences between the sexes present a challenge to the field of public health. The question of why women experience more health problems than men despite their longevity has been discussed extensively, with both social and biological theories being offered as plausible explanations. In this article, we focus on how gender equality in a partnership might be associated with the respondents' perceptions of health. Methods: This study was a cross-sectional survey with 1400 respondents. We measured gender equality using two different measures: 1) a self-reported gender equality index, and 2) a self-perceived gender equality question. The aim of comparing the self-reported gender equality index with the self-perceived gender equality question was to reveal possible disagreements between the normative discourse on gender equality and daily practice in couple relationships. We then evaluated the association with health, measured as self-rated health (SRH). With SRH dichotomized into 'good' and 'poor', logistic regression was used to assess factors associated with the outcome. For the comparison between the self-reported gender equality index and self-perceived gender equality, kappa statistics were used. Results: The associations between gender equality and health found in this study vary with the type of gender equality measurement. Overall, we found little agreement between the self-reported gender equality index and self-perceived gender equality. Further, the patterns of agreement between self-perceived and self-reported gender equality were quite different for men and women: men perceived greater gender equality than they reported in the index, while women perceived less gender equality than they reported. The associations with health depended on the gender equality measurement used. Conclusions: Men and women perceive and report gender equality differently. This means that it is necessary not only to be conscious of the methods and measurements used to quantify men's and women's opinions of gender equality, but also to be aware of the implications for health outcomes. PMID:21871087
NASA Astrophysics Data System (ADS)
Haron, Adib; Mahdzair, Fazren; Luqman, Anas; Osman, Nazmie; Junid, Syed Abdul Mutalib Al
2018-03-01
One of the most significant constraints of the Von Neumann architecture is the limited bandwidth between memory and processor. The cost of moving data back and forth between memory and processor is considerably higher than that of the computation in the processor itself. This architecture significantly impacts Big Data and data-intensive applications, such as DNA analysis comparison, which spend most of their processing time moving data. Recently, the in-memory processing concept was proposed, based on the capability to perform logic operations on the physical memory structure using a crossbar topology and non-volatile resistive-switching memristor technology. This paper proposes a scheme to map a digital equality comparator circuit onto a memristive memory crossbar array. The 2-bit, 4-bit, 8-bit, 16-bit, 32-bit, and 64-bit equality comparator circuits are mapped onto the memristive memory crossbar array using material implication logic in sequential and parallel methods. The simulation results show that, for the 64-bit word size, the parallel mapping exhibits 2.8× better performance in total execution time than sequential mapping, but has a trade-off in terms of energy consumption and area utilization. Meanwhile, the total crossbar area can be reduced by 1.2× for sequential mapping and 1.5× for parallel mapping, in both cases by using the overlapping technique.
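Functionally, the mapped circuit is a bitwise XNOR followed by an AND reduction. A minimal software sketch of that Boolean function (the crossbar/IMPLY mapping itself is not modeled):

```python
def equal_n_bits(a, b, n):
    """N-bit equality comparator: per-bit XNOR, then AND-reduce."""
    mask = (1 << n) - 1
    xnor = ~(a ^ b) & mask   # 1 in every position where the bits match
    return xnor == mask      # equal only if all n positions match

assert equal_n_bits(0b1011, 0b1011, 4)
assert not equal_n_bits(0b1011, 0b1001, 4)
```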
Resistivity Measurement by Dual-Configuration Four-Probe Method
NASA Astrophysics Data System (ADS)
Yamashita, Masato; Nishii, Toshifumi; Mizutani, Hiroya
2003-02-01
The American Society for Testing and Materials (ASTM) committee has published a new technique for the measurement of resistivity, termed the dual-configuration four-probe method. The resistivity correction factor is a function only of the data obtained from two different electrical configurations of the four probes. Measurements of resistivity and sheet resistance were performed on graphite rectangular plates and indium tin oxide (ITO) films by the conventional four-probe method and the dual-configuration four-probe method. It is demonstrated that the dual-configuration four-probe method, using a probe array with equal separations of 10 mm, can be applied to specimens with thicknesses up to 3.7 mm if a relative resistivity difference of up to 5% is allowed.
Evaluation of Electroencephalography Source Localization Algorithms with Multiple Cortical Sources
Bradley, Allison; Yao, Jun; Dewald, Jules; Richter, Claus-Peter
2016-01-01
Background Source localization algorithms often show multiple active cortical areas as the source of electroencephalography (EEG). Yet, there is little data quantifying the accuracy of these results. In this paper, the performance of current source density source localization algorithms for the detection of multiple cortical sources of EEG data has been characterized. Methods EEG data were generated by simulating multiple cortical sources (2–4) with the same strength or two sources with relative strength ratios of 1:1 to 4:1, and adding noise. These data were used to reconstruct the cortical sources using current source density (CSD) algorithms: sLORETA, MNLS, and LORETA using a p-norm with p equal to 1, 1.5 and 2. Precision (percentage of the reconstructed activity corresponding to simulated activity) and Recall (percentage of the simulated sources reconstructed) of each of the CSD algorithms were calculated. Results While sLORETA has the best performance when only one source is present, when two or more sources are present LORETA with p equal to 1.5 performs better. When the relative strength of one of the sources is decreased, all algorithms have more difficulty reconstructing that source. However, LORETA 1.5 continues to outperform other algorithms. If only the strongest source is of interest sLORETA is recommended, while LORETA with p equal to 1.5 is recommended if two or more of the cortical sources are of interest. These results provide guidance for choosing a CSD algorithm to locate multiple cortical sources of EEG and for interpreting the results of these algorithms. PMID:26809000
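A minimal sketch of the two figures of merit, treating sources as sets of discrete locations (a simplification: the study scores reconstructed activity, not just site membership):

```python
def precision_recall(reconstructed, simulated):
    """Precision: share of the reconstruction that matches simulated
    sources. Recall: share of the simulated sources recovered."""
    reconstructed, simulated = set(reconstructed), set(simulated)
    hits = reconstructed & simulated
    precision = len(hits) / len(reconstructed)
    recall = len(hits) / len(simulated)
    return precision, recall

print(precision_recall(reconstructed=[3, 7, 12], simulated=[3, 7]))
# -> (0.666..., 1.0): one spurious site, both true sources found
```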
The covariance matrix for the solution vector of an equality-constrained least-squares problem
NASA Technical Reports Server (NTRS)
Lawson, C. L.
1976-01-01
Methods are given for computing the covariance matrix for the solution vector of an equality-constrained least squares problem. The methods are matched to the solution algorithms given in the book, 'Solving Least Squares Problems.'
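One standard route to both quantities is the null-space method; a minimal sketch (not necessarily the report's algorithm), assuming observation noise covariance σ²I and full column rank of the reduced problem:

```python
import numpy as np
from scipy.linalg import null_space, lstsq, pinv

def constrained_lsq_with_cov(A, b, C, d, sigma2=1.0):
    """Solve min ||Ax - b|| subject to Cx = d and return (x, Cov(x)).

    The constraint is deterministic, so uncertainty lives only in the
    null-space coordinates; Cov(x) is singular along the rows of C.
    """
    x_p = pinv(C) @ d                # particular solution of Cx = d
    N = null_space(C)                # basis for the feasible directions
    AN = A @ N
    z, *_ = lstsq(AN, b - A @ x_p)   # reduced, unconstrained least squares
    x = x_p + N @ z
    cov_z = sigma2 * np.linalg.inv(AN.T @ AN)
    return x, N @ cov_z @ N.T

A = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
b = np.array([1.0, 2.0, 2.0])
C, d = np.array([[1.0, 1.0]]), np.array([3.0])
x, cov = constrained_lsq_with_cov(A, b, C, d)
```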
Mirzadeh, S.; Lambrecht, R.M.
1985-07-01
The invention relates to a practical method for commercially producing radiopharmaceutical activities and, more particularly, to a method for the preparation of approximately equal amounts of radon-211 (²¹¹Rn) and xenon-125 (¹²⁵Xe), including a one-step chemical procedure following an irradiation procedure in which a selected target of thorium (²³²Th) or uranium (²³⁸U) is irradiated. The disclosed method is also effective for the preparation, in a one-step chemical procedure, of substantially equal amounts of high-purity ¹²³I and ²¹¹At. In one preferred arrangement of the invention, almost equal quantities of ²¹¹Rn and ¹²⁵Xe are prepared using a one-step chemical procedure in which a suitably irradiated fertile target material, such as thorium-232 or uranium-238, is treated to extract those radionuclides from it. In the same one-step chemical procedure, about equal quantities of ²¹¹At and ¹²³I are prepared and stored for subsequent use. In a modified arrangement of the method of the invention, only about equal amounts of ²¹¹Rn and ¹²⁵Xe are separated and stored, while the extraction and storage of the radionuclides ²¹¹At and ¹²³I are prevented.
Gibbs, B F; Alli, I; Mulligan, C N
1996-02-23
A method for the determination of aspartame (N-L-alpha-aspartyl-L-phenylalanine methyl ester) and its metabolites, applicable on a routine quality assurance basis, is described. Liquid samples (diet Coke, 7-Up, Pepsi, etc.) were injected directly onto a mini-cartridge reversed-phase column on a high-performance liquid chromatographic system, whereas solid samples (Equal, hot chocolate powder, pudding, etc.) were extracted with water. Optimising the chromatographic conditions resolved the components of interest within 12 min. The by-products were confirmed by mass spectrometry. Although the method was developed on a two-pump HPLC system fitted with a diode-array detector, it is straightforward and can be transferred to the simplest HPLC configuration. Using a single-piston pump (with damper), a fixed-wavelength detector and a recorder/integrator, the degradation products can be monitored as they form. The results obtained were in agreement with those of previously reported, more tedious methods. The method is simple, rapid and quantitative, and does not involve complex, hazardous or toxic chemistry.
Semiblind channel estimation for MIMO-OFDM systems
NASA Astrophysics Data System (ADS)
Chen, Yi-Sheng; Song, Jyu-Han
2012-12-01
This article proposes a semiblind channel estimation method for multiple-input multiple-output orthogonal frequency-division multiplexing systems based on circular precoding. Relying on the precoding scheme at the transmitters, the autocorrelation matrix of the received data induces a structure relating the outer product of the channel frequency response matrix and precoding coefficients. This structure makes it possible to extract information about channel product matrices, which can be used to form a Hermitian matrix whose positive eigenvalues and corresponding eigenvectors yield the channel impulse response matrix. This article also tests the resistance of the precoding design to finite-sample estimation errors, and explores the effects of the precoding scheme on channel equalization by performing pairwise error probability analysis. The proposed method is immune to channel zero locations, and is reasonably robust to channel order overestimation. The proposed method is applicable to the scenarios in which the number of transmitters exceeds that of the receivers. Simulation results demonstrate the performance of the proposed method and compare it with some existing methods.
Development of a breadboard design of a high-performance, high-reliability switching regulator
NASA Technical Reports Server (NTRS)
Lindena, S. J.
1975-01-01
A comparison of two potential conversion methods, the series inverter and the inductive energy transfer (IET) conversion technique, is presented. The investigations showed that a characteristic of the series inverter circuit (high equalizing current values in each half cycle) could not be accommodated with available components, and the investigations therefore continued with the IET circuit only. An IET circuit system was built with the use of computer-aided design in 2-, 4-, and 8-stage configurations, with the stages staggered by 180, 90, and 45 degrees, respectively. All stages were pulse-width modulated to regulate over an input voltage range from 200 to 400 volts dc at a regulated output voltage of 56 volts. The output power capability was 100 to 500 watts for the 2- and 8-stage configurations and 50 to 250 watts for the 4-stage configuration. Equal control of up to eight 45-degree-staggered stages was accomplished through the use of a digital-to-analog control circuit. Equal power sharing among all stages was achieved through a new technique using an inductively coupled balancing circuit. Conclusions are listed.
Low complexity adaptive equalizers for underwater acoustic communications
NASA Astrophysics Data System (ADS)
Soflaei, Masoumeh; Azmi, Paeiz
2014-08-01
Interference due to scattering from the surface and reflection from the bottom is one of the most important obstacles to reliable communication in shallow-water channels. One of the best ways suggested to solve this problem is to use adaptive equalizers. The convergence rate and misadjustment error of adaptive algorithms play important roles in adaptive equalizer performance. In this paper, the affine projection algorithm (APA), selective regressor APA (SR-APA), the family of selective partial update (SPU) algorithms, the family of set-membership (SM) algorithms, and the selective partial update selective regressor APA (SPU-SR-APA) are compared with conventional algorithms such as least mean squares (LMS) in underwater acoustic communications. We apply experimental data from the Strait of Hormuz to demonstrate the efficiency of the proposed methods over a shallow-water channel. We observe that the steady-state mean square error (MSE) values of the SR-APA, SPU-APA, SPU-normalized least mean squares (SPU-NLMS), SPU-SR-APA, SM-APA and SM-NLMS algorithms decrease in comparison with the LMS algorithm. These algorithms also have better convergence rates than LMS-type algorithms.
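As a baseline illustration, a minimal NLMS training-mode equalizer sketch (the APA/SPU/SM variants studied in the paper modify the regressor selection and update control, which is not reproduced):

```python
import numpy as np

def nlms_equalizer(x, d, L=11, mu=0.5, eps=1e-8):
    """NLMS adaptive equalizer in training mode.

    x: received signal, d: known training symbols (equal-length,
    time-aligned arrays), L: number of FIR taps, mu: step size.
    """
    w = np.zeros(L)                         # equalizer taps
    y, e = np.zeros(len(d)), np.zeros(len(d))
    for n in range(L, len(d)):
        u = x[n - L:n][::-1]                # most recent L samples
        y[n] = w @ u                        # equalizer output
        e[n] = d[n] - y[n]                  # error vs. training symbol
        w += mu * e[n] * u / (eps + u @ u)  # normalized LMS update
    return w, y, e
```

The steady-state value of `e**2` is the misadjustment the abstract refers to; the APA-family algorithms reduce it, or reach it faster, at higher per-update cost.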
Jing, Wei; Thompson, Joseph J; Jacobs, Wesley A; Salvati, Louis M
2015-01-01
A single-laboratory validation (SLV) has been performed for a method that simultaneously determines choline and carnitine in nutritional products by ultra performance LC (UPLC)/MS/MS. All 11 matrixes from the AOAC Stakeholder Panel on Infant Formula and Adult Nutritionals (SPIFAN) were tested. Depending on the sample preparation, either the added (free, with a water dilution and filtering) or total (after microwave digestion at 120°C in nitric acid and subsequent neutralization with ammonia) species can be detected. For nonmilk containing products, the total carnitine is almost always equal to the free carnitine. A substantial difference was noted between free and total choline in all products. All Standard Method Performance Requirements for carnitine and choline have been met. This report summarizes the material sent to the AOAC Expert Review Panel for SPIFAN nutrient methods for the review of this method, as well as some additional data from an internal validation. The method was granted AOAC First Action status for carnitine in 2014 (2014.04), but the choline data are also being presented. A comparison of choline results to those from other AOAC methods is given.
NASA Astrophysics Data System (ADS)
Hester, Michael Wayne
Nanotechnology offers significant opportunities for solving existing engineering problems as well as for breakthroughs in new fields of science and technology. In order to fully realize benefits from such initiatives, nanomanufacturing methods must be developed to integrate enabling constructs into the commercial mainstream. Even though significant advances have been made, widespread industrialization in many areas remains limited. Manufacturing methods, therefore, must continually be developed to bridge gaps between nanoscience discovery and commercialization. A promising top-down nanomanufacturing technology yet to receive full industrialization is equal channel angular pressing, a process transforming metallic materials into nanostructured or ultra-fine grained materials with significantly improved performance characteristics. To bridge the gap between process potential and actual manufacturing output, a prototype top-down nanomanufacturing system identified as indexing equal channel angular pressing (IX-ECAP) was developed. The unit was designed to capitalize on opportunities for transforming spent or scrap engineering elements into key engineering commodities. A manufacturing system was constructed to impose severe plastic deformation via simple shear in an equal channel angular pressing die on 1100 and 4043 aluminum welding rods. One-quarter fraction factorial split-plot experiments assessed the significance of five predictors on the response, microhardness, for the 4043 alloy. Predictor variables included temperature, number of passes, pressing speed, back pressure, and vibration. Main effects were studied employing a resolution III design. Multiple linear regression was used for model development. Initial studies were performed using continuous processing, followed by contingency designs involving discrete variable-length work pieces. IX-ECAP offered a viable solution in severe plastic deformation processing. Discrete variable-length work piece pressing proved very successful. With three passes through the system, the processed 4043 material experienced an 88.88% increase in microhardness, a 203.4% increase in converted yield strength, and a 98.5% reduction in theoretical final grain size, to 103 nanometers, using the Hall-Petch relation. The process factor, number of passes, was statistically significant at the 95% confidence level, whereas temperature was significant at the 90% confidence level. Limitations of system components precluded completion of studies involving continuous pressing. Proposed system redesigns, however, will enable mainstream commercialization of continuous-length work piece processing.
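A hedged worked example of the Hall-Petch step mentioned in the abstract; the constants below are illustrative placeholders for an aluminum alloy, not the study's values:

```python
# Hall-Petch relation: sigma_y = sigma_0 + k / sqrt(d), inverted here to
# back out a theoretical grain size d from a measured yield strength.
sigma_0 = 20.0   # MPa, assumed friction stress (placeholder)
k = 0.07         # MPa * m^0.5, assumed Hall-Petch coefficient (placeholder)
sigma_y = 240.0  # MPa, hypothetical post-ECAP yield strength

d = (k / (sigma_y - sigma_0)) ** 2             # grain size in meters
print(f"theoretical grain size: {d * 1e9:.0f} nm")
```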
Koike-Akino, Toshiaki; Duan, Chunjie; Parsons, Kieran; Kojima, Keisuke; Yoshida, Tsuyoshi; Sugihara, Takashi; Mizuochi, Takashi
2012-07-02
Fiber nonlinearity has become a major limiting factor in realizing ultra-high-speed optical communications. We propose a fractionally-spaced equalizer which exploits trained high-order statistics to deal with data-pattern-dependent nonlinear impairments in fiber-optic communications. Computer simulation reveals that the proposed 3-tap equalizer improves the Q-factor by more than 2 dB for long-haul transmissions over a 5,230 km distance at a 40 Gbps data rate. We also demonstrate that the joint use of digital backpropagation (DBP) and the proposed equalizer offers an additional 1-2 dB performance improvement due to the channel-shortening gain. Performance at high-speed transmissions of 100 Gbps and beyond is evaluated as well.
Desselle, Shane P; Vaughan, Melissa; Faria, Thomas
2002-01-01
To design a highly quantitative template for the evaluation of community pharmacy technicians' job performance that enables managers to provide sufficient feedback and fairly allocate organizational rewards. Two rounds of interviews with two convenience samples of community pharmacists and pharmacy technicians were conducted. The interview in phase 1 was qualitative, and responses were used to design the second interview protocol. During the phase 2 interviews, a new group of respondents ranked technicians' job responsibilities, identified through the initial interviewees' responses, using scales the researchers had designed using an interval-level scaling technique called equal-appearing intervals. Chain and independent pharmacies. Phase 1: 20 pharmacists and 20 technicians from chain and independent pharmacies; phase 2: 20 pharmacists and 9 technicians from chain and independent pharmacies. Ratings of the importance of technician practice functions and corresponding responsibilities. Weights were calculated for each practice function. A weighted list of practice functions was developed, and this may serve as a performance evaluation template. Customer service-related activities were judged by pharmacists and technicians alike to be the most important technician functions. Many pharmacies either lack formal performance appraisal systems or fail to implement them properly. Technicians may desire more consistent feedback from pharmacists and value information that may lead to organizational rewards. Using a weighted, behaviorally anchored performance appraisal system may help pharmacists and pharmacy managers meet these demands.
Direct handling of equality constraints in multilevel optimization
NASA Technical Reports Server (NTRS)
Renaud, John E.; Gabriele, Gary A.
1990-01-01
In recent years there have been several hierarchic multilevel optimization algorithms proposed and implemented in design studies. Equality constraints are often imposed between levels in these multilevel optimizations to maintain system and subsystem variable continuity. Equality constraints of this nature will be referred to as coupling equality constraints. In many implementation studies these coupling equality constraints have been handled indirectly. This indirect handling has been accomplished by using the coupling equality constraints' explicit functional relations to eliminate design variables (generally at the subsystem level), with the resulting optimization taking place in a reduced design space. In one multilevel optimization study where the coupling equality constraints were handled directly, the researchers encountered numerical difficulties which prevented their multilevel optimization from reaching the same minimum found in conventional single-level solutions. The researchers did not explain the exact nature of the numerical difficulties other than to associate them with the direct handling of the coupling equality constraints. In the present study, the coupling equality constraints are handled directly by employing the Generalized Reduced Gradient (GRG) method as the optimizer within a multilevel linear decomposition scheme based on the Sobieski hierarchic algorithm. Two engineering design examples are solved using this approach. The results show that the direct handling of coupling equality constraints in a multilevel optimization does not introduce any problems when the GRG method is employed as the internal optimizer. The optimums achieved are comparable to those achieved in single-level solutions and in multilevel studies where the equality constraints have been handled indirectly.
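SciPy offers no GRG implementation, so the sketch below uses SLSQP (a different method) merely to illustrate handling a coupling equality constraint directly rather than eliminating a variable; the functions and numbers are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def objective(z):
    # toy system-level objective over a system variable x and a
    # subsystem variable y
    x, y = z
    return (x - 2.0) ** 2 + (y - 1.0) ** 2

# coupling equality constraint linking the two levels: y - x**2 = 0,
# kept in the problem explicitly instead of being used to eliminate y
cons = [{"type": "eq", "fun": lambda z: z[1] - z[0] ** 2}]

res = minimize(objective, x0=np.array([0.5, 0.5]),
               method="SLSQP", constraints=cons)
print(res.x, res.fun)  # the optimizer satisfies the coupling constraint directly
```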
Are Disadvantaged Students Given Equal Opportunities to Learn Mathematics? PISA in Focus. No. 63
ERIC Educational Resources Information Center
OECD Publishing, 2016
2016-01-01
Socio-economically advantaged and disadvantaged students are not equally exposed to mathematics problems and concepts at school. Exposure to mathematics at school has an impact on performance, and disadvantaged students' relative lack of familiarity with mathematics partly explains their lower performance. Widening access to mathematics content…
Ferry, Barbara; Gifu, Elena-Patricia; Sandu, Ioana; Denoroy, Luc; Parrot, Sandrine
2014-03-01
Electrochemical methods are very often used to detect catecholamine and indolamine neurotransmitters separated by conventional reverse-phase high performance liquid chromatography (HPLC). The present paper presents the development of a chromatographic method to detect monoamines present in low-volume brain dialysis samples using a capillary column filled with sub-2 μm particles. Several parameters (repeatability, linearity, accuracy, limit of detection) for this new ultrahigh performance liquid chromatography (UHPLC) method with electrochemical detection were examined after optimization of the analytical conditions. Noradrenaline, adrenaline, serotonin, dopamine and its metabolite 3-methoxytyramine were separated in 1 μL of injected sample volume; they were detected above concentrations of 0.5-1 nmol/L, with 2.1-9.5% accuracy and intra-assay repeatability equal to or less than 6%. The final method was applied to very low volume dialysates from rat brain containing monoamine traces. The study demonstrates that capillary UHPLC with electrochemical detection is suitable for monitoring dialysate monoamines collected at high sampling rate. Copyright © 2014 Elsevier B.V. All rights reserved.
All Plasma Products Are Not Created Equal: Characterizing Differences Between Plasma Products
2015-06-01
…Chandler WL. Microparticle counts in platelet-rich and platelet-free plasma, effect of centrifugation and sample-processing protocols… used throughout the article for this product. Laboratory Methods: Platelet-Poor Plasma Preparation. Platelet-poor plasma (PPP) was prepared by centrifugation… platelets, respectively. Flow cytometry was performed as described by Matijevic et al. Briefly, 10 μL of each plasma product was incubated with…
ERIC Educational Resources Information Center
Jensen, Arthur R.
1975-01-01
Some of the key problems of educational equality -- equality of opportunities and inequality of performance; individual differences vs. group differences, coping with group inequality -- are made explicit. (Author/KM)
Kalman Filtering Approach to Blind Equalization
1993-12-01
Naval Postgraduate School, Monterey, California. Thesis by Mehmet Kutlu. …which introduces errors due to intersymbol interference. The solution to this problem is provided by equalizers which use a training sequence to adapt to…
Feedforward Equalizers for MDM-WDM in Multimode Fiber Interconnects
NASA Astrophysics Data System (ADS)
Masunda, Tendai; Amphawan, Angela
2018-04-01
In this paper, we present new tap configurations of a feedforward equalizer to mitigate mode coupling in a 60-Gbps 18-channel mode-wavelength division multiplexing system in a 2.5-km-long multimode fiber. The performance of the equalization is measured through analyses on eye diagrams, power coupling coefficients and bit-error rates.
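As a generic illustration of tap configuration (not the authors' design), a zero-forcing view: for an assumed discrete channel with pre- and post-cursor ISI, the tap count and decision delay determine how much residual ISI remains. The channel values below are hypothetical.

```python
import numpy as np
from scipy.linalg import toeplitz

h = np.array([0.12, 1.0, 0.35, 0.1])      # assumed channel impulse response
n_taps, delay = 5, 3                       # tap count and decision delay

# Convolution matrix H: rows index output samples, columns index taps
col = np.r_[h, np.zeros(n_taps - 1)]
row = np.r_[h[0], np.zeros(n_taps - 1)]
H = toeplitz(col, row)                     # shape (len(h)+n_taps-1, n_taps)

target = np.zeros(H.shape[0])
target[delay] = 1.0                        # want a single unit cursor
w, *_ = np.linalg.lstsq(H, target, rcond=None)
print("taps:", np.round(w, 3))
print("residual ISI:", np.round(H @ w - target, 3))
```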
Implementation of Insight Responsibilities in Process Engineering
NASA Technical Reports Server (NTRS)
Osborne, Deborah M.
1997-01-01
This report describes an approach for evaluating flight readiness (COFR) and contractor performance evaluation (award fee) as part of the insight role of NASA Process Engineering at Kennedy Space Center. Several evaluation methods are presented, including systems engineering evaluations and use of systems performance data. The transition from an oversight function to the insight function is described. The types of analytical tools appropriate for achieving the flight readiness and contractor performance evaluation goals are described and examples are provided. Special emphasis is placed upon short and small run statistical quality control techniques. Training requirements for system engineers are delineated. The approach described herein would be equally appropriate in other directorates at Kennedy Space Center.
Numerical study of a matrix-free trust-region SQP method for equality constrained optimization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heinkenschloss, Matthias; Ridzal, Denis; Aguilo, Miguel Antonio
2011-12-01
This is a companion publication to the paper 'A Matrix-Free Trust-Region SQP Algorithm for Equality Constrained Optimization' [11]. In [11], we develop and analyze a trust-region sequential quadratic programming (SQP) method that supports the matrix-free (iterative, inexact) solution of linear systems. In this report, we document the numerical behavior of the algorithm applied to a variety of equality constrained optimization problems, with constraints given by partial differential equations (PDEs).
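For reference, the problem class and the trust-region SQP subproblem take the standard form (notation assumed here, not copied from [11]):

$$\min_x f(x) \quad \text{subject to} \quad c(x) = 0,$$

and at an iterate $x_k$ the step $s$ solves

$$\min_s\; \nabla f(x_k)^T s + \tfrac12 s^T H_k s \quad \text{subject to} \quad c(x_k) + J(x_k)\, s = 0,\;\; \|s\| \le \Delta_k,$$

where matrix-free means the constraint Jacobian $J(x_k)$ and Hessian approximation $H_k$ are available only as operators, so the linearized-constraint systems are solved iteratively and inexactly.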
Adaptive frequency-domain equalization in digital coherent optical receivers.
Faruk, Md Saifuddin; Kikuchi, Kazuro
2011-06-20
We propose a novel frequency-domain adaptive equalizer for digital coherent optical receivers, which can reduce the computational complexity of the conventional time-domain adaptive equalizer based on finite-impulse-response (FIR) filters. The proposed equalizer can operate on the input sequence sampled by free-running analog-to-digital converters (ADCs) at the rate of two samples per symbol; therefore, the arbitrary initial sampling phase of the ADCs can be adjusted so that the best symbol-spaced sequence is produced. The equalizer can also be configured in the butterfly structure, which enables demultiplexing of polarization tributaries apart from equalization of linear transmission impairments. The performance of the proposed equalization scheme is verified by 40-Gbit/s dual-polarization quadrature phase-shift keying (QPSK) transmission experiments.
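A minimal single-polarization sketch of a block frequency-domain adaptive equalizer with a normalized-LMS-style per-bin update; the butterfly structure and two-samples-per-symbol operation are omitted, and the update form is generic rather than the paper's.

```python
import numpy as np

def fde_block(x_block, d_block, W, mu=0.1):
    """One block update: each FFT bin has a one-tap complex weight W[k]."""
    X = np.fft.fft(x_block)
    Y = np.fft.ifft(W * X)                 # equalized time-domain block
    E = np.fft.fft(d_block - Y)            # error spectrum vs. training block
    W = W + mu * np.conj(X) * E / (np.abs(X) ** 2 + 1e-12)  # normalized LMS
    return Y, W

N = 64
W = np.ones(N, dtype=complex)
rng = np.random.default_rng(0)
d = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2), N)  # QPSK
x = np.fft.ifft(np.fft.fft(d) * np.exp(1j * 0.3))  # toy phase-rotating channel
y, W = fde_block(x, d, W)
```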
EQUALS Investigations: Remote Rulers.
ERIC Educational Resources Information Center
Mayfield, Karen; Whitlow, Robert
EQUALS is a teacher education program that helps elementary and secondary educators acquire methods and materials to attract minority and female students to mathematics. It supports a problem-solving approach to mathematics which has students working in groups, uses active assessment methods, and incorporates a broad mathematics curriculum…
Early Understanding of Equality
ERIC Educational Resources Information Center
Leavy, Aisling; Hourigan, Mairéad; McMahon, Áine
2013-01-01
Quite a bit of the arithmetic in elementary school contains elements of algebraic reasoning. After researching and testing a number of instructional strategies with Irish third graders, these authors found effective methods for cultivating a relational concept of equality in third-grade students. Understanding equality is fundamental to algebraic…
Equalizer system and method for series connected energy storing devices
Rouillard, Jean; Comte, Christophe; Hagen, Ronald A.; Knudson, Orlin B.; Morin, Andre; Ross, Guy
1999-01-01
An apparatus and method for regulating the charge voltage of a number of electrochemical cells connected in series is disclosed. Equalization circuitry is provided to control the amount of charge current supplied to individual electrochemical cells included within the series string of electrochemical cells without interrupting the flow of charge current through the series string. The equalization circuitry balances the potential of each of the electrochemical cells to within a pre-determined voltage setpoint tolerance during charging, and, if necessary, prior to initiating charging. Equalization of cell potentials may be effected toward the end of a charge cycle or throughout the charge cycle. Overcharge protection is also provided for each of the electrochemical cells coupled to the series connection. During a discharge mode of operation in accordance with one embodiment, the equalization circuitry is substantially non-conductive with respect to the flow of discharge current from the series string of electrochemical cells. In accordance with another embodiment, equalization of the series string of cells is effected during a discharge cycle.
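A minimal sketch of a shunt-based balancing policy consistent with the description above; the threshold and logic are illustrative, not taken from the patent.

```python
# During charge, any cell whose voltage exceeds the pack minimum by more
# than a setpoint tolerance has its bypass shunt enabled, diverting part of
# the charge current without interrupting the series string.
TOL = 0.015  # volts, hypothetical setpoint tolerance

def update_shunts(cell_voltages):
    v_min = min(cell_voltages)
    return [v - v_min > TOL for v in cell_voltages]

print(update_shunts([3.601, 3.590, 3.612, 3.595]))
# -> [False, False, True, False]: only the high cell is bypassed
```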
Effect of finite sample size on feature selection and classification: a simulation study.
Way, Ted W; Sahiner, Berkman; Hadjiiski, Lubomir M; Chan, Heang-Ping
2010-02-01
The small number of samples available for training and testing is often the limiting factor in finding the most effective features and designing an optimal computer-aided diagnosis (CAD) system. Training on a limited set of samples introduces bias and variance in the performance of a CAD system relative to that trained with an infinite sample size. In this work, the authors conducted a simulation study to evaluate the performances of various combinations of classifiers and feature selection techniques and their dependence on the class distribution, dimensionality, and the training sample size. The understanding of these relationships will facilitate development of effective CAD systems under the constraint of limited available samples. Three feature selection techniques, the stepwise feature selection (SFS), sequential floating forward search (SFFS), and principal component analysis (PCA), and two commonly used classifiers, Fisher's linear discriminant analysis (LDA) and support vector machine (SVM), were investigated. Samples were drawn from multidimensional feature spaces of multivariate Gaussian distributions with equal or unequal covariance matrices and unequal means, and with equal covariance matrices and unequal means estimated from a clinical data set. Classifier performance was quantified by the area under the receiver operating characteristic curve, Az. The mean Az values obtained by resubstitution and hold-out methods were evaluated for training sample sizes ranging from 15 to 100 per class. The number of simulated features available for selection was chosen to be 50, 100, and 200. It was found that the relative performance of the different combinations of classifier and feature selection method depends on the feature space distributions, the dimensionality, and the available training sample sizes. The LDA and SVM with radial kernel performed similarly for most of the conditions evaluated in this study, although the SVM classifier showed a slightly higher hold-out performance than LDA for some conditions and vice versa for other conditions. PCA was comparable to or better than SFS and SFFS for LDA at small sample sizes, but inferior for SVM with polynomial kernel. For the class distributions simulated from clinical data, PCA did not show advantages over the other two feature selection methods. Under this condition, the SVM with radial kernel performed better than the LDA when few training samples were available, while LDA performed better when a large number of training samples were available. None of the investigated feature selection-classifier combinations provided consistently superior performance under the studied conditions for different sample sizes and feature space distributions. In general, the SFFS method was comparable to the SFS method, while PCA may have an advantage for Gaussian feature spaces with unequal covariance matrices. The performance of the SVM with radial kernel was better than, or comparable to, that of the SVM with polynomial kernel under most conditions studied.
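A toy version of such a simulation, assuming hypothetical dimensionality and sample sizes, comparing LDA and an RBF-kernel SVM by hold-out AUC:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

# Two Gaussian classes with unequal means, small training set, large
# independent test set (all sizes illustrative).
rng = np.random.default_rng(1)
dim, n_train, n_test = 50, 30, 2000
mu = np.r_[0.4, np.zeros(dim - 1)]         # class-mean separation on one axis

def sample(n):
    X = np.r_[rng.normal(0, 1, (n, dim)), rng.normal(mu, 1, (n, dim))]
    y = np.r_[np.zeros(n), np.ones(n)]
    return X, y

Xtr, ytr = sample(n_train)
Xte, yte = sample(n_test)
for clf in (LinearDiscriminantAnalysis(), SVC(kernel="rbf")):
    clf.fit(Xtr, ytr)
    s = clf.decision_function(Xte)         # hold-out scores
    print(type(clf).__name__, round(roc_auc_score(yte, s), 3))
```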
NASA Astrophysics Data System (ADS)
Huang, J. D.; Liu, J. J.; Chen, Q. X.; Mao, N.
2017-06-01
Against a background of heat-treatment operations in mould manufacturing, a two-stage flow-shop scheduling problem is described for minimizing makespan with parallel batch-processing machines and re-entrant jobs. The weights and release dates of jobs are non-identical, but job processing times are equal. A mixed-integer linear programming model is developed and tested with small-scale scenarios. Given that the problem is NP-hard, three heuristic construction methods with polynomial complexity are proposed. The worst case of the new constructive heuristic is analysed in detail. A method for computing lower bounds is proposed to test heuristic performance. Heuristic efficiency is tested with sets of scenarios. Compared with the two improved heuristics, the performance of the new constructive heuristic is superior.
Precision of proportion estimation with binary compressed Raman spectrum.
Réfrégier, Philippe; Scotté, Camille; de Aguiar, Hilton B; Rigneault, Hervé; Galland, Frédéric
2018-01-01
The precision of proportion estimation with binary filtering of a Raman spectrum mixture is analyzed when the number of binary filters is equal to the number of present species and when the measurements are corrupted with Poisson photon noise. It is shown that the Cramer-Rao bound provides a useful methodology to analyze the performance of such an approach, in particular when the binary filters are orthogonal. It is demonstrated that a simple linear mean square error estimation method is efficient (i.e., has a variance equal to the Cramer-Rao bound). Evolutions of the Cramer-Rao bound are analyzed when the measuring times are optimized or when the considered proportion for binary filter synthesis is not optimized. Two strategies for the appropriate choice of this considered proportion are also analyzed for the binary filter synthesis.
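For reference, the bound under Poisson photon noise takes the standard form (notation assumed): if the binary-filtered measurements are counts $n_i$ with means $\lambda_i(p)$ depending on the proportion $p$, then

$$I(p) = \sum_i \frac{1}{\lambda_i(p)} \left( \frac{\partial \lambda_i(p)}{\partial p} \right)^{\!2}, \qquad \operatorname{Var}(\hat{p}) \ge \frac{1}{I(p)},$$

and an estimator is efficient, in the sense used above, when its variance attains this Cramer-Rao bound.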
NASA Astrophysics Data System (ADS)
Hofmann, Ingo
2013-04-01
Using laser-accelerated protons or ions for various applications (for example, in particle therapy or short-pulse radiographic diagnostics) requires an effective method of focusing and energy selection. We derive an analytical scaling for the performance of a solenoid compared with a doublet/triplet as a function of energy, which is confirmed by TRACEWIN simulations. Generally speaking, the two approaches are equivalent in focusing capability if parameters are such that the solenoid length approximately equals its diameter. The scaling also shows that this is usually not the case above a few MeV; consequently, a solenoid needs to be pulsed or superconducting, whereas the quadrupoles can remain conventional. It is also notable that the transmission of the triplet is found to be only 25% lower than that of the equivalent solenoid. Both systems are equally suitable for energy selection based on their chromatic effect, as is shown using an initial distribution following the RPA simulation model by Yan et al. [Phys. Rev. Lett. 103, 135001 (2009)].
Comparing Performances of Multiple Comparison Methods in Commonly Used 2 × C Contingency Tables.
Cangur, Sengul; Ankarali, Handan; Pasin, Ozge
2016-12-01
This study briefly reviews multiple comparison methods for contingency tables, including Bonferroni, Holm-Bonferroni, Hochberg, Hommel, Marascuilo, Tukey, Benjamini-Hochberg, and Gavrilov-Benjamini-Sarkar, using data obtained from a medical research study, and examines their performance in a simulation study constructed with a total of 36 scenarios for a 2 × 4 contingency table. The simulations show that when the sample size is more than 100, the methods that preserve the nominal alpha level are Gavrilov-Benjamini-Sarkar, Holm-Bonferroni, and Bonferroni. The Marascuilo method was found to be more conservative than Bonferroni. The Type I error rate for the Hommel method was around 2% in all scenarios. Moreover, when the proportions of three of the populations are equal and the proportion of the fourth population lies ±3 standard deviations from the others, the power of the unadjusted all-pairwise comparison approach is only slightly higher than that obtained by Gavrilov-Benjamini-Sarkar, Holm-Bonferroni, and Bonferroni. Consequently, the Gavrilov-Benjamini-Sarkar and Holm-Bonferroni methods have the best performance according to the simulation. The Hommel and Marascuilo methods are not recommended because they have medium or lower performance. In addition, we have written a Minitab macro for multiple comparisons for use in scientific research.
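For illustration, the adjustment step for a set of pairwise p-values can be run with statsmodels; the p-values below are hypothetical.

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

# Adjusting the six pairwise p-values from a 2 x 4 table (toy values)
# with three of the studied procedures.
pvals = np.array([0.001, 0.012, 0.034, 0.047, 0.210, 0.650])
for method in ("bonferroni", "holm", "fdr_bh"):  # fdr_bh = Benjamini-Hochberg
    reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method=method)
    print(method, np.round(p_adj, 3), reject)
```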
A joint equalization algorithm in high speed communication systems
NASA Astrophysics Data System (ADS)
Hao, Xin; Lin, Changxing; Wang, Zhaohui; Cheng, Binbin; Deng, Xianjin
2018-02-01
This paper presents a joint equalization algorithm for high speed communication systems. The algorithm combines the strengths of traditional equalization algorithms by applying both pre-equalization and post-equalization. The pre-equalization stage uses the CMA algorithm, which is not sensitive to frequency offset; it is placed before the carrier recovery loop so that the loop performs better and most of the frequency offset is overcome. The post-equalization stage uses the MMA algorithm to remove the residual frequency offset. The paper first analyzes the advantages and disadvantages of several equalization algorithms and then simulates the proposed joint equalization algorithm on the Matlab platform. The simulation results show constellation diagrams and the bit error rate curve, both of which indicate that the proposed joint equalization algorithm is better than the traditional algorithms. The residual frequency offset is directly visible in the constellation diagrams. When the SNR is 14 dB, the bit error rate of the simulated system with the proposed joint equalization algorithm is 103 times better than with the CMA algorithm, 77 times better than with MMA equalization, and 9 times better than with CMA-MMA equalization.
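A minimal sketch of the CMA stage (a generic blind constant-modulus update; tap count, step size, and target modulus are illustrative):

```python
import numpy as np

def cma_equalize(x, n_taps=11, mu=1e-3, R=1.0):
    """Generic constant modulus algorithm (CMA) equalizer sketch.

    Blindly adapts FIR taps so the output modulus approaches sqrt(R);
    insensitive to carrier frequency offset since only |y| enters the cost.
    """
    w = np.zeros(n_taps, dtype=complex)
    w[n_taps // 2] = 1.0                    # center-spike initialization
    y = np.zeros(len(x) - n_taps, dtype=complex)
    for n in range(len(y)):
        u = x[n:n + n_taps]
        y[n] = w.conj() @ u                 # y = w^H u
        e = y[n] * (np.abs(y[n]) ** 2 - R)  # CMA error term
        w -= mu * np.conj(e) * u            # stochastic gradient step
    return y, w
```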
Code of Federal Regulations, 2010 CFR
2010-07-01
... stationary RICE with a site rating of greater than or equal to 250 and less than or equal to 500 brake HP... demonstrations if I own or operate a 4SLB SI stationary RICE with a site rating of greater than or equal to 250... operate a new or reconstructed 4SLB stationary RICE with a site rating of greater than or equal to 250 and...
EQUALS Investigations: Telling Someone Where To Go.
ERIC Educational Resources Information Center
Mayfield, Karen; Whitlow, Robert
EQUALS is a teacher education program that helps elementary and secondary educators acquire methods and materials to attract minority and female students to mathematics. It supports a problem-solving approach to mathematics which has students working in groups, uses active assessment methods, and incorporates a broad mathematics curriculum…
Jones, Andrew M; Lomas, James; Moore, Peter T; Rice, Nigel
2016-10-01
We conduct a quasi-Monte-Carlo comparison of the recent developments in parametric and semiparametric regression methods for healthcare costs, both against each other and against standard practice. The population of English National Health Service hospital in-patient episodes for the financial year 2007-2008 (summed for each patient) is randomly divided into two equally sized subpopulations to form an estimation set and a validation set. Evaluating out-of-sample using the validation set, a conditional density approximation estimator shows considerable promise in forecasting conditional means, performing best for accuracy of forecasting and among the best four for bias and goodness of fit. The best performing model for bias is linear regression with square-root-transformed dependent variables, whereas a generalized linear model with square-root link function and Poisson distribution performs best in terms of goodness of fit. Commonly used models utilizing a log-link are shown to perform badly relative to other models considered in our comparison.
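A sketch of the two specifications singled out above, run on synthetic skewed cost data (not the NHS data); note that the square-root link object is named links.Sqrt in recent statsmodels releases (links.sqrt in older ones).

```python
import numpy as np
import statsmodels.api as sm

# Toy skewed cost data (illustrative, not NHS episodes)
rng = np.random.default_rng(0)
n = 500
X = sm.add_constant(rng.normal(size=(n, 2)))
cost = np.exp(1.0 + X[:, 1] + 0.5 * rng.normal(size=n)) * 100.0

# (1) best-for-bias model: OLS on square-root-transformed costs
ols_sqrt = sm.OLS(np.sqrt(cost), X).fit()

# (2) best goodness-of-fit model: Poisson-family GLM with square-root link
glm_sqrt = sm.GLM(cost, X, family=sm.families.Poisson(
    link=sm.families.links.Sqrt())).fit()
print(ols_sqrt.params, glm_sqrt.params)
```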
Wan Ismail, W Z; Sim, K S; Tso, C P; Ting, H Y
2011-01-01
To reduce undesirable charging effects in scanning electron microscope images, Rayleigh contrast stretching is developed and employed. First, re-scaling is performed on the input image histograms with Rayleigh algorithm. Then, contrast stretching or contrast adjustment is implemented to improve the images while reducing the contrast charging artifacts. This technique has been compared to some existing histogram equalization (HE) extension techniques: recursive sub-image HE, contrast stretching dynamic HE, multipeak HE and recursive mean separate HE. Other post processing methods, such as wavelet approach, spatial filtering, and exponential contrast stretching, are compared as well. Overall, the proposed method produces better image compensation in reducing charging artifacts. Copyright © 2011 Wiley Periodicals, Inc.
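A minimal sketch of the core idea, re-scaling an image histogram toward a Rayleigh shape and then stretching contrast; the transform and the scale parameter below are illustrative, not the published algorithm.

```python
import numpy as np

def rayleigh_rescale(img, sigma=0.35):
    """Map the image's empirical CDF onto a Rayleigh distribution
    (histogram specification), then linearly stretch to full range.
    `sigma` is an assumed Rayleigh scale parameter, not a published value.
    """
    flat = img.ravel().astype(float)
    ranks = flat.argsort().argsort() / (flat.size - 1)   # empirical CDF in [0,1]
    u = np.clip(ranks, 0.0, 1.0 - 1e-6)
    y = sigma * np.sqrt(-2.0 * np.log(1.0 - u))          # Rayleigh inverse CDF
    y = (y - y.min()) / (y.max() - y.min())              # contrast stretching
    return y.reshape(img.shape)

out = rayleigh_rescale(np.random.default_rng(0).integers(0, 256, (64, 64)))
```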
A Performance Weighted Collaborative Filtering algorithm for personalized radiology education.
Lin, Hongli; Yang, Xuedong; Wang, Weisheng; Luo, Jiawei
2014-10-01
Devising an accurate prediction algorithm that can predict the difficulty level of cases for individuals and then selects suitable cases for them is essential to the development of a personalized training system. In this paper, we propose a novel approach, called Performance Weighted Collaborative Filtering (PWCF), to predict the difficulty level of each case for individuals. The main idea of PWCF is to assign an optimal weight to each rating used for predicting the difficulty level of a target case for a trainee, rather than using an equal weight for all ratings as in traditional collaborative filtering methods. The assigned weight is a function of the performance level of the trainee at which the rating was made. The PWCF method and the traditional method are compared using two datasets. The experimental data are then evaluated by means of the MAE metric. Our experimental results show that PWCF outperforms the traditional methods by 8.12% and 17.05%, respectively, over the two datasets, in terms of prediction precision. This suggests that PWCF is a viable method for the development of personalized training systems in radiology education. Copyright © 2014. Published by Elsevier Inc.
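A minimal sketch of the PWCF prediction step; the weighting function, similarities, and ratings below are illustrative, since the paper's exact weight function is not reproduced here.

```python
import numpy as np

def pwcf_predict(ratings, sims, perf_levels):
    """Weighted prediction: each neighbor's rating is weighted by both its
    similarity and the performance level at which the rating was made."""
    w = np.asarray(perf_levels, dtype=float)   # performance-based weights
    s = np.asarray(sims) * w                   # combine with similarity
    return np.dot(s, ratings) / (np.sum(np.abs(s)) + 1e-12)

# neighbors' difficulty ratings, similarities, performance levels (toy values)
print(pwcf_predict([3, 4, 2], [0.9, 0.7, 0.4], [0.8, 0.5, 0.9]))
```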
76 FR 63616 - SES Performance Review Board; Appointment of Members
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-13
... FURTHER INFORMATION CONTACT: Lisa M. Williams, Chief Human Capital Officer, U.S. Equal Employment..., Associate Commissioner for the Office of Civil Rights and Equal Opportunities, Social Security...
77 FR 63313 - SES Performance Review Board; Appointment of Members
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-16
... FURTHER INFORMATION CONTACT: Lisa M. Williams, Chief Human Capital Officer, U.S. Equal Employment.... Berry, Director, New York District Office, Equal Employment Opportunity Commission; Ms. Katherine E...
24 CFR 570.904 - Equal opportunity and fair housing review criteria.
Code of Federal Regulations, 2014 CFR
2014-04-01
... person within the meaning of section 109. The extent to which persons of a particular race, gender, or... 24 Housing and Urban Development 3 2014-04-01 2013-04-01 true Equal opportunity and fair housing... Performance Reviews § 570.904 Equal opportunity and fair housing review criteria. (a) General. (1) Where the...
24 CFR 570.904 - Equal opportunity and fair housing review criteria.
Code of Federal Regulations, 2012 CFR
2012-04-01
... person within the meaning of section 109. The extent to which persons of a particular race, gender, or... 24 Housing and Urban Development 3 2012-04-01 2012-04-01 false Equal opportunity and fair housing... Performance Reviews § 570.904 Equal opportunity and fair housing review criteria. (a) General. (1) Where the...
24 CFR 570.904 - Equal opportunity and fair housing review criteria.
Code of Federal Regulations, 2013 CFR
2013-04-01
... person within the meaning of section 109. The extent to which persons of a particular race, gender, or... 24 Housing and Urban Development 3 2013-04-01 2013-04-01 false Equal opportunity and fair housing... Performance Reviews § 570.904 Equal opportunity and fair housing review criteria. (a) General. (1) Where the...
29 CFR 1620.13 - “Equal Work”-What it means.
Code of Federal Regulations, 2010 CFR
2010-07-01
... sex in the wages paid for “equal work on jobs the performance of which requires equal skill, effort... practices indicate a pay practice of discrimination based on sex. It should also be noted that it is an... “female” unless sex is a bona fide occupational qualification for the job. (2) The EPA prohibits...
29 CFR 1620.13 - “Equal Work”-What it means.
Code of Federal Regulations, 2011 CFR
2011-07-01
... sex in the wages paid for “equal work on jobs the performance of which requires equal skill, effort... practices indicate a pay practice of discrimination based on sex. It should also be noted that it is an... “female” unless sex is a bona fide occupational qualification for the job. (2) The EPA prohibits...
Quality and Equality: The Mask of Discursive Conflation in Education Policy Texts
ERIC Educational Resources Information Center
Gillies, Donald
2008-01-01
Two key themes of recent UK education policy texts have been a focus on "quality" in public sector performance, and on "equality" in the form of New Labour's stated commitment to equality of opportunity as a key policy objective. This twin approach can be seen at its most obvious in the concept of "excellence for…
2009-01-01
Objectives. We sought to address denominator neglect (i.e. the focus on the number of treated and nontreated patients who died, without sufficiently considering the overall numbers of patients) in estimates of treatment risk reduction, and analyzed whether icon arrays aid comprehension. Methods. We performed a survey of probabilistic, national samples in the United States and Germany in July and August of 2008. Participants received scenarios involving equally effective treatments but differing in the overall number of treated and nontreated patients. In some conditions, the number who received a treatment equaled the number who did not; in others the number was smaller or larger. Some participants received icon arrays. Results. Participants—particularly those with low numeracy skills—showed denominator neglect in treatment risk reduction perceptions. Icon arrays were an effective method for eliminating denominator neglect. We found cross-cultural differences that are important in light of the countries' different medical systems. Conclusions. Problems understanding numerical information often reside not in the mind but in the problem's representation. These findings suggest suitable ways to communicate quantitative medical data. PMID:19833983
Detroit's Fight for Equal Educational Opportunity.
ERIC Educational Resources Information Center
Zwerdling, A. L.
To meet the challenge of equal educational opportunity, current methods of public school finance must be revised. The present financial system, based on State equalization of local property tax valuation, is inequitable since it results in many school districts, particularly those in large cities, having inadequate resources to meet extraordinary…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Panbao; Lu, Xiaonan; Yang, Xu
This paper proposes an improved distributed secondary control scheme for dc microgrids (MGs), aiming at overcoming the drawbacks of conventional droop control method. The proposed secondary control scheme can remove the dc voltage deviation and improve the current sharing accuracy by using voltage-shifting and slope-adjusting approaches simultaneously. Meanwhile, the average value of droop coefficients is calculated, and then it is controlled by an additional controller included in the distributed secondary control layer to ensure that each droop coefficient converges at a reasonable value. Hence, by adjusting the droop coefficient, each participating converter has equal output impedance, and the accurate proportional load current sharing can be achieved with different line resistances. Furthermore, the current sharing performance in steady and transient states can be enhanced by using the proposed method. The effectiveness of the proposed method is verified by detailed experimental tests based on a 3 × 1 kW prototype with three interface converters.
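A minimal sketch of the primary droop law with a secondary voltage-shifting correction; the symbols and numbers are illustrative, not from the paper.

```python
# Each converter sets its output voltage reference from a shared nominal
# voltage, its droop coefficient, and a secondary correction that removes
# the steady-state bus-voltage deviation the droop term introduces.
V_NOM = 48.0          # nominal dc bus voltage (V), hypothetical

def voltage_ref(i_out, r_droop, dv_secondary):
    # primary droop: V = V_nom - R_d * I ; secondary layer shifts V upward
    return V_NOM - r_droop * i_out + dv_secondary

# two converters sharing current ~1:1 despite different line resistances,
# after the secondary layer has adjusted offsets (and, in the paper, slopes)
print(voltage_ref(10.0, 0.05, 0.5), voltage_ref(10.2, 0.049, 0.5))
```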
Wiegers, Evita C; Philips, Bart W J; Heerschap, Arend; van der Graaf, Marinette
2017-12-01
J-difference editing is often used to select resonances of compounds with coupled spins in 1H-MR spectra. Accurate phase and frequency alignment prior to subtracting J-difference-edited MR spectra is important to avoid artefactual contributions to the edited resonance. In-vivo J-difference-edited MR spectra were aligned by maximizing the normalized scalar product between two spectra (i.e., the correlation over a spectral region). The performance of our correlation method was compared with alignment by spectral registration and by alignment of the highest point in two spectra. The correlation method was tested at different SNR levels and for a broad range of phase and frequency shifts. In-vivo application of the proposed correlation method showed reduced subtraction errors and increased fit reliability in difference spectra as compared with conventional peak alignment. The correlation method and the spectral registration method generally performed equally well. However, better alignment using the correlation method was obtained for spectra with a low SNR (down to ~2) and for relatively large frequency shifts. Our correlation method for simultaneous phase and frequency alignment is able to correct both small and large phase and frequency drifts and also performs well at low SNR levels.
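A minimal sketch of the correlation objective, searching for the phase and frequency corrections that maximize the normalized scalar product with a reference spectrum; the function names and optimizer choice are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def neg_correlation(params, spec, ref, axis_hz):
    """Negative normalized scalar product between the corrected spectrum
    and the reference, over the chosen spectral region."""
    phi, df = params
    # frequency shift by interpolating the complex spectrum along its axis
    shifted = (np.interp(axis_hz - df, axis_hz, spec.real)
               + 1j * np.interp(axis_hz - df, axis_hz, spec.imag))
    cand = shifted * np.exp(1j * phi)                 # phase correction
    corr = np.abs(np.vdot(ref, cand))
    corr /= np.linalg.norm(ref) * np.linalg.norm(cand) + 1e-12
    return -corr

# usage sketch:
# res = minimize(neg_correlation, x0=[0.0, 0.0], args=(spec, ref, axis_hz),
#                method="Nelder-Mead")
```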
Gender Differences in Sustained Attentional Control Relate to Gender Inequality across Countries
Riley, Elizabeth; Okabe, Hidefusa; Germine, Laura; Wilmer, Jeremy; Esterman, Michael; DeGutis, Joseph
2016-01-01
Sustained attentional control is critical for everyday tasks and success in school and employment. Understanding gender differences in sustained attentional control, and their potential sources, is an important goal of psychology and neuroscience and of great relevance to society. We used a large web-based sample (n = 21,484, from testmybrain.org) to examine gender differences in sustained attentional control. Our sample included participants from 41 countries, allowing us to examine how gender differences in each country relate to national indices of gender equality. We found significant gender differences in certain aspects of sustained attentional control. Using indices of gender equality, we found that overall sustained attentional control performance was lower in countries with less equality and that there were greater gender differences in performance in countries with less equality. These findings suggest that creating sociocultural conditions which value women and men equally can improve a component of sustained attention and reduce gender disparities in cognition. PMID:27802294
Mousa-Pasandi, Mohammad E; Plant, David V
2010-09-27
We report and investigate the feasibility of zero-overhead laser phase noise compensation (PNC) for long-haul coherent optical orthogonal frequency division multiplexing (CO-OFDM) transmission systems, using the decision-directed phase equalizer (DDPE). DDPE updates the equalization parameters on a symbol-by-symbol basis after an initial decision making stage and retrieves an estimation of the phase noise value by extracting and averaging the phase drift of all OFDM sub-channels. Subsequently, a second equalization is performed by using the estimated phase noise value which is followed by a final decision making stage. We numerically compare the performance of DDPE and the CO-OFDM conventional equalizer (CE) for different laser linewidth values after transmission over 2000 km of uncompensated single-mode fiber (SMF) at 40 Gb/s and investigate the effect of fiber nonlinearity and amplified spontaneous emission (ASE) noise on the received signal quality. Furthermore, we analytically analyze the complexity of DDPE versus CE in terms of the number of required complex multiplications per bit.
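A minimal sketch of the decision-directed common-phase estimate described above (QPSK decisions; names and data illustrative):

```python
import numpy as np

def qpsk_decide(Y):
    """Nearest-QPSK-symbol decision."""
    return (np.sign(Y.real) + 1j * np.sign(Y.imag)) / np.sqrt(2)

def estimate_common_phase(Y, D):
    """Average the phase drift across all sub-channels of one OFDM symbol.

    Y: received subcarrier symbols; D: first-stage decisions.
    """
    return np.angle(np.sum(Y * np.conj(D)))

rng = np.random.default_rng(0)
sym = qpsk_decide(rng.normal(size=8) + 1j * rng.normal(size=8))
Y = np.exp(1j * 0.2) * sym          # common laser phase drift of 0.2 rad
D = qpsk_decide(Y)                  # initial decision stage
phi = estimate_common_phase(Y, D)
Y2 = Y * np.exp(-1j * phi)          # second equalization, then final decisions
```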
Mid-Level Planning and Control for Articulated Locomoting Systems
2017-02-12
…accelerometers and gyros into each module of our snake robots. Prior work from our group has already used an extended Kalman filter (EKF) to fuse these distributed… body frame is performed as part of the measurement model at every iteration of the filter, using an SVD to identify the principal components of the… addition to the conventional EKF, although we found that all three methods worked equally well. All three filters used the same process and measurement…
[Gender reassignment surgery in transsexualism from a urological perspective].
Althaus, Peter
2006-01-01
The surgical treatment of transsexual patients can barely be called satisfactory. Poor quality surgical operations cause the life of the patients so treated to become unhappy. Transsexual surgery should only be performed in centres where a sufficient amount of experience has been gathered, and--what is equally important--an understanding amounting to affinity exists with the problem of transsexualism. There is a great need for better treatment methods. The present situation is far from being optimal.
Probabilistic Parameter Uncertainty Analysis of Single Input Single Output Control Systems
NASA Technical Reports Server (NTRS)
Smith, Brett A.; Kenny, Sean P.; Crespo, Luis G.
2005-01-01
The current standards for handling uncertainty in control systems use interval bounds to define the uncertain parameters. This approach gives no information about the likelihood of system performance, but simply gives the response bounds. When used in design, current methods of μ-analysis can lead to overly conservative controller design. With these methods, worst case conditions are weighted equally with the most likely conditions. This research explores a unique approach for probabilistic analysis of control systems. Current reliability methods are examined, showing the strong areas of each in handling probability. A hybrid method is developed using these reliability tools for efficiently propagating probabilistic uncertainty through classical control analysis problems. The method developed is applied to classical response analysis as well as analysis methods that explore the effects of the uncertain parameters on stability and performance metrics. The benefits of using this hybrid approach for calculating the mean and variance of response cumulative distribution functions are shown. Results of the probabilistic analysis of a missile pitch control system, and a non-collocated mass-spring system, show the added information provided by this hybrid analysis.
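A minimal sketch of the probabilistic alternative to interval bounds, propagating sampled parameters through a classical step-response analysis; the plant, distributions, and sample count are illustrative.

```python
import numpy as np
from scipy import signal

# Sample the uncertain plant gain and time constant from distributions
# instead of interval bounds, and estimate the mean and variance of the
# step-response settling value.
rng = np.random.default_rng(0)
n_samples = 200
K = rng.normal(2.0, 0.1, n_samples)      # uncertain gain
tau = rng.normal(0.5, 0.05, n_samples)   # uncertain time constant

finals = []
for k, t_c in zip(K, tau):
    sys = signal.TransferFunction([k], [t_c, 1.0])  # first-order plant
    t, y = signal.step(sys)
    finals.append(y[-1])                 # settling value for this sample
print(np.mean(finals), np.var(finals))
```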
Rodríguez-Ramilo, S. T.; Morán, P.; Caballero, A.
2006-01-01
Equalization of parental contributions is one of the most simple and widely recognized methods to maintain genetic diversity in conservation programs, as it halves the rate of increase in inbreeding and genetic drift. It has, however, the negative side effect of implying a reduced intensity of natural selection so that deleterious genes are less efficiently removed from the population with possible negative consequences on the reproductive capacity of the individuals. Theoretical results suggest that the lower fitness resulting from equalization of family sizes relative to that for free contribution schemes is expected to be substantial only for relatively large population sizes and after many generations. We present a long-term experiment with Drosophila melanogaster, comparing the fitness performance of lines maintained with equalization of contributions (EC) and others maintained with no management (NM), allowing for free matings and contributions from parents. Two (five) replicates of size N = 100 (20) individuals of each type of line were maintained for 38 generations. As expected, EC lines retained higher gene diversity and allelic richness for four microsatellite markers and a higher heritability for sternopleural bristle number. Measures of life-history traits, such as egg-to-adult viability, mating success, and global fitness declined with generations, but no significant differences were observed between EC and NM lines. Our results, therefore, provide no evidence to suggest that equalization of family sizes entails a disadvantage on the reproductive capacity of conserved populations in comparison with no management procedures, even after long periods of captivity. PMID:16299385
Advanced Performance Hydraulic Wind Energy
NASA Technical Reports Server (NTRS)
Jones, Jack A.; Bruce, Allan; Lam, Adrienne S.
2013-01-01
The Jet Propulsion Laboratory, California Institute of Technology, has developed a novel advanced hydraulic wind energy design, which has up to 23% performance improvement over conventional wind turbine and conventional hydraulic wind energy systems with 5 m/sec winds. It also has significant cost advantages with levelized costs equal to coal (after carbon tax rebate). The design is equally applicable to tidal energy systems and has passed preliminary laboratory proof-of-performance tests, as funded by the Department of Energy.
A cost-effective method to get insight into the peritoneal dialysate effluent proteome.
Araújo, J E; Jorge, S; Teixeira E Costa, F; Ramos, A; Lodeiro, C; Santos, H M; Capelo, J L
2016-08-11
Protein depletion with acetonitrile and protein equalization with dithiothreitol have been assessed with success as proteomics tools for getting insight into the peritoneal dialysate effluent proteome. The methods proposed are cost-effective, fast and easy to handle, and they match the criteria of analytical minimalism: low sample volume and low reagent consumption. Using two-dimensional gel electrophoresis and peptide mass fingerprinting, a total of 72 unique proteins were identified. Acetonitrile depletes the PDE proteome of high-abundance proteins, such as albumin, and enriches the sample in apolipo-like proteins. Dithiothreitol equalizes the PDE proteome by diminishing the levels of albumin and enriching the extract in immunoglobulin-like proteins. The annotation per gene ontology term reveals the same biological paths being affected for patients undergoing peritoneal dialysis, namely that the largest number of proteins lost through peritoneal dialysate are extracellular proteins involved in regulation processes through binding. Renal failure is a growing problem worldwide, and particularly in Europe, where the population is getting older. To date there is a focus of interest in peritoneal dialysis (PD), as it provides better quality of life and autonomy for patients than other renal replacement therapies such as haemodialysis. However, PD can only be used for a short period of years, as the peritoneum loses its permeability over time. Therefore, to make a breakthrough in PD and consequently contribute to a better healthcare system, it is urgent to find a group of biomarkers of peritoneum degradation. Here we report on two cost-effective methods for protein depletion in peritoneal dialysate effluent (PDE). The use of ACN and DTT on PDE to deplete high-abundance proteins or to equalize the concentration of proteins, respectively, performs well and yields protein profiles similar to those obtained when the same chemicals are used on human plasma samples. ACN depletes the PDE proteome of large proteins, such as albumin, and enriches the sample in apolipoproteins. DTT equalizes the PDE proteome by diminishing the levels of large proteins such as albumin and enriching the extract in immunoglobulins. Although the number and type of proteins identified are different, the annotation per gene ontology term reveals the same biological paths being affected for patients undergoing peritoneal dialysis. Thus, the largest number of proteins lost through peritoneal dialysate belongs to the group of extracellular proteins involved in regulation processes through binding. As for the search for biomarkers, DTT seems to be the more promising of the two methods because it acts as an equalizer and allows more proteins to be interrogated in the same sample.
Predicting Eighth Grade Students' Equation Solving Performances via Concepts of Variable and Equality
ERIC Educational Resources Information Center
Ertekin, Erhan
2017-01-01
This study focused on how two algebraic concepts, equality and variable, predicted eighth-grade students' equation-solving performance. A predictive design, a form of correlational research design, was used. A total of 407 randomly selected eighth-grade students from the central districts of a city in the central region of Turkey participated in…
NASA Astrophysics Data System (ADS)
Yamamoto, Tetsuya; Takeda, Kazuki; Adachi, Fumiyuki
Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can provide a better bit error rate (BER) performance than rake combining. To further improve the BER performance, cyclic delay transmit diversity (CDTD) can be used. CDTD simultaneously transmits the same signal from different antennas after adding different cyclic delays to increase the number of equivalent propagation paths. Although a joint use of CDTD and MMSE-FDE for direct sequence code division multiple access (DS-CDMA) achieves larger frequency diversity gain, the BER performance improvement is limited by the residual inter-chip interference (ICI) after FDE. In this paper, we propose joint FDE and despreading for DS-CDMA using CDTD. Equalization and despreading are simultaneously performed in the frequency-domain to suppress the residual ICI after FDE. A theoretical conditional BER analysis is presented for the given channel condition. The BER analysis is confirmed by computer simulation.
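For reference, the building blocks have the following standard forms (notation assumed): with $N_t$ transmit antennas applying cyclic delays $\Delta_m$ over $N_c$ chips, the equivalent channel at frequency $k$ and the MMSE-FDE weight are

$$\hat{H}(k) = \sum_{m=0}^{N_t-1} H_m(k)\, e^{-j 2\pi k \Delta_m / N_c}, \qquad w(k) = \frac{\hat{H}^*(k)}{|\hat{H}(k)|^2 + \sigma^2 / E_s},$$

where $H_m(k)$ is the channel gain from antenna $m$, $\sigma^2$ the noise variance, and $E_s$ the signal energy; the proposal merges the despreading sum into the same frequency-domain operation so that the residual ICI left by $w(k)$ is suppressed.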
Tafiadis, Dionysios; Chronopoulos, Spyridon K; Kosma, Evangelia I; Voniati, Louiza; Raptis, Vasilis; Siafaka, Vasiliki; Ziavra, Nausica
2017-07-11
Voice performance is a key factor in everyday life. The deterioration of voice quality can cause various problems for human communication and can therefore reduce the performance of voice-related social skills. The deterioration can originate from changes within the vocal tract and larynx. Various prognostic methods exist, and among them is the Voice Handicap Index (VHI), a self-reported questionnaire used here to determine cutoff points for the total score and its three domains in young male Greek smokers. The calculated cutoff points can serve as a strong indicator of the need for imminent or future evaluation by a clinician. The VHI can also act as feedback on smokers' voice condition and as a monitoring procedure toward smoking cessation. The sample consisted of 130 male nondysphonic smokers (aged 18-33 years) who all participated in the VHI test procedure. Receiver operating characteristic analysis yielded a total cutoff score of 19.50 (sensitivity: 0.838, 1-specificity: 0). In terms of domains, the Functional cutoff was 7.50 (sensitivity: 0.676, 1-specificity: 0.032), the Physical cutoff was 7.50 (sensitivity: 0.706, 1-specificity: 0.032), and the Emotional cutoff was 6.50 (sensitivity: 0.809, 1-specificity: 0.048). Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
Davis, Matthew L; Scott Gayzik, F
2016-10-01
Biofidelity response corridors developed from post-mortem human subjects are commonly used in the design and validation of anthropomorphic test devices and computational human body models (HBMs). Typically, corridors are derived from a diverse pool of biomechanical data and later normalized to a target body habitus. The objective of this study was to use morphed computational HBMs to compare the ability of various scaling techniques to scale response data from a reference to a target anthropometry. HBMs are ideally suited for this type of study since they uphold the assumptions of equal density and modulus that are implicit in scaling method development. In total, six scaling procedures were evaluated, four from the literature (equal-stress equal-velocity, ESEV, and three variations of impulse momentum) and two which are introduced in the paper (ESEV using a ratio of effective masses, ESEV-EffMass, and a kinetic energy approach). In total, 24 simulations were performed, representing both pendulum and full body impacts for three representative HBMs. These simulations were quantitatively compared using the International Organization for Standardization (ISO) ISO-TS18571 standard. Based on these results, ESEV-EffMass achieved the highest overall similarity score (indicating that it is most proficient at scaling a reference response to a target). Additionally, ESEV was found to perform poorly for two degree-of-freedom (DOF) systems. However, the results also indicated that no single technique was clearly the most appropriate for all scenarios.
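For reference, the classical equal-stress equal-velocity relations have the standard form (mass-based length scale $\lambda$ assumed): with reference mass $m_r$ and target mass $m_t$,

$$\lambda = \left(\frac{m_t}{m_r}\right)^{1/3}, \qquad F_t = \lambda^2 F_r, \qquad t_t = \lambda\, t_r, \qquad a_t = \lambda^{-1} a_r,$$

so forces scale with the square of the length ratio while velocities are left unchanged; the ESEV-EffMass variant introduced in the paper replaces the total-mass ratio with a ratio of effective masses.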
PowerPoint presentation in learning physiology by undergraduates with different learning styles.
Ankad, Roopa B; Shashikala, G V; Herur, Anita; Manjula, R; Chinagudi, Surekharani; Patil, Shailaja
2015-12-01
PowerPoint presentations (PPTs) have become routine in medical colleges because of their flexible and varied presentation capabilities. Research indicates that students prefer PPTs over the chalk-and-talk method, and there is a lot of debate over advantages and disadvantages of PPTs. However, there is no clear evidence that PPTs improve student learning/performance. Furthermore, there are a variety of learning styles with sex differences in classrooms. It is the responsibility of teacher/facilitator and student to be aware of learning style preferences to improve learning. The present study asked the following research question: do PPTs equally affect the learning of students with different learning styles in a mixed sex classroom? After we assessed students' predominant learning style according to the sensory modality that one most prefers to use when learning, a test was conducted before and after a PPT to assess student performance. The results were analyzed using Student's t-test and ANOVA with a Bonferroni post hoc test. A z-test showed no sex differences in preferred learning styles. There was significant increase in posttest performance compared with that of the pretest in all types of learners of both sexes. There was also a nonsignificant relationship among sex, learning style, and performance after the PPT. A PPT is equally effective for students with different learning style preferences and supports mixed sex classrooms. Copyright © 2015 The American Physiological Society.
Semi-automation of Doppler Spectrum Image Analysis for Grading Aortic Valve Stenosis Severity.
Niakšu, O; Balčiunaitė, G; Kizlaitis, R J; Treigys, P
2016-01-01
Doppler echocardiography analysis has become a gold standard in the modern diagnosis of heart diseases. In this paper, we propose a set of techniques for semi-automated parameter extraction for aortic valve stenosis severity grading. The main objective of the study is to create echocardiography image processing techniques that minimize the manual image processing work of clinicians and lead to reduced human error rates. Aortic valve and left ventricular outflow tract spectrogram images have been processed and analyzed. A novel method was developed to trace systoles and to extract diagnostically relevant features. The results of the introduced method have been compared with the findings of the participating cardiologists. The experimental results showed that the accuracy of the proposed method is comparable to manual measurement performed by medical professionals. Linear regression analysis of the calculated parameters against the measurements manually obtained by the cardiologists resulted in strongly correlated values: R2 for peak systolic velocity and mean pressure gradient both equal to 0.99, with means' differences of 0.02 m/s and 4.09 mmHg, respectively, and R2 for aortic valve area of 0.89 with a between-methods mean difference of 0.19 mm. The introduced Doppler echocardiography image processing method can be used as computer-aided assistance in aortic valve stenosis diagnostics. In future work, we intend to improve the precision of left ventricular outflow tract spectrogram measurements and apply data mining methods to propose a clinical decision support system for diagnosing aortic valve stenosis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morise, A.P.; Duval, R.D.
To determine whether recent refinements in Bayesian methods have led to improved diagnostic ability, 3 methods using Bayes' theorem and the independence assumption for estimating posttest probability after exercise stress testing were compared. Each method differed in the number of variables considered in the posttest probability estimate (method A = 5, method B = 6 and method C = 15). Method C is better known as CADENZA. There were 436 patients (250 men and 186 women) who underwent stress testing (135 had concurrent thallium scintigraphy) followed within 2 months by coronary arteriography. Coronary artery disease (CAD, at least 1 vessel with greater than or equal to 50% diameter narrowing) was seen in 169 (38%). Mean pretest probabilities using each method were not different. However, the mean posttest probabilities for CADENZA were significantly greater than those for method A or B (p less than 0.0001). Each decile of posttest probability was compared to the actual prevalence of CAD in that decile. At posttest probabilities less than or equal to 20%, there was underestimation of CAD. However, at posttest probabilities greater than or equal to 60%, there was overestimation of CAD by all methods, especially CADENZA. Comparison of sensitivity and specificity at every fifth percentile of posttest probability revealed that CADENZA was significantly more sensitive and less specific than methods A and B. Therefore, at lower probability thresholds, CADENZA was a better screening method. However, methods A or B still had merit as a means to confirm higher probabilities generated by CADENZA (especially greater than or equal to 60%).
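For reference, the Bayesian step shared by the three methods converts a pretest probability into a posttest probability through a likelihood ratio; the sensitivity/specificity values below are illustrative.

```python
def posttest_probability(pretest, sensitivity, specificity, positive=True):
    """Bayes' theorem in odds-likelihood form for a binary test result."""
    lr = (sensitivity / (1.0 - specificity) if positive
          else (1.0 - sensitivity) / specificity)
    odds = pretest / (1.0 - pretest) * lr   # posttest odds
    return odds / (1.0 + odds)              # back to probability

# e.g. 38% pretest prevalence, positive stress test with assumed
# sensitivity 0.68 and specificity 0.77
print(round(posttest_probability(0.38, 0.68, 0.77), 2))
```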
Jarzynski equality in the context of maximum path entropy
NASA Astrophysics Data System (ADS)
González, Diego; Davis, Sergio
2017-06-01
In the global framework of finding an axiomatic derivation of nonequilibrium Statistical Mechanics from fundamental principles, such as the maximum path entropy (also known as the Maximum Caliber principle), this work proposes an alternative derivation of the well-known Jarzynski equality, a nonequilibrium identity of great importance today due to its applications to irreversible processes: biological systems (protein folding), mechanical systems, among others. This equality relates the free energy difference between two equilibrium thermodynamic states to the work performed when going between those states, through an average over a path ensemble. In this work the analysis of Jarzynski's equality is performed using the formalism of inference over path space. This derivation highlights the wide generality of Jarzynski's original result, which could even be used in non-thermodynamical settings such as social, financial, and ecological systems.
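For reference, the equality has the standard form

$$\left\langle e^{-\beta W} \right\rangle = e^{-\beta \Delta F}, \qquad \beta = \frac{1}{k_B T},$$

where $W$ is the work performed along a single realization of the process, $\Delta F$ is the equilibrium free energy difference between the endpoint states, and the angle brackets denote the average over the path ensemble; repeated nonequilibrium work measurements thus recover an equilibrium quantity.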
ERIC Educational Resources Information Center
Fastrup, Jerry C.
2002-01-01
Uses a foundation-equalizing model to develop a number of indicators measuring the extent to which states utilize the full range of equalization tools at its disposal. Illustrates the utility of these indicators through an evaluation of the school finance reform instituted by Rhode Island between 1992 and 1996. (Contains 25 references.)…
Brief Highlights of Major Federal Laws and Order on Sex Discrimination in Employment.
ERIC Educational Resources Information Center
Employment Standards Administration (DOL), Washington, DC. Women's Bureau.
The following laws and order are explained in this pamphlet: (1) Equal Pay Act of 1963 (concerns prohibiting employers from paying workers of one sex less than workers of the other sex for equal work on jobs that require equal skill, effort, and responsibility and that are performed under similar working conditions), (2) Title VII of the Civil…
Guiomar, Fernando P; Reis, Jacklyn D; Carena, Andrea; Bosco, Gabriella; Teixeira, António L; Pinto, Armando N
2013-01-14
Employing 100G polarization-multiplexed quaternary phase-shift keying (PM-QPSK) signals, we experimentally demonstrate a dual-polarization Volterra series nonlinear equalizer (VSNE) applied in frequency-domain, to mitigate intra-channel nonlinearities. The performance of the dual-polarization VSNE is assessed in both single-channel and in wavelength-division multiplexing (WDM) scenarios, providing direct comparisons with its single-polarization version and with the widely studied back-propagation split-step Fourier (SSF) approach. In single-channel transmission, the optimum power has been increased by about 1 dB, relatively to the single-polarization equalizers, and up to 3 dB over linear equalization, with a corresponding bit error rate (BER) reduction of up to 63% and 85%, respectively. Despite of the impact of inter-channel nonlinearities, we show that intra-channel nonlinear equalization is still able to provide approximately 1 dB improvement in the optimum power and a BER reduction of ~33%, considering a 66 GHz WDM grid. By means of simulation, we demonstrate that the performance of nonlinear equalization can be substantially enhanced if both optical and electrical filtering are optimized, enabling the VSNE technique to outperform its SSF counterpart at high input powers.
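For orientation, a time-domain statement of the third-order Volterra model that underlies such equalizers has the standard form (notation assumed; the paper's VSNE applies it in the frequency domain):

$$y[n] = \sum_{k} h_1[k]\, x[n-k] + \sum_{k_1, k_2, k_3} h_3[k_1, k_2, k_3]\, x[n-k_1]\, x[n-k_2]\, x^*[n-k_3],$$

where the cubic term with conjugation captures the intra-channel Kerr-type nonlinear interactions that a purely linear equalizer cannot address.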
Histogram-based adaptive gray level scaling for texture feature classification of colorectal polyps
NASA Astrophysics Data System (ADS)
Pomeroy, Marc; Lu, Hongbing; Pickhardt, Perry J.; Liang, Zhengrong
2018-02-01
Texture features have played an ever increasing role in computer aided detection (CADe) and diagnosis (CADx) methods since their inception. Texture features are often used as a method of false positive reduction for CADe packages, especially for detecting colorectal polyps and distinguishing them from falsely tagged residual stool and healthy colon wall folds. While texture features have shown great success there, the performance of texture features for CADx has lagged behind, primarily because of the more similar features among different polyp types. In this paper, we present an adaptive gray level scaling and compare it to the conventional equal-spacing of gray level bins. We use a dataset taken from computed tomography colonography patients, with 392 polyp regions of interest (ROIs) identified and a confirmed diagnosis through pathology. Using the histogram information from the entire ROI dataset, we generate the gray level bins such that each bin contains roughly the same number of voxels. Each image ROI is then scaled down to two different numbers of gray levels, using both an equal spacing of Hounsfield units for each bin and our adaptive method. We compute a set of texture features from the scaled images, including 30 gray level co-occurrence matrix (GLCM) features and 11 gray level run length matrix (GLRLM) features. Using a random forest classifier to distinguish between hyperplastic polyps and all others (adenomas and adenocarcinomas), we find that the adaptive gray level scaling can improve performance based on the area under the receiver operating characteristic curve by up to 4.6%.
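For concreteness, the equal-frequency binning described above reduces to computing quantile bin edges over the pooled voxel histogram. The sketch below contrasts it with conventional equal-width (equal Hounsfield-unit spacing) bins; the gray-level count and the synthetic ROI are illustrative assumptions, not values from the paper.

```python
import numpy as np

def gray_level_bins(voxels, n_levels, adaptive=True):
    """Return bin edges for scaling voxel intensities to n_levels gray levels.

    adaptive=True  -> equal-frequency bins (each bin holds roughly the same
                      number of voxels, as in the histogram-based method)
    adaptive=False -> conventional equal-width (equal HU spacing) bins
    """
    if adaptive:
        # Quantile edges: each bin receives ~len(voxels)/n_levels voxels.
        edges = np.quantile(voxels, np.linspace(0.0, 1.0, n_levels + 1))
    else:
        edges = np.linspace(voxels.min(), voxels.max(), n_levels + 1)
    return edges

def scale_image(image, edges):
    # Map each voxel to the index of its bin (0 .. n_levels-1).
    return np.clip(np.digitize(image, edges[1:-1]), 0, len(edges) - 2)

# Illustrative use on synthetic "HU-like" data (assumption: 32 gray levels).
rng = np.random.default_rng(0)
roi = rng.normal(40, 25, size=(64, 64))           # stand-in for a polyp ROI
edges = gray_level_bins(roi.ravel(), 32, adaptive=True)
scaled = scale_image(roi, edges)                  # input to GLCM/GLRLM features
```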
Reduction of bias and variance for evaluation of computer-aided diagnostic schemes.
Li, Qiang; Doi, Kunio
2006-04-01
Computer-aided diagnostic (CAD) schemes have been developed to assist radiologists in detecting various lesions in medical images. In addition to the development, an equally important problem is the reliable evaluation of the performance levels of various CAD schemes. It is good to see that more and more investigators are employing more reliable evaluation methods such as leave-one-out and cross validation, instead of less reliable methods such as resubstitution, for assessing their CAD schemes. However, the common applications of leave-one-out and cross-validation evaluation methods do not necessarily imply that the estimated performance levels are accurate and precise. Pitfalls often occur in the use of leave-one-out and cross-validation evaluation methods, and they lead to unreliable estimation of performance levels. In this study, we first identified a number of typical pitfalls for the evaluation of CAD schemes, and conducted a Monte Carlo simulation experiment for each of the pitfalls to demonstrate quantitatively the extent of bias and/or variance caused by the pitfall. Our experimental results indicate that considerable bias and variance may exist in the estimated performance levels of CAD schemes if one employs various flawed leave-one-out and cross-validation evaluation methods. In addition, for promoting and utilizing a high standard for reliable evaluation of CAD schemes, we attempt to make recommendations, whenever possible, for overcoming these pitfalls. We believe that, with the recommended evaluation methods, we can considerably reduce the bias and variance in the estimated performance levels of CAD schemes.
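One of the typical pitfalls alluded to above, performing feature selection on the full dataset before cross-validation, is easy to reproduce. In the sketch below the data are pure noise, so any estimated accuracy above 50% is optimistic bias; the dataset sizes and the classifier are illustrative assumptions.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 1000))      # pure noise: true accuracy is 50%
y = rng.integers(0, 2, size=50)

# Flawed: select features on ALL data, then cross-validate -> optimistic bias.
X_sel = SelectKBest(f_classif, k=10).fit_transform(X, y)
biased = cross_val_score(LogisticRegression(), X_sel, y, cv=5).mean()

# Correct: selection happens inside each training fold via a pipeline.
pipe = make_pipeline(SelectKBest(f_classif, k=10), LogisticRegression())
proper = cross_val_score(pipe, X, y, cv=5).mean()

print(f"biased estimate ~{biased:.2f}, proper estimate ~{proper:.2f}")
```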
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, J; Tsui, B; Noo, F
Purpose: To develop a feature-preserving model based image reconstruction (MBIR) method that improves performance in pancreatic lesion classification at equal or reduced radiation dose. Methods: A set of pancreatic lesion models was created with both benign and premalignant lesion types. These two classes of lesions are distinguished by their fine internal structures; their delineation is therefore crucial to the task of pancreatic lesion classification. To reduce image noise while preserving the features of the lesions, we developed a MBIR method with curvature-based regularization. The novel regularization encourages formation of smooth surfaces that model both the exterior shape and the internal features of pancreatic lesions. Given that the curvature depends on the unknown image, image reconstruction or denoising becomes a non-convex optimization problem; to address this issue an iterative-reweighting scheme was used to calculate and update the curvature using the image from the previous iteration. Evaluation was carried out with insertion of the lesion models into the pancreas of a patient CT image. Results: Visual inspection was used to compare conventional TV regularization with our curvature-based regularization. Several penalty-strengths were considered for TV regularization, all of which resulted in erasing portions of the septation (thin partition) in a premalignant lesion. At matched noise variance (50% noise reduction in the patient stomach region), the connectivity of the septation was well preserved using the proposed curvature-based method. Conclusion: The curvature-based regularization is able to reduce image noise while simultaneously preserving the lesion features. This method could potentially improve task performance for pancreatic lesion classification at equal or reduced radiation dose. The result is of high significance for longitudinal surveillance studies of patients with pancreatic cysts, which may develop into pancreatic cancer. The Senior Author receives financial support from Siemens GmbH Healthcare.
Linear methods for reducing EMG contamination in peripheral nerve motor decodes.
Kagan, Zachary B; Wendelken, Suzanne; Page, David M; Davis, Tyler; Hutchinson, Douglas T; Clark, Gregory A; Warren, David J
2016-08-01
Signals recorded from the peripheral nervous system (PNS) with high channel count penetrating microelectrode arrays, such as the Utah Slanted Electrode Array (USEA), often have electromyographic (EMG) signals contaminating the neural signal. This common-mode signal source may prevent single neural units from successfully being detected, thus hindering motor decode algorithms. Reducing this EMG contamination may lead to more accurate motor decode performance. A virtual reference (VR), created by a weighted linear combination of signals from a subset of all available channels, can be used to reduce this EMG contamination. Four methods of determining individual channel weights and six different methods of selecting subsets of channels were investigated (24 different VR types in total). The methods of determining individual channel weights were equal weighting, regression-based weighting, and two different proximity-based weightings. The subsets of channels were selected by a radius-based criterion, such that a channel was included if it was within a particular radius of inclusion from the target channel. These six radii of inclusion were 1.5, 2.9, 3.2, 5, 8.4, and 12.8 electrode-distances; the 12.8 electrode radius includes all USEA electrodes. We found that application of a VR improves the detectability of neural events by increasing the SNR, but we found no statistically meaningful difference among the VR types we examined. The computational complexity of implementation varies with the method of determining channel weights and the number of channels in a subset, but does not correlate with VR performance. Hence, we examined the computational costs of calculating and applying the VR, and based on these criteria we recommend an equal-weighting method of assigning weights with a 3.2 electrode-distance radius of inclusion. Further, we found empirically that application of the recommended VR will require less than 1 ms for 33.3 ms of data from one USEA.
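A minimal sketch of the recommended configuration (equal weighting over channels within a 3.2 electrode-distance radius of inclusion); the 10x10 grid layout and the synthetic signals are illustrative assumptions.

```python
import numpy as np

def virtual_reference(signals, coords, target, radius=3.2):
    """Subtract an equal-weighted virtual reference from one target channel.

    signals : (n_channels, n_samples) array of recorded data
    coords  : (n_channels, 2) electrode positions in electrode-distance units
    target  : index of the channel being cleaned
    """
    d = np.linalg.norm(coords - coords[target], axis=1)
    subset = (d <= radius) & (np.arange(len(d)) != target)
    vr = signals[subset].mean(axis=0)        # equal weighting of the subset
    return signals[target] - vr

# Illustrative 10x10 grid (USEA-like) with synthetic common-mode EMG.
rng = np.random.default_rng(0)
coords = np.array([(i, j) for i in range(10) for j in range(10)], float)
emg = rng.normal(size=3333)                              # shared contamination
signals = 0.9 * emg + rng.normal(0, 0.1, (100, 3333))    # EMG + channel noise
clean = virtual_reference(signals, coords, target=45, radius=3.2)
```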
A New Nonparametric Levene Test for Equal Variances
ERIC Educational Resources Information Center
Nordstokke, David W.; Zumbo, Bruno D.
2010-01-01
Tests of the equality of variances are sometimes used on their own to compare variability across groups of experimental or non-experimental conditions but they are most often used alongside other methods to support assumptions made about variances. A new nonparametric test of equality of variances is described and compared to current "gold…
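The abstract is truncated, but the family of tests it refers to is readily available; the sketch below runs Levene's test with median centering (the Brown-Forsythe variant) alongside the rank-based, nonparametric Fligner-Killeen test. The sample data are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(0, 1.0, 40)   # unit variance
group_b = rng.normal(0, 2.0, 40)   # inflated variance

# Levene's test with median centering (Brown-Forsythe variant).
w, p_levene = stats.levene(group_a, group_b, center='median')

# Fligner-Killeen: a rank-based (nonparametric) test of equal variances.
x, p_fligner = stats.fligner(group_a, group_b)

print(f"Levene p={p_levene:.4f}, Fligner-Killeen p={p_fligner:.4f}")
```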
Plasma lactate accumulation and distance running performance. 1979.
Farrell, P A; Wilmore, J H; Coyle, E F; Billing, J E; Costill, D L
1993-10-01
Laboratory and field assessments were made on eighteen male distance runners. Performance data were obtained for distances of 3.2, 9.7, 15, 19.3 km (n = 18) and the marathon (n = 13). Muscle fiber composition expressed as percent of slow twitch fibers (%ST), maximal oxygen consumption (VO2max), running economy (VO2 for a treadmill velocity of 268 m/min), and the VO2 and treadmill velocity corresponding to the onset of plasma lactate accumulation (OPLA) were determined for each subject. %ST (r ≥ .47), VO2max (r ≥ .83), running economy (r ≥ .49), VO2 in ml/(kg·min) corresponding to the OPLA (r ≥ .91) and the treadmill velocity corresponding to OPLA (r ≥ .91) were significantly (p < .05) related to performance at all distances. Multiple regression analysis showed that the treadmill velocity corresponding to the OPLA was most closely related to performance and the addition of other factors did not significantly raise the multiple R values, suggesting that these other variables may interact with the purpose of keeping plasma lactates low during distance races. The slowest and fastest marathoners ran their marathons 7 and 3 m/min faster than their treadmill velocities corresponding to their OPLA, which indicates that this relationship is independent of the competitive level of the runner. Runners appear to set a race pace which allows the utilization of the largest possible VO2 which just avoids the exponential rise in plasma lactate.
NASA Astrophysics Data System (ADS)
Liu, Na; Ju, Cheng
2018-02-01
Nyquist-SCM signal after fiber transmission, direct detection (DD), and analog down-conversion suffers from linear ISI, nonlinear ISI, and I/Q imbalance, simultaneously. Theoretical analysis based on widely linear (WL) and Volterra series is given to explain the relationship and interaction of these three interferences. A blind equalization algorithm, cascaded WL and Volterra equalizer, is designed to mitigate these three interferences. Furthermore, the feasibility of the proposed cascaded algorithm is experimentally demonstrated based on a 40-Gbps data rate 16-quadrature amplitude modulation (QAM) virtual single sideband (VSSB) Nyquist-SCM DD system over 100-km standard single mode fiber (SSMF) transmission. In addition, the performances of conventional strictly linear equalizer, WL equalizer, Volterra equalizer, and cascaded WL and Volterra equalizer are experimentally evaluated, respectively.
Stochastic HKMDHE: A multi-objective contrast enhancement algorithm
NASA Astrophysics Data System (ADS)
Pratiher, Sawon; Mukhopadhyay, Sabyasachi; Maity, Srideep; Pradhan, Asima; Ghosh, Nirmalya; Panigrahi, Prasanta K.
2018-02-01
This contribution proposes a novel extension of the existing "Hyper Kurtosis based Modified Duo-Histogram Equalization" (HKMDHE) algorithm, for multi-objective contrast enhancement of biomedical images. A novel modified objective function has been formulated by joint optimization of the individual histogram equalization objectives. The optimal adequacy of the proposed methodology with respect to image quality metrics such as brightness preserving abilities, peak signal-to-noise ratio (PSNR), Structural Similarity Index (SSIM) and the universal image quality metric has been experimentally validated. A performance analysis of the proposed Stochastic HKMDHE against existing histogram equalization methodologies like Global Histogram Equalization (GHE) and Contrast Limited Adaptive Histogram Equalization (CLAHE) is given for comparative evaluation.
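For reference, the two baseline methods named above (GHE and CLAHE) are available off the shelf; a sketch using scikit-image, with a stock test image standing in for a biomedical image:

```python
from skimage import data, exposure

image = data.moon()                      # stand-in low-contrast image

# Global histogram equalization (GHE): one transfer function for the image.
ghe = exposure.equalize_hist(image)

# CLAHE: equalization on local tiles with a clip limit to bound noise gain.
clahe = exposure.equalize_adapthist(image, clip_limit=0.03)

print(ghe.dtype, clahe.dtype)            # both return floats in [0, 1]
```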
He, Zhixue; Li, Xiang; Luo, Ming; Hu, Rong; Li, Cai; Qiu, Ying; Fu, Songnian; Yang, Qi; Yu, Shaohua
2016-05-02
We propose and experimentally demonstrate two independent component analysis (ICA) based channel equalizers (CEs) for 6 × 6 MIMO-OFDM transmission over few-mode fiber. Compared with the conventional channel equalizer based on training symbols (TSs-CE), the proposed two ICA-based channel equalizers (ICA-CE-I and ICA-CE-II) can achieve comparable performances, while requiring much less training symbols. Consequently, the overheads for channel equalization can be substantially reduced from 13.7% to 0.4% and 2.6%, respectively. Meanwhile, we also experimentally investigate the convergence speed of the proposed ICA-based CEs.
Greensmith, David J.
2014-01-01
Here I present an Excel-based program for the analysis of intracellular Ca transients recorded using fluorescent indicators. The program can perform all the necessary steps that convert recorded raw voltage changes into meaningful physiological information. The program performs two fundamental processes: (1) it can prepare the raw signal by several methods; (2) it can then be used to analyze the prepared data to provide information such as absolute intracellular Ca levels. Also, the rates of change of Ca can be measured using multiple, simultaneous regression analysis. I demonstrate that this program performs as well as commercially available software, but has numerous advantages, namely creating a simplified, self-contained analysis workflow. PMID:24125908
Agrawal, Vijay K; Gupta, Madhu; Singh, Jyoti; Khadikar, Padmakar V
2005-03-15
An attempt is made to propose yet another method of estimating the lipophilicity of a heterogeneous set of 223 compounds. The method is based on the use of equalized electronegativity along with topological indices. It was observed that excellent results are obtained in multiparametric regression upon introduction of indicator parameters. The results are discussed critically on the basis of various statistical parameters.
Smartphone Text Input Method Performance, Usability, and Preference With Younger and Older Adults.
Smith, Amanda L; Chaparro, Barbara S
2015-09-01
User performance, perceived usability, and preference for five smartphone text input methods were compared with younger and older novice adults. Smartphones are used for a variety of functions other than phone calls, including text messaging, e-mail, and web browsing. Research comparing performance with methods of text input on smartphones reveals a high degree of variability in reported measures, procedures, and results. This study reports on a direct comparison of five of the most common input methods among a population of younger and older adults, who had no experience with any of the methods. Fifty adults (25 younger, 18-35 years; 25 older, 60-84 years) completed a text entry task using five text input methods (physical Qwerty, onscreen Qwerty, tracing, handwriting, and voice). Entry and error rates, perceived usability, and preference were recorded. Both age groups input text equally fast using voice input, but older adults were slower than younger adults using all other methods. Both age groups had low error rates when using physical Qwerty and voice, but older adults committed more errors with the other three methods. Both younger and older adults preferred voice and physical Qwerty input to the remaining methods. Handwriting consistently performed the worst and was rated lowest by both groups. Voice and physical Qwerty input methods proved to be the most effective for both younger and older adults, and handwriting input was the least effective overall. These findings have implications to the design of future smartphone text input methods and devices, particularly for older adults. © 2015, Human Factors and Ergonomics Society.
Doppler-shift estimation of flat underwater channel using data-aided least-square approach
NASA Astrophysics Data System (ADS)
Pan, Weiqiang; Liu, Ping; Chen, Fangjiong; Ji, Fei; Feng, Jing
2015-06-01
In this paper we propose a data-aided Doppler estimation method for underwater acoustic communication. The training sequence is non-dedicated, hence it can be designed for Doppler estimation as well as channel equalization. We assume the channel has been equalized and consider only the flat-fading channel. First, based on the training symbols the theoretical received sequence is composed. Next the least square principle is applied to build the objective function, which minimizes the error between the composed and the actual received signal. Then an iterative approach is applied to solve the least square problem. The proposed approach involves an outer loop and an inner loop, which resolve the channel gain and the Doppler coefficient, respectively. The theoretical performance bound, i.e. the Cramer-Rao Lower Bound (CRLB) of the estimation, is also derived. Computer simulation results show that the proposed algorithm achieves the CRLB in medium to high SNR cases.
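A minimal sketch of the outer/inner-loop structure described above, for a flat channel with the Doppler effect modeled as a residual carrier frequency offset: the inner step solves the complex channel gain in closed form by least squares for each candidate Doppler value, and the outer step searches over candidates. The signal model and parameter values are illustrative assumptions.

```python
import numpy as np

def ls_doppler(rx, train, fs, f_grid):
    """Grid-search the Doppler shift; solve the channel gain by LS at each step."""
    n = np.arange(len(train))
    best = (None, np.inf, None)
    for fd in f_grid:
        basis = train * np.exp(2j * np.pi * fd * n / fs)  # Doppler-shifted training
        a = np.vdot(basis, rx) / np.vdot(basis, basis)    # closed-form LS gain
        resid = np.sum(np.abs(rx - a * basis) ** 2)
        if resid < best[1]:
            best = (fd, resid, a)
    return best[0], best[2]

# Illustrative use: QPSK training through a flat channel with Doppler.
rng = np.random.default_rng(0)
fs, fd_true, a_true = 8000.0, 12.5, 0.7 * np.exp(1j * 0.4)
train = np.exp(1j * np.pi / 2 * rng.integers(0, 4, 512))
n = np.arange(512)
rx = a_true * train * np.exp(2j * np.pi * fd_true * n / fs)
rx += 0.05 * (rng.normal(size=512) + 1j * rng.normal(size=512))
fd_hat, a_hat = ls_doppler(rx, train, fs, np.linspace(-50, 50, 2001))
print(fd_hat)   # ~12.5 Hz
```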
Two algorithms for neural-network design and training with application to channel equalization.
Sweatman, C Z; Mulgrew, B; Gibson, G J
1998-01-01
We describe two algorithms for designing and training neural-network classifiers. The first, the linear programming slab algorithm (LPSA), is motivated by the problem of reconstructing digital signals corrupted by passage through a dispersive channel and by additive noise. It constructs a multilayer perceptron (MLP) to separate two disjoint sets by using linear programming methods to identify network parameters. The second, the perceptron learning slab algorithm (PLSA), avoids the computational costs of linear programming by using an error-correction approach to identify parameters. Both algorithms operate in highly constrained parameter spaces and are able to exploit symmetry in the classification problem. Using these algorithms, we develop a number of procedures for the adaptive equalization of a complex linear 4-quadrature amplitude modulation (QAM) channel, and compare their performance in a simulation study. Results are given for both stationary and time-varying channels, the latter based on the COST 207 GSM propagation model.
Method of estimating pulse response using an impedance spectrum
Morrison, John L; Morrison, William H; Christophersen, Jon P; Motloch, Chester G
2014-10-21
Electrochemical Impedance Spectrum data are used to predict pulse performance of an energy storage device. The impedance spectrum may be obtained in-situ. A simulation waveform includes a pulse train whose fundamental frequency is greater than or equal to the lowest frequency used in the impedance measurement. Fourier series coefficients of the pulse train can be obtained. The number of harmonic constituents in the Fourier series is selected so as to appropriately resolve the response, but the maximum frequency should be less than or equal to the highest frequency used in the impedance measurement. Using a current pulse as an example, the Fourier coefficients of the pulse are multiplied by the impedance spectrum at corresponding frequencies to obtain Fourier coefficients of the voltage response to the desired pulse. The Fourier coefficients of the response are then summed and reassembled to obtain the overall time domain estimate of the voltage using the Fourier series analysis.
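A sketch of the synthesis the abstract describes: take the Fourier series of a rectangular current pulse train, multiply each harmonic by the impedance at that frequency, and resum to obtain the time-domain voltage estimate. The impedance model (a resistor in series with a parallel RC branch) and the pulse parameters are illustrative assumptions.

```python
import numpy as np

# Illustrative impedance model: R0 in series with a parallel R1-C1 branch.
def z_model(f, r0=0.01, r1=0.02, c1=500.0):
    jw = 2j * np.pi * f
    return r0 + r1 / (1.0 + jw * r1 * c1)

period, duty, amp, n_harm = 10.0, 0.5, 5.0, 100   # pulse-train parameters
f0 = 1.0 / period                                  # fundamental frequency

# Fourier series of an even rectangular current pulse train:
# i(t) = amp*duty + sum_k (2*amp/(pi*k)) * sin(pi*k*duty) * cos(2*pi*k*f0*t)
k = np.arange(1, n_harm + 1)
i_k = 2.0 * amp * np.sin(np.pi * k * duty) / (np.pi * k)

t = np.linspace(0.0, 2 * period, 2000)
v = np.full_like(t, amp * duty * z_model(0.0).real)   # DC term
for kk, ik in zip(k, i_k):
    zk = z_model(kk * f0)                  # impedance at harmonic kk
    # Each harmonic is scaled by |Z| and phase-shifted by angle(Z).
    v += ik * np.abs(zk) * np.cos(2 * np.pi * kk * f0 * t + np.angle(zk))

# v now approximates the time-domain voltage response to the current pulse.
```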
NASA Astrophysics Data System (ADS)
Imbrogno, Stano; Segebade, Eric; Fellmeth, Andreas; Gerstenmeyer, Michael; Zanger, Frederik; Schulze, Volker; Umbrello, Domenico
2017-10-01
Recently, the study and understanding of surface integrity of various materials after machining is becoming one of the interpretative keys to quantify a product's quality and life cycle performance. The possibility to provide fundamental details about the mechanical response and the behavior of the affected material layers caused by thermo-mechanical loads resulting from machining operations can help the designer to produce parts with superior quality. The aim of this work is to study the experimental outcomes obtained from orthogonal cutting tests and a Severe Plastic Deformation (SPD) process known as Equal Channel Angular Pressing (ECAP) in order to find possible links regarding induced microstructural and hardness changes between machined surface layer and SPD-bulk material for Al-7075. This scientific investigation aims to establish the basis for an innovative method to study and quantify metallurgical phenomena that occur beneath the machined surface of bulk material.
ERIC Educational Resources Information Center
Schoen, Robert C.; LaVenia, Mark; Champagne, Zachary M.; Farina, Kristy; Tazaz, Amanda M.
2017-01-01
The following report describes an assessment instrument called the Mathematics Performance and Cognition (MPAC) interview. The MPAC interview was designed to measure two outcomes of interest. It was designed to measure first and second graders' mathematics achievement in number, operations, and equality, and it was also designed to gather…
The Correlations between Airport Sustainability and Indonesian Economic Growth
NASA Astrophysics Data System (ADS)
Setiawan, M. I.; Dhaniarti, I.; Utomo, W. M.; Sukoco, A.; Mudjanarko, S. W.; Hasyim, C.; Prasetijo, J.; Kurniasih, N.; Wajdi, M. B. N.; Purworusmiardi, T.; Suyono, J.; Sudapet, I. N.; Nasihien, R. D.; Wulandari, D. A. R.; Ade, R. T.; Atmaja, W. M. T.; Sugeng; Wulandari, A.
2018-04-01
This study aims to analyze the correlation between airport performance and gross regional domestic product (GDP-regional). The research uses a quantitative method with a correlational study approach. The computed T-value for the Airport Performance variable is 14.264, compared with a T-table value of 1.976 (significance level 0.05); since T-count > T-table, airport performance is predicted to have a significant correlation with GDP-regional. This means that good airport performance will improve the performance of Water Supply, Sewerage, Waste Management and Remediation Activities; Wholesale and Retail Trade; Repair of Motor Vehicles and Motorcycles; Accommodation and Food Service Activities; Financial and Insurance Activities; Business Activities; Public Administration and Defence; Compulsory Social Security; Education; Human Health and Social Work Activities; Other Services Activities; Manufacturing; and Electricity and Gas.
Bit error rate analysis of the K channel using wavelength diversity
NASA Astrophysics Data System (ADS)
Shah, Dhaval; Kothari, Dilip Kumar; Ghosh, Anjan K.
2017-05-01
The presence of atmospheric turbulence in free space causes fading and degrades the performance of a free space optical (FSO) system. To mitigate the turbulence-induced fading, multiple copies of the signal can be transmitted on different wavelengths. Each signal, in this case, will undergo different fading. This is known as the wavelength diversity technique. Bit error rate (BER) performance of FSO systems with wavelength diversity under strong turbulence conditions is investigated. The K-distribution is chosen to model the strong turbulence scenario. The source information is transmitted onto three carrier wavelengths of 1.55, 1.31, and 0.85 μm. The signals at the receiver side are combined using three different methods: optical combining (OC), equal gain combining (EGC), and selection combining (SC). Mathematical expressions are derived for the calculation of the BER for all three schemes (OC, EGC, and SC). Results are presented for link distances of 2 and 3 km under strong turbulence conditions for all the combining methods. The performance of all three schemes is also compared. It is observed that OC provides better performance than the other two techniques. Results of the proposed method are also compared with a published article.
A simple field kit for the determination of drug susceptibility in Plasmodium falciparum.
Nguyen-Dinh, P; Magloire, R; Chin, W
1983-05-01
A field kit has been developed which greatly simplifies the performance of the 48-hour in vitro test for drug resistance in Plasmodium falciparum. The kit uses an easily reconstituted lyophilized culture medium, and requires only a fingerprick blood sample. In parallel tests with 13 isolates of P. falciparum in Haiti, the new technique had a success rate equal to that of the previously described method, with comparable results in terms of parasite susceptibility in vitro to chloroquine and pyrimethamine.
2017-12-01
values designating each stimulus as a target (true) or nontarget (false). Both stim_time and stim_label should have length equal to the number of... ...depend strongly on the true values of hit rate and false-alarm rate. Based on its better estimation of hit rate and false-alarm rate, the regression...
Sour pressure swing adsorption process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhadra, Shubhra Jyoti; Wright, Andrew David; Hufton, Jeffrey Raymond
Methods and apparatuses for separating CO2 and sulfur-containing compounds from a synthesis gas obtained from gasification of a carbonaceous feedstock. The primary separating steps are performed using a sour pressure swing adsorption (SPSA) system, followed by an acid gas enrichment system and a sulfur removal unit. The SPSA system includes multiple pressure equalization steps and a rinse step using a rinse gas that is supplied from a source other than directly from one of the adsorber beds of the SPSA system.
Balance Contrast Enhancement using piecewise linear stretching
NASA Astrophysics Data System (ADS)
Rahavan, R. V.; Govil, R. C.
1993-04-01
Balance Contrast Enhancement is one of the techniques employed to produce color composites with increased color contrast. It equalizes the three images used for color composition in range and mean. This results in a color composite with large variation in hue. Here, it is shown that piecewise linear stretching can be used for performing the Balance Contrast Enhancement. In comparison with the Balance Contrast Enhancement Technique using parabolic segment as transfer function (BCETP), the method presented here is algorithmically simple, constraint-free and produces comparable results.
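A piecewise linear stretch is a lookup through straight-line segments, which np.interp expresses directly; the breakpoints below, chosen so each band reaches a common range and mean, are illustrative assumptions.

```python
import numpy as np

def piecewise_linear_stretch(band, in_knots, out_knots):
    """Map intensities through straight-line segments defined by knot pairs.

    in_knots / out_knots: increasing sequences of matching breakpoints;
    choosing them per band lets all three bands reach a common range and mean.
    """
    return np.interp(band, in_knots, out_knots)

# Illustrative use: stretch an 8-bit band so its low/median/high input levels
# (e.g., 30/90/200) land on the balanced targets 0/128/255.
rng = np.random.default_rng(0)
band = rng.integers(30, 200, size=(256, 256)).astype(float)
balanced = piecewise_linear_stretch(band, [30, 90, 200], [0, 128, 255])
```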
Low cost damage tolerant composite fabrication
NASA Technical Reports Server (NTRS)
Palmer, R. J.; Freeman, W. T.
1988-01-01
The resin transfer molding (RTM) process applied to composite aircraft parts offers the potential for using low cost resin systems with dry graphite fabrics that can be significantly less expensive than prepreg tape fabricated components. Stitched graphite fabric composites have demonstrated compression after impact failure performance that equals or exceeds that of thermoplastic or tough thermoset matrix composites. This paper reviews methods developed to fabricate complex shape composite parts using stitched graphite fabrics to increase damage tolerance with RTM processes to reduce fabrication cost.
Orbital Tori Construction Using Trajectory Following Spectral Methods
2010-09-01
a Walker delta pattern scheme of 18/6/2. Explicitly, this means the 18 satellites were equally spaced in six planes, each inclined at 55 degrees, with... a relative phasing angle parameter of 2 [65]. The planes' inclinations were reduced from the original specification of 63 degrees to 55 degrees due... navigation performance specification for the SPS was ≤ 100 meters in the horizontal plane, 95 percent of the time, and ≤ 156 meters in the vertical plane
2007-01-01
and frequency transfer (TWSTFT) were performed along three transatlantic links over the 6-month period 29 January – 31 July 2006. The GPSCPFT and... TWSTFT results were subtracted in order to estimate the combined uncertainty of the methods. The frequency values obtained from GPSCPFT and TWSTFT... values were equal to or less than the frequency-stability values σy(GPSCPFT)−y(TWSTFT)(τ) (or TheoBR(τ)) computed for the corresponding averaging
Comparison of NRZ and duo-binary format in adaptive equalization assisted 10G-optics based 25G-EPON
NASA Astrophysics Data System (ADS)
Xia, Junqi; Li, Zhengxuan; Li, Yingchun; Xu, Tingting; Chen, Jian; Song, Yingxiong; Wang, Min
2018-03-01
We investigate and compare the requirements of FFE/DFE-based adaptive equalization techniques for NRZ and duo-binary based 25-Gb/s transmission, which are two of the most promising schemes for 25G-EPON. A 25-Gb/s transmission system based on 10G optical transceivers is demonstrated, and the performance of FFE alone and of the combination of FFE and DFE with different numbers of taps is compared for the two modulation formats. The FFE/DFE-based duo-binary receiver shows better performance than the NRZ receiver. For the duo-binary receiver, only a 13-tap FFE is needed in the back-to-back case, and the combination of a 17-tap FFE and a 5-tap DFE achieves a sensitivity of -23.45 dBm in the 25-km transmission case, which is ∼0.6 dB better than the best performance of NRZ equalization. In addition, the required training sequence length for FFE/DFE-based adaptive equalization is verified. Experimental results show that a 400-symbol training length is optimal for the two modulation formats, which permits a small packet preamble in upstream burst-mode transmission.
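A minimal sketch of an LMS-adapted FFE+DFE for binary (NRZ-like) symbols, using the tap counts quoted above (17 feedforward, 5 feedback) and a 400-symbol training block; the channel model and step size are illustrative assumptions.

```python
import numpy as np

def ffe_dfe_lms(rx, train, n_ff=17, n_fb=5, mu=0.01):
    """LMS decision-feedback equalizer; trains on `train`, then decides."""
    w_ff = np.zeros(n_ff)                 # feedforward taps on received samples
    w_fb = np.zeros(n_fb)                 # feedback taps on past decisions
    past = np.zeros(n_fb)
    out = np.zeros(len(rx))
    for n in range(n_ff - 1, len(rx)):
        x = rx[n - n_ff + 1:n + 1][::-1]              # FFE input vector
        y = w_ff @ x - w_fb @ past                    # equalizer output
        d = train[n] if n < len(train) else np.sign(y)  # train, then decide
        e = d - y
        w_ff += mu * e * x                            # LMS tap updates
        w_fb -= mu * e * past
        past = np.roll(past, 1); past[0] = d
        out[n] = y
    return out

# Illustrative use: +/-1 symbols through a dispersive channel with noise.
rng = np.random.default_rng(0)
sym = rng.choice([-1.0, 1.0], 5000)
rx = np.convolve(sym, [0.9, 0.4, 0.2], mode='same') + 0.05 * rng.normal(size=5000)
eq = ffe_dfe_lms(rx, train=sym[:400])     # 400-symbol training, as in the text
```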
A semi-micromethod for determination of oxalate in human plasma.
Porowski, Tadeusz; Gałasiński, Władysław
2003-01-01
An enzymatic semi-micromethod for oxalate determination in human plasma was developed. The principle of the method depends on isolation of oxalate from deproteinized plasma, followed by determination with the colorimetric oxalate oxidase-peroxidase-indamine system. This method protects against internal oxalate losses and excludes interference from contaminants. Results obtained by this method were reliable and well suited for use as true normal values (less than or equal to 6 microM) of oxalate content in the plasma of healthy individuals. The method, which can assay plasma oxalate accurately in normal individuals as well as in hyperoxalemic conditions, is superior to those previously used. The semi-micromethod procedure does not require expensive equipment or apparatus: it is simple, easy to perform in any laboratory, and takes little time.
A Noise Adaptive Fuzzy Equalization Method for Processing Solar Extreme Ultraviolet Images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Druckmueller, M., E-mail: druckmuller@fme.vutbr.cz
A new image enhancement tool ideally suited for the visualization of fine structures in extreme ultraviolet images of the corona is presented in this paper. The Noise Adaptive Fuzzy Equalization method is particularly suited for the exceptionally high dynamic range images from the Atmospheric Imaging Assembly instrument on the Solar Dynamics Observatory. This method produces artifact-free images and gives significantly better results than methods based on convolution or Fourier transform which are often used for that purpose.
Wu, Stephen; Miller, Timothy; Masanz, James; Coarr, Matt; Halgrim, Scott; Carrell, David; Clark, Cheryl
2014-01-01
A review of published work in clinical natural language processing (NLP) may suggest that the negation detection task has been “solved.” This work proposes that an optimizable solution does not equal a generalizable solution. We introduce a new machine learning-based Polarity Module for detecting negation in clinical text, and extensively compare its performance across domains. Using four manually annotated corpora of clinical text, we show that negation detection performance suffers when there is no in-domain development (for manual methods) or training data (for machine learning-based methods). Various factors (e.g., annotation guidelines, named entity characteristics, the amount of data, and lexical and syntactic context) play a role in making generalizability difficult, but none completely explains the phenomenon. Furthermore, generalizability remains challenging because it is unclear whether to use a single source for accurate data, combine all sources into a single model, or apply domain adaptation methods. The most reliable means to improve negation detection is to manually annotate in-domain training data (or, perhaps, manually modify rules); this is a strategy for optimizing performance, rather than generalizing it. These results suggest a direction for future work in domain-adaptive and task-adaptive methods for clinical NLP. PMID:25393544
Modulation format identification aided hitless flexible coherent transceiver.
Xiang, Meng; Zhuge, Qunbi; Qiu, Meng; Zhou, Xingyu; Zhang, Fangyuan; Tang, Ming; Liu, Deming; Fu, Songnian; Plant, David V
2016-07-11
We propose a hitless flexible coherent transceiver enabled by a novel modulation format identification (MFI) scheme for dynamic agile optical networks. The modulation format transparent digital signal processing (DSP) is realized by a block-wise decision-directed least-mean-square (DD-LMS) equalizer for channel tracking, and a pilot symbol aided superscalar phase locked loop (PLL) for carrier phase estimation (CPE). For the MFI, the modulation format information is encoded onto the pilot symbols initially used for CPE. Therefore, the proposed MFI method does not require extra overhead. Moreover, it can identify arbitrary modulation formats including multi-dimensional formats, and it enables tracking of the format change for short data blocks. The performance of the proposed hitless flexible coherent transceiver is successfully evaluated with five modulation formats including QPSK, 16QAM, 64QAM, Hybrid QPSK/8QAM and set-partitioning (SP)-512-QAM. We show that the proposed MFI method induces a negligible performance penalty. Moreover, we experimentally demonstrate that such a hitless transceiver can adapt to fast block-by-block modulation format switching. Finally, the performance improvement of the proposed MFI method is experimentally verified with respect to other commonly used MFI methods.
A Novel Segmentation Approach Combining Region- and Edge-Based Information for Ultrasound Images
Luo, Yaozhong; Liu, Longzhong; Li, Xuelong
2017-01-01
Ultrasound imaging has become one of the most popular medical imaging modalities with numerous diagnostic applications. However, ultrasound (US) image segmentation, which is the essential process for further analysis, is a challenging task due to the poor image quality. In this paper, we propose a new segmentation scheme to combine both region- and edge-based information into the robust graph-based (RGB) segmentation method. The only interaction required is to select two diagonal points to determine a region of interest (ROI) on the original image. The ROI image is smoothed by a bilateral filter and then contrast-enhanced by histogram equalization. Then, the enhanced image is filtered by pyramid mean shift to improve homogeneity. With the optimization of particle swarm optimization (PSO) algorithm, the RGB segmentation method is performed to segment the filtered image. The segmentation results of our method have been compared with the corresponding results obtained by three existing approaches, and four metrics have been used to measure the segmentation performance. The experimental results show that the method achieves the best overall performance and gets the lowest ARE (10.77%), the second highest TPVF (85.34%), and the second lowest FPVF (4.48%). PMID:28536703
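The preprocessing chain described above maps directly onto standard OpenCV calls; a sketch under the assumption of an 8-bit grayscale ROI (the graph-based segmentation and PSO stages are omitted, and the filter parameters are illustrative):

```python
import cv2
import numpy as np

def preprocess_roi(roi_gray):
    """Bilateral smoothing -> histogram equalization -> pyramid mean shift."""
    # Edge-preserving smoothing of speckle noise.
    smoothed = cv2.bilateralFilter(roi_gray, d=9, sigmaColor=75, sigmaSpace=75)
    # Contrast enhancement by global histogram equalization.
    enhanced = cv2.equalizeHist(smoothed)
    # pyrMeanShiftFiltering expects a 3-channel image; convert, filter, revert.
    color = cv2.cvtColor(enhanced, cv2.COLOR_GRAY2BGR)
    homog = cv2.pyrMeanShiftFiltering(color, sp=21, sr=51)
    return cv2.cvtColor(homog, cv2.COLOR_BGR2GRAY)

# Illustrative use on a synthetic 8-bit ROI.
roi = (np.random.default_rng(0).random((128, 128)) * 255).astype(np.uint8)
filtered = preprocess_roi(roi)   # input to the segmentation stage
```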
Monge, Paul
2006-01-01
Activity-based methods serve as a dynamic process that has allowed many other industries to reduce and control their costs, increase productivity, and streamline their processes while improving product quality and service. The method could serve the healthcare industry in an equally beneficial way. Activity-based methods encompass both activity-based costing (ABC) and activity-based management (ABM). ABC is a cost management approach that links resource consumption to activities that an enterprise performs, and then assigns those activities and their associated costs to customers, products, or product lines. ABM uses the resource assignments derived in ABC so that operation managers can improve their departmental processes and workflows. There are three fundamental problems with traditional cost systems. First, traditional systems fail to reflect the underlying diversity of work taking place within an enterprise. Second, they use allocations that are, for the most part, arbitrary: single-step allocations fail to reflect the real work, that is, the activities being performed and the associated resources actually consumed. Third, they only provide a cost number that, standing alone, does not provide any guidance on how to improve performance by lowering cost or enhancing throughput.
Noise and drift analysis of non-equally spaced timing data
NASA Technical Reports Server (NTRS)
Vernotte, F.; Zalamansky, G.; Lantz, E.
1994-01-01
Generally, it is possible to obtain equally spaced timing data from oscillators. The measurement of the drifts and noises affecting oscillators is then performed by using a variance (Allan variance, modified Allan variance, or time variance) or a system of several variances (multivariance method). However, in some cases, several samples, or even several sets of samples, are missing. In the case of millisecond pulsar timing data, for instance, observations are quite irregularly spaced in time. Nevertheless, since some observations are very close together (one minute) and since the timing data sequence is very long (more than ten years), information on both short-term and long-term stability is available. Unfortunately, a direct variance analysis is not possible without interpolating missing data. Different interpolation algorithms (linear interpolation, cubic spline) are used to calculate variances in order to verify that they neither lose information nor add erroneous information. A comparison of the results of the different algorithms is given. Finally, the multivariance method was adapted to the measurement sequence of the millisecond pulsar timing data: the responses of each variance of the system are calculated for each type of noise and drift, with the same missing samples as in the pulsar timing sequence. An estimation of precision, dynamics, and separability of this method is given.
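A sketch of the interpolate-then-analyze step discussed above: fill the gaps in an irregularly sampled time-error sequence by linear interpolation onto a uniform grid, then compute the overlapping Allan variance from the gridded phase data. The grid spacing and synthetic data are illustrative assumptions.

```python
import numpy as np

def allan_variance(x, tau0, m):
    """Overlapping Allan variance of equally spaced phase/time data x
    at averaging time m * tau0."""
    d2 = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]      # second differences
    return np.sum(d2 ** 2) / (2.0 * (m * tau0) ** 2 * len(d2))

# Irregularly spaced timing residuals (e.g., observation epochs with gaps).
rng = np.random.default_rng(0)
t_obs = np.sort(rng.choice(np.arange(10000), size=3000, replace=False)).astype(float)
x_obs = np.cumsum(rng.normal(0, 1e-9, size=t_obs.size))  # toy phase data

# Linear interpolation onto a uniform grid before variance analysis.
tau0 = 10.0
t_grid = np.arange(t_obs[0], t_obs[-1], tau0)
x_grid = np.interp(t_grid, t_obs, x_obs)

avars = {m: allan_variance(x_grid, tau0, m) for m in (1, 4, 16, 64)}
```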
Mohydeen, Ali; Chargé, Pascal; Wang, Yide; Bazzi, Oussama; Ding, Yuehua
2018-05-06
A parametric scheme for spatially correlated sparse multiple-input multiple-output (MIMO) channel path delay estimation in scattering environments is presented in this paper. In MIMO outdoor communication scenarios, channel impulse responses (CIRs) of different transmit–receive antenna pairs are often supposed to be sparse due to a few significant scatterers, and share a common sparse pattern, such that path delays are assumed to be equal for every transmit–receive antenna pair. In some existing works, an exact common support condition is exploited, where the path delays are considered equal for every transmit–receive antenna pair, meanwhile ignoring the influence of scattering. A more realistic channel model is proposed in this paper, where due to scatterers in the environment, the received signals are modeled as clusters of multi-rays around a nominal or mean time delay at different antenna elements, resulting in a non-strictly exact common support phenomenon. A method for estimating the channel mean path delays is then derived based on the subspace approach, and the tracking of the effective dimension of the signal subspace that changes due to the wireless environment. The proposed method shows an improved channel mean path delay estimation performance in comparison with the conventional estimation methods.
ERIC Educational Resources Information Center
Driver, Melissa K.; Powell, Sarah R.
2015-01-01
Students often experience difficulty with attaching meaning to mathematics symbols. Many students react to symbols, such as the equal sign, as a command to "do something" or "write an answer" without reflecting upon the proper relational meaning of the equal sign. One method for assessing equal-sign understanding is through…
[Efficiency of combined methods of hemorrhoid treatment using HAL-RAR and laser destruction].
Rodoman, G V; Kornev, L V; Shalaeva, T I; Malushenko, R N
2017-01-01
To develop a combined method of treatment of hemorrhoids using arterial ligation under Doppler control and laser destruction of internal and external hemorrhoids. The study included 100 patients with chronic hemorrhoids of stages II and III. The combined HAL-laser method was used in the study group, the HAL-RAR technique in control group 1, and closed hemorrhoidectomy with a linear stapler in control group 2. Comparative evaluation of the results in the groups was performed. The combined method overcomes the drawbacks of traditional surgical treatment and the limitations of HAL-RAR in eliminating the external component. Moreover, it has higher efficiency in treating stage II-III hemorrhoids compared with HAL-RAR and is equally safe and well tolerated by patients. This method does not increase the risk of recurrence, and reduces the incidence of complications and the time of disability.
Zhang, Junwen; Yu, Jianjun; Chi, Nan; Chien, Hung-Chang
2014-08-25
We theoretically and experimentally investigate a time-domain digital pre-equalization (DPEQ) scheme for bandwidth-limited optical coherent communication systems, which is based on feedback of channel characteristics from the receiver-side blind and adaptive equalizers, such as the least-mean-squares (LMS) algorithm and constant- or multi-modulus algorithms (CMA, MMA). Based on the proposed DPEQ scheme, we theoretically and experimentally study its performance under various channel conditions as well as resolutions for channel estimation, such as filtering bandwidth, tap length, and OSNR. Using a high-speed 64-GSa/s DAC in cooperation with the proposed DPEQ technique, we successfully synthesized band-limited 40-Gbaud signals in modulation formats of polarization-division multiplexed (PDM) quadrature phase shift keying (QPSK), 8-quadrature amplitude modulation (QAM), and 16-QAM, and significant improvement in both back-to-back and transmission BER performance is also demonstrated.
Appropriate Statistical Analysis for Two Independent Groups of Likert-Type Data
ERIC Educational Resources Information Center
Warachan, Boonyasit
2011-01-01
The objective of this research was to determine the robustness and statistical power of three different methods for testing the hypothesis that ordinal samples of five and seven Likert categories come from equal populations. The three methods are the two sample t-test with equal variances, the Mann-Whitney test, and the Kolmogorov-Smirnov test. In…
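The three procedures compared in the abstract are all single calls in scipy; a sketch on simulated 5-category Likert responses (the group sizes and response distributions are illustrative assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Two independent groups of 5-point Likert responses (ordinal 1..5).
g1 = rng.choice([1, 2, 3, 4, 5], size=60, p=[0.10, 0.20, 0.40, 0.20, 0.10])
g2 = rng.choice([1, 2, 3, 4, 5], size=60, p=[0.05, 0.15, 0.35, 0.30, 0.15])

t, p_t = stats.ttest_ind(g1, g2, equal_var=True)   # two-sample t, equal variances
u, p_u = stats.mannwhitneyu(g1, g2)                # Mann-Whitney U
d, p_ks = stats.ks_2samp(g1, g2)                   # Kolmogorov-Smirnov

print(f"t-test p={p_t:.3f}  Mann-Whitney p={p_u:.3f}  KS p={p_ks:.3f}")
```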
Application of capability indices and control charts in the analytical method control strategy.
Oliva, Alexis; Llabres Martinez, Matías
2017-08-01
In this study, we assessed the usefulness of control charts in combination with the process capability indices, Cpm and Cpk, in the control strategy of an analytical method. The traditional X-chart and moving range chart were used to monitor the analytical method over a 2-year period. The results confirmed that the analytical method is in-control and stable. Different criteria were used to establish the specification limits (i.e., analyst requirements) for fixed method performance (i.e., method requirements). If the specification limits and control limits are equal in breadth, the method can be considered "capable" (Cpm = 1), but it does not satisfy the minimum method capability requirements proposed by Pearn and Shu (2003). Similar results were obtained using the Cpk index. The method capability was also assessed as a function of method performance for fixed analyst requirements. The results indicate that the method does not meet the requirements of the analytical target approach. Real data from a SEC method with light-scattering detection were used as a model, whereas previously published data were used to illustrate the applicability of the proposed approach. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
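For concreteness, the two indices and the individuals-chart limits can be computed as below, using the standard formulas Cpk = min(USL − μ, μ − LSL)/(3σ) and Cpm = (USL − LSL)/(6·sqrt(σ² + (μ − T)²)); the specification limits, target, and monitoring data are illustrative assumptions.

```python
import numpy as np

def capability(x, lsl, usl, target):
    """Process capability indices from a monitored measurement series."""
    mu, sigma = x.mean(), x.std(ddof=1)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    cpm = (usl - lsl) / (6 * np.sqrt(sigma**2 + (mu - target)**2))
    return cpk, cpm

def individuals_chart_limits(x):
    """X-chart limits from the average moving range (d2 = 1.128 for n = 2)."""
    mr_bar = np.abs(np.diff(x)).mean()
    center = x.mean()
    return center - 3 * mr_bar / 1.128, center, center + 3 * mr_bar / 1.128

# Illustrative monitoring data: recovery (%) of a control sample over time.
rng = np.random.default_rng(0)
recovery = rng.normal(100.0, 1.2, size=50)
print(capability(recovery, lsl=96.0, usl=104.0, target=100.0))
print(individuals_chart_limits(recovery))
```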
Virovets, O A; Gapparov, M M
1998-01-01
Using a new method based on detecting, in blood serum, the radioactivity of water formed from tritium-labeled precursors (glucose; the amino acids valine, serine, and histidine; and palmitic acid), the distribution of these precursors between the oxidative and anabolic pathways of metabolism was determined. The work was carried out on laboratory rats. In young pubertal rats the ratio of fluxes along these pathways for glucose was found to be 2.83, i.e., glucose was used to a greater degree as an energy substrate. In contrast, for palmitic acid this ratio was 0.10: it was to a greater degree incorporated into the plastic material of the organism. For serine, histidine, and valine the ratio was 0.34, 0.71, and 0.46, respectively. In growing rats the distribution of fluxes was shifted toward the anabolic pathway (ratio of fluxes 0.19); in old rats, toward the oxidative pathway (ratio of fluxes 0.71).
A novel parallel architecture for local histogram equalization
NASA Astrophysics Data System (ADS)
Ohannessian, Mesrob I.; Choueiter, Ghinwa F.; Diab, Hassan
2005-07-01
Local histogram equalization is an image enhancement algorithm that has found wide application in the pre-processing stage of areas such as computer vision, pattern recognition and medical imaging. The computationally intensive nature of the procedure, however, is a major limitation for real-time interactive applications. This work explores the possibility of performing parallel local histogram equalization, using an array of special-purpose elementary processors, through an HDL implementation that targets FPGA or ASIC platforms. A novel parallelization scheme is presented and the corresponding architecture is derived. The algorithm is reduced to pixel-level operations. Processing elements are assigned image blocks to maintain a reasonable performance-cost ratio. To further simplify both processor and memory organizations, a bit-serial access scheme is used. A brief performance assessment is provided to illustrate and quantify the merit of the approach.
An Efficient Augmented Lagrangian Method with Applications to Total Variation Minimization
2012-08-17
the classic augmented Lagrangian multiplier method, we propose, analyze and test an algorithm for solving a class of equality-constrained non-smooth...method, we propose, analyze and test an algorithm for solving a class of equality-constrained non-smooth optimization problems (chie y but not...significantly outperforming several state-of-the-art solvers on most tested problems. The resulting MATLAB solver, called TVAL3, has been posted online [23]. 2
Multimodal biometric method that combines veins, prints, and shape of a finger
NASA Astrophysics Data System (ADS)
Kang, Byung Jun; Park, Kang Ryoung; Yoo, Jang-Hee; Kim, Jeong Nyeo
2011-01-01
Multimodal biometrics provides high recognition accuracy and population coverage by using various biometric features. A single finger contains finger veins, fingerprints, and finger geometry features; by using multimodal biometrics, information on these multiple features can be simultaneously obtained in a short time and their fusion can outperform the use of a single feature. This paper proposes a new finger recognition method based on the score-level fusion of finger veins, fingerprints, and finger geometry features. This research is novel in the following four ways. First, the performances of the finger-vein and fingerprint recognition are improved by using a method based on a local derivative pattern. Second, the accuracy of the finger geometry recognition is greatly increased by combining a Fourier descriptor with principal component analysis. Third, a fuzzy score normalization method is introduced; its performance is better than the conventional Z-score normalization method. Fourth, finger-vein, fingerprint, and finger geometry recognitions are combined by using three support vector machines and a weighted SUM rule. Experimental results showed that the equal error rate of the proposed method was 0.254%, which was lower than those of the other methods.
Multiratio fusion change detection with adaptive thresholding
NASA Astrophysics Data System (ADS)
Hytla, Patrick C.; Balster, Eric J.; Vasquez, Juan R.; Neuroth, Robert M.
2017-04-01
A ratio-based change detection method known as multiratio fusion (MRF) is proposed and tested. The MRF framework builds on other change detection components proposed in this work: dual ratio (DR) and multiratio (MR). The DR method involves two ratios coupled with adaptive thresholds to maximize detected changes and minimize false alarms. The use of two ratios is shown to outperform the single ratio case when the means of the image pairs are not equal. MR change detection builds on the DR method by including negative imagery to produce four total ratios with adaptive thresholds. Inclusion of negative imagery is shown to improve detection sensitivity and to boost detection performance in certain target and background cases. MRF further expands this concept by fusing together the ratio outputs using a routine in which detections must be verified by two or more ratios to be classified as a true changed pixel. The proposed method is tested with synthetically generated test imagery and real datasets with results compared to other methods found in the literature. DR is shown to significantly outperform the standard single ratio method. MRF produces excellent change detection results that exhibit up to a 22% performance improvement over other methods from the literature at low false-alarm rates.
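A sketch of the dual-ratio (DR) building block described above: both ratio orderings are formed and each is thresholded adaptively (mean plus k standard deviations); here the two detection maps are fused with a simple OR, and the threshold constant and test imagery are illustrative assumptions.

```python
import numpy as np

def dual_ratio_change(img1, img2, k=3.0, eps=1e-6):
    """Dual-ratio change map with an adaptive threshold on each ratio image."""
    r1 = (img1 + eps) / (img2 + eps)      # sensitive when img1 > img2
    r2 = (img2 + eps) / (img1 + eps)      # sensitive when img2 > img1
    maps = []
    for r in (r1, r2):
        thr = r.mean() + k * r.std()      # adaptive threshold per ratio image
        maps.append(r > thr)
    return maps[0] | maps[1]              # changed if either ratio flags it

# Illustrative use: a bright change inserted into the second image.
rng = np.random.default_rng(0)
img1 = rng.normal(100, 5, (128, 128))
img2 = img1 + rng.normal(0, 5, (128, 128))
img2[40:50, 40:50] += 60                  # simulated change region
mask = dual_ratio_change(img1, img2)
```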
Hybrid time-frequency domain equalization for LED nonlinearity mitigation in OFDM-based VLC systems.
Li, Jianfeng; Huang, Zhitong; Liu, Xiaoshuang; Ji, Yuefeng
2015-01-12
A novel hybrid time-frequency domain equalization scheme is proposed and experimentally demonstrated to mitigate the white light emitting diode (LED) nonlinearity in visible light communication (VLC) systems based on orthogonal frequency division multiplexing (OFDM). We handle the linear and nonlinear distortion separately in a nonlinear OFDM system. The linear part is equalized in frequency domain and the nonlinear part is compensated by an adaptive nonlinear time domain equalizer (N-TDE). The experimental results show that with only a small number of parameters the nonlinear equalizer can efficiently mitigate the LED nonlinearity. With the N-TDE the modulation index (MI) and BER performance can be significantly enhanced.
Powell, Sarah R; Fuchs, Lynn S
2010-05-01
Elementary school students often misinterpret the equal sign (=) as an operational rather than a relational symbol. Such misunderstanding is problematic because solving equations with missing numbers may be important for higher-order mathematics skills including word problems. Research indicates equal-sign instruction can alter how typically-developing students use the equal sign, but no study has examined effects for students with mathematics difficulty (MD) or how equal-sign instruction contributes to word-problem skill for students with or without MD. The present study assessed the efficacy of equal-sign instruction within word-problem tutoring. Third-grade students with MD (n = 80) were assigned to word-problem tutoring, word-problem tutoring plus equal-sign instruction (combined) tutoring, or no-tutoring control. Combined tutoring produced better improvement on equal-sign tasks and open equations compared to the other 2 conditions. On certain forms of word problems, combined tutoring but not word-problem tutoring alone produced better improvement than control. When compared at posttest to 3rd-grade students without MD on equal-sign tasks and open equations, only combined tutoring students with MD performed comparably.
Comparison of different methods used in integral codes to model coagulation of aerosols
NASA Astrophysics Data System (ADS)
Beketov, A. I.; Sorokin, A. A.; Alipchenkov, V. M.; Mosunova, N. A.
2013-09-01
The methods for calculating coagulation of particles in the carrying phase that are used in the integral codes SOCRAT, ASTEC, and MELCOR, as well as the Hounslow and Jacobson methods used to model aerosol processes in the chemical industry and in atmospheric investigations are compared on test problems and against experimental results in terms of their effectiveness and accuracy. It is shown that all methods are characterized by a significant error in modeling the distribution function for micrometer particles if calculations are performed using rather "coarse" spectra of particle sizes, namely, when the ratio of the volumes of particles from neighboring fractions is equal to or greater than two. With reference to the problems considered, the Hounslow method and the method applied in the aerosol module used in the ASTEC code are the most efficient ones for carrying out calculations.
NASA Astrophysics Data System (ADS)
Bindschadler, Michael; Modgil, Dimple; Branch, Kelley R.; La Riviere, Patrick J.; Alessio, Adam M.
2014-04-01
Myocardial blood flow (MBF) can be estimated from dynamic contrast enhanced (DCE) cardiac CT acquisitions, leading to quantitative assessment of regional perfusion. The need for low radiation dose and the lack of consensus on MBF estimation methods motivates this study to refine the selection of acquisition protocols and models for CT-derived MBF. DCE cardiac CT acquisitions were simulated for a range of flow states (MBF = 0.5, 1, 2, 3 ml (min g)⁻¹, cardiac output = 3, 5, 8 L min⁻¹). Patient kinetics were generated by a mathematical model of iodine exchange incorporating numerous physiological features including heterogeneous microvascular flow, permeability and capillary contrast gradients. CT acquisitions were simulated for multiple realizations of realistic x-ray flux levels. CT acquisitions that reduce radiation exposure were implemented by varying both temporal sampling (1, 2, and 3 s sampling intervals) and tube currents (140, 70, and 25 mAs). For all acquisitions, we compared three quantitative MBF estimation methods (two-compartment model, an axially-distributed model, and the adiabatic approximation to the tissue homogeneous model) and a qualitative slope-based method. In total, over 11 000 time attenuation curves were used to evaluate MBF estimation in multiple patient and imaging scenarios. After iodine-based beam hardening correction, the slope method consistently underestimated flow by on average 47.5%, and the quantitative models provided estimates with less than 6.5% average bias and increasing variance with increasing dose reductions. The three quantitative models performed equally well, offering estimates with essentially identical root mean squared error (RMSE) for matched acquisitions. MBF estimates using the qualitative slope method were inferior in terms of bias and RMSE compared to the quantitative methods. MBF estimate error was equal at matched dose reductions for all quantitative methods across the range of techniques evaluated. This suggests that there is no particular advantage between quantitative estimation methods, nor to performing dose reduction via tube current reduction compared to temporal sampling reduction. These data are important for optimizing implementation of cardiac dynamic CT in clinical practice and in prospective CT MBF trials.
Job-mix modeling and system analysis of an aerospace multiprocessor.
NASA Technical Reports Server (NTRS)
Mallach, E. G.
1972-01-01
An aerospace guidance computer organization, consisting of multiple processors and memory units attached to a central time-multiplexed data bus, is described. A job mix for this type of computer is obtained by analysis of Apollo mission programs. Multiprocessor performance is then analyzed using: 1) queuing theory, under certain 'limiting case' assumptions; 2) Markov process methods; and 3) system simulation. Results of the analyses indicate: 1) Markov process analysis is a useful and efficient predictor of simulation results; 2) efficient job execution is not seriously impaired even when the system is so overloaded that new jobs are inordinately delayed in starting; 3) job scheduling is significant in determining system performance; and 4) a system having many slow processors may or may not perform better than a system of equal power having few fast processors, but will not perform significantly worse.
Biometric recognition via texture features of eye movement trajectories in a visual searching task.
Li, Chunyong; Xue, Jiguo; Quan, Cheng; Yue, Jingwei; Zhang, Chenggang
2018-01-01
Biometric recognition technology based on eye-movement dynamics has been in development for more than ten years. Different visual tasks, feature extraction and feature recognition methods are proposed to improve the performance of eye movement biometric system. However, the correct identification and verification rates, especially in long-term experiments, as well as the effects of visual tasks and eye trackers' temporal and spatial resolution are still the foremost considerations in eye movement biometrics. With a focus on these issues, we proposed a new visual searching task for eye movement data collection and a new class of eye movement features for biometric recognition. In order to demonstrate the improvement of this visual searching task being used in eye movement biometrics, three other eye movement feature extraction methods were also tested on our eye movement datasets. Compared with the original results, all three methods yielded better results as expected. In addition, the biometric performance of these four feature extraction methods was also compared using the equal error rate (EER) and Rank-1 identification rate (Rank-1 IR), and the texture features introduced in this paper were ultimately shown to offer some advantages with regard to long-term stability and robustness over time and spatial precision. Finally, the results of different combinations of these methods with a score-level fusion method indicated that multi-biometric methods perform better in most cases.
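For reference, the equal error rate used to compare the feature sets can be estimated from genuine and impostor score distributions as in the following sketch; the score distributions are synthetic, and the study's actual matcher and scores are not reproduced here.

import numpy as np

def equal_error_rate(genuine, impostor):
    # Sweep a threshold over all observed scores and return the point
    # where false accept rate (FAR) and false reject rate (FRR) cross.
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])
    frr = np.array([(genuine < t).mean() for t in thresholds])
    i = np.argmin(np.abs(far - frr))
    return 0.5 * (far[i] + frr[i])

# Synthetic similarity scores (higher = more likely the same person).
rng = np.random.default_rng(0)
genuine = rng.normal(2.0, 1.0, 1000)
impostor = rng.normal(0.0, 1.0, 1000)
print("EER ~", equal_error_rate(genuine, impostor))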
A correction method of the anode wire modulation for 2D MWPCs detector
NASA Astrophysics Data System (ADS)
Wen, Z. W.; Qi, H. R.; Zhang, Y. L.; Wang, H. Y.; Liu, L.; Li, Y. H.
2018-04-01
The linearity performance of 2D Multi-Wire Proportional Chambers (MWPCs) across the anode wires is modulated by the discrete anode wires. A MWPC detector with 2 mm anode wire spacing was developed to study this anode wire modulation effect. The 2D linearity performance was measured with a 55Fe source moved by an electric mobile platform. The experimental results show that the deviation of the measured position depends on the incident position along the axis across the anode wires, and that the curve between the measured and incident positions is consistent with a sine function whose period equals the anode wire spacing. A correction method for the measured position across the anode wire direction was obtained by fitting this curve. After the data are modified by the correction method, the non-linearity of the measured position across the anode wire direction is reduced to about 0.085% and the imaging capability is clearly improved.
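The correction procedure described, fitting a sine of period equal to the wire pitch to the measured-versus-incident deviation and then subtracting it, might look like the following sketch. The amplitude, phase, and calibration scan are invented for illustration.

import numpy as np
from scipy.optimize import curve_fit

PITCH = 2.0  # mm, the anode wire spacing quoted above

def deviation(x, amp, phase):
    # Sine-shaped position deviation with period equal to the wire pitch.
    return amp * np.sin(2.0 * np.pi * x / PITCH + phase)

# Hypothetical calibration scan: known incident vs measured positions.
incident = np.linspace(0.0, 10.0, 200)
measured = incident + deviation(incident, 0.15, 0.4)

popt, _ = curve_fit(deviation, incident, measured - incident, p0=(0.1, 0.0))

def correct(x_meas, amp, phase):
    # First-order correction: subtract the fitted sine evaluated at the
    # measured position (valid while the deviation is small vs the pitch).
    return x_meas - deviation(x_meas, amp, phase)

print(np.max(np.abs(correct(measured, *popt) - incident)))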
Baek, Soo Kyoung; Lee, Seung Seok; Park, Eun Jeon; Sohn, Dong Hwan; Lee, Hye Suk
2003-02-05
A rapid and sensitive column-switching semi-micro high-performance liquid chromatography method was developed for the direct analysis of tiropramide in human plasma. The plasma sample (100 μl) was directly injected onto a Capcell Pak MF Ph-1 precolumn, where deproteinization and analyte fractionation occurred. Tiropramide was then eluted into an enrichment column (Capcell Pak UG C18) using acetonitrile-potassium phosphate (pH 7.0, 50 mM) (12:88, v/v) and was analyzed on a semi-micro C18 analytical column using acetonitrile-potassium phosphate (pH 7.0, 10 mM) (50:50, v/v). The method showed excellent sensitivity (limit of quantification 5 ng/ml) and good precision (C.V.
Contrast Enhancement Algorithm Based on Gap Adjustment for Histogram Equalization
Chiu, Chung-Cheng; Ting, Chih-Chung
2016-01-01
Image enhancement methods have been widely used to improve the visual effects of images. Owing to its simplicity and effectiveness, histogram equalization (HE) is one of the most commonly used methods for enhancing image contrast. However, HE may result in over-enhancement and feature loss problems that lead to an unnatural look and loss of detail in the processed images. Researchers have proposed various HE-based methods to solve the over-enhancement problem; however, they have largely ignored the feature loss problem. Therefore, a contrast enhancement algorithm based on gap adjustment for histogram equalization (CegaHE) is proposed. It builds on a visual contrast enhancement algorithm based on histogram equalization (VCEA), which generates visually pleasing enhanced images, and improves the enhancement effects of VCEA. CegaHE adjusts the gaps between two gray values based on an adjustment equation, which takes the properties of human visual perception into consideration, to solve the over-enhancement problem. Besides, it also alleviates the feature loss problem and further enhances the textures in the dark regions of the images to improve the quality of the processed images for human visual perception. Experimental results demonstrate that CegaHE is a reliable method for contrast enhancement and that it significantly outperforms VCEA and other methods. PMID:27338412
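As a baseline for what CegaHE refines, plain global histogram equalization takes only a few lines; the sketch below shows the CDF-based remapping that causes the over-enhancement and feature-loss problems the paper addresses. CegaHE's gap-adjustment equation itself is not reproduced, and the test image is invented.

import numpy as np

def histogram_equalize(img):
    # Map each gray level through the normalized cumulative histogram.
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf_min = cdf[cdf > 0].min()
    lut = np.clip(np.round(255.0 * (cdf - cdf_min) / (cdf[-1] - cdf_min)),
                  0, 255).astype(np.uint8)
    return lut[img]

img = (np.random.rand(64, 64) ** 2 * 255).astype(np.uint8)  # dark-biased
eq = histogram_equalize(img)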
NASA Astrophysics Data System (ADS)
Jarkeh, Mohammad Reza; Mianabadi, Ameneh; Mianabadi, Hojjat
2016-10-01
Mismanagement and uneven distribution of water may lead to or intensify conflict among countries. Allocation of water among trans-boundary river neighbours is a key issue in the utilization of shared water resources. Bankruptcy theory is a cooperative game theory approach used when the total demand of the riparian states is larger than the total available water. In this study, we survey the application of seven Classical Bankruptcy Rules (CBRs), namely Proportional (CBR-PRO), Adjusted Proportional (CBR-AP), Constrained Equal Awards (CBR-CEA), Constrained Equal Losses (CBR-CEL), Piniles (CBR-Piniles), Minimal Overlap (CBR-MO), and Talmud (CBR-Talmud), and four Sequential Sharing Rules (SSRs), namely Proportional (SSR-PRO), Constrained Equal Awards (SSR-CEA), Constrained Equal Losses (SSR-CEL), and Talmud (SSR-Talmud), to the allocation of the Euphrates River among three riparian countries: Turkey, Syria, and Iraq. However, there is no established method for identifying the most equitable allocation rule. Therefore, in this paper, a new method is proposed for choosing the allocation rule that appears most equitable to the stakeholders. The results reveal that, based on the proposed model, CBR-AP seems to be the most equitable way to allocate the Euphrates River water among Turkey, Syria, and Iraq.
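Two of the classical bankruptcy rules named above are simple enough to sketch directly: proportional division and constrained equal awards. The claims in the usage example are hypothetical, not the paper's Euphrates figures.

import numpy as np

def proportional(estate, claims):
    # CBR-PRO: each claimant receives estate * claim / total claims.
    claims = np.asarray(claims, float)
    return estate * claims / claims.sum()

def constrained_equal_awards(estate, claims):
    # CBR-CEA: find a common award a with sum(min(claim, a)) == estate,
    # so nobody receives more than their claim. Assumes estate <= total
    # claims, the defining condition of a bankruptcy problem.
    claims = np.asarray(claims, float)
    lo, hi = 0.0, claims.max()
    for _ in range(100):            # bisection on the common award a
        a = 0.5 * (lo + hi)
        if np.minimum(claims, a).sum() > estate:
            hi = a
        else:
            lo = a
    return np.minimum(claims, a)

claims = [18.4, 9.6, 13.0]          # hypothetical demands, km3/yr
print(proportional(30.0, claims))
print(constrained_equal_awards(30.0, claims))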
The gender gap in sport performance: equity influences equality.
Capranica, Laura; Piacentini, Maria Francesca; Halson, Shona; Myburgh, Kathryn H; Ogasawara, Etsuko; Millard-Stafford, Mindy
2013-01-01
Sport is recognized as playing a relevant societal role in promoting education, health, intercultural dialogue, and individual development, regardless of gender, race, age, ability, religion, political affiliation, sexual orientation, and socioeconomic background. Yet, it was not until the 2012 Summer Olympic Games in London that every country's delegation included a female competitor. The gender gap in sport, although closing, remains; it stems partly from biological differences affecting performance, but it is also influenced by reduced opportunity and sociopolitical factors that limit full female participation across a range of sports around the world. Until the cultural environment is equitable, scientific discussion of physiological differences using methods that examine progression in male and female world-record performances is limited. This commentary is intended to provide a forum to discuss issues underlying gender differences in sport performance from a global perspective and to acknowledge the influence of cultural and sociopolitical factors that continue to ultimately affect female performance.
Li, Kangning; Ma, Jing; Tan, Liying; Yu, Siyuan; Zhai, Chao
2016-06-10
The performances of fiber-based free-space optical (FSO) communications over gamma-gamma distributed turbulence are studied for multiple aperture receiver systems. The equal gain combining (EGC) technique is considered as a practical scheme to mitigate the atmospheric turbulence. Bit error rate (BER) performances for binary-phase-shift-keying-modulated coherent detection fiber-based free-space optical communications are derived and analyzed for EGC diversity receptions through an approximation method. To show the net diversity gain of a multiple aperture receiver system, BER performances of EGC are compared with a single monolithic aperture receiver system with the same total aperture area (same average total incident optical power on the aperture surface) for fiber-based free-space optical communications. The analytical results are verified by Monte Carlo simulations. System performances are also compared for EGC diversity coherent FSO communications with or without considering fiber-coupling efficiencies.
Multiple point least squares equalization in a room
NASA Technical Reports Server (NTRS)
Elliott, S. J.; Nelson, P. A.
1988-01-01
Equalization filters designed to minimize the mean square error between a delayed version of the original electrical signal and the equalized response at a point in a room have previously been investigated. In general, such a strategy degrades the response at positions in the room away from the equalization point. A method is presented for designing an equalization filter by adjusting the filter coefficients to minimize the sum of the squares of the errors between the equalized responses at multiple points in the room and delayed versions of the original electrical signal. Such an equalization filter can give a more uniform frequency response over a greater volume of the enclosure than the single point equalizer above. Computer simulation results are presented for equalizing the frequency responses from a loudspeaker to various typical ear positions, in a room with dimensions and acoustic damping typical of a car interior, using the two approaches outlined above. Adaptive filter algorithms, which can automatically adjust the coefficients of a digital equalization filter to achieve this minimization, are also discussed.
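The multiple-point design reduces to ordinary least squares once the room responses are stacked: one convolution matrix per equalization point, all matched to a delayed unit pulse. A minimal sketch, with invented 4-tap "room" responses standing in for measured ones:

import numpy as np
from scipy.linalg import toeplitz

def multipoint_equalizer(impulse_responses, filt_len, delay):
    # Stack one convolution matrix per equalization point and match a
    # delayed unit pulse at every point simultaneously (least squares).
    blocks, targets = [], []
    for h in impulse_responses:
        col = np.concatenate([h, np.zeros(filt_len - 1)])
        H = toeplitz(col, np.zeros(filt_len))   # convolution matrix
        d = np.zeros(H.shape[0]); d[delay] = 1.0
        blocks.append(H); targets.append(d)
    A = np.vstack(blocks)
    b = np.concatenate(targets)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

# Two invented loudspeaker-to-ear impulse responses.
h1 = np.array([1.0, 0.6, 0.3, 0.1])
h2 = np.array([0.9, 0.7, 0.2, 0.05])
w = multipoint_equalizer([h1, h2], filt_len=16, delay=8)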
Kruskal, Jonathan B; Reedy, Allen; Pascal, Laurie; Rosen, Max P; Boiselle, Phillip M
2012-01-01
Many hospital radiology departments are adopting "lean" methods developed in automobile manufacturing to improve operational efficiency, eliminate waste, and optimize the value of their services. The lean approach, which emphasizes process analysis, has particular relevance to radiology departments, which depend on a smooth flow of patients and uninterrupted equipment function for efficient operation. However, the application of lean methods to isolated problems is not likely to improve overall efficiency or to produce a sustained improvement. Instead, the authors recommend a gradual but continuous and comprehensive "lean transformation" of work philosophy and workplace culture. Fundamental principles that must consistently be put into action to achieve such a transformation include equal involvement of and equal respect for all staff members, elimination of waste, standardization of work processes, improvement of flow in all processes, use of visual cues to communicate and inform, and use of specific tools to perform targeted data collection and analysis and to implement and guide change. Many categories of lean tools are available to facilitate these tasks: value stream mapping for visualizing the current state of a process and identifying activities that add no value; root cause analysis for determining the fundamental cause of a problem; team charters for planning, guiding, and communicating about change in a specific process; management dashboards for monitoring real-time developments; and a balanced scorecard for strategic oversight and planning in the areas of finance, customer service, internal operations, and staff development. © RSNA, 2012.
Nocerino, Elisabetta; Mason, Peter J.; Schwahn, Denise J.; Hetzel, Scott; Turnquist, Alyssa M.; Lee, Fred T.; Brace, Christopher L.
2017-01-01
Purpose: To determine how close to the heart pulmonary microwave ablation can be performed without causing cardiac tissue injury or significant arrhythmia. Materials and Methods: The study was performed with approval from the institutional animal care and use committee. Computed tomographic fluoroscopically guided microwave ablation of the lung was performed in 12 swine. Antennas were randomized to either parallel (180° ± 20°) or perpendicular (90° ± 20°) orientation relative to the heart surface and to distances of 0-10 mm from the heart. Ablations were performed at 65 W for 5 minutes or until a significant arrhythmia (asystole, heart block, bradycardia, supraventricular or ventricular tachycardia) developed. Heart tissue was evaluated with vital staining and histologic examination. Data were analyzed with mixed effects logistic regression, receiver operating characteristic curves, and the Fisher exact test. Results: Thirty-four pulmonary microwave ablations were performed with the antenna a median distance of 4 mm from the heart in both perpendicular (n = 17) and parallel (n = 17) orientations. Significant arrhythmias developed during six (18%) ablations. Cardiac tissue injury occurred with 17 ablations (50%). Risk of arrhythmia and tissue injury decreased with increasing antenna distance from the heart with both antenna orientations. No cardiac complication occurred at a distance of greater than or equal to 4.4 mm from the heart. The ablation zone extended to the pleural surface adjacent to the heart in 71% of parallel and 17% of perpendicular ablations performed 5-10 mm from the heart. Conclusion: Microwave lung ablations performed greater than or equal to 5 mm from the heart were associated with a low risk of cardiac complications. © RSNA, 2016 PMID:27732159
24 CFR 891.600 - Responsibilities of Borrower.
Code of Federal Regulations, 2011 CFR
2011-04-01
... fair housing marketing plan and all Federal, State, or local fair housing and equal opportunity... regardless of discriminatory considerations, such as their race, color, creed, religion, familial status... of capital items). All functions must be performed in compliance with equal opportunity requirements...
24 CFR 891.600 - Responsibilities of Borrower.
Code of Federal Regulations, 2014 CFR
2014-04-01
... fair housing marketing plan and all Federal, State, or local fair housing and equal opportunity... regardless of discriminatory considerations, such as their race, color, creed, religion, familial status... of capital items). All functions must be performed in compliance with equal opportunity requirements...
28 CFR 42.206 - Compliance reviews.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Judicial Administration DEPARTMENT OF JUSTICE NONDISCRIMINATION; EQUAL EMPLOYMENT OPPORTUNITY; POLICIES AND... possibility of discrimination in the services to be performed under the grant, or in the employment practices... recipients which appear to have the most serious equal employment opportunity problems, or the greatest...
The effect of teaching medical ethics on medical students' moral reasoning.
Self, D J; Wolinsky, F D; Baldwin, D C
1989-12-01
A study assessed the effect of incorporating medical ethics into the medical curriculum and the relative effects of two methods of implementing that curriculum, namely, lecture and case-study discussions. Results indicate a statistically significant increase (p less than or equal to .0001) in the level of moral reasoning of students exposed to the medical ethics course, regardless of format. Moreover, the unadjusted posttest scores indicated that the case-study method was significantly (p less than or equal to .03) more effective than the lecture method in increasing students' level of moral reasoning. When adjustments were made for the pretest scores, however, this difference was not statistically significant (p less than or equal to .18). Regression analysis by linear panel techniques revealed that age, gender, undergraduate grade-point average, and scores on the Medical College Admission Test were not related to the changes in moral-reasoning scores. All of the variance that could be explained was due to the students' being in one of the two experimental groups. In comparison with the control group, the change associated with each experimental format was statistically significant (lecture, p less than or equal to .004; case study, p less than or equal to .0001). Various explanations for these findings and their implications are given.
Khairuzzaman, Md; Zhang, Chao; Igarashi, Koji; Katoh, Kazuhiro; Kikuchi, Kazuro
2010-03-01
We describe a successful introduction of maximum-likelihood-sequence estimation (MLSE) into digital coherent receivers together with finite-impulse response (FIR) filters in order to equalize both linear and nonlinear fiber impairments. The MLSE equalizer based on the Viterbi algorithm is implemented in the offline digital signal processing (DSP) core. We transmit 20-Gbit/s quadrature phase-shift keying (QPSK) signals through a 200-km-long standard single-mode fiber. The bit-error rate performance shows that the MLSE equalizer outperforms the conventional adaptive FIR filter, especially when nonlinear impairments are predominant.
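Conceptually, the MLSE equalizer chooses the transmitted sequence whose channel output lies closest to the received samples. The brute-force sketch below makes that criterion explicit for BPSK over a short known FIR channel; the Viterbi algorithm used in the paper computes the same minimization efficiently, and the channel taps here are invented.

import numpy as np
from itertools import product

def mlse_bruteforce(r, h, n):
    # Pick the +/-1 sequence whose channel output is closest (in squared
    # Euclidean distance) to the received samples r.
    best, best_metric = None, np.inf
    for bits in product([-1.0, 1.0], repeat=n):
        metric = np.sum((r - np.convolve(bits, h)[:len(r)]) ** 2)
        if metric < best_metric:
            best, best_metric = bits, metric
    return np.array(best)

h = np.array([0.9, 0.5])                      # invented 2-tap channel
s = np.random.choice([-1.0, 1.0], 8)
r = np.convolve(s, h)[:8] + np.random.normal(0.0, 0.1, 8)
print((mlse_bruteforce(r, h, 8) == s).all())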
Kadota, Koji; Konishi, Tomokazu; Shimizu, Kentaro
2007-05-01
Large-scale expression profiling using DNA microarrays enables identification of tissue-selective genes for which expression is considerably higher and/or lower in some tissues than in others. Among numerous possible methods, only two outlier-detection-based methods (an AIC-based method and Sprent's non-parametric method) can treat equally various types of selective patterns, but they produce substantially different results. We investigated the performance of these two methods for different parameter settings and for a reduced number of samples. We focused on their ability to detect selective expression patterns robustly. We applied them to public microarray data collected from 36 normal human tissue samples and analyzed the effects of both changing the parameter settings and reducing the number of samples. The AIC-based method was more robust in both cases. The findings confirm that the use of the AIC-based method in the recently proposed ROKU method for detecting tissue-selective expression patterns is correct and that Sprent's method is not suitable for ROKU.
Yang, Yanqiang; Zhang, Chunxi; Lu, Jiazhen
2017-01-16
Strapdown inertial navigation system/celestial navigation system (SINS/CNS) integrated navigation is a fully autonomous and high precision method, which has been widely used to improve the hitting accuracy and quick reaction capability of near-Earth flight vehicles. The installation errors between the SINS and the star sensors have been one of the main factors restricting the actual accuracy of SINS/CNS. In this paper, an integration algorithm based on star vector observations is derived that accounts for the star sensor installation error. Then, the star sensor installation error is accurately estimated based on Kalman filtering (KF). Meanwhile, a local observability analysis is performed on the rank of the observability matrix obtained via the linearized observation equation, and the observability conditions are presented and validated: the number of star vectors should be greater than or equal to 2, and the number of posture adjustments should also be greater than or equal to 2. Simulations indicate that the star sensor installation error is readily observable under the maneuvering condition; moreover, the attitude errors of the SINS are less than 7 arc-seconds. This analysis method and its conclusions are useful in the ballistic trajectory design of near-Earth flight vehicles.
Analysis of the faster-than-Nyquist optimal linear multicarrier system
NASA Astrophysics Data System (ADS)
Marquet, Alexandre; Siclet, Cyrille; Roque, Damien
2017-02-01
Faster-than-Nyquist signaling enables a better spectral efficiency at the expense of an increased computational complexity. Regarding multicarrier communications, previous work mainly relied on the study of non-linear systems exploiting coding and/or equalization techniques, with no particular optimization of the linear part of the system. In this article, we analyze the performance of the optimal linear multicarrier system when used together with non-linear receiving structures (iterative decoding and decision feedback equalization), or in a standalone fashion. We also investigate the limits of the normality assumption of the interference, used for implementing such non-linear systems. The use of this optimal linear system leads to a closed-form expression of the bit-error probability that can be used to predict performance and help the design of coded systems. Our work also highlights the great performance/complexity trade-off offered by decision feedback equalization in a faster-than-Nyquist context.
Lippa, Richard A; Collaer, Marcia L; Peters, Michael
2010-08-01
Mental rotation and line angle judgment performance were assessed in more than 90,000 women and 111,000 men from 53 nations. In all nations, men's mean performance exceeded women's on these two visuospatial tasks. Gender equality (as assessed by United Nations indices) and economic development (as assessed by per capita income and life expectancy) were significantly associated, across nations, with larger sex differences, contrary to the predictions of social role theory. For both men and women, across nations, gender equality and economic development were significantly associated with better performance on the two visuospatial tasks. However, these associations were stronger for the mental rotation task than for the line angle judgment task, and they were stronger for men than for women. Results were discussed in terms of evolutionary, social role, and stereotype threat theories of sex differences.
Incomplete fuzzy data processing systems using artificial neural network
NASA Technical Reports Server (NTRS)
Patyra, Marek J.
1992-01-01
In this paper, the implementation of a fuzzy data processing system using an artificial neural network (ANN) is discussed. A binary representation of fuzzy data is assumed, where the universe of discourse is discretized into n equal intervals. The value of a membership function is represented by a binary number. It is proposed that incomplete fuzzy data processing be performed in two stages. The first stage performs the 'retrieval' of incomplete fuzzy data, and the second stage performs the desired operation on the retrieved data. The method of incomplete fuzzy data retrieval is based on linear approximation of missing values of the membership function. The ANN implementation of the proposed system is presented. The system was computationally verified and showed a relatively small total error.
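The first ('retrieval') stage, linear approximation of missing membership values, can be sketched without the neural network as a simple interpolation over the discretized universe of discourse; the NaN-marked vector below is a made-up example.

import numpy as np

def retrieve_membership(mu):
    # Fill missing membership values (marked NaN) by linear interpolation
    # over the discretized universe of discourse; end gaps are held at
    # the nearest known value.
    mu = np.asarray(mu, float)
    x = np.arange(mu.size)
    known = ~np.isnan(mu)
    return np.interp(x, x[known], mu[known])

print(retrieve_membership([0.0, 0.4, np.nan, np.nan, 1.0, 0.5, np.nan, 0.0]))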
Greensmith, David J
2014-01-01
Here I present an Excel-based program for the analysis of intracellular Ca transients recorded using fluorescent indicators. The program can perform all the necessary steps to convert recorded raw voltage changes into meaningful physiological information. The program performs two fundamental processes. (1) It can prepare the raw signal by several methods. (2) It can then be used to analyze the prepared data to provide information such as absolute intracellular Ca levels. Also, the rates of change of Ca can be measured using multiple, simultaneous regression analysis. I demonstrate that this program performs equally well as commercially available software, but has numerous advantages, namely creating a simplified, self-contained analysis workflow. Copyright © 2013 The Author. Published by Elsevier Ireland Ltd. All rights reserved.
The effect of grain size on aluminum anodes for Al-air batteries in alkaline electrolytes
NASA Astrophysics Data System (ADS)
Fan, Liang; Lu, Huimin
2015-06-01
Aluminum is an ideal material for metallic fuel cells. In this research, aluminum anodes with different grain sizes are prepared by equal channel angular pressing (ECAP) at room temperature. The microstructure of the anodes is examined by electron backscatter diffraction (EBSD) in a scanning electron microscope (SEM). Hydrogen corrosion rates of the Al anodes in 4 mol L-1 NaOH are determined by the hydrogen collection method. The electrochemical properties of the aluminum anodes are investigated in the same electrolyte using electrochemical impedance spectroscopy (EIS) and polarization curves. Battery performance is also tested by constant current discharge at different current densities. Results confirm that the electrochemical properties of the aluminum anodes are related to grain size: a finer-grained anode suppresses hydrogen evolution, improves electrochemical activity, and increases the anodic utilization rate. The proposed method is shown to effectively improve the performance of Al-air batteries.
A Comparison of the Multiscale Retinex With Other Image Enhancement Techniques
NASA Technical Reports Server (NTRS)
Rahman, Zia-Ur; Woodell, Glenn A.; Jobson, Daniel J.
1997-01-01
The multiscale retinex with color restoration (MSRCR) has shown itself to be a very versatile automatic image enhancement algorithm that simultaneously provides dynamic range compression, color constancy, and color rendition. A number of algorithms exist that provide one or more of these features, but not all. In this paper we compare the performance of the MSRCR with techniques that are widely used for image enhancement. Specifically, we compare the MSRCR with color adjustment methods such as gamma correction and gain/offset application, histogram modification techniques such as histogram equalization and manual histogram adjustment, and other more powerful techniques such as homomorphic filtering and 'burning and dodging'. The comparison is carried out by testing the suite of image enhancement methods on a set of diverse images. We find that though some of these techniques work well for some of these images, only the MSRCR performs universally well on the test set.
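The core of the multiscale retinex being compared is an average, over several Gaussian surround scales, of log(image) minus log(surround). The single-channel sketch below omits MSRCR's color-restoration step, and the scale choices are conventional rather than taken from the paper.

import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_retinex(img, sigmas=(15, 80, 250)):
    # Average, over several Gaussian surround scales, the log ratio of
    # the image to its blurred surround.
    img = img.astype(float) + 1.0           # avoid log(0)
    out = np.zeros_like(img)
    for s in sigmas:
        out += np.log(img) - np.log(gaussian_filter(img, s))
    return out / len(sigmas)

img = np.random.rand(128, 128) * 255.0      # stand-in test image
enhanced = multiscale_retinex(img)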
Anifah, Lilik; Purnama, I Ketut Eddy; Hariadi, Mochamad; Purnomo, Mauridhi Hery
2013-01-01
Localization is the first step in osteoarthritis (OA) classification. Manual classification, however, is time-consuming, tedious, and expensive. The proposed system is designed as a decision support system for medical doctors to classify the severity of knee OA. A method is proposed here to localize the joint space area for OA and then classify it into KL-Grade 0, KL-Grade 1, KL-Grade 2, KL-Grade 3, and KL-Grade 4 in four steps: preprocessing, segmentation, feature extraction, and classification. In this proposed system, right and left knee detection was performed by employing Contrast-Limited Adaptive Histogram Equalization (CLAHE) and template matching. The Gabor kernel, row sum graph, and moment methods were used to localize the joint space area of the knee. CLAHE is used in the preprocessing step, i.e., to normalize the varied intensities. The segmentation process was conducted using the Gabor kernel, template matching, row sum graph, and gray level center of mass method. Here GLCM features (contrast, correlation, energy, and homogeneity) were employed as training data. Overall, 50 data were evaluated for training and 258 data for testing. Experimental results showed the best performance by using a Gabor kernel with parameters α=8, θ=0, Ψ=[0 π/2], γ=0.8, N=4, with 5000 iterations, momentum value 0.5, and α0=0.6 for the classification process. The run gave classification accuracy rates of 93.8% for KL-Grade 0, 70% for KL-Grade 1, 4% for KL-Grade 2, 10% for KL-Grade 3, and 88.9% for KL-Grade 4.
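The CLAHE preprocessing step has a direct counterpart in OpenCV, as sketched below; the clip limit, tile grid, and file names are illustrative, not the study's settings.

import cv2

# Hypothetical file names; parameters are illustrative.
gray = cv2.imread("knee_xray.png", cv2.IMREAD_GRAYSCALE)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
normalized = clahe.apply(gray)

# Template matching, as used for right/left knee detection:
template = cv2.imread("knee_template.png", cv2.IMREAD_GRAYSCALE)
score_map = cv2.matchTemplate(normalized, template, cv2.TM_CCOEFF_NORMED)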
NASA Astrophysics Data System (ADS)
Li, Xiang; Luo, Ming; Qiu, Ying; Alphones, Arokiaswami; Zhong, Wen-De; Yu, Changyuan; Yang, Qi
2018-02-01
In this paper, channel equalization techniques for coherent optical fiber transmission systems based on independent component analysis (ICA) are reviewed. The principle of ICA for blind source separation is introduced. The ICA based channel equalization after both single-mode fiber and few-mode fiber transmission for single-carrier and orthogonal frequency division multiplexing (OFDM) modulation formats are investigated, respectively. The performance comparisons with conventional channel equalization techniques are discussed.
Parallel text rendering by a PostScript interpreter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kritskii, S.P.; Zastavnoi, B.A.
1994-11-01
The most radical method of increasing the performance of devices controlled by PostScript interpreters may be the use of multiprocessor controllers. This paper presents a method for parallelizing the operation of a PostScript interpreter for rendering text. The proposed method is based on decomposition of the outlines of letters into horizontal strips covering equal areas. The subroutines thus obtained are distributed to the processors in a network and then filled in by conventional sequential algorithms. A special algorithm has been developed for dividing the outlines of characters into subroutines so that each may be colored independently of the others. The algorithm uses special estimates of the correct partition so that the corresponding outlines are divided into horizontal strips. A method is presented for finding such estimates. Two different processing approaches are presented. In the first, one of the processors performs the decomposition of the outlines and distributes the strips to the remaining processors, which are responsible for the rendering. In the second approach, the decomposition process is itself distributed among the processors in the network.
2012-01-01
Introduction: Mental ill-health among children and young adults is a growing public health problem, and research into causes involves consideration of family life and gender practice. This study aimed at exploring the association between parents' degree of gender equality in childcare and children's mental ill-health. Methods: The population consisted of Swedish parents and their firstborn child in 1988-1989 (N = 118 595 family units), and the statistical method was multiple logistic regression. Gender equality of childcare was indicated by the division of parental leave (1988-1990), and child mental ill-health was indicated by outpatient mental care (2001-2006) and drug prescription (2005-2008), for anxiety and depression. Results: The overall finding was that boys with gender-traditional parents (mother dominance in childcare) have a lower risk of depression, measured by outpatient mental care, than boys with gender-equal parents, while girls with gender-traditional and gender-untraditional parents (father dominance in childcare) have a lower risk of anxiety, measured by drug prescription, than girls with gender-equal parents. Conclusions: This study suggests that unequal parenting in early childcare, whether traditional or untraditional, is more beneficial for offspring's mental health than equal parenting. However, further research is required to confirm our findings and to explore the pathways through which increased gender equality may influence child health. PMID:22463683
New spatial diversity equalizer based on PLL
NASA Astrophysics Data System (ADS)
Rao, Wei
2011-10-01
A new Spatial Diversity Equalizer (SDE) based on a phase-locked loop (PLL) is proposed to overcome inter-symbol interference (ISI) and phase rotations simultaneously in digital communication systems. The proposed SDE combines an equal gain combining technique based on the well-known blind constant modulus algorithm (CMA) with a PLL. Compared with a conventional SDE, the proposed SDE offers not only a faster convergence rate and lower residual error but also the ability to recover carrier phase rotation. The efficiency of the method is demonstrated by computer simulation.
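A single branch of such an equalizer, CMA tap updates followed by a first-order PLL for carrier-phase recovery, might be sketched as follows. The step sizes, tap count, and QPSK-style phase detector are assumptions, and the equal gain combining across antennas is omitted.

import numpy as np

def cma_pll_equalizer(x, n_taps=11, mu_eq=1e-3, mu_pll=1e-2, R2=1.0):
    # CMA tap updates followed by a first-order PLL; one branch only,
    # without the equal gain combining an SDE applies across antennas.
    w = np.zeros(n_taps, complex); w[n_taps // 2] = 1.0  # center spike
    phase = 0.0
    out = []
    for k in range(n_taps, len(x)):
        xk = x[k - n_taps:k][::-1]
        y = np.vdot(w, xk)                  # equalizer output, w^H x
        e = y * (np.abs(y) ** 2 - R2)       # CMA error term
        w -= mu_eq * np.conj(e) * xk        # stochastic gradient step
        z = y * np.exp(-1j * phase)         # de-rotate by PLL estimate
        d = (np.sign(z.real) + 1j * np.sign(z.imag)) / np.sqrt(2)
        phase += mu_pll * np.angle(z * np.conj(d))  # DD phase detector
        out.append(z)
    return np.array(out)

# Synthetic QPSK through a mild channel with a constant phase offset.
sym = (np.random.choice([-1, 1], 4000)
       + 1j * np.random.choice([-1, 1], 4000)) / np.sqrt(2)
rx = np.convolve(sym, [1.0, 0.25])[:4000] * np.exp(1j * 0.3)
z = cma_pll_equalizer(rx)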
A practical radial basis function equalizer.
Lee, J; Beach, C; Tepedelenlioglu, N
1999-01-01
A radial basis function (RBF) equalizer design process has been developed in which the number of basis function centers used is substantially fewer than conventionally required. The reduction of centers is accomplished in two steps. First, an algorithm is used to select a reduced set of centers that lie close to the decision boundary. Then the centers in this reduced set are grouped, and an average position is chosen to represent each group. Channel order and delay, which are determining factors in setting the initial number of centers, are estimated from regression analysis. In simulation studies, an RBF equalizer with more than a 2000-to-1 reduction in centers performed as well as the RBF equalizer without center reduction, and better than a conventional linear equalizer.
Kinoform design with an optimal-rotation-angle method.
Bengtsson, J
1994-10-10
Kinoforms (i.e., computer-generated phase holograms) are designed with a new algorithm, the optimal-rotation-angle method, in the paraxial domain. This is a direct Fourier method (i.e., no inverse transform is performed) in which the height of the kinoform relief at each discrete point is chosen so that the diffraction efficiency is increased. The optimal-rotation-angle algorithm has a straightforward geometrical interpretation. It yields excellent results close to, or better than, those obtained with other state-of-the-art methods. The optimal-rotation-angle algorithm can easily be modified to take different constraints into account; as an example, phase-swing-restricted kinoforms, which distribute the light into a number of equally bright spots (so-called fan-outs), were designed. The phase-swing restriction lowers the efficiency, but the uniformity can still be made almost perfect.
Sorting Rotating Micromachines by Variations in Their Magnetic Properties
NASA Astrophysics Data System (ADS)
Howell, Taylor A.; Osting, Braxton; Abbott, Jake J.
2018-05-01
We consider sorting for the broad class of micromachines (also known as microswimmers, microrobots, micropropellers, etc.) propelled by rotating magnetic fields. We present a control policy that capitalizes on the variation in magnetic properties between otherwise-homogeneous micromachines to enable the sorting of a select fraction of a group from the remainder and prescribe its net relative movement, using a uniform magnetic field that is applied equally to all micromachines. The method enables us to accomplish this sorting task using open-loop control, without relying on a structured environment or localization information of individual micromachines. With our method, the control time to perform the sort is invariant to the number of micromachines. The method is verified through simulations and scaled experiments. Finally, we include an extended discussion about the limitations of the method and address open questions related to its practical application.
Assessment of Sensorimotor Abilities of Severely Retarded Children and Adolescents.
ERIC Educational Resources Information Center
Hupp, Susan C.; And Others
1984-01-01
An investigation of the order of acquisition of domains by severely retarded children and adolescents indicated that object permanence performance always equaled or exceeded means-ends, which in turn always equaled or exceeded causality for 23 of 25 subjects. (Author/CL)
Code of Federal Regulations, 2010 CFR
2010-07-01
... opposite sex for equal work on jobs the performance of which requires equal skill, effort, and... Secretary of Labor NONDISCRIMINATION ON THE BASIS OF SEX IN EDUCATION PROGRAMS OR ACTIVITIES RECEIVING FEDERAL FINANCIAL ASSISTANCE Discrimination on the Basis of Sex in Employment in Education Programs or...
22 CFR 229.515 - Compensation.
Code of Federal Regulations, 2010 CFR
2010-04-01
... employees of the opposite sex for equal work on jobs the performance of which requires equal skill, effort... Relations AGENCY FOR INTERNATIONAL DEVELOPMENT NONDISCRIMINATION ON THE BASIS OF SEX IN EDUCATION PROGRAMS OR ACTIVITIES RECEIVING FEDERAL FINANCIAL ASSISTANCE Discrimination on the Basis of Sex in Employment...
10 CFR 1042.515 - Compensation.
Code of Federal Regulations, 2010 CFR
2010-01-01
... employees of the opposite sex for equal work on jobs the performance of which requires equal skill, effort... OF ENERGY (GENERAL PROVISIONS) NONDISCRIMINATION ON THE BASIS OF SEX IN EDUCATION PROGRAMS OR ACTIVITIES RECEIVING FEDERAL FINANCIAL ASSISTANCE Discrimination on the Basis of Sex in Employment in...
On the performance of exponential integrators for problems in magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Einkemmer, Lukas; Tokman, Mayya; Loffeld, John
2017-02-01
Exponential integrators have been introduced as an efficient alternative to explicit and implicit methods for integrating large stiff systems of differential equations. Over the past decades these methods have been studied theoretically and their performance was evaluated using a range of test problems. While the results of these investigations showed that exponential integrators can provide significant computational savings, the research on validating this hypothesis for large scale systems and understanding what classes of problems can particularly benefit from the use of the new techniques is in its initial stages. Resistive magnetohydrodynamic (MHD) modeling is widely used in studying large scale behavior of laboratory and astrophysical plasmas. In many problems numerical solution of MHD equations is a challenging task due to the temporal stiffness of this system in the parameter regimes of interest. In this paper we evaluate the performance of exponential integrators on large MHD problems and compare them to a state-of-the-art implicit time integrator. Both the variable and constant time step exponential methods of EPIRK-type are used to simulate magnetic reconnection and the Kelvin-Helmholtz instability in plasma. Performance of these methods, which are part of the EPIC software package, is compared to the variable time step variable order BDF scheme included in the CVODE (part of SUNDIALS) library. We study performance of the methods on parallel architectures and with respect to magnitudes of important parameters such as Reynolds, Lundquist, and Prandtl numbers. We find that the exponential integrators provide superior or equal performance in most circumstances and conclude that further development of exponential methods for MHD problems is warranted and can lead to significant computational advantages for large scale stiff systems of differential equations such as MHD.
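The simplest member of this family, the exponential Euler method for y' = Ay + g(y), already shows the key idea: the stiff linear part is propagated exactly through the matrix exponential while the nonlinearity is frozen over the step. A dense-matrix sketch follows; EPIRK methods are higher-order and use Krylov approximations rather than a full expm, the test system is invented, and A must be nonsingular for this phi1 formulation.

import numpy as np
from scipy.linalg import expm, solve

def exponential_euler(A, g, y0, h, steps):
    # y_{n+1} = e^{hA} y_n + h*phi1(hA) g(y_n), with
    # h*phi1(hA) v = A^{-1} (e^{hA} - I) v computed via a linear solve.
    E = expm(h * A)
    I = np.eye(len(y0))
    y = np.array(y0, float)
    for _ in range(steps):
        y = E @ y + h * solve(h * A, (E - I) @ g(y))
    return y

A = np.array([[-100.0, 1.0], [0.0, -2.0]])   # invented stiff system
g = lambda y: np.array([0.0, np.sin(y[0])])
print(exponential_euler(A, g, [1.0, 1.0], h=0.05, steps=200))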
NASA Astrophysics Data System (ADS)
Takanashi, Masaki; Nishimura, Toshihiko; Ogawa, Yasutaka; Ohgane, Takeo
Ultrawide-band impulse radio (UWB-IR) technology and multiple-input multiple-output (MIMO) systems have attracted interest regarding their use in next-generation high-speed radio communication. We have studied the use of MIMO ultrawide-band (MIMO-UWB) systems to enable higher-speed radio communication. We used frequency-domain equalization based on the minimum mean square error criterion (MMSE-FDE) to reduce intersymbol interference (ISI) and co-channel interference (CCI) in MIMO-UWB systems. Because UWB systems are expected to be used for short-range wireless communication, MIMO-UWB systems will usually operate in line-of-sight (LOS) environments and direct waves will be received at the receiver side. Direct waves have high power and cause high correlations between antennas in such environments. Thus, it is thought that direct waves will adversely affect the performance of spatial filtering and equalization techniques used to enhance signal detection. To examine the feasibility of MIMO-UWB systems, we conducted MIMO-UWB system propagation measurements in LOS environments. From the measurements, we found that the arrival time of direct waves from different transmitting antennas depends on the MIMO configuration. Because we can obtain high power from the direct waves, direct wave reception is critical for maximizing transmission performance. In this paper, we present our measurement results, and propose a way to improve performance using a method of transmit (Tx) and receive (Rx) timing control. We evaluate the bit error rate (BER) performance for this form of timing control using measured channel data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodriguez, J.R.; Ahrens, J.S.; Lowe, D.L.
Throughout the years, Sandia National Laboratories (SNL) has performed various laboratory evaluations of entry control devices, including biometric identity verifiers. The reports which resulted from this testing have been very well received by the physical security community. This same community now requires equally informative field study data. To meet this need we have conducted a field study in an effort to develop the tools and methods which our customers can use to translate laboratory data into operational field performance. The field testing described in this report was based on Recognition Systems Inc.'s (RSI) model ID3D HandKey biometric verifier. This device was selected because it is referenced in DOE documents such as the Guide for Implementation of the DOE Standard Badge and is the de facto biometric standard for the DOE. The ID3D HandKey is currently being used at several DOE sites such as Hanford, Rocky Flats, Pantex, Savannah River, and Idaho Nuclear Engineering Laboratory. The ID3D HandKey was laboratory tested at SNL. It performed very well during this test, exhibiting an equal error point of 0.2 percent. The goals of the field test were to identify operational characteristics and design guidelines to help system engineers translate laboratory data into field performance. A secondary goal was to develop tools which could be used by others to evaluate system effectiveness or improve the performance of their systems. Operational characteristics were determined by installing a working system and studying its operation over a five month period. Throughout this test we developed tools which could be used by others to similarly gauge system effectiveness.
Technical Note: Gray tracking in medical color displays-A report of Task Group 196.
Badano, Aldo; Wang, Joel; Boynton, Paul; Le Callet, Patrick; Cheng, Wei-Chung; Deroo, Danny; Flynn, Michael J; Matsui, Takashi; Penczek, John; Revie, Craig; Samei, Ehsan; Steven, Peter M; Swiderski, Stan; Van Hoey, Gert; Yamaguchi, Matsuhiro; Hasegawa, Mikio; Nagy, Balázs Vince
2016-07-01
The authors discuss measurement methods and instrumentation useful for the characterization of the gray tracking performance of medical color monitors for diagnostic applications. The authors define gray tracking as the variability in the chromaticity of the gray levels in a color monitor. The authors present data regarding the capability of color measurement instruments with respect to their ability to measure a target white point corresponding to the CIE Standard Illuminant D65 at different luminance values within the grayscale palette of a medical display. The authors then discuss evidence of significant differences in performance among color measurement instruments currently available to medical physicists for performing calibrations and image quality checks for the consistent representation of color in medical displays. In addition, the authors introduce two metrics for quantifying the grayscale chromaticity consistency of gray tracking. The authors' findings show that there is an order of magnitude difference in the accuracy of field and reference instruments. The gray tracking metrics quantify how close the grayscale chromaticity is to the chromaticity of the full white point (equal amounts of red, green, and blue at maximum level) or to that of consecutive levels (equal values for red, green, and blue), with a lower value representing improved grayscale tracking performance. An illustrative example of how to calculate and report gray tracking performance according to the Task Group definitions is provided. Finally, the authors propose a methodology for characterizing the grayscale degradation in chromaticity for color monitors that can be used to establish standards and procedures aiding in the quality control testing of color displays and color measurement instrumentation.
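One plausible reading of the first metric, the distance of each gray level's chromaticity from the full-white point in the CIE 1976 u'v' plane, is sketched below. The exact formulas adopted by the Task Group are defined in the report itself, so treat this as an assumption-labeled approximation; the measurements are invented.

import numpy as np

def uv_prime(X, Y, Z):
    # CIE 1976 u'v' chromaticity from tristimulus values.
    denom = X + 15.0 * Y + 3.0 * Z
    return 4.0 * X / denom, 9.0 * Y / denom

def gray_tracking(XYZ_levels):
    # Delta-u'v' of every measured gray level from the full-white point
    # (taken as the last entry); lower values = better gray tracking.
    u_w, v_w = uv_prime(*XYZ_levels[-1])
    return [np.hypot(u - u_w, v - v_w)
            for u, v in (uv_prime(*m) for m in XYZ_levels)]

# Hypothetical (X, Y, Z) measurements for three gray levels.
levels = np.array([[9.5, 10.0, 11.0],
                   [47.0, 50.0, 54.0],
                   [95.0, 100.0, 108.0]])
print(gray_tracking(levels))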
Proceedings of the Ship Production Symposium Held in Williamsburg, Virginia on November 1-4, 1993
1993-11-01
A Hybrid Approach on Tourism Demand Forecasting
NASA Astrophysics Data System (ADS)
Nor, M. E.; Nurul, A. I. M.; Rusiman, M. S.
2018-04-01
Tourism has become one of the important industries contributing to a country's economy. Tourism demand forecasting gives valuable information to policy makers, decision makers, and organizations in the tourism industry for crucial decisions and planning. However, it is challenging to produce an accurate forecast, since economic data such as tourism data are affected by social, economic, and environmental factors. In this study, an equally-weighted hybrid method, a combination of Box-Jenkins and Artificial Neural Network models, was applied to forecast Malaysia's tourism demand. The forecasting performance was assessed by taking each individual method as a benchmark. The results showed that this hybrid approach outperformed the two individual models.
Robust matching for voice recognition
NASA Astrophysics Data System (ADS)
Higgins, Alan; Bahler, L.; Porter, J.; Blais, P.
1994-10-01
This paper describes an automated method of comparing a voice sample of an unknown individual with samples from known speakers in order to establish or verify the individual's identity. The method is based on a statistical pattern matching approach that employs a simple training procedure, requires no human intervention (transcription, word or phonetic marking, etc.), and makes no assumptions regarding the expected form of the statistical distributions of the observations. The content of the speech material (vocabulary, grammar, etc.) is not assumed to be constrained in any way. An algorithm is described which incorporates frame pruning and channel equalization processes designed to achieve robust performance with reasonable computational resources. An experimental implementation demonstrating the feasibility of the concept is described.
Assessing aspects of creativity in deaf and hearing high school students.
Stanzione, Christopher M; Perez, Susan M; Lederberg, Amy R
2013-04-01
To address the paucity of current research on the development of creativity in deaf students, and to extend existing research to adolescents, the present study investigated divergent thinking, a method of assessing creativity, in both deaf and hearing adolescents. We assessed divergent thinking in two domains, figural and verbal, while also adjusting the instructional method in written format, sign language, or spoken English. Deaf students' performance was equal to, or more creative than, hearing students on the figural assessment of divergent thinking, but less creative on the verbal assessment. Additional studies should be conducted to determine whether this was an anomalous finding or one that might contribute to hypotheses yielding effective interventions.
A comparative study of minimum norm inverse methods for MEG imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leahy, R.M.; Mosher, J.C.; Phillips, J.W.
1996-07-01
The majority of MEG imaging techniques currently in use fall into the general class of (weighted) minimum norm methods. The minimization of a norm is used as the basis for choosing one from a generally infinite set of solutions that provide an equally good fit to the data. This ambiguity in the solution arises from the inherent non-uniqueness of the continuous inverse problem and is compounded by the imbalance between the relatively small number of measurements and the large number of source voxels. Here we present a unified view of the minimum norm methods and describe how we can use Tikhonov regularization to avoid instabilities in the solutions due to noise. We then compare the performance of regularized versions of three well known linear minimum norm methods with the non-linear iteratively reweighted minimum norm method and a Bayesian approach.
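For the underdetermined linear system relating sources to sensors, the Tikhonov-regularized minimum-norm estimate has a closed form, sketched below with toy dimensions; the regularization weight and the random lead field are illustrative only.

import numpy as np

def tikhonov_min_norm(L, b, lam):
    # s = L^T (L L^T + lam*I)^{-1} b : the regularized minimum-norm
    # estimate; lam trades data fit against the norm of the solution,
    # stabilizing it against measurement noise.
    m = L.shape[0]
    return L.T @ np.linalg.solve(L @ L.T + lam * np.eye(m), b)

L = np.random.randn(64, 5000)      # 64 sensors, 5000 source voxels (toy)
s_true = np.zeros(5000); s_true[1234] = 1.0
b = L @ s_true + 0.01 * np.random.randn(64)
s_hat = tikhonov_min_norm(L, b, lam=1e-2)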
Determination of Hg concentration in gases by PIXE
NASA Astrophysics Data System (ADS)
Dutkiewicz, E.; van Kuijen, W. J. P.; Munnik, F.; Mutsaers, P. H. A.; Rokita, E.; de Voigt, M. J. A.
1992-05-01
A method for determining the concentration of mercury in the gaseous phase is described. In the first step of the method a stable sulphur-mercury complex is formed. For this purpose sulphur is deposited on a filter and the investigated gas flows through the filter. Millipore filters and the deposition of sulphur from Na2S2O3·5H2O solution were found to be most suitable. The amount of Hg absorbed on the filter was determined by PIXE or by NAA in the second step of the method. An optimization of proton energy was performed in the PIXE analysis to obtain the maximal signal-to-background ratio. The detection limit of the method, expressed as the minimal amount of Hg which has to flow through the filter, equals 30 and 2 ng for the PIXE and NAA techniques, respectively. Applications of the method are also described.
NASA Astrophysics Data System (ADS)
Sierra Villar, Ana M.; Calpena Campmany, Ana C.; Bellowa, Lyda Halbaut; Trenchs, Monserrat Aróztegui; Naveros, Beatriz Clares
2013-09-01
A spectrofluorometric method has been developed and validated for the determination of gemfibrozil. The method is based on the excitation and emission capacities of gemfibrozil, with excitation and emission wavelengths of 276 and 304 nm respectively. This method allows the determination of the drug in a self-nanoemulsifying drug delivery system (SNEDDS) designed to improve its intestinal absorption. Results obtained showed linear relationships with good correlation coefficients (r2 > 0.999) and low limits of detection and quantification (LOD of 0.075 μg mL-1 and LOQ of 0.226 μg mL-1) in the range of 0.2-5 μg mL-1; the method equally showed good robustness and stability. Thus the amounts of gemfibrozil released from SNEDDS contained in gastro-resistant hard gelatine capsules were analysed, and release studies could be performed satisfactorily.
ERIC Educational Resources Information Center
Feldt, Leonard S.
2011-01-01
This article presents a simple, computer-assisted method of determining the extent to which increases in reliability increase the power of the "F" test of equality of means. The method uses a derived formula that relates the changes in the reliability coefficient to changes in the noncentrality of the relevant "F" distribution. A readily available…
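The article's derived formula is not reproduced in this abstract; as a loose illustration only, the sketch below assumes the noncentrality parameter scales in proportion to the reliability coefficient (a labeled assumption, not Feldt's exact relation) and computes power from the noncentral F distribution:

    from scipy import stats

    def f_test_power(lam, dfn, dfd, alpha=0.05):
        """Power of the F test of equal means with noncentrality lam."""
        f_crit = stats.f.ppf(1 - alpha, dfn, dfd)
        return 1 - stats.ncf.cdf(f_crit, dfn, dfd, lam)

    # Illustrative assumption (not Feldt's formula): the noncentrality
    # observed with reliability rho is lam_perfect * rho, reflecting the
    # within-group variance added by measurement error.
    lam_perfect, dfn, dfd = 12.0, 3, 76
    for rho in (0.6, 0.8, 0.95):
        print(rho, round(f_test_power(lam_perfect * rho, dfn, dfd), 3))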
Advanced digital signal processing for short-haul and access network
NASA Astrophysics Data System (ADS)
Zhang, Junwen; Yu, Jianjun; Chi, Nan
2016-02-01
Digital signal processing (DSP) has recently proven to be a successful technology in high-speed, high-spectrum-efficiency optical short-haul and access networks, enabling high performance through digital equalization and compensation. In this paper, we investigate advanced DSP at the transmitter and receiver sides for signal pre-equalization and post-equalization in an optical access network. A novel DSP-based digital and optical pre-equalization scheme is proposed for bandwidth-limited high-speed short-distance communication systems, based on feedback from receiver-side adaptive equalizers such as the least-mean-squares (LMS) algorithm and constant- or multi-modulus algorithms (CMA, MMA). Based on this scheme, we experimentally demonstrate 400GE on a single optical carrier using the highest-rate ETDM 120-GBaud PDM-PAM-4 signal, one external modulator, and coherent detection. A line rate of 480 Gb/s is achieved, which accommodates 20% forward-error-correction (FEC) overhead while keeping the 400-Gb/s net information rate. The performance after fiber transmission shows a large margin for both short-range and metro/regional networks. We also extend the advanced DSP to short-haul optical access networks by using high-order QAMs. We propose and demonstrate a high-speed multi-band CAP-WDM-PON system using intensity modulation, direct detection, and digital equalization. A hybrid modified cascaded MMA post-equalization scheme is used to equalize the multi-band CAP-mQAM signals. Using this scheme, we successfully demonstrate a 550-Gb/s high-capacity WDM-PON system with 11 WDM channels, 55 sub-bands, and 10 Gb/s per user in the downstream over 40-km SMF.
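The receiver-side adaptive equalization mentioned above is typically an LMS-style tap update; a generic sketch, not the authors' exact scheme (tap count and step size are assumptions):

    import numpy as np

    def lms_equalize(rx, training, n_taps=11, mu=1e-3):
        """Adapt a linear FIR equalizer with the LMS rule against known training symbols."""
        w = np.zeros(n_taps, dtype=complex)
        out = np.zeros(len(training), dtype=complex)
        for k in range(n_taps, len(training)):
            x = rx[k - n_taps:k][::-1]    # most recent samples first
            y = np.dot(w.conj(), x)       # equalizer output y = w^H x
            e = training[k] - y           # error against the known symbol
            w += mu * np.conj(e) * x      # LMS tap update
            out[k] = y
        return w, out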
NASA Astrophysics Data System (ADS)
Liao, Zangyi
2017-12-01
Achieving regional equalization of basic public service supply among China's provinces is an important objective for improving people's livelihood. To measure the degree of non-equalization of basic public service supply, this paper takes infrastructure construction, basic education services, public employment services, public health services, and social security services as first-level indicators, together with 16 second-level indicators, to construct a performance evaluation system, and then uses the Theil index to evaluate provincial performance with panel data from 2000 to 2012.
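A Theil index over per-province scores might be computed as in this sketch; equal population weights are assumed for simplicity, whereas the paper's index uses its own weighting, and the example values are hypothetical:

    import numpy as np

    def theil_index(values):
        """Theil T index of inequality across regions; 0 means perfect equality."""
        x = np.asarray(values, dtype=float)
        ratio = x / x.mean()
        return float(np.mean(ratio * np.log(ratio)))

    # Hypothetical per-province supply scores for one year.
    print(theil_index([1.2, 0.8, 1.0, 0.5, 1.5]))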
A contrast enhancement method for improving the segmentation of breast lesions on ultrasonography.
Flores, Wilfrido Gómez; Pereira, Wagner Coelho de Albuquerque
2017-01-01
This paper presents an adaptive contrast enhancement method based on a sigmoidal mapping function (SACE) for improving the computerized segmentation of breast lesions on ultrasound. First, an intensity variation map is obtained from the original ultrasound image and used to generate local sigmoidal mapping functions related to distinct contextual regions. Then, a bilinear interpolation scheme is used to transform every original pixel to a new gray level value. In addition, four contrast enhancement techniques widely used in breast ultrasound enhancement are implemented: histogram equalization (HEQ), contrast limited adaptive histogram equalization (CLAHE), fuzzy enhancement (FEN), and sigmoid based enhancement (SEN). These contrast enhancement techniques are then evaluated within a computerized lesion segmentation scheme based on watershed transformation. The performance comparison among techniques is assessed in terms of both the quality of contrast enhancement and the segmentation accuracy. The former is quantified by a contrast enhancement measure, where the greater the value, the better the contrast enhancement, whereas the latter is calculated by the Jaccard index, which should tend towards unity to indicate adequate segmentation. The experiments consider a data set with 500 breast ultrasound images. The results show that SACE outperforms its counterparts, where the median values for the contrast enhancement measure are: SACE: 139.4, SEN: 68.2, HEQ: 64.1, CLAHE: 62.8, and FEN: 7.9. Considering the segmentation performance results, the SACE method presents the largest accuracy, where the median values for the Jaccard index are: SACE: 0.81, FEN: 0.80, CLAHE: 0.79, HEQ: 0.77, and SEN: 0.63. The SACE method performs well due to the combination of three elements: (1) the intensity variation map reduces intensity variations that could distort the real response of the mapping function, (2) the sigmoidal mapping function enhances the gray level range where the transition between lesion and background is found, and (3) the adaptive enhancing scheme copes with local contrasts. Hence, the SACE approach is appropriate for enhancing contrast before computerized lesion segmentation. Copyright © 2016 Elsevier Ltd. All rights reserved.
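The core sigmoidal gray-level mapping might look like the following sketch; the gain and center parameters are illustrative, and SACE's intensity variation map, contextual regions, and bilinear interpolation are not reproduced here:

    import numpy as np

    def sigmoid_map(img, center=0.5, gain=10.0):
        """Map normalized gray levels through a sigmoid to stretch contrast
        around `center`, where the lesion/background transition is expected."""
        x = img.astype(float)
        x = (x - x.min()) / (np.ptp(x) + 1e-12)           # normalize to [0, 1]
        y = 1.0 / (1.0 + np.exp(-gain * (x - center)))    # sigmoidal mapping
        return (y - y.min()) / (np.ptp(y) + 1e-12)        # rescale to [0, 1]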
Lee-Lewandrowski, Elizabeth; Januzzi, James L; Grisson, Ricky; Mohammed, Asim A; Lewandrowski, Grant; Lewandrowski, Kent
2011-04-01
Previous studies evaluating point-of-care testing (POCT) for cardiac biomarkers did not use current recommendations for troponin cutoff values or recognize the recent universal definition of acute myocardial infarction. Traditionally, achieving optimal sensitivity for the detection of myocardial injury on initial presentation required combining cardiac troponin and/or creatine kinase isoenzyme MB with an early marker, usually myoglobin. In recent years, the performance of central laboratory cardiac troponin assays has improved significantly, potentially obviating the need for a multimarker panel to achieve optimum sensitivity. To compare 2 commonly used POCT strategies to a fourth-generation, central laboratory cardiac troponin T assay on first-draw specimens from patients being evaluated for acute myocardial infarction in the emergency department. The 2 strategies included a traditional POCT multimarker panel and a newer POCT method using cardiac troponin I alone. Blood specimens from 204 patients presenting to the emergency department with signs and/or symptoms of myocardial ischemia were measured on the 2 POCT systems and by a central laboratory method. The diagnosis for each patient was determined by retrospective chart review. The cardiac troponin T assay alone was more sensitive for acute myocardial infarction than the multimarker POCT panel, with equal or better specificity. When compared with POCT troponin I, cardiac troponin T was also more sensitive, but this difference was not significant. The POCT troponin I alone also had the same sensitivity as the multimarker panel. Testing for cardiac troponin alone using newer, commercially available, central laboratory or POCT assays performed with equal or greater sensitivity for acute myocardial infarction as the older, traditional multimarker panel. In the near future, high-sensitivity central laboratory troponins will be available for routine clinical use. As a result, the quality gap between central laboratories and older POCT methods will continue to widen unless the performance of the POCT methods is improved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenberg, Michael I.; Hart, Philip R.
2016-02-16
Appendix G, the Performance Rating Method in ASHRAE Standard 90.1 has been updated to make two significant changes for the 2016 edition, to be published in October of 2016. First, it allows Appendix G to be used as a third path for compliance with the standard in addition to rating beyond code building performance. This prevents modelers from having to develop separate building models for code compliance and beyond code programs. Using this new version of Appendix G to show compliance with the 2016 edition of the standard, the proposed building design needs to have a performance cost index (PCI) less than targets shown in a new table based on building type and climate zone. The second change is that the baseline design is now fixed at a stable level of performance set approximately equal to the 2004 code. Rather than changing the stringency of the baseline with each subsequent edition of the standard, compliance with new editions will simply require a reduced PCI (a PCI of zero is a net-zero building). Using this approach, buildings of any era can be rated using the same method. The intent is that any building energy code or beyond code program can use this methodology and merely set the appropriate PCI target for their needs. This report discusses the process used to set performance criteria for compliance with ASHRAE Standard 90.1-2016 and suggests a method for demonstrating compliance with other codes and beyond code programs.
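The compliance arithmetic described above reduces to a simple ratio test; a minimal sketch, with a hypothetical PCI target standing in for the Standard's table lookup by building type and climate zone:

    def pci(proposed_energy_cost, baseline_energy_cost):
        """Performance cost index: 0 is a net-zero building, 1 matches the fixed baseline."""
        return proposed_energy_cost / baseline_energy_cost

    # Hypothetical target value, in place of the Standard's table lookup.
    pci_target = 0.60
    compliant = pci(proposed_energy_cost=54_000, baseline_energy_cost=100_000) <= pci_target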
METHOD OF OPERATING NUCLEAR REACTORS
Untermyer, S.
1958-10-14
A method is presented for obtaining enhanced utilization of natural uranium in heavy water moderated nuclear reactors by charging the reactor with an equal number of fuel elements formed of natural uranium and of fuel elements formed of uranium depleted in U-235 to the extent that the combination will just support a chain reaction. The reactor is operated until the rate of burnup of plutonium equals its rate of production, the fuel elements are processed to recover plutonium, the depleted uranium is discarded, and the remaining uranium is formed into fuel elements. These fuel elements are charged into a reactor along with an equal number of fuel elements formed of uranium depleted in U-235 to the extent that the combination will just support a chain reaction, and reuse of the uranium is continued as aforesaid until it will no longer support a chain reaction when combined with an equal quantity of natural uranium.
Perfect harmony: A mathematical analysis of four historical tunings
NASA Astrophysics Data System (ADS)
Page, Michael F.
2004-10-01
In Western music, a musical interval defined by the frequency ratio of two notes is generally considered consonant when the ratio is composed of small integers. Perfect harmony or an "ideal just scale," which has no exact solution, would require the division of an octave into 12 notes, each of which would be used to create six other consonant intervals. The purpose of this study is to analyze four well-known historical tunings to evaluate how well each one approximates perfect harmony. The analysis consists of a general evaluation in which all consonant intervals are given equal weighting and a specific evaluation for three preludes from Bach's "Well-Tempered Clavier," for which intervals are weighted in proportion to the duration of their occurrence. The four tunings, 5-limit just intonation, quarter-comma meantone temperament, well temperament (Werckmeister III), and equal temperament, are evaluated by measures of centrality, dispersion, distance, and dissonance. When all keys and consonant intervals are equally weighted, equal temperament demonstrates the strongest performance across a variety of measures, although it is not always the best tuning. Given C as the starting note for each tuning, equal temperament and well temperament perform strongly for the three "Well-Tempered Clavier" preludes examined.
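The arithmetic behind such comparisons is compact; a sketch comparing the fifth in just intonation and equal temperament:

    import math

    # Frequency ratios for a perfect fifth in two of the tunings discussed:
    just_fifth = 3 / 2             # 5-limit just intonation
    equal_fifth = 2 ** (7 / 12)    # equal temperament (7 semitones)

    def cents(ratio):
        """Interval size in cents; 1200 cents per octave."""
        return 1200 * math.log2(ratio)

    # Equal temperament narrows the fifth by about 2 cents relative to just.
    print(cents(just_fifth) - cents(equal_fifth))  # ~1.955 cents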
Weber, Shannon
2015-01-01
I analyze three case studies of marriage equality activism and marriage equality-based groups after the passage of Proposition 8 in California. Evaluating the JoinTheImpact protests of 2008, the LGBTQ rights group GetEQUAL, and the group One Struggle One Fight, I argue that these groups revise queer theoretical arguments about marriage equality activism as by definition assimilationist, homonormative, and single-issue. In contrast to such claims, the cases studied here provide a snapshot of heterogeneous, intersectional, and coalition-based social justice work in which creative methods of protest, including direct action and flash mobs, are deployed in militant ways for marriage rights and beyond.
Huang, C.; Townshend, J.R.G.
2003-01-01
A stepwise regression tree (SRT) algorithm was developed for approximating complex nonlinear relationships. Based on the regression tree of Breiman et al. (BRT) and a stepwise linear regression (SLR) method, this algorithm represents an improvement over SLR in that it can approximate nonlinear relationships and over BRT in that it gives more realistic predictions. The applicability of this method to estimating subpixel forest cover was demonstrated using three test data sets, on all of which it gave more accurate predictions than SLR and BRT. SRT also generated more compact trees and performed better than or at least as well as BRT at all 10 equal forest proportion intervals ranging from 0 to 100%. This method is appealing for estimating subpixel land cover over large areas.
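SRT itself is not specified in this abstract; the sketch below merely contrasts a stock regression tree with a global linear fit on hypothetical data, to show the gap an SRT-style model (linear fits inside tree leaves) aims to close:

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)
    X = rng.uniform(0, 1, size=(500, 3))
    # Hypothetical nonlinear target: subpixel "forest proportion" from band values.
    y = np.clip(0.8 * X[:, 0] ** 2 + 0.2 * X[:, 1], 0, 1)

    tree = DecisionTreeRegressor(max_leaf_nodes=8).fit(X, y)   # piecewise-constant
    line = LinearRegression().fit(X, y)                        # global linear
    # An SRT-style model would fit a (stepwise) linear regression inside each
    # leaf, combining the tree's nonlinearity with smoother predictions.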
Pay Equity: A Civil Rights Issue.
ERIC Educational Resources Information Center
Murphy, Joseph S.
1987-01-01
The market principle has not worked. Women have long performed work of equal demand to that of men, but have not been equally compensated for it. Constitutional law prohibits such wage inequities. Society's resources must be more equitably allocated to make up for and correct that unequal treatment. (PS)
Code of Federal Regulations, 2010 CFR
2010-10-01
... employees of the opposite sex for equal work on jobs the performance of which requires equal skill, effort... Office of the Secretary of Transportation NONDISCRIMINATION ON THE BASIS OF SEX IN EDUCATION PROGRAMS OR ACTIVITIES RECEIVING FEDERAL FINANCIAL ASSISTANCE Discrimination on the Basis of Sex in Employment in...
22 CFR 146.515 - Compensation.
Code of Federal Regulations, 2010 CFR
2010-04-01
... employees of the opposite sex for equal work on jobs the performance of which requires equal skill, effort... Relations DEPARTMENT OF STATE CIVIL RIGHTS NONDISCRIMINATION ON THE BASIS OF SEX IN EDUCATION PROGRAMS OR ACTIVITIES RECEIVING FEDERAL FINANCIAL ASSISTANCE Discrimination on the Basis of Sex in Employment in...
Powell, Sarah R.; Fuchs, Lynn S.
2010-01-01
Elementary school students often misinterpret the equal sign (=) as an operational rather than a relational symbol. Such misunderstanding is problematic because solving equations with missing numbers may be important for higher-order mathematics skills including word problems. Research indicates equal-sign instruction can alter how typically-developing students use the equal sign, but no study has examined effects for students with mathematics difficulty (MD) or how equal-sign instruction contributes to word-problem skill for students with or without MD. The present study assessed the efficacy of equal-sign instruction within word-problem tutoring. Third-grade students with MD (n = 80) were assigned to word-problem tutoring, word-problem tutoring plus equal-sign instruction (combined) tutoring, or no-tutoring control. Combined tutoring produced better improvement on equal sign tasks and open equations compared to the other 2 conditions. On certain forms of word problems, combined tutoring but not word-problem tutoring alone produced better improvement than control. When compared at posttest to 3rd-grade students without MD on equal sign tasks and open equations, only combined tutoring students with MD performed comparably. PMID:20640240
The Global Inventor Gap: Distribution and Equality of World-Wide Inventive Effort, 1990–2010
Toivanen, Hannes; Suominen, Arho
2015-01-01
Applying distance-to-frontier analysis, we have used 2.9 million patents and population data to assess whether the relative capacity of world countries and major regions to create new knowledge and technology has become globally more equal or less equal between 1990 and 2010. We show with the Gini coefficient that the global distribution of inventors has become more equal between major countries and regions. However, this trend has been largely due to the improved performance of only two major countries, China and India. The worst performing regions, totalling a population of almost 2 billion, are actually falling behind. Our results suggest that substantial parts of the global population have fallen further behind countries at the global frontier in their ability to create new knowledge and inventions, and that the catch-up among the least developed and middle-income countries is highly uneven, prompting questions about the nature and future of the global knowledge economy. PMID:25849202
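The Gini coefficient used above can be computed directly from the ordered values; a minimal sketch with hypothetical inventors-per-capita figures:

    import numpy as np

    def gini(x):
        """Gini coefficient of a nonnegative array; 0 = perfect equality."""
        x = np.sort(np.asarray(x, dtype=float))
        n = x.size
        # Standard formula based on the ordered values.
        return float((2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum()))

    # Hypothetical inventors-per-capita figures for five regions.
    print(gini([0.1, 0.1, 0.2, 0.5, 2.0]))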
EqualChance: Addressing Intra-set Write Variation to Increase Lifetime of Non-volatile Caches
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mittal, Sparsh; Vetter, Jeffrey S
To address the limitations of SRAM such as high leakage and low density, researchers have explored the use of non-volatile memory (NVM) devices, such as ReRAM (resistive RAM) and STT-RAM (spin transfer torque RAM), for designing on-chip caches. A crucial limitation of NVMs, however, is that their write endurance is low, and the large intra-set write variation introduced by existing cache management policies may further exacerbate this problem, thereby reducing the cache lifetime significantly. We present EqualChance, a technique to increase cache lifetime by reducing intra-set write variation. EqualChance works by periodically changing the physical cache-block location of a write-intensive data item within a set to achieve wear-leveling. Simulations using workloads from the SPEC CPU2006 suite and the HPC (high-performance computing) field show that EqualChance improves the cache lifetime by 4.29X. Also, its implementation overhead is small, and it incurs very small performance and energy loss.
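The paper's exact policy is not reproduced here; a toy sketch of the general idea, swapping a write-hot block into the least-written way of a set (all names are hypothetical):

    def wear_level_set(write_counts, block_of_way, hot_way):
        """Toy intra-set wear-leveling: move the write-intensive block in
        `hot_way` into the least-written way of the set and swap contents.

        write_counts : per-way write counters for one cache set
        block_of_way : mapping from physical way -> resident data block
        """
        cold_way = min(range(len(write_counts)), key=write_counts.__getitem__)
        if cold_way != hot_way:
            block_of_way[hot_way], block_of_way[cold_way] = (
                block_of_way[cold_way], block_of_way[hot_way])
        return cold_way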
MIMO equalization with adaptive step size for few-mode fiber transmission systems.
van Uden, Roy G H; Okonkwo, Chigo M; Sleiffer, Vincent A J M; de Waardt, Hugo; Koonen, Antonius M J
2014-01-13
Optical multiple-input multiple-output (MIMO) transmission systems generally employ minimum mean squared error time or frequency domain equalizers. Using an experimental 3-mode dual polarization coherent transmission setup, we show that the convergence time of the MMSE time domain equalizer (TDE) and frequency domain equalizer (FDE) can be reduced by approximately 50% and 30%, respectively. The criterion used to estimate the system convergence time is the time it takes for the MIMO equalizer to reach an average output error which is within a margin of 5% of the average output error after 50,000 symbols. The convergence reduction difference between the TDE and FDE is attributed to the limited maximum step size for stable convergence of the frequency domain equalizer. The adaptive step size requires a small overhead in the form of a lookup table. It is highlighted that the convergence time reduction is achieved without sacrificing optical signal-to-noise ratio performance.
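The convergence criterion described above (average output error within a 5% margin of the error averaged after 50,000 symbols) can be sketched as follows; the smoothing window is an assumption:

    import numpy as np

    def convergence_time(err, settle=50_000, margin=0.05, window=500):
        """Index of the first sample whose windowed mean |error| is within
        `margin` of the mean |error| measured after `settle` symbols."""
        e = np.abs(np.asarray(err))
        target = e[settle:].mean()
        smooth = np.convolve(e, np.ones(window) / window, mode="valid")
        hits = np.flatnonzero(np.abs(smooth - target) <= margin * target)
        return int(hits[0]) if hits.size else None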
The Effect of Tutoring With Nonstandard Equations for Students With Mathematics Difficulty.
Powell, Sarah R; Driver, Melissa K; Julian, Tyler E
2015-01-01
Students often misinterpret the equal sign (=) as operational instead of relational. Research indicates misinterpretation of the equal sign occurs because students receive relatively little exposure to equations that promote relational understanding of the equal sign. No study, however, has examined effects of nonstandard equations on the equation solving and equal-sign understanding of students with mathematics difficulty (MD). In the present study, second-grade students with MD (n = 51) were randomly assigned to standard equations tutoring, combined tutoring (standard and nonstandard equations), and no-tutoring control. Combined tutoring students demonstrated greater gains on equation-solving assessments and equal-sign tasks compared to the other two conditions. Standard tutoring students demonstrated improved skill on equation solving over control students, but combined tutoring students' performance gains were significantly larger. Results indicate that exposure to and practice with nonstandard equations positively influence student understanding of the equal sign. © Hammill Institute on Disabilities 2013.
A spectral X-ray CT simulation study for quantitative determination of iron
NASA Astrophysics Data System (ADS)
Su, Ting; Kaftandjian, Valérie; Duvauchelle, Philippe; Zhu, Yuemin
2018-06-01
Iron is an essential element in the human body, and disorders in iron such as iron deficiency or overload can cause serious diseases. This paper aims to explore the ability of spectral X-ray CT to quantitatively separate iron from calcium and potassium and to investigate the influence of different acquisition parameters on material decomposition performance. We simulated spectral X-ray CT imaging of a PMMA phantom filled with iron, calcium, and potassium solutions at various concentrations (15-200 mg/cc). Different acquisition parameters were considered, such as the number of energy bins (6, 10, 15, 20, 30, 60) and the exposure factor per projection (0.025, 0.1, 1, 10, 100 mAs). Based on the simulation data, we investigated the performance of two regularized material decomposition approaches: a projection domain method and an image domain method. It was found that the former method discriminated iron from calcium, potassium and water in all cases and tended to benefit from a lower number of energy bins for lower exposure factor acquisitions. The latter method succeeded in iron determination only when the number of energy bins equaled 60, and in this case, the contrast-to-noise ratios of the decomposed iron images are higher than those obtained using the projection domain method. The results demonstrate that both methods are able to discriminate and quantify iron from calcium, potassium and water under certain conditions. Their performances vary with the acquisition parameters of spectral CT. One can choose one method or the other to obtain better performance according to the data available.
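Stripped of the paper's regularization, projection-domain decomposition reduces to a per-ray least-squares problem; a minimal sketch with illustrative names:

    import numpy as np

    def decompose(log_atten, mass_atten):
        """Unregularized least-squares material decomposition for one ray.

        log_atten  : (n_bins,) measured -ln(I/I0) per energy bin
        mass_atten : (n_bins, n_materials) known mass attenuation coefficients
        returns    : (n_materials,) estimated area densities
                     (e.g., iron, calcium, potassium, water)
        """
        coeffs, *_ = np.linalg.lstsq(mass_atten, log_atten, rcond=None)
        return coeffs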
Vu, Kim-Nhien; Gilbert, Guillaume; Chalut, Marianne; Chagnon, Miguel; Chartrand, Gabriel; Tang, An
2016-05-01
To assess the agreement between published magnetic resonance imaging (MRI)-based regions of interest (ROI) sampling methods using liver mean proton density fat fraction (PDFF) as the reference standard. This retrospective, institutional review board-approved study was conducted in 35 patients with type 2 diabetes. Liver PDFF was measured by magnetic resonance spectroscopy (MRS) using a stimulated-echo acquisition mode sequence and MRI using a multiecho spoiled gradient-recalled echo sequence at 3.0T. ROI sampling methods reported in the literature were reproduced, and liver mean PDFF obtained by whole-liver segmentation was used as the reference standard. Intraclass correlation coefficients (ICCs), Bland-Altman analysis, repeated-measures analysis of variance (ANOVA), and paired t-tests were performed. The ICC between MRS and MRI-PDFF was 0.916. Bland-Altman analysis showed excellent intermethod agreement with a bias of -1.5 ± 2.8%. The repeated-measures ANOVA found no systematic variation of PDFF among the nine liver segments. The correlation between liver mean PDFF and ROI sampling methods was very good to excellent (0.873 to 0.975). Paired t-tests revealed significant differences (P < 0.05) with ROI sampling methods that exclusively or predominantly sampled the right lobe. Significant correlations with mean PDFF were found with sampling methods that included a higher number of segments, a total area equal to or larger than 5 cm², or sampled both lobes (P = 0.001, 0.023, and 0.002, respectively). MRI-PDFF quantification methods should sample each liver segment in both lobes and include a total surface area equal to or larger than 5 cm² to provide a close estimate of the liver mean PDFF. © 2015 Wiley Periodicals, Inc.
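The Bland-Altman agreement statistics used above are simple to reproduce; a minimal sketch (variable names are illustrative):

    import numpy as np

    def bland_altman(a, b):
        """Bias and 95% limits of agreement between two measurement methods."""
        d = np.asarray(a, float) - np.asarray(b, float)
        bias, sd = d.mean(), d.std(ddof=1)
        return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

    # e.g. bland_altman(roi_pdff, whole_liver_pdff) on paired PDFF estimates.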
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taslakian, Bedros, E-mail: btaslakian@gmail.com; Sebaaly, Mikhael Georges, E-mail: ms246@aub.edu.lb; Al-Kutoubi, Aghiad, E-mail: mk00@aub.edu.lb
2016-04-15
Performing an interventional procedure imposes a commitment on interventional radiologists to conduct the initial patient assessment, determine the best course of therapy, and provide long-term care. Patient care before and after an interventional procedure, identification, and management of early and delayed complications of various procedures are equal in importance to the procedure itself. In this second part, we complete the comprehensive, methodical review of pre-procedural care and patient preparation before vascular and interventional radiology procedures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frazin, Richard A., E-mail: rfrazin@umich.edu
2013-04-10
Heretofore, the literature on exoplanet detection with coronagraphic telescope systems has paid little attention to the information content of short exposures and methods of utilizing the measurements of adaptive optics wavefront sensors. This paper provides a framework for the incorporation of the wavefront sensor measurements in the context of observing modes in which the science camera takes millisecond exposures. In this formulation, the wavefront sensor measurements provide a means to jointly estimate the static speckle and the planetary signal. The ability to estimate planetary intensities in as little as a few seconds has the potential to greatly improve the efficiency of exoplanet search surveys. For simplicity, the mathematical development assumes a simple optical system with an idealized Lyot coronagraph. Unlike currently used methods, in which increasing the observation time beyond a certain threshold is useless, this method produces estimates whose error covariances decrease more quickly than inversely proportional to the observation time. This is due to the fact that the estimates of the quasi-static aberrations are informed by a new random (but approximately known) wavefront every millisecond. The method can be extended to include angular (due to diurnal field rotation) and spectral diversity. Numerical experiments are performed with wavefront data from the AEOS Adaptive Optics System sensing at 850 nm. These experiments assume a science camera wavelength λ of 1.1 μm, that the measured wavefronts are exact, and a Gaussian approximation of shot noise. The effects of detector read-out noise and other issues are left to future investigations. A number of static aberrations are introduced, including one with a spatial frequency exactly corresponding to the planet location, which was at a distance of ≈3λ/D from the star. Using only 4 s of simulated observation time, a planetary intensity of ≈1 photon ms⁻¹, and a stellar intensity of ≈10⁵ photons ms⁻¹ (contrast ratio 10⁵), the short-exposure estimation method recovers the amplitudes of the static aberrations with 1% accuracy, and the planet brightness with 20% accuracy.
Particulate-matter content of 11 cephalosporin injections: conformance with USP limits.
Parkins, D A; Taylor, A J
1987-05-01
The particulate-matter content of 11 dry-powder cephalosporin injections was determined using a modified version of the official United States Pharmacopeial Convention (USP) method for particulate matter in small-volume injections (SVIs). Ten vials of each cephalosporin product were each constituted with 10 mL of Water for Injections BP that had been filtered through a 0.22-micron membrane. The pooled contents of the 10 vials for each product were allowed to stand under reduced pressure to ensure removal of gas bubbles. Particulate-matter content was determined using a HIAC/Royco particle counter on six 10-mL samples obtained from the pooled solutions for each product. All solution preparation and particle counting were performed in a horizontal-laminar-airflow hood. Modifications of the USP method used in this study included the use of six rather than two samples from each pooled solution, the addition of diluent to the injections through the rubber closure with a needle instead of into the open container, and changes in the degassing method. Particle counts for all products examined were lower than USP limits for SVIs. All but two products contained less than 15% of USP limits for particles greater than or equal to 10 microns in effective diameter and particles greater than or equal to 25 microns in effective diameter. The standard USP method for degassing (standing for two minutes) was inadequate. Application of reduced pressure for up to 10 minutes was necessary for thorough degassing of products.(ABSTRACT TRUNCATED AT 250 WORDS)
75 FR 80117 - Methods for Measurement of Filterable PM10
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-21
...This action promulgates amendments to Methods 201A and 202. The final amendments to Method 201A add a particle-sizing device to allow for sampling of particulate matter with mean aerodynamic diameters less than or equal to 2.5 micrometers (PM2.5 or fine particulate matter). The final amendments to Method 202 revise the sample collection and recovery procedures of the method to reduce the formation of reaction artifacts that could lead to inaccurate measurements of condensable particulate matter. Additionally, the final amendments to Method 202 eliminate most of the hardware and analytical options in the existing method, thereby increasing the precision of the method and improving the consistency in the measurements obtained between source tests performed under different regulatory authorities. This action also announces that EPA is taking no action to affect the already established January 1, 2011 sunset date for the New Source Review (NSR) transition period, during which EPA is not requiring that State NSR programs address condensable particulate matter emissions.
Evaluation of a newly developed media-supported 4-step approach for basic life support training
2012-01-01
Objective The quality of external chest compressions (ECC) is of primary importance within basic life support (BLS). Recent guidelines delineate the so-called "4-step approach" for teaching practical skills within resuscitation training guided by a certified instructor. The objective of this study was to evaluate whether a "media-supported 4-step approach" for BLS training leads to equal practical performance compared to the standard 4-step approach. Materials and methods After baseline testing, 220 laypersons were either trained using the widely accepted method for resuscitation training (4-step approach) or using a newly created "media-supported 4-step approach", both of equal duration. In this approach, steps 1 and 2 were delivered via a standardised self-produced podcast, which included all of the information regarding the BLS algorithm and resuscitation skills. Participants were tested on manikins in the same mock cardiac arrest single-rescuer scenario prior to intervention, after one week and after six months with respect to ECC performance, and participants were surveyed about the approach. Results Participants (age 23 ± 11, 69% female) reached comparable practical ECC performance in both groups, with no statistical difference. Even after six months, there was no difference detected in the quality of the initial assessment algorithm or delay concerning initiation of CPR. Overall, at least 99% of the intervention group (n = 99; mean 1.5 ± 0.8; 6-point Likert scale: 1 = completely agree, 6 = completely disagree) agreed that the video provided an adequate introduction to BLS skills. Conclusions The "media-supported 4-step approach" leads to comparable practical ECC performance compared to standard teaching, even with respect to retention of skills. Therefore, this approach could be useful in special educational settings where, for example, instructors' resources are sparse or large-group sessions have to be prepared. PMID:22647148
Teh, V; Sim, K S; Wong, E K
2016-11-01
According to statistics from the World Health Organization (WHO), stroke is one of the major causes of death globally. Computed tomography (CT) is one of the main medical diagnostic systems used for the diagnosis of ischemic stroke. CT scans provide brain images in Digital Imaging and Communications in Medicine (DICOM) format. The presentation of CT brain images relies mainly on the window setting (window center and window width), which converts an image from DICOM format into normal grayscale format. Nevertheless, ordinary window parameters cannot deliver proper contrast on CT brain images for ischemic stroke detection. In this paper, a new method, namely gamma correction extreme-level eliminating with weighting distribution (GCELEWD), is implemented to improve the contrast of CT brain images. GCELEWD is capable of highlighting the hypodense region for diagnosis of ischemic stroke. The performance of this new technique is compared with four existing contrast enhancement techniques: brightness preserving bi-histogram equalization (BBHE), dualistic sub-image histogram equalization (DSIHE), extreme-level eliminating histogram equalization (ELEHE), and adaptive gamma correction with weighting distribution (AGCWD). GCELEWD shows better visualization for ischemic stroke detection and higher values with image quality assessment (IQA) modules. SCANNING 38:842-856, 2016. © 2016 Wiley Periodicals, Inc.
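GCELEWD's exact pipeline is not reproduced here; for orientation, the AGCWD-style step it builds on (gamma correction driven by a weighted intensity distribution) might be sketched as follows, with the weighting exponent as an assumption:

    import numpy as np

    def weighted_gamma_correct(img, alpha=0.5):
        """AGCWD-style enhancement sketch: weight the histogram, build its
        CDF, and apply a per-level gamma of (1 - cdf) to 8-bit gray levels."""
        hist = np.bincount(img.ravel(), minlength=256).astype(float)
        pdf = hist / hist.sum()
        # Weighted distribution compresses the dominance of frequent levels.
        w = pdf.max() * ((pdf - pdf.min()) / (pdf.max() - pdf.min() + 1e-12)) ** alpha
        cdf = np.cumsum(w) / (w.sum() + 1e-12)
        levels = np.arange(256) / 255.0
        lut = np.round(255.0 * levels ** (1.0 - cdf)).astype(np.uint8)
        return lut[img]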
Low-Coherence light source design for ESPI in-plane displacement measurements
NASA Astrophysics Data System (ADS)
Heikkinen, J. J.; Schajer, G. S.
2018-01-01
The ESPI method for surface deformation measurements requires the use of a light source with high coherence length to accommodate the optical path length differences present in the apparatus. Such high-coherence lasers, however, are typically large, delicate and costly. Laser diodes, on the other hand, are compact, mechanically robust and inexpensive, but unfortunately they have short coherence length. The present work aims to enable the use of a laser diode as an illumination source by equalizing the path lengths within an ESPI interferometer. This is done by using a reflection type diffraction grating to compensate for the path length differences. The high optical power efficiency of such diffraction gratings allows the use of much lower optical power than in previous interferometer designs using transmission gratings. The proposed concept was experimentally investigated by doing in-plane ESPI measurements using a high-coherence single longitudinal mode (SLM) laser, a laser diode and then a laser diode with path length optimization. The results demonstrated the limitations of using an uncompensated laser diode. They then showed the effectiveness of adding a reflection type diffraction grating to equalize the interferometer path lengths. This addition enabled the laser diode to produce high measurement quality across the entire field of view, rivaling although not quite equaling the performance of a high-coherence SLM laser source.
Automatic latency equalization in VHDL-implemented complex pipelined systems
NASA Astrophysics Data System (ADS)
Zabołotny, Wojciech M.
2016-09-01
In pipelined data processing systems it is very important to ensure that parallel paths delay data by the same number of clock cycles. If that condition is not met, the processing blocks receive data not properly aligned in time and produce incorrect results. Manual equalization of latencies is a tedious and error-prone task. This paper presents an automatic method of latency equalization in systems described in VHDL. The proposed method uses simulation to measure latencies and verify the introduced correction. The solution is portable between different simulation and synthesis tools. The method does not increase the complexity of the synthesized design compared to a solution based on manual latency adjustment. An example implementation of the proposed methodology, together with a simple design demonstrating its use, is available as an open source project under the BSD license.
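Outside the VHDL specifics, the balancing arithmetic is straightforward; a sketch that turns simulated per-path latencies into the number of delay registers each path needs (path names and values are hypothetical):

    def delay_padding(latencies):
        """Given measured per-path latencies (clock cycles), return the number
        of extra pipeline registers each path needs so all paths match."""
        longest = max(latencies.values())
        return {path: longest - lat for path, lat in latencies.items()}

    # Hypothetical latencies measured in simulation.
    print(delay_padding({"fir": 7, "cordic": 12, "bypass": 1}))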
Development of a New Paradigm for Analysis of Disdrometric Data
NASA Astrophysics Data System (ADS)
Larsen, Michael L.; Kostinski, Alexander B.
2017-04-01
A number of disdrometers currently on the market are able to characterize hydrometeors on a drop-by-drop basis with arrival timestamps associated with each arriving hydrometeor. This allows an investigator to parse a time series into disjoint intervals that have equal numbers of drops, instead of the traditional subdivision into equal time intervals. Such a "fixed-N" partitioning of the data can provide several advantages over the traditional equal time binning method, especially within the context of quantifying measurement uncertainty (which typically scales with the number of hydrometeors in each sample). An added bonus is the natural elimination of measurements that are devoid of all drops. This analysis method is investigated by utilizing data from a dense array of disdrometers located near Charleston, South Carolina, USA. Implications for the usefulness of this method in future studies are explored.
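A fixed-N partition of a drop-arrival record might look like the following sketch (function and variable names are illustrative):

    import numpy as np

    def fixed_n_intervals(arrival_times, n):
        """Split a sorted array of drop arrival timestamps into disjoint
        intervals containing exactly n drops each (trailing remainder dropped)."""
        t = np.asarray(arrival_times)
        m = (t.size // n) * n
        groups = t[:m].reshape(-1, n)
        # Interval durations now vary while counts stay fixed, so the
        # per-interval counting uncertainty is constant across the record.
        durations = groups[:, -1] - groups[:, 0]
        return groups, durations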
Soper, Tracey
2017-04-01
The aim of this quantitative experimental study was to examine which of three instructional methodologies are effective for knowledge acquisition by professional registered nurses: traditional lecture, online electronic learning (e-learning), or self-study take-home packets. A true experimental design was used to contrast the knowledge acquisition of 87 randomly selected registered nurses. A 40-item Acute Coronary Syndrome (ACS) true/false test was used to measure knowledge acquisition. At the 0.05 significance level, the ANOVA test revealed no difference in knowledge acquisition among the three instructional methods, and all groups scored at the acceptable level for certification. It can be concluded that while all three instructional methods were equally effective for knowledge acquisition, they may not be equally cost- and time-effective. Therefore, hospital educators may wish to formulate policies regarding choice of instructional method that take into account the efficient use of nurses' time and institutional resources.
Keshavarzi, Sareh; Ayatollahi, Seyyed Mohammad Taghi; Zare, Najaf; Pakfetrat, Maryam
2012-01-01
BACKGROUND. In many studies with longitudinal data, time-dependent covariates can only be measured intermittently (not at all observation times), which presents difficulties for standard statistical analyses. This situation is common in medical studies, and methods that deal with this challenge would be useful. METHODS. In this study, we applied seemingly unrelated regression (SUR)-based models with respect to each observation time in longitudinal data with intermittently observed time-dependent covariates, and further compared these models with mixed-effect regression models (MRMs) under three classic imputation procedures. Simulation studies were performed to compare the sampling properties of the estimated coefficients for the different modeling choices. RESULTS. In general, the proposed models performed well in the presence of intermittently observed time-dependent covariates. However, when we considered only the observed values of the covariate without any imputation, the resulting biases were greater. The performance of the proposed SUR-based models was nearly similar to that of MRM with classic imputation methods, with approximately equal amounts of bias and MSE. CONCLUSION. The simulation study suggests that SUR-based models work as efficiently as MRM in the case of intermittently observed time-dependent covariates. Thus, they can be used as an alternative to MRM.
NASA Astrophysics Data System (ADS)
Tavakoli Taba, Seyedamir; Hossain, Liaquat; Heard, Robert; Brennan, Patrick; Lee, Warwick; Lewis, Sarah
2017-03-01
Rationale and objectives: Observer performance has been widely studied by examining the characteristics of individuals. From a systems perspective, however, understanding the system's output also requires studying the interactions between observers. This research describes a mixed methods approach that applies social network analysis (SNA) together with the more traditional approach of examining personal/individual characteristics to understand observer performance in mammography. Materials and Methods: Using social network theories and measures to understand observer performance, we designed a social network survey instrument for collecting personal and network data about observers involved in mammography performance studies. We present the results of a study by our group in which 31 Australian breast radiologists reviewed 60 mammographic cases (20 abnormal and 40 normal cases) and then completed an online questionnaire about their social networks and personal characteristics. A jackknife free response operating characteristic (JAFROC) method was used to measure the performance of the radiologists. JAFROC was tested against various personal and network measures to verify the theoretical model. Results: The results from this study suggest a strong association between social networks and observer performance for Australian radiologists. Network factors accounted for 48% of the variance in observer performance, compared to 15.5% for personal characteristics in this study group. Conclusion: This study suggests a strong new direction for research into improving observer performance. Future studies in observer performance should consider social network influences as part of their research paradigm, with equal or greater vigour than traditional constructs of personal characteristics.
NASA Technical Reports Server (NTRS)
Chen, Shu-cheng, S.
2009-01-01
For the preliminary design and the off-design performance analysis of axial flow turbines, a pair of intermediate level-of-fidelity computer codes, TD2-2 (design; reference 1) and AXOD (off-design; reference 2), are being evaluated for use in turbine design and performance prediction of modern high performance aircraft engines. TD2-2 employs a streamline curvature method for design, while AXOD approaches the flow analysis with an equal radius-height domain decomposition strategy. Both methods resolve only the flows in the annulus region while modeling the impact introduced by the blade rows. The mathematical formulations and derivations involved in both methods are documented in references 3 and 4 (for TD2-2) and in reference 5 (for AXOD). The focus of this paper is to discuss the fundamental issues of applicability and compatibility of the two codes as a pair of companion pieces, to perform preliminary design and off-design analysis for modern aircraft engine turbines. Two validation cases for the design and the off-design prediction using TD2-2 and AXOD, conducted on two existing high efficiency turbines developed and tested in the NASA/GE Energy Efficient Engine (GE-E3) Program, the High Pressure Turbine (HPT; two stages, air cooled) and the Low Pressure Turbine (LPT; five stages, un-cooled), are provided in support of the analysis and discussion presented in this paper.
Channel estimation in few mode fiber mode division multiplexing transmission system
NASA Astrophysics Data System (ADS)
Hei, Yongqiang; Li, Li; Li, Wentao; Li, Xiaohui; Shi, Guangming
2018-03-01
It is abundantly clear that obtaining channel state information (CSI) is of great importance for equalization and detection in coherent receivers. However, to the best of the authors' knowledge, most of the existing literature assumes that CSI is perfectly known at the receiver; so far, few papers discuss the effects of imperfect CSI, caused by channel estimation, on MDM system performance. Motivated by that, in this paper the channel estimation in a few mode fiber (FMF) mode division multiplexing (MDM) system is investigated, in which two classical channel estimation methods, the least squares (LS) method and the minimum mean square error (MMSE) method, are discussed under the assumption of spatially white noise lumped at the receiver side of the MDM system. Both the capacity and the BER performance of the MDM system affected by mode-dependent gain or loss (MDL) have been studied for different channel estimation errors. Simulation results show that the capacity and BER performance of the MDM system are further degraded by channel estimation errors, and that a 1e-3 variance of channel estimation error is acceptable in an MDM system with 0-6 dB MDL values.
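Textbook forms of the two estimators, under the paper's spatially-white-noise assumption, might look like this sketch (matrix names and the unit-variance channel prior are illustrative assumptions):

    import numpy as np

    def ls_estimate(Y, X):
        """LS channel estimate from received block Y = H X + N and training X."""
        return Y @ X.conj().T @ np.linalg.inv(X @ X.conj().T)

    def mmse_estimate(Y, X, noise_var):
        """LMMSE estimate assuming i.i.d. unit-variance channel entries and
        spatially white noise of variance `noise_var` at the receiver."""
        M = X.shape[0]  # number of transmit modes
        G = X @ X.conj().T + noise_var * np.eye(M)
        return Y @ X.conj().T @ np.linalg.inv(G)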
Kupinski, M. K.; Clarkson, E.
2015-01-01
We present a new method for computing optimized channels for channelized quadratic observers (CQO) that is feasible for high-dimensional image data. The method for calculating channels is applicable in general and optimal for Gaussian distributed image data. Gradient-based algorithms for determining the channels are presented for five different information-based figures of merit (FOMs). Analytic solutions for the optimum channels for each of the five FOMs are derived for the case of equal mean data for both classes. The optimum channels for three of the FOMs under the equal mean condition are shown to be the same. This result is critical since some of the FOMs are much easier to compute. Implementing the CQO requires a set of channels and the first- and second-order statistics of channelized image data from both classes. The dimensionality reduction from M measurements to L channels is a critical advantage of CQO since estimating image statistics from channelized data requires smaller sample sizes and inverting a smaller covariance matrix is easier. In a simulation study we compare the performance of ideal and Hotelling observers to CQO. The optimal CQO channels are calculated using both eigenanalysis and a new gradient-based algorithm for maximizing Jeffrey's divergence (J). Optimal channel selection without eigenanalysis makes the J-CQO on large-dimensional image data feasible. PMID:26366764
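For contrast with the quadratic observer, the linear Hotelling observer used as a comparison baseline above has a closed-form template; a minimal sketch:

    import numpy as np

    def hotelling_template(mean0, mean1, cov0, cov1):
        """Hotelling observer template w, applied to channelized data as w @ v.

        Uses the average of the two class covariances. With equal class means
        the linear observer carries no information and the quadratic (CQO)
        terms do all the work, which is the regime the paper targets."""
        S = 0.5 * (cov0 + cov1)
        return np.linalg.solve(S, mean1 - mean0)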
Modified gastroduodenostomy in laparoscopy-assisted distal gastrectomy: a 'tornado' anastomosis.
Kubota, Keisuke; Kuroda, Junko; Yoshida, Masashi; Okada, Akihiro; Nitori, Nobuhiro; Kitajima, Masaki
2013-01-01
This study examined the utility of a modified double-stapling end-to-end gastroduodenostomy method ('Tornado' anastomosis) compared to a method with an additional gastrotomy ('Anterior Incision' method) in laparoscopy-assisted distal gastrectomy. Forty-two patients with gastric cancer who underwent laparoscopy-assisted distal gastrectomy were analyzed retrospectively. Billroth-I using an additional gastrotomy was performed in 24 patients (AI group) and Billroth-I without an additional gastrotomy was performed in 18 (TOR group). Clinicopathological features, operative outcomes (lymph node dissection, operative time, operative blood loss) and postoperative outcomes (complications, postoperative hospital stay, and body weight loss at one year after surgery) were evaluated and compared between groups. Operative time was significantly shorter in the TOR group (251 min) than in the AI group (282 min) (p < 0.01). There were no statistically significant differences in operative blood loss, postoperative complications, or hospital stay between the 2 study groups. Body weight loss at one year after surgery was -5.8 kg in the TOR group and -6.5 kg in the AI group, without a statistically significant difference. Completion time for the Billroth-I anastomosis was significantly shorter with the Tornado anastomosis than with the Anterior Incision method, with equal safety between the two methods.
Quasi-optical reflective polarimeter for wide millimeter-wave band
NASA Astrophysics Data System (ADS)
Shinnaga, Hiroko; Tsuboi, Masato; Kasuga, Takashi
1998-11-01
We constructed a new reflective-type polarimeter system for 35-250 GHz for the 45 m telescope at Nobeyama Radio Observatory (NRO). Using the system, we can measure both linear polarization and circular polarization as needed. The new system has two key features: first, the center frequency of the polarimeter can be tuned within the available frequency range; second, the insertion loss is low (0.15 ± 0.03 dB at 86 GHz). These characteristics extend the achievable scientific aims. In this paper, we present the design and the performance of the system. Using the system, we measured the linear polarization of some astronomical objects at 86 GHz, observing SiO ν = 0, 1, and 2 at J = 2-1 and 29SiO ν = 0, J = 2-1 simultaneously. As a result, the observations revealed that SiO ν = 0, J = 2-1 emission from VY Canis Majoris is highly linearly polarized, with a degree of linear polarization up to 64%, even though SiO J = 2-1, ν = 1 is not highly linearly polarized. This highly linearly polarized feature is strong evidence that the 28SiO J = 2-1 transition in the ground vibrational state originates through maser action. This is the first detection of cosmic maser emission in the SiO ν = 0, J = 2-1 transition.
NASA Astrophysics Data System (ADS)
Teter, Andrzej; Kolakowski, Zbigniew
2018-01-01
The numerical modelling of a plate structure was performed with the finite element method and a one-mode approach based on Koiter's method. The first-order approximation of Koiter's method enables one to solve the eigenvalue problem. The second-order approximation describes post-buckling equilibrium paths. In the finite element analysis, the Lanczos method was used to solve the linear buckling problem. Simulations of the non-linear problem were performed with the Newton-Raphson method. Detailed calculations were carried out for a short Z-column made of general laminates. The configurations of the laminated layers were non-symmetric. Due to the possibilities of its application, the general laminate is very interesting. The length of the samples was chosen to obtain the lowest value of local buckling load. The amplitude of the initial imperfections was 10% of the wall thickness. The thin-walled structures were simply supported at both ends. The numerical results were verified in experimental tests. A strain-gauge technique was applied. A static compression test was performed on a universal testing machine with a special grip consisting of two rigid steel plates and clamping sleeves. Specimens were produced with an autoclave technique. Tests were performed at a constant cross-bar velocity of 2 mm/min. The compressive load was less than 150% of the bifurcation load. Additionally, soft and thin pads were used to reduce inaccuracy at the sample ends.
Study of a co-designed decision feedback equalizer, deinterleaver, and decoder
NASA Technical Reports Server (NTRS)
Peile, Robert E.; Welch, Loyd
1990-01-01
A technique that promises better quality data from band-limited channels at lower received power in digital transmission systems is presented. Data transmission in such systems often suffers from intersymbol interference (ISI) and noise. Two separate techniques, channel coding and equalization, have driven considerable advances in the state of communication systems, and both concern themselves with removing the undesired effects of a communication channel. Equalizers mitigate the ISI, whereas coding schemes are used to incorporate error correction. In the past, most of the research in these two areas has been carried out separately. However, the individual techniques have strengths and weaknesses that are complementary in many applications: an integrated approach realizes gains in excess of a simple juxtaposition. Coding schemes have been successfully used in cascade with linear equalizers, which in the absence of ISI provide excellent performance. However, when both ISI and the noise level are relatively high, nonlinear receivers like the decision feedback equalizer (DFE) perform better. The DFE has its drawbacks: it suffers from error propagation. The technique presented here takes advantage of interleaving to integrate the two approaches so that the error propagation in the DFE can be reduced with the help of the error correction provided by the decoder. The results of simulations carried out for both binary and non-binary channels confirm that significant gain can be obtained by codesigning the equalizer and decoder. Although only systems with time-invariant channels and a simple DFE with linear filters were examined, the technique is fairly general and can easily be modified for more sophisticated equalizers to obtain even larger gains.
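A bare-bones LMS-trained DFE, illustrating the feedback structure (and hence the error-propagation risk) described above; tap counts and step size are illustrative, and the coding/interleaving co-design is not shown:

    import numpy as np

    def dfe(rx, training, n_ff=8, n_fb=4, mu=1e-3):
        """Adaptive decision feedback equalizer (LMS-trained, real symbols).

        Feedforward taps filter received samples; feedback taps cancel
        post-cursor ISI using past decisions, so a wrong decision propagates,
        which is the weakness the codesigned decoder mitigates."""
        wf = np.zeros(n_ff); wb = np.zeros(n_fb)
        past = np.zeros(n_fb)                    # previous decisions
        for k in range(n_ff, len(training)):
            x = rx[k - n_ff:k][::-1]
            y = wf @ x - wb @ past
            d = training[k]                      # training-directed adaptation
            e = d - y
            wf += mu * e * x
            wb -= mu * e * past
            past = np.roll(past, 1); past[0] = d
        return wf, wb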
Race Equality Scheme 2005-2008
ERIC Educational Resources Information Center
Her Majesty's Inspectorate of Education, 2005
2005-01-01
Her Majesty's Inspectorate of Education (HMIE) is strongly committed to promoting race equality in the way that HMIE staff go about performing their role within Scottish education. Scottish society reflects cultural, ethnic, religious and linguistic diversity and Scottish education should be accessible to all. No-one should be disadvantaged or…
The Filtered Abel Transform and Its Application in Combustion Diagnostics
NASA Technical Reports Server (NTRS)
Simons, Stephen N. (Technical Monitor); Yuan, Zeng-Guang
2003-01-01
Many non-intrusive combustion diagnosis methods generate line-of-sight projections of a flame field. To reconstruct the spatial field of the measured properties, these projections need to be deconvoluted. When the spatial field is axisymmetric, commonly used deconvolution methods include the Abel transform, the onion peeling method, and the two-dimensional Fourier transform method and its derivatives such as the filtered back projection methods. This paper proposes a new approach for performing the Abel transform, which possesses the exactness of the Abel transform and the flexibility of incorporating various filters in the reconstruction process. The Abel transform is an exact method and the simplest among these commonly used methods. It is evinced in this paper that all exact reconstruction methods for axisymmetric distributions must be equivalent to the Abel transform because of its uniqueness and exactness. Detailed proof is presented to show that the two-dimensional Fourier method, when applied to axisymmetric cases, is identical to the Abel transform. Discrepancies among various reconstruction methods stem from the different approximations made to perform numerical calculations. An equation relating the spectrum of a set of projection data to that of the corresponding spatial distribution is obtained, which shows that the spectrum of the projection is equal to the Abel transform of the spectrum of the corresponding spatial distribution. From the equation, if either the projection or the distribution is bandwidth limited, the other is also bandwidth limited, and both have the same bandwidth. If the two are not bandwidth limited, the Abel transform has a bias against low-wave-number components in most practical cases. This explains why the Abel transform and all exact deconvolution methods are sensitive to high-wave-number noise. The filtered Abel transform is based on the fact that the Abel transform of filtered projection data is equal to an integral transform of the original projection data with the kernel function being the Abel transform of the filtering function. The kernel function is independent of the projection data and can be obtained separately when the filtering function is selected. Users can select the best filtering function for a particular set of experimental data. Once the kernel function is obtained, it can be applied repeatedly to a number of projection data sets (rows) from the same experiment. When an entire flame image that contains a large number of projection lines needs to be processed, the new approach significantly reduces computational effort in comparison with the conventional approach, in which each projection data set is deconvoluted separately. Computer codes have been developed to perform the filtered Abel transform for an entire flame field. Measured soot volume fraction data of a jet diffusion flame are processed as an example.
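The forward Abel transform underlying the discussion can be approximated by direct quadrature; a crude sketch (not the filtered variant proposed in the paper, and the singular node at r = y is simply skipped, which is adequate only for a rough illustration):

    import numpy as np

    def abel_forward(f, r):
        """Forward Abel transform: F(y) = 2 * integral_y^R f(r) r dr / sqrt(r^2 - y^2).

        f : radial distribution sampled on radii r (uniform grid, r[0] = 0)
        Uses a simple trapezoidal rule, skipping the singular node at r = y."""
        f = np.asarray(f, dtype=float)
        F = np.zeros(r.size)
        for i, y in enumerate(r[:-1]):
            rr = r[i + 1:]                      # radii strictly above y
            g = f[i + 1:] * rr / np.sqrt(rr**2 - y**2)
            F[i] = 2.0 * np.sum((g[:-1] + g[1:]) / 2 * np.diff(rr))
        return F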
Sörlin, Ann; Lindholm, Lars; Ng, Nawi; Ohman, Ann
2011-08-26
Men and women have different patterns of health. These differences between the sexes present a challenge to the field of public health. The question of why women experience more health problems than men despite their longevity has been discussed extensively, with both social and biological theories being offered as plausible explanations. In this article, we focus on how gender equality in a partnership might be associated with the respondents' perceptions of health. This study was a cross-sectional survey with 1400 respondents. We measured gender equality using two different measures: 1) a self-reported gender equality index, and 2) a self-perceived gender equality question. The comparison of the self-reported gender equality index with the self-perceived gender equality question aimed to reveal possible disagreements between the normative discourse on gender equality and daily practice in couple relationships. We then evaluated the association with health, measured as self-rated health (SRH). With SRH dichotomized into 'good' and 'poor', logistic regression was used to assess factors associated with the outcome. For the comparison between the self-reported gender equality index and self-perceived gender equality, kappa statistics were used. Associations between gender equality and health found in this study vary with the type of gender equality measurement. Overall, we found little agreement between the self-reported gender equality index and self-perceived gender equality. Further, the patterns of agreement between self-perceived and self-reported gender equality were quite different for men and women: men perceived greater gender equality than they reported in the index, while women perceived less gender equality than they reported. The associations with health depended on the gender equality measurement used. Men and women perceive and report gender equality differently. This means that it is necessary not only to be conscious of the methods and measurements used to quantify men's and women's opinions of gender equality, but also to be aware of the implications for health outcomes.
Using Whispering-Gallery-Mode Resonators for Refractometry
NASA Technical Reports Server (NTRS)
Matsko, Andrey; Savchenkov, Anatoliy; Strekalov, Dmitry; Iltchenko, Vladimir; Maleki, Lute
2010-01-01
A method of determining the refractive and absorptive properties of optically transparent materials involves a combination of theoretical and experimental analysis of electromagnetic responses of whispering-gallery-mode (WGM) resonator disks made of those materials. The method was conceived especially for use in studying transparent photorefractive materials, for which purpose this method affords unprecedented levels of sensitivity and accuracy. The method is expected to be particularly useful for measuring temporally varying refractive and absorptive properties of photorefractive materials at infrared wavelengths. Still more particularly, the method is expected to be useful for measuring drifts in these properties that are so slow that, heretofore, the properties were assumed to be constant. The basic idea of the method is to infer values of the photorefractive properties of a material by seeking to match (1) theoretical predictions of the spectral responses (or selected features thereof) of a WGM resonator of known dimensions made of the material with (2) the actual spectral responses (or selected features thereof). Spectral features that are useful for this purpose include resonance frequencies, free spectral ranges (differences between resonance frequencies of adjacently numbered modes), and resonance quality factors (Q values). The method has been demonstrated in several experiments, one of which was performed on a WGM resonator made from a disk of LiNbO3 doped with 5 percent MgO. The free spectral range of the resonator was approximately 3.42 GHz at wavelengths in the vicinity of 780 nm, the smallest full width at half maximum of a mode was approximately 50 MHz, and the thickness of the resonator in the area of mode localization was 30 microns. In the experiment, laser power of 9 mW was coupled into the resonator with an efficiency of 75 percent, and the laser was scanned over a frequency band 9 GHz wide at a nominal wavelength of approximately 780 nm. Resonance frequencies were measured as functions of time during several hours of exposure to the laser light. The results of these measurements, plotted in the figure, show a pronounced collective frequency drift of the resonator modes. The size of the drift has been estimated to correspond to a change of 8.5 × 10^-5 in the effective ordinary index of refraction of the resonator material.
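As a rough illustration of how one spectral feature constrains the geometry, the simple ring-resonator relation FSR = c/(2*pi*R*n_eff) links the reported free spectral range to the disk radius; the effective index used below (~2.25, near the ordinary index of MgO-doped LiNbO3 around 780 nm) is an assumption of this sketch, not a figure from the record.

```python
import numpy as np

C = 299_792_458.0   # speed of light in vacuum, m/s

def radius_from_fsr(fsr_hz, n_eff):
    """Disk radius implied by a measured free spectral range,
    using the simple ring relation FSR = c / (2*pi*R*n_eff)."""
    return C / (2.0 * np.pi * n_eff * fsr_hz)

# An assumed n_eff of ~2.25 gives R of ~6.2 mm for the reported 3.42 GHz FSR.
print(radius_from_fsr(3.42e9, 2.25))
```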
Medical adherence to acne therapy: a systematic review.
Snyder, Stephanie; Crandell, Ian; Davis, Scott A; Feldman, Steven R
2014-04-01
Poor adherence of acne patients to treatment may equate to poor clinical efficacy, increased healthcare costs, and unnecessary treatments. Authors have investigated risk factors for poor medical adherence and how to improve this difficult problem in the context of acne. This systematic review aims to describe what methods have been used to measure adherence, what is known about acne patients' adherence to treatment, and the factors affecting adherence. A MEDLINE search was performed for randomized controlled trials published between 1978 and June 2013, focusing on patient adherence to prescribed acne medications. A test for equality of proportions was performed on studies of similar design to collectively analyze adherence to oral versus topical medication. The self-reported adherence data collected from these clinical trials were then compared with adherence data from a pharmacy database study. Studies varied in modalities of data collection, but the majority utilized subjective methods. Topical therapies were more often studied than oral. The overall oral adherence rate, as calculated by a test of equality of proportions, was 76.3%, while the overall topical adherence rate was 75.8% (p=0.927). The occurrence of side effects and young age were cited as the top reasons for poor adherence, followed by forgetfulness. The MEDLINE search resulted in a limited sample of adherence studies. In addition, there is currently no standardized or fully validated method of measurement, allowing for variability in what was considered 'adherent'. Lastly, data collected via subjective methods cannot guarantee reliable results. Overall, the values reflected a population adherent to both topical and oral medications, with no significant difference in adherence between the two. However, the methodologies used by many of the studies were weak, and the findings are not consistent with results of more objective measures of adherence. The leading factors that contribute to poor adherence may be reduced with enhanced patient consultation, reminder systems, and education.
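The "test of equality of proportions" cited above is, in its standard form, a pooled two-sample z-test; a minimal sketch follows, with counts invented purely to reproduce rates near the reported 76.3% and 75.8%.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-sample z-test for equality of proportions (sketch)."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                         # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))   # pooled standard error
    z = (p1 - p2) / se
    return z, math.erfc(abs(z) / math.sqrt(2))        # z and two-sided p-value

# Hypothetical counts chosen only to mimic ~76.3% oral vs ~75.8% topical.
print(two_proportion_z(229, 300, 303, 400))
```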
NASA Astrophysics Data System (ADS)
Wang, Yao; Vijaya Kumar, B. V. K.
2017-05-01
The increased track density in bit patterned media recording (BPMR) causes increased inter-track interference (ITI), which degrades the bit error rate (BER) performance. In order to mitigate the effect of the ITI, signals from multiple tracks can be equalized by a 2D equalizer with 1D target. Usually, the 2D fixed equalizer coefficients are obtained by using a pseudo-random bit sequence (PRBS) for training. In this study, a 2D variable equalizer is proposed, where various sets of 2D equalizer coefficients are predetermined and stored for different ITI patterns besides the usual PRBS training. For data detection, as the ITI patterns are unknown in the first global iteration, the main and adjacent tracks are equalized with the conventional 2D fixed equalizer, detected with Bahl-Cocke-Jelinek-Raviv (BCJR) detector and decoded with low-density parity-check (LDPC) decoder. Then using the estimated bit information from main and adjacent tracks, the ITI pattern for each island of the main track can be estimated and the corresponding 2D variable equalizers are used to better equalize the bits on the main track. This process is executed iteratively by feeding back the main track information. Simulation results indicate that for both single-track and two-track detection, the proposed 2D variable equalizer can achieve better BER and frame error rate (FER) compared to that with the 2D fixed equalizer.
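The 2D fixed equalizer used as the baseline here is conventionally designed by least squares over known training data; the sketch below shows that generic step only. The array shapes, equalizer span, and alignment are illustrative assumptions, and the paper's bank of ITI-pattern-dependent variable equalizers is not reproduced.

```python
import numpy as np

def train_2d_equalizer(r, d, n_tracks=3, n_taps=5):
    """Least-squares design of 2D equalizer taps from training data (sketch).
    r: readback samples, shape (tracks, length), main track included
    d: desired 1D-target output for the main track, length r.shape[1]"""
    half = n_taps // 2
    _, N = r.shape
    X = np.array([r[:n_tracks, k - half:k + half + 1].ravel()
                  for k in range(half, N - half)])     # sliding 2D windows
    y = d[half:N - half]
    w, *_ = np.linalg.lstsq(X, y, rcond=None)          # LS tap solution
    return w.reshape(n_tracks, n_taps)
```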
Pressure recovery performance of conical diffusers at high subsonic Mach numbers
NASA Technical Reports Server (NTRS)
Dolan, F. X.; Runstadler, P. W., Jr.
1973-01-01
The pressure recovery performance of conical diffusers has been measured for a wide range of geometries and inlet flow conditions. The approximate level and location (in terms of diffuser geometry) of optimum performance were determined. Throat Mach numbers from low subsonic (M_t = 0.2) through choking (M_t = 1.0) were investigated in combination with throat blockage from 0.03 to 0.12. For fixed Mach number, performance was measured over a fourfold range of inlet Reynolds number. Maps of pressure recovery are presented as a function of diffuser geometry for fixed sets of inlet conditions. The influence of inlet blockage, throat Mach number, and inlet Reynolds number is discussed.
Altitudinal patterns of plant diversity on the Jade Dragon Snow Mountain, southwestern China.
Xu, Xiang; Zhang, Huayong; Tian, Wang; Zeng, Xiaoqiang; Huang, Hai
2016-01-01
Understanding altitudinal patterns of biological diversity and their underlying mechanisms is critically important for biodiversity conservation in mountainous regions. The contribution of area to plant diversity patterns is widely acknowledged and may mask the effects of other determinant factors. In this context, it is important to examine altitudinal patterns of corrected taxon richness by eliminating the area effect. Here we adopt two methods to correct observed taxon richness: a power-law relationship between richness and area, hereafter "method 1"; and richness counted in equal-area altitudinal bands, hereafter "method 2". We compare these two methods on the Jade Dragon Snow Mountain, which is the nearest large-scale altitudinal gradient to the Equator in the Northern Hemisphere. We find that seed plant species richness, genus richness, family richness, and species richness of trees, shrubs, herbs and Groups I-III (species with elevational range size <150, between 150 and 500, and >500 m, respectively) display distinct hump-shaped patterns along the equal-elevation altitudinal gradient. The corrected taxon richness based on method 2 (TRcor2) also shows hump-shaped patterns for all plant groups, while the one based on method 1 (TRcor1) does not. As for the abiotic factors influencing the patterns, mean annual temperature, mean annual precipitation, and mid-domain effect explain a larger part of the variation in TRcor2 than in TRcor1. In conclusion, for biodiversity patterns on the Jade Dragon Snow Mountain, method 2 preserves the significant influences of abiotic factors to the greatest degree while eliminating the area effect. Our results thus reveal that although the classical method 1 has earned more attention and approval in previous research, method 2 can perform better under certain circumstances. We not only confirm the essential contribution of method 1 in community ecology, but also highlight the significant role of method 2 in eliminating the area effect, and call for more application of method 2 in further macroecological studies.
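"Method 1" rests on the species-area power law S = c*A^z; under that assumption, a minimal sketch of the area correction is to fit the law on log scales and keep the residual (area-free) richness, as below.

```python
import numpy as np

def area_corrected_richness(S, A):
    """'Method 1'-style correction sketch: fit S = c * A^z on log scales
    and return residual (log) richness with the area effect removed.
    S: observed taxon richness per band; A: band areas."""
    logS, logA = np.log(S), np.log(A)
    z, logc = np.polyfit(logA, logS, 1)      # slope z, intercept log(c)
    expected = logc + z * logA               # richness predicted by area alone
    return logS - expected                   # area-corrected richness signal
```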
ERIC Educational Resources Information Center
Burris, Christopher E.
1975-01-01
Wulff v. Singleton represents the first case in which a physician has been granted standing when his sole injury arose from the possibility that he might not be paid for performing abortions. It also represents the first time a physician, as opposed to his patient, has been held to be denied equal protection. The court's rationale is examined.…
41 CFR 60-2.11 - Organizational profile.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Contracts OFFICE OF FEDERAL CONTRACT COMPLIANCE PROGRAMS, EQUAL EMPLOYMENT OPPORTUNITY, DEPARTMENT OF LABOR... establishment. It is one method contractors use to determine whether barriers to equal employment opportunity... that may assist in identifying organizational units where women or minorities are underrepresented or...
NASA Astrophysics Data System (ADS)
Yang, Fanlin; Zhao, Chunxia; Zhang, Kai; Feng, Chengkai; Ma, Yue
2017-07-01
Acoustic seafloor classification with multibeam backscatter measurements is an attractive approach for mapping seafloor properties over a large area. However, artifacts in the multibeam backscatter measurements prevent accurate characterization of the seafloor. In particular, the backscatter level is extremely strong and highly variable in the near-nadir region due to the specular echo phenomenon. Consequently, striped artifacts emerge in the backscatter image, which can degrade the classification accuracy. This study focuses on the striped artifacts in multibeam backscatter images. To this end, a calibration algorithm based on equal mean-variance fitting is developed. By fitting the local shape of the angular response curve, the striped artifacts are compressed and moved according to the relations between the mean and variance in the near-nadir and off-nadir regions. The algorithm utilizes the measured data of the near-nadir region and retains the basic shape of the response curve. The experimental results verify the high performance of the proposed method.
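A bare-bones moment-matching step conveys the flavor of equal mean-variance fitting: rescale the near-nadir samples so their mean and variance match an off-nadir reference region. This sketch omits the paper's fitting of the local angular-response shape and should be read as an assumption-laden illustration only.

```python
import numpy as np

def match_mean_variance(near, off):
    """Rescale near-nadir backscatter so its mean and variance equal
    those of an off-nadir reference region (moment-matching sketch)."""
    near, off = np.asarray(near, float), np.asarray(off, float)
    gain = off.std() / near.std()            # variance-matching gain
    return (near - near.mean()) * gain + off.mean()
```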
Hasegawa, Hideo
2011-07-01
Responses of small open oscillator systems to applied external forces have been studied with the use of an exactly solvable classical Caldeira-Leggett model in which a harmonic oscillator (system) is coupled to finite N-body oscillators (bath) with an identical frequency (ω_n = ω_o for n = 1 to N). We have derived exact expressions for positions, momenta, and energy of the system in nonequilibrium states and for work performed by applied forces. A detailed study has been made on an analytical method for canonical averages of physical quantities over the initial equilibrium state, which is much superior to numerical averages commonly adopted in simulations of small systems. The calculated energy of the system which is strongly coupled to a finite bath is fluctuating but nondissipative. It has been shown that the Jarzynski equality is valid in nondissipative nonergodic open oscillator systems regardless of the rate of applied ramp force.
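For reference, the Jarzynski equality that the abstract reports to hold in this nondissipative, nonergodic setting relates the ensemble average of the exponentiated work W performed by the ramp force to the equilibrium free-energy difference ΔF:

```latex
\left\langle e^{-\beta W} \right\rangle = e^{-\beta \Delta F},
\qquad \beta = \frac{1}{k_{\mathrm{B}} T}
```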
Rahbar, Mohammad H; Choi, Sangbum; Hong, Chuan; Zhu, Liang; Jeon, Sangchoon; Gardiner, Joseph C
2018-01-01
We propose a nonparametric shrinkage estimator for the median survival times from several independent samples of right-censored data, which combines the samples and hypothesis information to improve the efficiency. We compare the efficiency of the proposed shrinkage estimation procedure to the unrestricted estimator and the combined estimator through extensive simulation studies. Our results indicate that the performance of these estimators depends on the strength of homogeneity of the medians. When homogeneity holds, the combined estimator is the most efficient estimator. However, it becomes inconsistent when homogeneity fails. On the other hand, the proposed shrinkage estimator remains efficient. Its efficiency decreases as the survival medians deviate from equality, but it is expected to remain at least as good as the unrestricted estimator. Our simulation studies also indicate that the proposed shrinkage estimator is robust to moderate levels of censoring. We demonstrate application of these methods to estimating the median time for trauma patients to receive red blood cells in the Prospective Observational Multi-center Major Trauma Transfusion (PROMMTT) study.
Calculation of near optimum design of InP/In(0.53)Ga(0.47)As monolithic tandem solar cells
NASA Technical Reports Server (NTRS)
Renaud, P.; Vilela, M. F.; Freundlich, A.; Medelci, N.; Bensaoula, A.
1994-01-01
An analysis of the InP/In(0.53)Ga(0.47)As tandem solar cell structure has been undertaken to allow for maximum AM0 conversion efficiencies (space applications) while still taking into account both the theoretical and technological limitations. The dependence of device performance on intrinsic and extrinsic parameters, such as diffusion lengths and generation-recombination (GR) lifetimes, for N/P and P/N devices is clearly demonstrated. We also report for the first time the improvement attainable through the use of a new patterned tunnel junction as the inter-cell ohmic interconnect. Such a design minimizes the light absorption in the interconnect region and leads to a noticeable increase in the cell efficiency. Our computations predict 27 percent AM0 efficiency for N/P tandems with ideality factor gamma = 2 (GR lifetimes approximately equal to 1 micron), and 36 percent for gamma = 1 (GR lifetimes approximately equal to 100 microns). The method of optimization and the values of the physical and optical parameters are discussed.
Improved thermal storage material for portable life support systems
NASA Technical Reports Server (NTRS)
Kellner, J. D.
1975-01-01
The availability of thermal storage materials that have heat absorption capabilities substantially greater than water-ice in the same temperature range would permit significant improvements in performance of projected portable thermal storage cooling systems. A method for providing increased heat absorption by the combined use of the heat of solution of certain salts and the heat of fusion of water-ice was investigated. This work has indicated that a 30 percent solution of potassium bifluoride (KHF2) in water can absorb approximately 52 percent more heat than an equal weight of water-ice, and approximately 79 percent more heat than an equal volume of water-ice. The thermal storage material can be regenerated easily by freezing; however, a lower temperature must be used, 261 K as compared to 273 K for water-ice. This work was conducted by the United Aircraft Research Laboratories as part of a program at Hamilton Standard Division of United Aircraft Corporation under contract to NASA Ames Research Center.
Scintillation analysis of truncated Bessel beams via numerical turbulence propagation simulation.
Eyyuboğlu, Halil T; Voelz, David; Xiao, Xifeng
2013-11-20
Scintillation aspects of truncated Bessel beams propagated through atmospheric turbulence are investigated using a numerical wave optics random phase screen simulation method. On-axis, aperture averaged scintillation and scintillation relative to a classical Gaussian beam of equal source power and scintillation per unit received power are evaluated. It is found that in almost all circumstances studied, the zeroth-order Bessel beam will deliver the lowest scintillation. Low aperture averaged scintillation levels are also observed for the fourth-order Bessel beam truncated by a narrower source window. When assessed relative to the scintillation of a Gaussian beam of equal source power, Bessel beams generally have less scintillation, particularly at small receiver aperture sizes and small beam orders. Upon including in this relative performance measure the criteria of per unit received power, this advantageous position of Bessel beams mostly disappears, but zeroth- and first-order Bessel beams continue to offer some advantage for relatively smaller aperture sizes, larger source powers, larger source plane dimensions, and intermediate propagation lengths.
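The random phase screen method mentioned here is commonly implemented by FFT-filtering complex Gaussian noise with a Kolmogorov spectrum; the sketch below follows that standard recipe. Normalization conventions vary across references, so the scaling should be treated as an assumption rather than the authors' exact simulation.

```python
import numpy as np

def kolmogorov_phase_screen(n, dx, r0, seed=None):
    """FFT-based random phase screen with a Kolmogorov spectrum (sketch).
    n: grid points per side, dx: grid spacing [m], r0: Fried parameter [m]."""
    rng = np.random.default_rng(seed)
    f = np.fft.fftfreq(n, d=dx)                       # spatial frequencies [1/m]
    fx, fy = np.meshgrid(f, f)
    fr = np.hypot(fx, fy)
    fr[0, 0] = np.inf                                 # suppress undefined DC term
    psd = 0.023 * r0**(-5.0 / 3.0) * fr**(-11.0 / 3.0)  # Kolmogorov phase PSD
    c = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    df = 1.0 / (n * dx)
    screen = np.fft.ifft2(c * np.sqrt(psd) * df) * n * n
    return screen.real                                # phase [rad] on n x n grid
```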
Anisotropic surface acoustic waves in tungsten/lithium niobate phononic crystals
NASA Astrophysics Data System (ADS)
Sun, Jia-Hong; Yu, Yuan-Hai
2018-02-01
Phononic crystals (PnCs) are known for exhibiting acoustic band gaps for different acoustic waves. PnCs have already been applied in surface acoustic wave (SAW) devices as reflective gratings based on these band gaps. In this paper, another important property of PnCs, anisotropic propagation, is studied. PnCs made of circular tungsten films on a lithium niobate substrate were analyzed by the finite element method. Dispersion curves and equal frequency contours of surface acoustic waves in PnCs of various dimensions were calculated to study the anisotropy. Non-circular equal frequency contours and negative refraction of the group velocity were observed. The PnC was then applied as an acoustic lens based on the anisotropic propagation. The trajectory of a SAW passing through the PnC lens was calculated, and SAW transmission was optimized by selecting a proper number of lens layers and applying a tapered PnC. The results showed that the PnC lens can suppress diffraction of surface waves effectively and improve the performance of SAW devices.
Neighboring extremals of dynamic optimization problems with path equality constraints
NASA Technical Reports Server (NTRS)
Lee, A. Y.
1988-01-01
Neighboring extremals of dynamic optimization problems with path equality constraints and with an unknown parameter vector are considered in this paper. With some simplifications, the problem is reduced to solving a linear, time-varying two-point boundary-value problem with integral path equality constraints. A modified backward sweep method is used to solve this problem. Two example problems are solved to illustrate the validity and usefulness of the solution technique.
Beyramysoltan, Samira; Rajkó, Róbert; Abdollahi, Hamid
2013-08-12
The results obtained by soft-modeling multivariate curve resolution methods are often not unique and are questionable because of rotational ambiguity. This means that a range of feasible solutions equally fits the experimental data and fulfills the constraints. In the chemometric literature, surveying useful constraints for the reduction of rotational ambiguity is a major challenge for chemometricians. It is worthwhile to study the effects of applying constraints on the reduction of rotational ambiguity, since this can help in choosing which constraints to impose in multivariate curve resolution methods when analyzing data sets. In this work, we have investigated the effect of an equality constraint on decreasing the rotational ambiguity. For the calculation of all feasible solutions consistent with a known spectrum, a novel systematic grid search method based on Species-based Particle Swarm Optimization is proposed for a three-component system. Copyright © 2013 Elsevier B.V. All rights reserved.
Adaptive sigmoid function bihistogram equalization for image contrast enhancement
NASA Astrophysics Data System (ADS)
Arriaga-Garcia, Edgar F.; Sanchez-Yanez, Raul E.; Ruiz-Pinales, Jose; Garcia-Hernandez, Ma. de Guadalupe
2015-09-01
Contrast enhancement plays a key role in a wide range of applications including consumer electronic applications, such as video surveillance, digital cameras, and televisions. The main goal of contrast enhancement is to increase the quality of images. However, most state-of-the-art methods induce different types of distortion such as intensity shift, wash-out, noise, intensity burn-out, and intensity saturation. In addition, in consumer electronics, simple and fast methods are required in order to be implemented in real time. A bihistogram equalization method based on adaptive sigmoid functions is proposed. It consists of splitting the image histogram into two parts that are equalized independently by using adaptive sigmoid functions. In order to preserve the mean brightness of the input image, the parameter of the sigmoid functions is chosen to minimize the absolute mean brightness metric. Experiments on the Berkeley database have shown that the proposed method improves the quality of images and preserves their mean brightness. An application to improve the colorfulness of images is also presented.
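Plain bi-histogram equalization, on which the paper's adaptive-sigmoid variant builds, splits the histogram at the mean and equalizes each half into its own output range, which approximately preserves mean brightness. The sketch below shows only that generic baseline; the sigmoid mapping and the AMBE-minimizing parameter choice are not reproduced.

```python
import numpy as np

def bihistogram_equalize(img):
    """Mean-split bi-histogram equalization for an 8-bit image (sketch):
    each sub-histogram is equalized into its own half of the range."""
    img = np.asarray(img, dtype=np.uint8)
    m = int(img.mean())
    out = np.empty_like(img)
    for lo, hi, mask in ((0, m, img <= m), (m + 1, 255, img > m)):
        vals = img[mask]
        if vals.size == 0:
            continue
        hist, _ = np.histogram(vals, bins=hi - lo + 1, range=(lo, hi + 1))
        cdf = np.cumsum(hist) / vals.size            # sub-histogram CDF
        lut = (lo + cdf * (hi - lo)).astype(np.uint8)  # map into [lo, hi]
        out[mask] = lut[vals - lo]
    return out
```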
Stability and Phase Noise Tests of Two Cryo-Cooled Sapphire Oscillators
NASA Technical Reports Server (NTRS)
Dick, G. John; Wang, Rabi T.
1998-01-01
A cryocooled Compensated Sapphire Oscillator (CSO), developed for the Cassini Ka-band Radio Science experiment and operating in the 8 K - 10 K temperature range, was previously demonstrated to show ultra-high stability of σ_y = 2.5 × 10^-15 for measuring times 200 s ≤ τ ≤ 600 s using a hydrogen maser as reference. We present here test results for a second unit, which allows CSO short-term stability and phase noise to be measured for the first time. Also included are design details of a new RF receiver and an intercomparison with the first CSO unit. Cryogenic oscillators operating below about 10 K offer the highest possible short-term stability of any frequency sources. However, their use has so far been restricted to research environments due to the limited operating periods associated with liquid helium consumption. The cryocooled CSO is being built in support of the Cassini Ka-band Radio Science experiment and is designed to operate continuously for periods of a year or more. Performance targets are a stability of 3-4 × 10^-15 (1 s ≤ τ ≤ 100 s) and phase noise of -73 dB/Hz at 1 Hz measured at 34 GHz. Installation in 5 stations of NASA's deep space network (DSN) is planned in the years 2000 - 2002. In the previous tests, the actual stability of the CSO for measuring times τ ≤ 200 s could not be directly measured, being masked by short-term fluctuations of the H-maser reference. Excellent short-term performance, however, could be inferred from the success of an application of the CSO as local oscillator (L.O.) to the JPL LITS passive atomic standard, where medium-term stability showed no degradation due to L.O. instabilities at a level of σ_y = 3 × 10^-14/√τ. A second CSO has now been constructed, and all cryogenic aspects have been verified, including a resonator turn-over temperature of 7.907 K and a Q of 7.4 × 10^8. These values compare to a turn-over of 8.821 K and a Q of 1.0 × 10^9 for the first resonator. Operation of this second unit provides a capability to directly verify for the first time the short-term (1 s ≤ τ ≤ 200 s) stability and the phase noise of the CSO units. The RF receiver used in earlier tests was sufficient to meet Cassini requirements for τ ≥ 10 s but had short-term stability limited to 2-4 × 10^-14 at τ = 1 s, a value 10 times too high to meet our requirements. A new low-noise receiver has been designed to provide approximately 10^-15 performance at 1 s, and one receiver is now operational, demonstrating again short-term CSO performance with H-maser-limited stability. Short-term performance was degraded in the old receiver due to insufficient tuning bandwidth in a 100 MHz quartz VCO that was frequency-locked to the cryogenic sapphire resonator. The new receivers are designed for sufficient bandwidth, loop gain, and low noise to achieve the required performance.
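The σ_y(τ) figures quoted here are Allan deviations; a minimal non-overlapping estimator for fractional-frequency data is sketched below (production stability analyses typically use overlapping estimators and confidence intervals, which this sketch omits).

```python
import numpy as np

def allan_deviation(y, m):
    """Non-overlapping Allan deviation at averaging factor m (tau = m*tau0):
    sigma_y^2(tau) = 0.5 * <(ybar_{k+1} - ybar_k)^2>, with ybar the
    tau-averaged fractional-frequency bins."""
    ybar = y[:(len(y) // m) * m].reshape(-1, m).mean(axis=1)
    return np.sqrt(0.5 * np.mean(np.diff(ybar) ** 2))
```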
Snapp-Childs, Winona; Fath, Aaron J; Watson, Carol A; Flatters, Ian; Mon-Williams, Mark; Bingham, Geoffrey P
2015-10-01
Many children have difficulty producing movements well enough to improve in perceptuo-motor learning. We have developed a training method that supports active movement generation to allow improvement in a 3D tracing task requiring good compliance control. We previously tested 7-8 year old children who exhibited poor performance and performance differences before training. After training, performance was significantly improved and performance differences were eliminated. According to the Dynamic Systems Theory of development, appropriate support can enable younger children to acquire the ability to perform like older children. In the present study, we compared 7-8 and 10-12 year old school children and predicted that younger children would show reduced performance that was nonetheless amenable to training. Indeed, the pre-training performance of the 7-8 year olds was worse than that of the 10-12 year olds, but post-training performance was equally good for both groups. This was similar to previous results found using this training method for children with DCD (developmental coordination disorder) and age-matched typically developing children. We also found in a previous study of 7-8 year old school children that training in the 3D tracing task transferred to a 2D drawing task. We now found similar transfer for the 10-12 year olds. Copyright © 2015 Elsevier B.V. All rights reserved.
Premanath, M.; Raghunath, M.
2010-01-01
Background: Peripheral Arterial Disease (PAD) remains the least recognized form of atherosclerosis. The Ankle-Brachial Index (ABI) has emerged as one of the potent markers of diffuse atherosclerosis, cardiovascular (CV) risk, and overall survival in the general public, especially in diabetics. An important reason for the lack of early diagnosis is the non-availability of a test that is easy to perform and inexpensive, with no training required. Objectives: To evaluate the oscillometric method of performing ABI with regard to its usefulness in detecting PAD cases and to correlate the signs and symptoms with ABI. Materials and Methods: Two hundred diabetics with diabetes of varying duration, attending the clinic over a period of eight months from August 2006 to April 2007, were evaluated for signs, symptoms, and risk factors. ABI was performed using the oscillometric method. The positives were confirmed by Doppler evaluation. An equal number of age- and sex-matched controls, which were ABI negative, were also assessed by Doppler. Sensitivity and specificity were determined. Results: There were 120 males and 80 females. Twelve males (10%) and six females (7.5%) were ABI positive. On Doppler, eleven males (91.5%) and three females (50%) were true positives. There were six false negatives from the controls (three each). The sensitivity was 70% and specificity was 75%. Symptoms and signs correlated well with ABI positives. Hypertension was the most important risk factor. Conclusions: In spite of the limitations, the oscillometric method of performing ABI is a simple procedure that is easy to perform, does not require training, and can be performed as an outpatient procedure not only by doctors but also by paramedical staff to detect more PAD cases. PMID:20535314
77 FR 39117 - Equal Access to Justice Act Implementation Rule
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-29
... regularly perform services for remuneration for the applicant, under the applicant's direction and control... Director may delegate authority to take final action on matters pertaining to the Equal Access to Justice... that the Director's final order issued pursuant to Sec. 1081.405 is final and unappealable, both within...
(abstract) Cryogenic Telescope Test Facility
NASA Technical Reports Server (NTRS)
Luchik, T. S.; Chave, R. G.; Nash, A. E.
1995-01-01
An optical test Dewar is being constructed with the unique capability to test mirrors of diameter ≤ 1 m and f-number ≤ 6, at temperatures from 300 to 4.2 K, with a ZYGO Mark IV interferometer. The design and performance of this facility will be presented.
The nearest neighbor and the bayes error rates.
Loizou, G; Maybank, S J
1987-02-01
The (k, l) nearest neighbor method of pattern classification is compared to the Bayes method. If the two acceptance rates are equal then the asymptotic error rates satisfy the inequalities E_{k,l+1} ≤ E*(λ) ≤ E_{k,l} ≤ dE*(λ), where d is a function of k, l, and the number of pattern classes, and λ is the reject threshold for the Bayes method. An explicit expression for d is given which is optimal in the sense that for some probability distributions E_{k,l} and dE*(λ) are equal.
Interior point techniques for LP and NLP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evtushenko, Y.
By using a surjective mapping, the initial constrained optimization problem is transformed to a problem in a new space with only equality constraints. For the numerical solution of the latter problem we use the generalized gradient-projection method and Newton's method. After inverse transformation to the initial space we obtain a family of numerical methods for solving optimization problems with equality and inequality constraints. In the linear programming case, after some simplification, we obtain Dikin's algorithm, the affine scaling algorithm, and a generalized primal-dual interior point linear programming algorithm.
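Of the algorithms recovered in the linear programming case, the affine scaling iteration is the easiest to sketch; the following is a generic textbook-style version under the assumption of a strictly feasible starting point, not the authors' exact formulation.

```python
import numpy as np

def affine_scaling(A, b, c, x, alpha=0.9, iters=100, tol=1e-8):
    """Dikin-style affine scaling sketch for: min c @ x s.t. A @ x = b, x > 0.
    `x` must be strictly feasible; no presolve or degeneracy handling here."""
    for _ in range(iters):
        D2 = np.diag(x * x)                              # D^2 with D = diag(x)
        w = np.linalg.solve(A @ D2 @ A.T, A @ D2 @ c)    # dual estimate
        r = c - A.T @ w                                  # reduced costs
        if np.linalg.norm(x * r) < tol:                  # scaled optimality test
            break
        dx = -D2 @ r                                     # move direction
        neg = dx < 0
        if not np.any(neg):
            raise ValueError("problem appears unbounded")
        t = alpha * np.min(-x[neg] / dx[neg])            # stay strictly interior
        x = x + t * dx
    return x
```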
Unobtrusive Biometric System Based on Electroencephalogram Analysis
NASA Astrophysics Data System (ADS)
Riera, A.; Soria-Frisch, A.; Caparrini, M.; Grau, C.; Ruffini, G.
2007-12-01
Features extracted from electroencephalogram (EEG) recordings have proved to be unique enough between subjects for biometric applications. We show here that biometry based on these recordings offers a novel way to robustly authenticate or identify subjects. In this paper, we present a rapid and unobtrusive authentication method that only uses 2 frontal electrodes referenced to another one placed at the ear lobe. Moreover, the system makes use of a multistage fusion architecture, which is shown to improve system performance. The performance analysis of the system presented in this paper stems from an experiment with 51 subjects and 36 intruders, where an equal error rate (EER) of 3.4% is obtained, that is, a true acceptance rate (TAR) of 96.6% and a false acceptance rate (FAR) of 3.4%. The obtained performance measures improve upon the results of similar systems presented in earlier work.
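The quoted EER is the operating point where the false acceptance and false rejection rates coincide; a small estimator from genuine and impostor score samples (assuming higher score means more genuine, a convention of this sketch) makes the definition concrete.

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Estimate the EER: sweep thresholds and find where the false
    acceptance rate (impostors at/above threshold) crosses the false
    rejection rate (genuines below threshold)."""
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    ts = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in ts])
    frr = np.array([(genuine < t).mean() for t in ts])
    i = np.argmin(np.abs(far - frr))
    return 0.5 * (far[i] + frr[i]), ts[i]     # EER estimate and threshold
```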
The Flexible Fairness: Equality, Earned Entitlement, and Self-Interest
Gu, Ruolei; Broster, Lucas S.; Shen, Xueyi; Tian, Tengxiang; Luo, Yue-Jia; Krueger, Frank
2013-01-01
The current study explored whether earned entitlement modulated the perception of fairness in three experiments. A preliminary resource earning task was added before players decided how to allocate the resource they jointly earned. Participants’ decision in allocation, their responses to equal or unequal offers, whether advantageous or disadvantageous, and subjective ratings of fairness were all assessed in the current study. Behavioral results revealed that participants proposed more generous offers and showed enhanced tolerance to disadvantageous unequal offers from others when they performed worse than their presumed “partners,” while the reverse was true in the better-performance condition. The subjective ratings also indicated the effect of earned entitlement, such that worse performance was associated with higher perceived feelings of fairness for disadvantageous unequal offers, while better performance was associated with higher feelings of fairness for advantageous unequal offers. Equal offers were considered “fair” only when earned entitlement was even between two parties. In sum, the perception of fairness is modulated by an integration of egalitarian motivation and entitlement. In addition to justice principles, participants were also motivated by self-interest, such that participants placed more weight on entitlement in the better-performance condition than in the worse-performance condition. These results imply that earned entitlement is evaluated in a self-serving way. PMID:24039867
A Free Wake Numerical Simulation for Darrieus Vertical Axis Wind Turbine Performance Prediction
NASA Astrophysics Data System (ADS)
Belu, Radian
2010-11-01
In the last four decades, several aerodynamic prediction models have been formulated for Darrieus wind turbine performance and characteristics. Two families can be identified: stream-tube models and vortex models. The paper presents a simplified numerical technique for simulating vertical axis wind turbine flow, based on lifting line theory and a free vortex wake model, including dynamic stall effects, for predicting the performance of a 3-D vertical axis wind turbine. A vortex model is used in which the wake is composed of trailing stream-wise and shed span-wise vortices, whose strengths are equal to the change in the bound vortex strength as required by the Helmholtz and Kelvin theorems. Performance parameters are computed by application of the Biot-Savart law along with the Kutta-Joukowski theorem and a semi-empirical stall model. We tested the developed model against an adaptation of the earlier multiple stream-tube performance prediction model for Darrieus turbines. Predictions by our method are shown to compare favorably with existing experimental data and the outputs of other numerical models. The method can accurately predict the local and global performance of a vertical axis wind turbine, and can be used in the design and optimization of wind turbines for built-environment applications.
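For context, the Biot-Savart law applied here gives the velocity induced at a field point by a wake filament element dl carrying circulation Γ, with r the vector from the element to the evaluation point:

```latex
d\mathbf{V} \;=\; \frac{\Gamma}{4\pi}\,
\frac{d\boldsymbol{\ell} \times \mathbf{r}}{\lvert \mathbf{r} \rvert^{3}}
```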
Maximum Constrained Directivity of Oversteered End-Fire Sensor Arrays
Trucco, Andrea; Traverso, Federico; Crocco, Marco
2015-01-01
For linear arrays with fixed steering and an inter-element spacing smaller than one half of the wavelength, end-fire steering of a data-independent beamformer offers better directivity than broadside steering. The introduction of a lower bound on the white noise gain ensures the necessary robustness against random array errors and sensor mismatches. However, the optimum broadside performance can be obtained using a simple processing architecture, whereas the optimum end-fire performance requires a more complicated system (because complex weight coefficients are needed). In this paper, we reconsider the oversteering technique as a possible way to simplify the processing architecture of equally spaced end-fire arrays. We propose a method for computing the amount of oversteering and the related real-valued weight vector that allows the constrained directivity to be maximized for a given inter-element spacing. Moreover, we verify that the maximized oversteering performance is very close to the optimum end-fire performance. We conclude that optimized oversteering is a viable method for designing end-fire arrays that have better constrained directivity than broadside arrays but with a similar implementation complexity. A numerical simulation is used to perform a statistical analysis, which confirms that the maximized oversteering performance is robust against sensor mismatches. PMID:26066987
Demultiplexing based on frequency-domain joint decision MMA for MDM system
NASA Astrophysics Data System (ADS)
Caili, Gong; Li, Li; Guijun, Hu
2016-06-01
In this paper, we propose a demultiplexing method based on a frequency-domain joint decision multi-modulus algorithm (FD-JDMMA) for mode division multiplexing (MDM) systems. The performance of FD-JDMMA is compared with the frequency-domain multi-modulus algorithm (FD-MMA) and the frequency-domain least mean square (FD-LMS) algorithm. The simulation results show that FD-JDMMA outperforms FD-MMA in terms of BER and convergence speed in the cases of mQAM (m=4, 16 and 64) formats. It is also demonstrated that FD-JDMMA achieves better BER performance and converges faster than FD-LMS in the cases of 16QAM and 64QAM. Furthermore, FD-JDMMA maintains computational complexity similar to that of both equalization algorithms.
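For orientation, a single time-domain multi-modulus (MMA) tap update is sketched below. The paper's FD-JDMMA operates block-wise in the frequency domain and adds a joint-decision term, neither of which is reproduced here, and sign/conjugation conventions vary across references.

```python
import numpy as np

def mma_step(w, x, mu, R):
    """One time-domain MMA update (generic sketch).
    w: equalizer taps; x: input samples aligned with w;
    mu: step size; R: target modulus for each quadrature rail."""
    z = np.vdot(w, x)                                  # output z = w^H x
    e = (z.real * (z.real ** 2 - R ** 2)
         + 1j * z.imag * (z.imag ** 2 - R ** 2))       # split-modulus error
    return w - mu * np.conj(e) * x                     # stochastic-gradient step
```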
Two-stage energy storage equalization system for lithium-ion battery pack
NASA Astrophysics Data System (ADS)
Chen, W.; Yang, Z. X.; Dong, G. Q.; Li, Y. B.; He, Q. Y.
2017-11-01
How to raise the efficiency of energy storage and maximize storage capacity is a core problem in current energy storage management. To that end, a two-stage energy storage equalization system, containing a two-stage equalization topology and a control strategy based on a symmetric multi-winding transformer and a DC-DC (direct current-direct current) converter, is proposed with bidirectional active equalization theory, in order to realize consistent voltages across lithium-ion battery packs and across cells inside packs by using a range-based method. Modeling analysis demonstrates that the voltage dispersion of lithium-ion battery packs and of cells inside packs can be kept within 2 percent during charging and discharging. Equalization time was 0.5 ms, a 33.3 percent reduction compared with the DC-DC converter approach. Therefore, the proposed two-stage lithium-ion battery equalization system can achieve maximum storage capacity across lithium-ion battery packs and cells inside packs, while the efficiency of energy storage is significantly improved.
Hurst Estimation of Scale Invariant Processes with Stationary Increments and Piecewise Linear Drift
NASA Astrophysics Data System (ADS)
Modarresi, N.; Rezakhah, S.
The characteristic feature of discrete scale invariant (DSI) processes is the invariance of their finite-dimensional distributions under dilation by a certain scaling factor. A DSI process with piecewise linear drift and stationary increments inside prescribed scale intervals is introduced and studied. To identify the structure of the process, we first determine the scale intervals and their linear drifts and eliminate them. Then, a new method for the estimation of the Hurst parameter of such DSI processes is presented and applied to some period of the Dow Jones indices. This method is based on a fixed number of equally spaced samples inside successive scale intervals. We also present an efficient method for estimating the Hurst parameter of self-similar processes with stationary increments. We compare the performance of this method with the celebrated FA, DFA and DMA methods on simulated data of fractional Brownian motion (fBm).
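Among the comparison baselines, DFA is the easiest to sketch: integrate the series, detrend within windows of increasing size, and read the Hurst exponent off the log-log slope of the fluctuation function. The version below is plain DFA-1 with illustrative window sizes, not the paper's estimator.

```python
import numpy as np

def dfa_hurst(x, scales):
    """DFA-1 Hurst estimate (sketch). `scales` is a list of window sizes,
    e.g. [16, 32, 64, 128, 256], each much smaller than len(x)."""
    y = np.cumsum(x - np.mean(x))                 # integrated profile
    F = []
    for s in scales:
        n = len(y) // s
        segs = y[:n * s].reshape(n, s)
        t = np.arange(s)
        res = [np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2)
               for seg in segs]                   # per-window linear detrend
        F.append(np.sqrt(np.mean(res)))           # fluctuation function F(s)
    h, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return h                                      # slope estimates Hurst exponent
```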
Katsarov, Plamen; Gergov, Georgi; Alin, Aylin; Pilicheva, Bissera; Al-Degs, Yahya; Simeonov, Vasil; Kassarova, Margarita
2018-03-01
The predictive power of partial least squares (PLS) and multivariate curve resolution-alternating least squares (MCR-ALS) methods has been studied for simultaneous quantitative analysis of the binary drug combination doxylamine succinate and pyridoxine hydrochloride. Analysis of first-order UV overlapped spectra was performed using different PLS models - classical PLS1 and PLS2 as well as partial robust M-regression (PRM). These linear models were compared to MCR-ALS with equality and correlation constraints (MCR-ALS-CC). All techniques operated within the full spectral region and extracted maximum information for the drugs analysed. The developed chemometric methods were validated on external sample sets and were applied to the analyses of pharmaceutical formulations. The obtained statistical parameters were satisfactory for both calibration and validation sets. All developed methods can be successfully applied for the simultaneous spectrophotometric determination of doxylamine and pyridoxine, both in laboratory-prepared mixtures and in commercial dosage forms.
Saturated Salt Solution Method: A Useful Cadaver Embalming for Surgical Skills Training
Hayashi, Shogo; Homma, Hiroshi; Naito, Munekazu; Oda, Jun; Nishiyama, Takahisa; Kawamoto, Atsuo; Kawata, Shinichi; Sato, Norio; Fukuhara, Tomomi; Taguchi, Hirokazu; Mashiko, Kazuki; Azuhata, Takeo; Ito, Masayuki; Kawai, Kentaro; Suzuki, Tomoya; Nishizawa, Yuji; Araki, Jun; Matsuno, Naoto; Shirai, Takayuki; Qu, Ning; Hatayama, Naoyuki; Hirai, Shuichi; Fukui, Hidekimi; Ohseto, Kiyoshige; Yukioka, Tetsuo; Itoh, Masahiro
2014-01-01
This article evaluates the suitability of cadavers embalmed by the saturated salt solution (SSS) method for surgical skills training (SST). SST courses using cadavers have been performed to advance a surgeon's techniques without any risk to patients. One important factor for improving SST is the suitability of specimens, which depends on the embalming method. In addition, the infectious risk and cost involved in using cadavers are problems that need to be solved. Six cadavers were embalmed by 3 methods: formalin solution, Thiel solution (TS), and SSS methods. Bacterial and fungal culture tests and measurement of ranges of motion were conducted for each cadaver. Fourteen surgeons evaluated the 3 embalming methods and 9 SST instructors (7 trauma surgeons and 2 orthopedists) operated on the cadavers using 21 procedures. In addition, ultrasonography, central venous catheterization, and incision with cauterization followed by autosuture stapling were performed in some cadavers. The SSS method had a sufficient antibiotic effect and produced cadavers with flexible joints and a high tissue quality suitable for SST. The surgeons evaluated the cadavers embalmed by the SSS method as nearly equal to those embalmed by the TS method. Ultrasound images were clear in the cadavers embalmed by both methods. Central venous catheterization could be performed in a cadaver embalmed by the SSS method and then be confirmed by x-ray. Lungs and intestines could be incised with cauterization and autosuture stapling in the cadavers embalmed by the TS and SSS methods. Cadavers embalmed by the SSS method are sufficiently useful for SST. This method is simple, carries a low infectious risk, and is relatively low cost, enabling a wider use of cadavers for SST. PMID:25501070
Folksonomical P2P File Sharing Networks Using Vectorized KANSEI Information as Search Tags
NASA Astrophysics Data System (ADS)
Ohnishi, Kei; Yoshida, Kaori; Oie, Yuji
We present the concept of folksonomical peer-to-peer (P2P) file sharing networks that allow participants (peers) to freely assign structured search tags to files. These networks are similar to folksonomies in the present Web from the point of view that users assign search tags to information distributed over a network. As a concrete example, we consider an unstructured P2P network using vectorized Kansei (human sensitivity) information as structured search tags for file search. Vectorized Kansei information as search tags indicates what participants feel about their files and is assigned by the participant to each of their files. A search query also has the same form of search tags and indicates what participants want to feel about files that they will eventually obtain. A method that enables file search using vectorized Kansei information is the Kansei query-forwarding method, which probabilistically propagates a search query to peers that are likely to hold more files having search tags that are similar to the query. The similarity between the search query and the search tags is measured in terms of their dot product. The simulation experiments examine whether the Kansei query-forwarding method can provide equal search performance for all peers in a network in which only the Kansei information and the tendency with respect to file collection differ among the peers. The simulation results show that the Kansei query-forwarding method and a random-walk-based query-forwarding method, used for comparison, work effectively in different situations and are complementary. Furthermore, the Kansei query-forwarding method is shown, through simulations, to be superior or equal to the random-walk-based one in terms of search speed.
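The dot-product forwarding rule can be made concrete in a few lines: score each neighbor by the similarity between the query vector and an aggregate tag vector for that neighbor, then forward in proportion to the scores. The per-neighbor aggregate vector, the clipping at zero, and the uniform fallback are all assumptions of this sketch.

```python
import numpy as np

def forwarding_probabilities(query, neighbor_tag_vectors):
    """Probability of forwarding the query to each neighbor, proportional
    to the (clipped) dot product between query and neighbor tag vectors."""
    s = np.array([max(0.0, float(np.dot(query, t))) for t in neighbor_tag_vectors])
    total = s.sum()
    return s / total if total > 0 else np.full(len(s), 1.0 / len(s))
```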
Improving performance of channel equalization in RSOA-based WDM-PON by QR decomposition.
Li, Xiang; Zhong, Wen-De; Alphones, Arokiaswami; Yu, Changyuan; Xu, Zhaowen
2015-10-19
In reflective semiconductor optical amplifier (RSOA)-based wavelength division multiplexed passive optical network (WDM-PON), the bit rate is limited by the low modulation bandwidth of RSOAs. To overcome this limitation, we apply QR decomposition in the channel equalizer (QR-CE) to achieve successive interference cancellation (SIC) for the discrete Fourier transform spreading orthogonal frequency division multiplexing (DFT-S OFDM) signal. Using an RSOA with a 3-dB modulation bandwidth of only ~800 MHz, we experimentally demonstrate a 15.5-Gb/s over 20-km SSMF DFT-S OFDM transmission with QR-CE. The experimental results show that DFT-S OFDM with QR-CE attains much better BER performance than DFT-S OFDM and OFDM with conventional channel equalizers. The impacts of several parameters on QR-CE are investigated. It is found that 2 sub-bands in one OFDM symbol and 1 pilot in each sub-band are sufficient to achieve optimal performance and maintain high spectral efficiency.
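QR-based successive interference cancellation, the core idea behind QR-CE, can be illustrated in generic matrix-channel form: factor the channel as H = QR, rotate the observation, and detect symbols by back-substitution with slicing. This is a textbook zero-forcing SIC sketch, not the paper's exact DFT-S OFDM receiver.

```python
import numpy as np

def qr_sic_detect(H, y, constellation):
    """Zero-forcing SIC via QR (sketch): with y = H x + n and H = Q R,
    z = Q^H y is upper triangular in x, so detect from the last symbol
    upward, cancelling already-decided symbols as we go."""
    Q, R = np.linalg.qr(H)
    z = Q.conj().T @ y
    n = H.shape[1]
    x_hat = np.zeros(n, dtype=complex)
    for i in range(n - 1, -1, -1):
        s = (z[i] - R[i, i + 1:] @ x_hat[i + 1:]) / R[i, i]   # cancel & equalize
        x_hat[i] = constellation[np.argmin(np.abs(constellation - s))]  # slice
    return x_hat

# Example alphabet: unit-energy QPSK symbols
QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
```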
New public QSAR model for carcinogenicity
2010-01-01
Background: One of the main goals of the new chemical regulation REACH (Registration, Evaluation and Authorization of Chemicals) is to fill the gaps in data concerning properties of chemicals affecting human health. (Q)SAR models are accepted as a suitable source of information. The EU-funded CAESAR project aimed to develop models for prediction of 5 endpoints for regulatory purposes. Carcinogenicity is one of the endpoints under consideration. Results: Models for prediction of carcinogenic potency according to specific requirements of the chemical regulation were developed. A dataset of 805 non-congeneric chemicals extracted from the Carcinogenic Potency Database (CPDBAS) was used. A Counter-Propagation Artificial Neural Network (CP ANN) algorithm was implemented. In the article two alternative models for predicting carcinogenicity are described. The first model employed eight MDL descriptors (model A) and the second one twelve Dragon descriptors (model B). CAESAR's models have been assessed according to the OECD principles for the validation of QSAR. For model validity we used a wide series of statistical checks. Models A and B yielded an accuracy on the training set (644 compounds) equal to 91% and 89%, respectively; the accuracy on the test set (161 compounds) was 73% and 69%, while the specificity was 69% and 61%, respectively. Sensitivity in both cases was equal to 75%. The accuracy of the leave-20%-out cross validation for the training set of models A and B was equal to 66% and 62%, respectively. To verify whether the models perform correctly on new compounds, external validation was carried out. The external test set was composed of 738 compounds. We obtained an accuracy of external validation equal to 61.4% and 60.0%, sensitivity of 64.0% and 61.8%, and specificity equal to 58.9% and 58.4%, respectively, for models A and B. Conclusion: Carcinogenicity is a particularly important endpoint and it is expected that QSAR models will not replace human experts' opinions and conventional methods. However, we believe that a combination of several methods will provide useful support to the overall evaluation of carcinogenicity. In the present paper, models for classification of carcinogenic compounds using MDL and Dragon descriptors were developed. The models could be used to set priorities among chemicals for further testing. The models at the CAESAR site were implemented in Java and are publicly accessible. PMID:20678182
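The reported figures follow the usual confusion-matrix definitions; for clarity, a tiny helper (a sketch, with counts to be supplied by the reader):

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity, and specificity from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)     # recall on positives (carcinogens)
    specificity = tn / (tn + fp)     # recall on negatives (non-carcinogens)
    return accuracy, sensitivity, specificity
```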
Yeun, Eun Ja; Kwon, Hye Jin; Kim, Hyun Jeong
2012-06-01
This study was done to identify the awareness of gender equality among nursing college students, and to provide basic data for educational solutions and desirable directions. A Q-methodology, which provides a method of analyzing the subjectivity of each item, was used. Thirty-four selected Q-statements from each of 20 female nursing college students were classified into a normal distribution shape using a 9-point scale. Subjectivity regarding gender equality was analyzed with the pc-QUANL program. Four types of awareness of gender equality in nursing college students were identified. The name for type I was 'pursuit of androgyny'; for type II, 'difference-recognition'; for type III, 'human-relationship emphasis'; and for type IV, 'social-system emphasis'. The results of this study indicate that different approaches to educational programs on gender equality are recommended for nursing college students based on the four types of gender equality awareness.
Meta‐analysis of test accuracy studies using imputation for partial reporting of multiple thresholds
Deeks, J.J.; Martin, E.C.; Riley, R.D.
2017-01-01
Introduction: For tests reporting continuous results, primary studies usually provide test performance at multiple but often different thresholds. This creates missing data when performing a meta‐analysis at each threshold. A standard meta‐analysis (no imputation [NI]) ignores such missing data. A single imputation (SI) approach was recently proposed to recover missing threshold results. Here, we propose a new method that performs multiple imputation of the missing threshold results using discrete combinations (MIDC). Methods: The new MIDC method imputes missing threshold results by randomly selecting from the set of all possible discrete combinations which lie between the results for 2 known bounding thresholds. Imputed and observed results are then synthesised at each threshold. This is repeated multiple times, and the multiple pooled results at each threshold are combined using Rubin's rules to give final estimates. We compared the NI, SI, and MIDC approaches via simulation. Results: Both imputation methods outperform the NI method in simulations. There was generally little difference in the SI and MIDC methods, but the latter was noticeably better in terms of estimating the between‐study variances and generally gave better coverage, due to slightly larger standard errors of pooled estimates. Given selective reporting of thresholds, the imputation methods also reduced bias in the summary receiver operating characteristic curve. Simulations demonstrate the imputation methods rely on an equal threshold spacing assumption. A real example is presented. Conclusions: The SI and, in particular, MIDC methods can be used to examine the impact of missing threshold results in meta‐analysis of test accuracy studies. PMID:29052347
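Rubin's rules, used above to combine the multiply imputed results, pool the per-imputation estimates and inflate the variance by a between-imputation term; a minimal sketch:

```python
import numpy as np

def rubins_rules(estimates, variances):
    """Combine M imputed analyses: the pooled estimate is the mean, and
    total variance = within-imputation + (1 + 1/M) * between-imputation."""
    q = np.asarray(estimates, float)   # point estimate from each imputation
    u = np.asarray(variances, float)   # variance estimate from each imputation
    M = len(q)
    qbar = q.mean()                                   # pooled point estimate
    total_var = u.mean() + (1 + 1 / M) * q.var(ddof=1)
    return qbar, total_var
```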
Equality and Quality in Education. A Comparative Study of 19 Countries
Pfeffer, Fabian T.
2015-01-01
This contribution assesses the performance of national education systems along two important dimensions: The degree to which they help individuals develop capabilities necessary for their successful social integration (educational quality) and the degree to which they confer equal opportunities for social advancement (educational equality). It advances a new conceptualization to measure quality and equality in education and then uses it to study the relationship between institutional differentiation and these outcomes. It relies on data on final educational credentials and literacy among adults that circumvent some of the under-appreciated conceptual challenges entailed in the widespread analysis of international student assessment data. The analyses reveal a positive relationship between educational quality and equality and show that education systems with a lower degree of institutional differentiation not only provide more educational equality but are also marked by higher levels of educational quality. While the latter association is partly driven by other institutional and macro-structural factors, I demonstrate that the higher levels of educational equality in less differentiated education systems do not entail an often-assumed trade-off for lower quality. PMID:25769872
Caesarean Section—A Density-Equalizing Mapping Study to Depict Its Global Research Architecture
Brüggmann, Dörthe; Löhlein, Lena-Katharina; Louwen, Frank; Quarcoo, David; Jaque, Jenny; Klingelhöfer, Doris; Groneberg, David A.
2015-01-01
Caesarean section (CS) is a common surgical procedure. Although it has been performed in a modern context for about 100 years, there is no concise analysis of the international architecture of caesarean section research output available so far. Therefore, the present study characterizes the global pattern of the related publications by using the NewQIS (New Quality and Quantity Indices in Science) platform, which combines scientometric methods with density equalizing mapping algorithms. The Web of Science was used as a database. 12,608 publications were identified that originated from 131 countries. The leading nations concerning research activity, overall citations and country-specific h-Index were the USA and the United Kingdom. Relation of the research activity to epidemiologic data indicated that Scandinavian countries including Sweden and Finland were leading the field, whereas, in relation to economic data, countries such as Israel and Ireland led. Semi-qualitative indices such as country-specific citation rates ranked Sweden, Norway and Finland in the top positions. International caesarean section research output continues to grow annually in an era where caesarean section rates increased dramatically over the past decades. With regard to increasing employment of scientometric indicators in performance assessment, these findings should provide useful information for those tasked with the improvement of scientific achievements. PMID:26593932
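As background for the country-specific h-index used above, a minimal sketch of its computation (citation counts illustrative):

```python
def h_index(citations):
    """Largest h such that at least h publications have >= h citations."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

print(h_index([25, 8, 5, 3, 3, 1]))  # -> 3 (illustrative citation counts)
```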
Gupta, Madhu; Shri, Iti; Sakia, Prashant; Govil, Deepika
2015-01-01
Background and Aims: At equal minimum alveolar concentration (MAC), volatile agents may produce different bispectral index (BIS) values, especially at low BIS levels where the effect is agent specific. The present study was performed to compare the BIS values produced by sevoflurane and isoflurane at equal MAC and thereby assess which is the better hypnotic agent. Methods: Sixty American Society of Anaesthesiologists I and II patients undergoing elective mastoidectomy were divided into groups receiving either isoflurane or sevoflurane at equi-MAC. BIS values were measured during both the wash-in and wash-out phases, keeping other parameters the same. Statistical analysis was performed using the Friedman two-way analysis and the Mann-Whitney U-test; P < 0.05 was considered significant. Results: BIS values were significantly lower with sevoflurane at all MAC values compared to isoflurane, except at the beginning and at MAC-awake. Both drugs proved to be cardiostable. Conclusion: At equi-MAC, sevoflurane produces lower BIS values during both the wash-in and wash-out phases compared to isoflurane, probably reflecting an agent-specific effect and a deficiency of the BIS algorithm for certain agents and their interplay. PMID:25788739
Efficient Computing Budget Allocation for Finding Simplest Good Designs
Jia, Qing-Shan; Zhou, Enlu; Chen, Chun-Hung
2012-01-01
In many applications, some designs are easier to implement, require less training data and shorter training time, and consume less storage than others. Such designs are called simple designs and are usually preferred over complex ones when both perform well. Despite abundant existing studies on how to find good designs in simulation-based optimization (SBO), few studies address finding the simplest good designs. We consider this important problem in this paper and make the following contributions. First, we provide lower bounds on the probabilities of correctly selecting the m simplest designs with top performance, and of selecting the best m such simplest good designs, respectively. Second, we develop two efficient computing budget allocation methods, one to find the m simplest good designs and one to find the best m such designs, and show their asymptotic optimality. Third, we compare the performance of the two methods against equal allocation on 6 academic examples and a smoke-detection problem in wireless sensor networks. We hope this work brings insight into finding the simplest good designs in general. PMID:23687404
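The allocation methods build on the optimal computing budget allocation (OCBA) framework; as background, a minimal NumPy sketch of the classic OCBA ratios for selecting a single best design, not the paper's simplest-design variants:

```python
import numpy as np

def ocba_ratios(means, stds):
    """Classic OCBA allocation ratios for selecting the minimum-mean
    design: designs close to the current best and with noisy estimates
    receive a larger share of the simulation budget."""
    means = np.asarray(means, dtype=float)
    stds = np.asarray(stds, dtype=float)
    b = int(np.argmin(means))                 # current best design
    others = np.arange(len(means)) != b
    ratio = np.empty_like(means)
    delta = means[others] - means[b]          # performance gaps to best
    ratio[others] = (stds[others] / delta) ** 2
    ratio[b] = stds[b] * np.sqrt(np.sum(ratio[others] ** 2 / stds[others] ** 2))
    return ratio / ratio.sum()                # fraction of budget per design

print(ocba_ratios(means=[1.0, 1.2, 2.0], stds=[0.3, 0.3, 0.3]))
```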
Behavioral assessment of adaptive feedback equalization in a digital hearing aid.
French-St George, M; Wood, D J; Engebretson, A M
1993-01-01
An evaluation was made of the efficacy of a digital feedback equalization algorithm employed by the Central Institute for the Deaf Wearable Adaptive Digital Hearing Aid. Three questions were addressed: 1) Does acoustic feedback limit gain adjustments made by hearing aid users? 2) Does feedback equalization permit users with hearing impairment to select more gain without feedback? and 3) If more gain is used when feedback equalization is active, does word identification performance improve? Nine subjects with hearing impairment participated in the study. Results suggest that listeners with hearing impairment are indeed limited by acoustic feedback when listening to soft speech (55 dB(A)) in quiet. The average listener used an additional 4 dB of gain when feedback equalization was active. This additional gain resulted in an average 10 rationalized arcsine units (RAU) improvement in word identification score.
Performance Analysis of Hybrid PON (WDM-TDM) with Equal and Unequal Channel Spacing
NASA Astrophysics Data System (ADS)
Sharma, Ramandeep; Dewra, Sanjeev; Rani, Aruna
2016-06-01
In this paper, a hybrid WDM-TDM PON is evaluated by comparing downstream wavelengths with equal and unequal channel spacing at 5 Gbit/s per wavelength in a triple-play scenario with 128 optical network units (ONUs). The triple-play services (data, voice, and video signals) are transmitted in the downstream direction over up to 50 km with a Q factor of 6.68 and a BER of 3.64e-012 using unequal channel spacing, and over up to 45 km with a Q factor of 6.33 and a BER of 2.40e-011 using equal channel spacing. It is observed that downstream wavelengths with unequal channel spacing provide better results than equal channel spacing.
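The reported Q factors and BERs are roughly consistent with the standard Gaussian-noise approximation BER = 0.5·erfc(Q/√2); a minimal check (the simulator's BER estimates differ somewhat from this textbook formula):

```python
from math import erfc, sqrt

def q_to_ber(q):
    """Gaussian-noise approximation linking Q factor to bit error rate."""
    return 0.5 * erfc(q / sqrt(2))

print(f"Q = 6.68 -> BER ~ {q_to_ber(6.68):.2e}")  # ~1.2e-11
print(f"Q = 6.33 -> BER ~ {q_to_ber(6.33):.2e}")  # ~1.2e-10
```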
NASA Astrophysics Data System (ADS)
Zargari Khuzani, Abolfazl; Danala, Gopichandh; Heidari, Morteza; Du, Yue; Mashhadi, Najmeh; Qiu, Yuchen; Zheng, Bin
2018-02-01
Higher recall rates are a major challenge in mammography screening. Thus, developing a computer-aided diagnosis (CAD) scheme to classify between malignant and benign breast lesions can play an important role in improving the efficacy of mammography screening. The objective of this study is to develop and test a unique image feature fusion framework to improve performance in classifying suspicious mass-like breast lesions depicted on mammograms. The image dataset consists of 302 suspicious masses detected on both craniocaudal and mediolateral-oblique view images; 151 were malignant and 151 were benign. The study consists of the following three image processing and feature analysis steps. First, an adaptive region growing segmentation algorithm was used to automatically segment mass regions. Second, a set of 70 image features related to the spatial and frequency characteristics of the mass regions was initially computed. Third, a generalized linear regression model (GLM) based machine learning classifier combined with a bat optimization algorithm was used to optimally fuse the selected image features based on a predefined performance assessment index, the area under the ROC curve (AUC). Applying the CAD scheme to the testing dataset, the AUC was 0.75 ± 0.04, significantly higher than using a single best feature (AUC = 0.69 ± 0.05) or the classifier with equally weighted features (AUC = 0.73 ± 0.05). This study demonstrated that, compared to the conventional equal-weighted approach, an unequal-weighted feature fusion approach has the potential to significantly improve accuracy in classifying between malignant and benign breast masses.
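A minimal sketch of the fusion step under stated assumptions: a penalized logistic regression (a GLM) learns unequal feature weights, standing in here for the paper's bat-optimization weight search; the data are random placeholders, not the mammography features:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical stand-ins for the 302 x 70 mass-feature matrix and the
# malignant (1) / benign (0) labels; random data for illustration only.
rng = np.random.default_rng(42)
X = rng.normal(size=(302, 70))
y = rng.integers(0, 2, size=302)

# L1 penalty drives uninformative feature weights toward zero, giving an
# unequal-weighted fusion of the 70 features.
glm = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
print(cross_val_score(glm, X, y, cv=5, scoring="roc_auc").mean())
```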
Guilhem, Gaël; Cornu, Christophe; Guével, Arnaud
2012-01-01
Context: Resistance exercise training is commonly performed against a constant external load (isotonic) or at a constant velocity (isokinetic). Researchers comparing the effectiveness of isotonic and isokinetic resistance-training protocols need to equalize the mechanical stimulus (work and velocity) applied. Objective: To examine whether the standardization protocol could be adjusted and applied to an eccentric training program. Design: Controlled laboratory study. Setting: Controlled research laboratory. Participants: Twenty-one male sport science students (age = 20.6 ± 1.5 years, height = 178.0 ± 4.0 cm, mass = 74.5 ± 9.1 kg). Intervention: Participants performed 9 weeks of isotonic (n = 11) or isokinetic (n = 10) eccentric training of the knee extensors, designed so that they would perform the same amount of angular work at the same mean angular velocity. Main Outcome Measures: Angular work and angular velocity. Results: The isotonic and isokinetic groups performed the same total amount of work (-185.2 ± 6.5 kJ and -184.4 ± 8.6 kJ, respectively) at the same angular velocity (21 ± 1°/s and 22°/s, respectively) with the same number of repetitions (8.0 in both groups). Bland-Altman analysis (see the sketch below) showed that work (bias = 2.4%) and angular velocity (bias = 0.2%) were equalized over the 9 weeks between the two training modes. Conclusions: The procedure developed allows angular work and velocity to be standardized over 9 weeks of isotonic and isokinetic eccentric training of the knee extensors. This method could be useful in future studies comparing the neuromuscular adaptations induced by each training mode, with a view to rehabilitating patients after musculoskeletal injury.
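A minimal sketch of the percent bias reported from the Bland-Altman analysis, assuming paired per-session measurements from the two training modes (values illustrative):

```python
import numpy as np

def bland_altman_percent_bias(a, b):
    """Mean paired difference expressed as a percentage of the pairwise
    mean -- one common way to report Bland-Altman bias."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return 100 * np.mean((a - b) / ((a + b) / 2))

# Illustrative per-session work magnitudes (kJ) for the two modes:
print(bland_altman_percent_bias([20.5, 20.7, 20.6], [20.1, 20.3, 20.0]))
```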
Special cascade LMS equalization scheme suitable for 60-GHz RoF transmission system.
Liu, Siming; Shen, Guansheng; Kou, Yanbin; Tian, Huiping
2016-05-16
We design a specific cascade least mean square (LMS) equalizer and, to the best of our knowledge, this is the first time this kind of equalizer has been employed in a 60-GHz millimeter-wave (mm-wave) radio-over-fiber (RoF) system. The proposed cascade LMS equalizer consists of two sub-equalizers dedicated to optical and wireless channel compensation, respectively, so that the linear and nonlinear impairments originating from the optical link and the wireless link are controlled separately. The cascade equalization scheme keeps the nonlinear distortions of the RoF system at a low level. We theoretically and experimentally investigate the parameters of the two sub-equalizers to reach their best performance. The experimental results show that the cascade equalization scheme has a faster convergence speed: it needs a training sequence of length 10,000 to reach its stable status, only half as long as the traditional LMS equalizer needs. With the proposed equalizer, the 60-GHz RoF system can successfully transmit a 5-Gbps BPSK signal over 10 km of fiber and a 1.2-m wireless link below the forward error correction (FEC) limit of 10(-3). An improvement of 4 dB and 1 dB in power sensitivity at a BER of 10(-3) over the traditional LMS equalizer is observed when the signals are transmitted through back-to-back (BTB) and 10-km fiber plus 1.2-m wireless links, respectively.
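A minimal NumPy sketch of the cascade idea under stated assumptions: two LMS FIR sub-equalizers trained in sequence, the first standing in for optical-link compensation and the second for the wireless link; tap counts, step sizes, and the toy channels are illustrative, not the paper's values:

```python
import numpy as np

def lms_equalize(x, d, n_taps, mu):
    """One LMS FIR sub-equalizer: adapt taps so the filtered input x
    tracks the training sequence d; returns output and final taps."""
    w = np.zeros(n_taps)
    y = np.zeros(len(d))
    for n in range(n_taps - 1, len(d)):
        u = x[n - n_taps + 1:n + 1][::-1]   # most recent sample first
        y[n] = w @ u                        # equalizer output
        w += mu * (d[n] - y[n]) * u         # LMS tap update
    return y, w

# Toy demo: BPSK training symbols through two mild FIR "channels"
# standing in for the optical and wireless links.
rng = np.random.default_rng(1)
d = rng.choice([-1.0, 1.0], size=20000)
rx = np.convolve(np.convolve(d, [1.0, 0.35]), [1.0, 0.2])[:len(d)]

# Cascade: first stage targets the optical link, second the wireless link.
y1, _ = lms_equalize(rx, d, n_taps=9, mu=5e-3)
y2, _ = lms_equalize(y1, d, n_taps=5, mu=5e-3)
print("symbol errors after cascade:", np.sum(np.sign(y2[5000:]) != d[5000:]))
```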
Recovery of Background Structures in Nanoscale Helium Ion Microscope Imaging.
Carasso, Alfred S; Vladár, András E
2014-01-01
This paper discusses a two-step enhancement technique applicable to noisy Helium Ion Microscope images in which background structures are not easily discernible due to a weak signal. The method is based on a preliminary adaptive histogram equalization, followed by 'slow motion' low-exponent Lévy fractional diffusion smoothing. This combined approach is unexpectedly effective, resulting in a companion enhanced image in which background structures are rendered much more visible and noise is significantly reduced, all with minimal loss of image sharpness. The method also provides useful enhancements of scanning charged-particle microscopy images obtained by composing multiple drift-corrected 'fast scan' frames. The paper includes software routines, written in Interactive Data Language (IDL), that can perform the above image processing tasks.
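A minimal sketch of the two-step pipeline in Python (rather than the paper's IDL routines), using scikit-image's adaptive histogram equalization and an FFT-based fractional diffusion filter; the parameter values clip_limit, t, and alpha are illustrative:

```python
import numpy as np
from skimage import exposure

def levy_fractional_diffusion(img, t=0.3, alpha=0.2):
    """FFT-based Levy fractional diffusion: attenuate each spatial
    frequency by exp(-t * |omega|^(2*alpha)). A small exponent alpha
    smooths noise while largely preserving edges; 'slow motion'
    behavior comes from applying small time steps t."""
    F = np.fft.fft2(img)
    ky = np.fft.fftfreq(img.shape[0])
    kx = np.fft.fftfreq(img.shape[1])
    w2 = ky[:, None] ** 2 + kx[None, :] ** 2   # squared frequency grid
    F *= np.exp(-t * w2 ** alpha)              # fractional diffusion kernel
    return np.real(np.fft.ifft2(F))

# Two-step enhancement on a placeholder noisy image:
noisy = np.random.default_rng(0).random((256, 256))
step1 = exposure.equalize_adapthist(noisy, clip_limit=0.02)
step2 = levy_fractional_diffusion(step1, t=0.3, alpha=0.2)
```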