Sample records for equalization method based

  1. Blind Channel Equalization with Colored Source Based on Constrained Optimization Methods

    NASA Astrophysics Data System (ADS)

    Wang, Yunhua; DeBrunner, Linda; DeBrunner, Victor; Zhou, Dayong

    2008-12-01

    Tsatsanis and Xu have applied the constrained minimum output variance (CMOV) principle to directly blind equalize a linear channel—a technique that has proven effective with white inputs. It is generally assumed in the literature that their CMOV method can also effectively equalize a linear channel with a colored source. In this paper, we prove that colored inputs will cause the equalizer to incorrectly converge due to inadequate constraints. We also introduce a new blind channel equalizer algorithm that is based on the CMOV principle, but with a different constraint that will correctly handle colored sources. Our proposed algorithm works for channels with either white or colored inputs and performs equivalently to the trained minimum mean-square error (MMSE) equalizer under high SNR. Thus, our proposed algorithm may be regarded as an extension of the CMOV algorithm proposed by Tsatsanis and Xu. We also introduce several methods to improve the performance of our introduced algorithm in the low SNR condition. Simulation results show the superior performance of our proposed methods.
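
    The CMOV idea above can be illustrated with the closed-form minimum output variance solution under a single linear constraint. The sketch below is a generic MVDR-style minimizer, not the exact Tsatsanis-Xu algorithm; the channel, constraint vector, and equalizer length are illustrative assumptions.

```python
import numpy as np

def cmov_equalizer(X, c):
    """Closed-form minimizer of the output variance w^T R w subject to
    the linear constraint c^T w = 1 (an MVDR-style solution that
    illustrates the CMOV principle, not the Tsatsanis-Xu algorithm)."""
    R = X.T @ X / X.shape[0]            # sample covariance of tap vectors
    Rinv_c = np.linalg.solve(R, c)
    return Rinv_c / (c @ Rinv_c)

# toy example: a 3-tap channel with a white BPSK source
rng = np.random.default_rng(0)
s = rng.choice([-1.0, 1.0], size=5000)           # white source
h = np.array([1.0, 0.5, 0.2])                    # channel impulse response
r = np.convolve(s, h)[: len(s)]                  # received signal
L = 8                                            # equalizer length
X = np.array([r[i : i + L] for i in range(len(r) - L)])
c = np.zeros(L); c[0] = 1.0                      # constrain the first tap
w = cmov_equalizer(X, c)
```

    Because w = c is itself feasible, the constrained optimum can never have larger output variance than the unequalized first-tap output.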

  2. Image Enhancement via Subimage Histogram Equalization Based on Mean and Variance

    PubMed Central

    2017-01-01

    This paper puts forward a novel image enhancement method via Mean and Variance based Subimage Histogram Equalization (MVSIHE), which effectively increases the contrast of the input image while preserving brightness and details better than some other methods based on histogram equalization (HE). Firstly, the histogram of the input image is divided into four segments based on the mean and variance of the luminance component, and the histogram bins of each segment are modified and equalized, respectively. Secondly, the result is obtained via the concatenation of the processed subhistograms. Lastly, a normalization method is applied to the intensity levels, and the processed image is integrated with the input image. 100 benchmark images from a public image database named CVG-UGR-Database are used for comparison with other state-of-the-art methods. The experimental results show that the algorithm can not only enhance image information effectively but also preserve the brightness and details of the original image well. PMID:29403529
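
    As a rough sketch of the segmentation idea (not the published MVSIHE code), the snippet below splits the gray-level range at mu-sigma, mu, and mu+sigma and rank-equalizes each segment within its own range; the segment boundaries and the rank-based CDF are simplifying assumptions.

```python
import numpy as np

def segment_equalize(img):
    """Sketch of mean-and-variance based subimage histogram equalization:
    split the histogram at mu-sigma, mu, mu+sigma into four segments and
    equalize each segment within its own gray-level range (a
    simplification of the MVSIHE pipeline)."""
    img = img.astype(np.float64)
    mu, sigma = img.mean(), img.std()
    edges = np.clip([0.0, mu - sigma, mu, mu + sigma, 255.0], 0.0, 255.0)
    out = img.copy()
    for k, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        mask = (img >= lo) & ((img <= hi) if k == 3 else (img < hi))
        vals = img[mask]
        if vals.size == 0 or hi <= lo:
            continue
        ranks = vals.argsort().argsort()        # rank of each pixel value
        cdf = (ranks + 1) / vals.size           # empirical CDF in (0, 1]
        out[mask] = lo + cdf * (hi - lo)        # stretch over the segment
    return out.round().astype(np.uint8)

rng = np.random.default_rng(1)
img = rng.integers(0, 256, (64, 64), dtype=np.uint8)
out = segment_equalize(img)
```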

  3. A negentropy minimization approach to adaptive equalization for digital communication systems.

    PubMed

    Choi, Sooyong; Lee, Te-Won

    2004-07-01

    In this paper, we introduce and investigate a new adaptive equalization method based on minimizing the approximate negentropy of the estimation error for a finite-length equalizer. We consider an approximate negentropy using nonpolynomial expansions of the estimation error as a new performance criterion to improve on a linear equalizer based on the minimum mean squared error (MMSE) criterion. Negentropy includes higher-order statistical information, and its minimization provides improved convergence, performance, and accuracy compared with traditional methods such as MMSE in terms of bit error rate (BER). The proposed negentropy minimization (NEGMIN) equalizer has two kinds of solutions, the MMSE solution and another, depending on the ratio of the normalization parameters. The NEGMIN equalizer achieves its best BER performance when the ratio of the normalization parameters is adjusted to maximize the output power (variance) of the NEGMIN equalizer. Simulation experiments show that the BER performance of the NEGMIN equalizer with the non-MMSE solution has characteristics similar to those of the adaptive minimum bit error rate (AMBER) equalizer. The main advantage of the proposed equalizer is that it needs significantly fewer training symbols than the AMBER equalizer. Furthermore, the proposed equalizer is more robust to nonlinear distortions than the MMSE equalizer.

  4. Contrast Enhancement Algorithm Based on Gap Adjustment for Histogram Equalization

    PubMed Central

    Chiu, Chung-Cheng; Ting, Chih-Chung

    2016-01-01

    Image enhancement methods have been widely used to improve the visual effects of images. Owing to its simplicity and effectiveness, histogram equalization (HE) is one of the methods used for enhancing image contrast. However, HE may result in over-enhancement and feature loss problems that lead to an unnatural look and loss of details in the processed images. Researchers have proposed various HE-based methods to solve the over-enhancement problem; however, they have largely ignored the feature loss problem. Therefore, a contrast enhancement algorithm based on gap adjustment for histogram equalization (CegaHE) is proposed. It builds on a visual contrast enhancement algorithm based on histogram equalization (VCEA), which generates visually pleasing enhanced images, and improves the enhancement effects of VCEA. CegaHE adjusts the gaps between two gray values based on an adjustment equation, which takes the properties of human visual perception into consideration, to solve the over-enhancement problem. In addition, it alleviates the feature loss problem and further enhances the textures in the dark regions of the images to improve the quality of the processed images for human visual perception. Experimental results demonstrate that CegaHE is a reliable method for contrast enhancement and that it significantly outperforms VCEA and other methods. PMID:27338412

  5. A novel method of estimation of lipophilicity using distance-based topological indices: dominating role of equalized electronegativity.

    PubMed

    Agrawal, Vijay K; Gupta, Madhu; Singh, Jyoti; Khadikar, Padmakar V

    2005-03-15

    An attempt is made to propose yet another method of estimating the lipophilicity of a heterogeneous set of 223 compounds. The method is based on the use of equalized electronegativity along with topological indices. It was observed that excellent results are obtained in multiparametric regression upon the introduction of indicator parameters. The results are discussed critically on the basis of various statistical parameters.

  6. New spatial diversity equalizer based on PLL

    NASA Astrophysics Data System (ADS)

    Rao, Wei

    2011-10-01

    A new spatial diversity equalizer (SDE) based on a phase-locked loop (PLL) is proposed to overcome intersymbol interference (ISI) and phase rotations simultaneously in digital communication systems. The proposed SDE combines an equal-gain combining technique built on the well-known constant modulus algorithm (CMA) for blind equalization with a PLL. Compared with the conventional SDE, the proposed SDE offers not only a faster convergence rate and lower residual error but also the ability to recover carrier phase rotation. The efficiency of the method is demonstrated by computer simulation.
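
    The CMA building block of the proposed SDE can be sketched as a stochastic-gradient tap update; the PLL and the diversity combining are omitted here, and the tap count, step size, and toy channel are illustrative assumptions.

```python
import numpy as np

def cma_equalize(r, num_taps=11, mu=1e-3, R2=1.0):
    """Constant modulus algorithm: adapt taps w to drive |y|^2 toward
    the modulus R2, blind to the transmitted symbols (the CMA part of
    the proposed SDE; the PLL stage is not shown)."""
    w = np.zeros(num_taps, dtype=complex)
    w[num_taps // 2] = 1.0                    # center-spike initialization
    y = np.zeros(len(r) - num_taps, dtype=complex)
    for n in range(len(y)):
        x = r[n : n + num_taps][::-1]         # tap-delay-line vector
        y[n] = w @ x
        e = y[n] * (np.abs(y[n]) ** 2 - R2)   # CMA error term
        w = w - mu * e * np.conj(x)           # stochastic-gradient update
    return y, w

# toy run: BPSK through a mild ISI channel
rng = np.random.default_rng(0)
s = rng.choice([-1.0, 1.0], size=4000).astype(complex)
r = np.convolve(s, [1.0, 0.25])[: len(s)]
y, w = cma_equalize(r)
```

    After adaptation, the dispersion of the equalizer output around the constant modulus should be smaller than that of the raw received signal.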

  7. DSP+FPGA-based real-time histogram equalization system of infrared image

    NASA Astrophysics Data System (ADS)

    Gu, Dongsheng; Yang, Nansheng; Pi, Defu; Hua, Min; Shen, Xiaoyan; Zhang, Ruolan

    2001-10-01

    Histogram modification is a simple but effective method to enhance an infrared image. Several methods exist to equalize an infrared image's histogram, suited to the differing characteristics of infrared images: the traditional HE (histogram equalization) method, and the improved HP (histogram projection) and PE (plateau equalization) methods, among others. To realize all of these methods in a single system, the system must have a large amount of memory and extremely high speed. In our system, we introduce a DSP + FPGA based real-time processing architecture to combine them: the FPGA realizes the part common to these methods, while the DSP realizes the parts that differ. The method and its parameters can be selected from a keyboard or a computer. By this means, the system is powerful yet easy to operate and maintain. In this article, we give the block diagram of the system and the software flow chart of the methods. At the end, we show an infrared image and its histogram before and after processing with the HE method.
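
    Of the methods mentioned, plateau equalization (PE) is the easiest to sketch: clip the histogram at a plateau value before forming the cumulative mapping, which tames the dominant background bins typical of infrared imagery. The plateau value below is an arbitrary assumption; with an unbounded plateau the code reduces to plain HE.

```python
import numpy as np

def plateau_equalize(img, plateau=200):
    """Plateau equalization: clip histogram bins at a plateau value,
    then build the cumulative gray-level mapping from the clipped
    histogram (plateau = infinity gives ordinary HE)."""
    hist = np.bincount(img.ravel(), minlength=256)
    clipped = np.minimum(hist, plateau)          # clip dominant bins
    cdf = np.cumsum(clipped).astype(np.float64)
    cdf /= cdf[-1]                               # normalize to [0, 1]
    lut = np.round(255 * cdf).astype(np.uint8)   # gray-level lookup table
    return lut[img]

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64), dtype=np.uint8)
out = plateau_equalize(img, plateau=20)
```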

  8. Optimized and parallelized implementation of the electronegativity equalization method and the atom-bond electronegativity equalization method.

    PubMed

    Vareková, R Svobodová; Koca, J

    2006-02-01

    The most common way to calculate charge distribution in a molecule is ab initio quantum mechanics (QM). Some faster alternatives to QM have also been developed, the so-called "equalization methods" EEM and ABEEM, which are based on DFT. We have implemented and optimized the EEM and ABEEM methods and created the EEM SOLVER and ABEEM SOLVER programs. It has been found that the most time-consuming part of equalization methods is the reduction of the matrix belonging to the equation system generated by the method. Therefore, for both methods this part was replaced by the parallel algorithm WIRS and implemented within the PVM environment. The parallelized versions of the programs EEM SOLVER and ABEEM SOLVER showed promising results, especially on a single computer with several processors (compact PVM). The implemented programs are available through the Web page http://ncbr.chemi.muni.cz/~n19n/eem_abeem.
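
    The equation system the abstract refers to can be illustrated for plain EEM: all atomic electronegativities chi_i = A_i + B_i*q_i + kappa * sum_j q_j / R_ij are set equal under a total-charge constraint, giving an (N+1)-dimensional linear system in the charges and the equalized electronegativity. The parameter values below are illustrative assumptions, not fitted EEM parameters.

```python
import numpy as np

def eem_charges(A, B, R, Q=0.0, kappa=0.529):
    """Electronegativity equalization: solve the linear system
    chi_i = A_i + B_i q_i + kappa * sum_{j != i} q_j / R_ij
    with all chi_i equal and sum(q) = Q.  A, B are per-atom EEM
    parameters, R the interatomic distance matrix."""
    n = len(A)
    M = np.zeros((n + 1, n + 1))
    for i in range(n):
        for j in range(n):
            M[i, j] = B[i] if i == j else kappa / R[i, j]
        M[i, n] = -1.0                  # unknown equalized chi-bar
        M[n, i] = 1.0                   # total-charge constraint row
    rhs = np.concatenate([-np.asarray(A, float), [Q]])
    sol = np.linalg.solve(M, rhs)
    return sol[:n], sol[n]              # charges q, equalized chi

# toy diatomic: atom 2 is more electronegative, so it gains negative charge
A = [8.0, 11.0]; B = [9.0, 9.0]
R = np.array([[0.0, 1.4], [1.4, 0.0]])
q, chi = eem_charges(A, B, R)
```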

  9. Detroit's Fight for Equal Educational Opportunity.

    ERIC Educational Resources Information Center

    Zwerdling, A. L.

    To meet the challenge of equal educational opportunity, current methods of public school finance must be revised. The present financial system, based on State equalization of local property tax valuation, is inequitable since it results in many school districts, particularly those in large cities, having inadequate resources to meet extraordinary…

  10. Histogram equalization with Bayesian estimation for noise robust speech recognition.

    PubMed

    Suh, Youngjoo; Kim, Hoirin

    2018-02-01

    The histogram equalization approach is an efficient feature normalization technique for noise robust automatic speech recognition. However, it suffers from performance degradation when some fundamental conditions are not satisfied in the test environment. To remedy these limitations of the original histogram equalization methods, a class-based histogram equalization approach was proposed. Although this approach showed substantial performance improvement under noise environments, it still suffers from performance degradation due to overfitting when test data are insufficient. To address this issue, the proposed histogram equalization technique employs Bayesian estimation in estimating the test cumulative distribution function. A previous study conducted on the Aurora-4 task reported that the proposed approach provided substantial performance gains in speech recognition systems based on Gaussian mixture model-hidden Markov model acoustic modeling. In this work, the proposed approach was examined in speech recognition systems with the deep neural network-hidden Markov model (DNN-HMM), the current mainstream speech recognition approach, where it also showed meaningful performance improvement over the conventional maximum likelihood estimation-based method. The fusion of the proposed features with the mel-frequency cepstral coefficients provided additional performance gains in DNN-HMM systems, which otherwise suffer from performance degradation in the clean test condition.
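
    For orientation, the conventional (non-Bayesian) HEQ baseline that the paper improves on can be sketched as quantile matching of test features onto a reference (training) distribution; the Bayesian CDF estimation itself is the paper's contribution and is not reproduced here.

```python
import numpy as np

def heq_normalize(test_feat, ref_feat):
    """Conventional histogram equalization of a feature stream: map
    each test value through the test empirical CDF and then through
    the inverse of the reference CDF (quantile matching of test
    features onto the training distribution)."""
    ranks = test_feat.argsort().argsort()
    cdf = (ranks + 0.5) / len(test_feat)        # test empirical CDF
    ref_sorted = np.sort(ref_feat)
    ref_q = (np.arange(len(ref_feat)) + 0.5) / len(ref_feat)
    return np.interp(cdf, ref_q, ref_sorted)    # inverse reference CDF

rng = np.random.default_rng(0)
ref = rng.normal(0.0, 1.0, 4000)     # clean training-condition features
noisy = rng.normal(3.0, 2.0, 2000)   # shifted and scaled test features
out = heq_normalize(noisy, ref)
```

    After mapping, the test features approximately follow the reference distribution, removing the mismatch in mean and scale.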

  11. Method of Forming Textured Silicon Substrate by Maskless Cryogenic Etching

    NASA Technical Reports Server (NTRS)

    Yee, Karl Y. (Inventor); Homyk, Andrew P. (Inventor)

    2014-01-01

    Disclosed herein is a textured substrate comprising a base comprising silicon, the base having a plurality of needle like structures depending away from the base, wherein at least one of the needle like structures has a depth of greater than or equal to about 50 micrometers determined perpendicular to the base, and wherein at least one of the needle like structures has a width of less than or equal to about 50 micrometers determined parallel to the base. An anode and a lithium ion battery comprising the textured substrate, and a method of producing the textured substrate are also disclosed.

  12. Daring to Marry: Marriage Equality Activism After Proposition 8 as Challenge to the Assimilationist/Radical Binary in Queer Studies.

    PubMed

    Weber, Shannon

    2015-01-01

    I analyze three case studies of marriage equality activism and marriage equality-based groups after the passage of Proposition 8 in California. Evaluating the JoinTheImpact protests of 2008, the LGBTQ rights group GetEQUAL, and the group One Struggle One Fight, I argue that these groups revise queer theoretical arguments about marriage equality activism as by definition assimilationist, homonormative, and single-issue. In contrast to such claims, the cases studied here provide a snapshot of heterogeneous, intersectional, and coalition-based social justice work in which creative methods of protest, including direct action and flash mobs, are deployed in militant ways for marriage rights and beyond.

  13. A New Continuous-Time Equality-Constrained Optimization to Avoid Singularity.

    PubMed

    Quan, Quan; Cai, Kai-Yuan

    2016-02-01

    In equality-constrained optimization, a standard regularity assumption is often associated with feasible point methods, namely, that the gradients of the constraints are linearly independent. In practice, the regularity assumption may be violated. To avoid such a singularity, a new projection matrix is proposed, based on which a feasible point method for continuous-time equality-constrained optimization is developed. First, the equality constraint is transformed into a continuous-time dynamical system whose solutions always satisfy the equality constraint. Second, a new projection matrix without singularity is proposed to realize the transformation. An update law (or controller) is subsequently designed to decrease the objective function along the solutions of the transformed continuous-time dynamical system. The invariance principle is then applied to analyze the behavior of the solutions. Furthermore, the proposed method is modified to address cases in which solutions do not satisfy the equality constraint. Finally, the proposed optimization approach is applied to three examples to demonstrate its effectiveness.
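
    The feasible point idea can be sketched, for linear constraints, by projecting the gradient onto the null space of the constraint Jacobian; using the pseudo-inverse keeps the projector defined even when the constraint gradients are linearly dependent, which is the singularity at issue. This discrete-time sketch with a linear constraint is an assumption for illustration, not the paper's continuous-time method.

```python
import numpy as np

def projected_gradient(grad_f, J, x0, eta=0.1, iters=500):
    """Feasible-direction method for min f(x) s.t. J x = b (linear
    constraints): steps are projected onto the null space of J, so a
    feasible starting point stays feasible.  The pseudo-inverse makes
    the projector well defined even for rank-deficient J."""
    x = np.asarray(x0, float)
    P = np.eye(len(x)) - np.linalg.pinv(J) @ J   # null-space projector
    for _ in range(iters):
        x = x - eta * P @ grad_f(x)              # feasible descent step
    return x

# min ||x||^2  s.t.  x1 + x2 = 1  (optimum x = [0.5, 0.5])
J = np.array([[1.0, 1.0]])
x = projected_gradient(lambda x: 2 * x, J, x0=[1.0, 0.0])
```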

  14. A NOISE ADAPTIVE FUZZY EQUALIZATION METHOD FOR PROCESSING SOLAR EXTREME ULTRAVIOLET IMAGES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Druckmueller, M., E-mail: druckmuller@fme.vutbr.cz

    A new image enhancement tool ideally suited for the visualization of fine structures in extreme ultraviolet images of the corona is presented in this paper. The Noise Adaptive Fuzzy Equalization method is particularly suited for the exceptionally high dynamic range images from the Atmospheric Imaging Assembly instrument on the Solar Dynamics Observatory. This method produces artifact-free images and gives significantly better results than methods based on convolution or Fourier transform which are often used for that purpose.

  15. Investigation of the equality constraint effect on the reduction of the rotational ambiguity in three-component system using a novel grid search method.

    PubMed

    Beyramysoltan, Samira; Rajkó, Róbert; Abdollahi, Hamid

    2013-08-12

    The results obtained by soft-modeling multivariate curve resolution methods are often not unique and are questionable because of rotational ambiguity: a range of feasible solutions fit the experimental data equally well and fulfill the constraints. In the chemometric literature, surveying constraints useful for reducing rotational ambiguity remains a major challenge. Studying the effect of applying constraints on the reduction of rotational ambiguity is worthwhile, since it can guide the choice of constraints to impose in multivariate curve resolution methods when analyzing data sets. In this work, we have investigated the effect of an equality constraint on decreasing rotational ambiguity. To calculate all feasible solutions corresponding to a known spectrum, a novel systematic grid search method based on species-based particle swarm optimization is proposed for a three-component system. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. Successive equimarginal approach for optimal design of a pump and treat system

    NASA Astrophysics Data System (ADS)

    Guo, Xiaoniu; Zhang, Chuan-Mian; Borthwick, John C.

    2007-08-01

    An economic concept-based optimization method is developed for groundwater remediation design. Design of a pump and treat (P&T) system is viewed as a resource allocation problem constrained by specified cleanup criteria. An optimal allocation of resources requires that the equimarginal principle, a fundamental economic principle, must hold. The proposed method is named successive equimarginal approach (SEA), which continuously shifts a pumping rate from a less effective well to a more effective one until equal marginal productivity for all units is reached. Through the successive process, the solution evenly approaches the multiple inequality constraints that represent the specified cleanup criteria in space and in time. The goal is to design an equal protection system so that the distributed contaminant plumes can be equally contained without bypass and overprotection is minimized. SEA is a hybrid of the gradient-based method and the deterministic heuristics-based method, which allows flexibility in dealing with multiple inequality constraints without using a penalty function and in balancing computational efficiency with robustness. This method was applied to design a large-scale P&T system for containment of multiple plumes at the former Blaine Naval Ammunition Depot (NAD) site, near Hastings, Nebraska. To evaluate this method, the SEA results were also compared with those using genetic algorithms.

  17. Model-based RSA of a femoral hip stem using surface and geometrical shape models.

    PubMed

    Kaptein, Bart L; Valstar, Edward R; Spoor, Cees W; Stoel, Berend C; Rozing, Piet M

    2006-07-01

    Roentgen stereophotogrammetry (RSA) is a highly accurate three-dimensional measuring technique for assessing micromotion of orthopaedic implants. A drawback is that markers have to be attached to the implant. Model-based techniques have been developed to avoid the need for specially marked implants. We compared two model-based RSA methods with standard marker-based RSA techniques. The first model-based RSA method used surface models, and the second used elementary geometrical shape (EGS) models. We used a commercially available stem to perform phantom experiments as well as a reanalysis of patient RSA radiographs. The data from the phantom experiment indicated that the accuracy and precision of the EGS model-based RSA method are equal to those of marker-based RSA. For model-based RSA using surface models, the accuracy equals that of marker-based RSA, but its precision is worse. We found no difference in accuracy and precision between the two model-based RSA techniques in clinical data. For this particular hip stem, EGS model-based RSA is a good alternative to marker-based RSA.

  18. Wireless autonomous device data transmission

    NASA Technical Reports Server (NTRS)

    Sammel, Jr., David W. (Inventor); Mickle, Marlin H. (Inventor); Cain, James T. (Inventor); Mi, Minhong (Inventor)

    2013-01-01

    A method of communicating information from a wireless autonomous device (WAD) to a base station. The WAD has a data element having a predetermined profile having a total number of sequenced possible data element combinations. The method includes receiving at the WAD an RF profile transmitted by the base station that includes a triggering portion having a number of pulses, wherein the number is at least equal to the total number of possible data element combinations. The method further includes keeping a count of received pulses and wirelessly transmitting a piece of data, preferably one bit, to the base station when the count reaches a value equal to the stored data element's particular number in the sequence. Finally, the method includes receiving the piece of data at the base station and using the receipt thereof to determine which of the possible data element combinations the stored data element is.

  19. Knowledge into learning: comparing lecture, e-learning and self-study take-home packet instructional methodologies with nurses.

    PubMed

    Soper, Tracey

    2017-04-01

    The aim of this quantitative experimental study was to examine which of three instructional methodologies (traditional lecture, online electronic learning (e-learning), and self-study take-home packets) are effective for knowledge acquisition by professional registered nurses. A true experimental design was used to contrast the knowledge acquisition of 87 randomly selected registered nurses. A 40-item Acute Coronary Syndrome (ACS) true/false test was used to measure knowledge acquisition. At the 0.05 significance level, the ANOVA test revealed no difference in knowledge acquisition by registered nurses based on which of the three instructional methods they were assigned. The study also found that all groups scored at the acceptable level for certification. It can be concluded that while all three instructional methods were equally effective for knowledge acquisition, they may not be equally cost- and time-effective. Therefore, hospital educators may wish to formulate policies regarding the choice of instructional method that take into account the efficient use of nurses' time and institutional resources.

  20. Differential phase-shift keying and channel equalization in free space optical communication system

    NASA Astrophysics Data System (ADS)

    Zhang, Dai; Hao, Shiqi; Zhao, Qingsong; Wan, Xiongfeng; Xu, Chenlu

    2018-01-01

    We present the performance benefits of differential phase-shift keying (DPSK) modulation in eliminating the influence of atmospheric turbulence, especially for coherent free space optical (FSO) communication at a high communication rate. An analytic expression for the detected signal is derived, based on which the homodyne detection efficiency is calculated to indicate the performance of wavefront compensation. Because laser pulses suffer from atmospheric scattering by clouds, intersymbol interference (ISI) in a high-speed FSO communication link is analyzed. Correspondingly, a channel equalization method, a binormalized modified constant modulus algorithm based on set-membership filtering (SM-BNMCMA), is proposed to solve the ISI problem. Finally, through comparison with existing channel equalization methods, its benefits in both ISI elimination and convergence speed are verified. The research findings have theoretical significance for high-speed FSO communication systems.

  1. Equal Employment Legislation: Alternative Means of Compliance.

    ERIC Educational Resources Information Center

    Daum, Jeffrey W.

    Alternative means of compliance available to organizations to bring their manpower uses into line with existing equal employment legislation are discussed in this paper. The first area addressed concerns the classical approach to selection and placement based on testing methods. The second area discussed reviews various nontesting techniques, such…

  2. Pathways of equality through education: impact of gender (in)equality and maternal education on exclusive breastfeeding among natives and migrants in Belgium.

    PubMed

    Vanderlinden, Karen; Van de Putte, Bart

    2017-04-01

    Even though breastfeeding is typically considered the preferred feeding method for infants worldwide, in Belgium breastfeeding rates remain low across native and migrant groups, while the underlying determinants are unclear. Furthermore, research examining contextual effects, especially regarding gender (in)equality and ideology, has not been conducted. We hypothesized that higher gender equality scores in the country of origin result in higher chances of breastfeeding. Because gender equality does not operate only at the contextual level but can be mediated through individual-level resources, we hypothesized the following for maternal education: higher maternal education will be an important positive predictor of exclusive breastfeeding in Belgium, but its effect will differ across origin countries. Based on IKAROS data (Geïntegreerd Kind Activiteiten en Regio Ondersteunings Systeem), we perform multilevel analyses on 27 936 newborns. Feeding method is indicated by exclusive breastfeeding 3 months after childbirth. We measure gender (in)equality using Global Gender Gap scores from the mother's origin country. Maternal education is a metric variable based on International Standard Classification of Education indicators. Results show that 3.6% of the variation in breastfeeding can be explained by differences between the migrant mothers' countries of origin. However, the effect of gender (in)equality appears to be non-significant. After adding maternal education, the effect for origin countries scoring low on gender equality turns significant. Maternal education on its own shows a strong positive association with exclusive breastfeeding and, furthermore, has different effects for different origin countries. Possible explanations are discussed in depth, setting the direction for further research on the different pathways through which gender (in)equality and maternal education affect breastfeeding. © 2016 John Wiley & Sons Ltd.

  3. Hue-preserving and saturation-improved color histogram equalization algorithm.

    PubMed

    Song, Ki Sun; Kang, Hee; Kang, Moon Gi

    2016-06-01

    In this paper, an algorithm is proposed to improve contrast and saturation without color degradation. The local histogram equalization (HE) method offers better performance than the global HE method, although the local method sometimes produces undesirable results due to its block-based processing. The proposed contrast enhancement (CE) algorithm incorporates the characteristics of the global HE method into the local HE method to avoid these artifacts while enhancing both global and local contrast. There are two ways to apply the proposed CE algorithm to color images: processing the luminance channel only, or processing each channel independently. However, these approaches incur excessive or reduced saturation and color degradation problems. The proposed algorithm solves these problems by using channel-adaptive equalization and the similarity of ratios between the channels. Experimental results show that the proposed algorithm enhances contrast and saturation while preserving hue, and performs better than existing methods in terms of objective evaluation metrics.

  4. Particle swarm optimization-based local entropy weighted histogram equalization for infrared image enhancement

    NASA Astrophysics Data System (ADS)

    Wan, Minjie; Gu, Guohua; Qian, Weixian; Ren, Kan; Chen, Qian; Maldague, Xavier

    2018-06-01

    Infrared image enhancement plays a significant role in intelligent urban surveillance systems for smart city applications. Unlike existing methods that only stretch the global contrast, we propose a particle swarm optimization-based local entropy weighted histogram equalization that enhances both local details and foreground-background contrast. First, a novel local entropy weighted histogram depicting the distribution of detail information is calculated based on a modified hyperbolic tangent function. Then, the histogram is divided into two parts via a threshold maximizing the inter-class variance in order to improve the contrast of foreground and background, respectively. To avoid over-enhancement and noise amplification, double plateau thresholds of the presented histogram are formulated by means of a particle swarm optimization algorithm. Lastly, each sub-image is equalized independently according to the constrained sub-local entropy weighted histogram. Comparative experiments on real infrared images show that our algorithm outperforms other state-of-the-art methods in both visual and quantitative evaluations.

  5. Dithiothreitol-based protein equalization technology to unravel biomarkers for bladder cancer.

    PubMed

    Araújo, J E; López-Fernández, H; Diniz, M S; Baltazar, Pedro M; Pinheiro, Luís Campos; da Silva, Fernando Calais; Carrascal, Mylène; Videira, Paula; Santos, H M; Capelo, J L

    2018-04-01

    This study aimed to assess the benefits of dithiothreitol (DTT)-based sample treatment for protein equalization in the search for potential biomarkers of bladder cancer. The proteomes of plasma samples from patients with bladder carcinoma, patients with lower urinary tract symptoms (LUTS), and healthy volunteers were equalized with DTT and compared. The equalized proteomes were interrogated using two-dimensional gel electrophoresis and matrix-assisted laser desorption ionization time-of-flight mass spectrometry. Six proteins, namely serum albumin, gelsolin, fibrinogen gamma chain, Ig alpha-1 chain C region, Ig alpha-2 chain C region and haptoglobin, were found dysregulated in at least 70% of bladder cancer patients when compared with a pool of healthy individuals. One protein, serum albumin, was found overexpressed in 70% of the patients when the equalized proteome of the healthy pool was compared with that of the LUTS patients. The pathways affected by the differentially expressed proteins were analyzed using Cytoscape. The method presented here is fast, cheap, and easy to apply, and it matches the analytical minimalism rules outlined by Halls. Orthogonal validation was done using western blot. Overall, DTT-based protein equalization is a promising methodology in bladder cancer research. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Dictionary-learning-based reconstruction method for electron tomography.

    PubMed

    Liu, Baodong; Yu, Hengyong; Verbridge, Scott S; Sun, Lizhi; Wang, Ge

    2014-01-01

    Electron tomography usually suffers from so-called "missing wedge" artifacts caused by the limited tilt angle range. An equally sloped tomography (EST) acquisition scheme (which should be called the linogram sampling scheme) was recently applied to achieve 2.4-angstrom resolution. On the other hand, a compressive sensing inspired reconstruction algorithm, known as adaptive dictionary based statistical iterative reconstruction (ADSIR), has been reported for X-ray computed tomography. In this paper, we evaluate EST, ADSIR, and an ordered-subset simultaneous algebraic reconstruction technique (OS-SART), and compare the equally sloped (ES) and equally angled (EA) data acquisition modes. Our results show that OS-SART is comparable to EST, and that ADSIR outperforms both EST and OS-SART. Furthermore, the equally sloped projection data acquisition mode has no advantage over the conventional equally angled mode in this context.

  7. Device and Method for Continuously Equalizing the Charge State of Lithium Ion Battery Cells

    NASA Technical Reports Server (NTRS)

    Schwartz, Paul D. (Inventor); Roufberg, Lewis M. (Inventor); Martin, Mark N. (Inventor)

    2015-01-01

    A method of equalizing charge states of individual cells in a battery includes measuring a previous cell voltage for each cell, measuring a previous shunt current for each cell, calculating, based on the previous cell voltage and the previous shunt current, an adjusted cell voltage for each cell, determining a lowest adjusted cell voltage from among the calculated adjusted cell voltages, and calculating a new shunt current for each cell.
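
    The patent abstract does not give the formulas, so the loop below is only a minimal illustrative sketch of the steps it lists: it assumes the adjusted voltage compensates the measured voltage for the drop caused by the previous shunt current across a hypothetical internal resistance `R_INT`, and sets each new shunt current proportional (hypothetical gain `K_SHUNT`) to the cell's excess over the lowest adjusted voltage.

```python
# Hypothetical sketch of the continuous charge-equalization loop in the
# abstract. R_INT and K_SHUNT are illustrative constants, not patent values.
R_INT = 0.05    # assumed per-cell internal resistance (ohms)
K_SHUNT = 2.0   # assumed proportional gain (amps per volt of imbalance)

def equalization_step(cell_voltages, shunt_currents):
    """One iteration: adjust each measured voltage for the drop caused by
    the previous shunt current, find the lowest adjusted voltage, and set
    each cell's new shunt current proportional to its excess over it."""
    adjusted = [v + i * R_INT for v, i in zip(cell_voltages, shunt_currents)]
    v_min = min(adjusted)
    return [K_SHUNT * (v - v_min) for v in adjusted]

new_shunts = equalization_step([3.60, 3.65, 3.62], [0.0, 0.1, 0.0])
# -> approximately [0.0, 0.11, 0.04]: higher cells bleed more charge
```

Iterating this step drives all adjusted cell voltages toward the lowest one, which is the continuous equalization behavior the claim describes.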

  8. Direct handling of equality constraints in multilevel optimization

    NASA Technical Reports Server (NTRS)

    Renaud, John E.; Gabriele, Gary A.

    1990-01-01

    In recent years, several hierarchic multilevel optimization algorithms have been proposed and implemented in design studies. Equality constraints are often imposed between levels in these multilevel optimizations to maintain system and subsystem variable continuity. Equality constraints of this nature will be referred to as coupling equality constraints. In many implementation studies these coupling equality constraints have been handled indirectly. This indirect handling has been accomplished by using the coupling equality constraints' explicit functional relations to eliminate design variables (generally at the subsystem level), with the resulting optimization taking place in a reduced design space. In one multilevel optimization study where the coupling equality constraints were handled directly, the researchers encountered numerical difficulties which prevented their multilevel optimization from reaching the same minimum found in conventional single level solutions. The researchers did not explain the exact nature of the numerical difficulties other than to associate them with the direct handling of the coupling equality constraints. In the present study, the coupling equality constraints are handled directly by employing the Generalized Reduced Gradient (GRG) method as the optimizer within a multilevel linear decomposition scheme based on the Sobieski hierarchic algorithm. Two engineering design examples are solved using this approach. The results show that the direct handling of coupling equality constraints in a multilevel optimization does not introduce any problems when the GRG method is employed as the internal optimizer. The optima achieved are comparable to those achieved in single level solutions and in multilevel studies where the equality constraints have been handled indirectly.

  9. Complementing Gender Analysis Methods.

    PubMed

    Kumar, Anant

    2016-01-01

    The existing gender analysis frameworks start with the premise that men and women are equal and should be treated equally. These frameworks emphasize equal distribution of resources between men and women and assume that this will bring equality, which is not always true. Despite equal distribution of resources, women tend to suffer and experience discrimination in many areas of their lives, such as the power to control resources within social relationships and the need for emotional security and reproductive rights within interpersonal relationships. These frameworks hold that patriarchy as an institution plays an important role in women's oppression and exploitation and is a barrier to their empowerment and rights. Thus, some think that by ensuring equal distribution of resources and empowering women economically, institutions like patriarchy can be challenged. These frameworks are based on a proposed equality principle that puts men and women in competing roles; thus, real equality will never be achieved. In contrast to the existing gender analysis frameworks, the Complementing Gender Analysis framework proposed by the author provides a new approach to gender analysis which not only recognizes the role of economic empowerment and equal distribution of resources but also suggests incorporating the concepts of social capital, equity, and doing gender. It is based on a perceived equity principle that puts men and women in complementing roles, which may lead to equality. In this article the author reviews the mainstream gender theories in development from the viewpoint of the complementary roles of gender. This alternative view is argued on the basis of the existing literature and anecdotal observations made by the author. While criticizing equality theory, the author offers equity theory for resolving gender conflict by using the concepts of social and psychological capital.

  10. [A new method of evaluating the utilization of nutrients (carbohydrates, amino acids and fatty acids) on the plastic and energy goals in the animal body].

    PubMed

    Virovets, O A; Gapparov, M M

    1998-01-01

    Using a new method based on detecting, in blood serum, the radioactivity of water formed from tritium-labelled precursors (glucose; the amino acids valine, serine and histidine; and palmitic acid), their partitioning between the oxidative and anabolic pathways of metabolism was determined. The work was carried out on laboratory rats. In young pubertal rats the ratio of the oxidative to the anabolic flow for glucose was found to be 2.83, i.e. glucose was used mainly as an energy substrate. In contrast, for palmitic acid this ratio was 0.10: it was incorporated mainly into the plastic material of the organism. For serine, histidine and valine the ratio was 0.34, 0.71 and 0.46, respectively. In growing rats the distribution of flows was shifted toward the anabolic pathway (ratio 0.19); in old rats, toward the oxidative pathway (ratio 0.71).

  11. Mitigation of intra-channel nonlinearities using a frequency-domain Volterra series equalizer.

    PubMed

    Guiomar, Fernando P; Reis, Jacklyn D; Teixeira, António L; Pinto, Armando N

    2012-01-16

    We address the issue of intra-channel nonlinear compensation using a Volterra series nonlinear equalizer based on an analytical closed-form solution for the 3rd-order Volterra kernel in the frequency domain. The performance of the method is investigated through numerical simulations for a single-channel optical system using a 20 Gbaud NRZ-QPSK test signal propagated over 1600 km of both standard single-mode fiber and non-zero dispersion-shifted fiber. We carry out performance and computational-effort comparisons with the well-known backward-propagation split-step Fourier (BP-SSF) method. The alias-free frequency-domain implementation of the Volterra series nonlinear equalizer makes it an attractive approach for working at low sampling rates, enabling it to surpass the maximum performance of BP-SSF at 2× oversampling. Linear and nonlinear equalization can be treated independently, providing more flexibility to the equalization subsystem. The parallel structure of the algorithm is also a key advantage in terms of real-time implementation.

  12. Yager’s ranking method for solving the trapezoidal fuzzy number linear programming

    NASA Astrophysics Data System (ADS)

    Karyati; Wutsqa, D. U.; Insani, N.

    2018-03-01

    In previous research, the authors studied the fuzzy simplex method for trapezoidal fuzzy number linear programming based on Maleki's ranking function, establishing results on the optimality conditions of the fuzzy simplex method, the fuzzy Big-M method, the fuzzy two-phase method, and sensitivity analysis. In this research, we study the fuzzy simplex method based on a different ranking function, Yager's ranking function, and investigate the corresponding optimality conditions. The results show that Yager's ranking function does not behave like Maleki's: with Yager's function, the simplex method cannot work as well as with Maleki's. Under Yager's function, the subtraction of two equal fuzzy numbers is not equal to zero, so the optimal table of the fuzzy simplex method cannot be detected; the fuzzy simplex iteration therefore stops without reaching the optimum solution.
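
    The abstract's key observation, that the difference of two equal trapezoidal fuzzy numbers is not the crisp zero even though its ranking value is zero, is easy to reproduce. The sketch below uses standard trapezoidal fuzzy subtraction and one common form of Yager's ranking index (the average of the four defining points); the specific numbers are illustrative.

```python
def fuzzy_sub(A, B):
    """Standard subtraction of trapezoidal fuzzy numbers
    A = (a1, a2, a3, a4) and B = (b1, b2, b3, b4)."""
    a1, a2, a3, a4 = A
    b1, b2, b3, b4 = B
    return (a1 - b4, a2 - b3, a3 - b2, a4 - b1)

def yager_rank(A):
    """One common form of Yager's ranking index for a trapezoidal
    fuzzy number: the average of its four defining points."""
    return sum(A) / 4.0

A = (1.0, 2.0, 3.0, 4.0)
diff = fuzzy_sub(A, A)      # (-3.0, -1.0, 1.0, 3.0): fuzzy, not crisp zero
rank = yager_rank(diff)     # 0.0: the rank is zero although diff is not
```

Because `A - A` only ranks as zero without being zero, a simplex tableau built on this arithmetic cannot recognize its own optimal form, which is the failure mode the abstract describes.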

  13. A comparison of the simplified olecranon and digital methods of assessment of skeletal maturity during the pubertal growth spurt.

    PubMed

    Canavese, F; Charles, Y P; Dimeglio, A; Schuller, S; Rousset, M; Samba, A; Pereira, B; Steib, J-P

    2014-11-01

    Assessment of skeletal age is important in children's orthopaedics. We compared two simplified methods used in the assessment of skeletal age. Both methods have been described previously, one based on the appearance of the epiphysis at the olecranon and the other on the digital epiphyses. We also investigated the influence of assessor experience on applying these two methods. Our investigation was based on the anteroposterior left hand and lateral elbow radiographs of 44 boys (mean age 14.4 years; 12.4 to 16.1) and 78 girls (mean age 13.0 years; 11.1 to 14.9) obtained during the pubertal growth spurt. A total of nine observers examined the radiographs, with the observers assigned to three groups based on their experience (experienced, intermediate and novice). These raters were required to determine skeletal ages twice, at six-week intervals. The correlation between the two methods was determined per assessment and per observer group. Intraclass correlation coefficients (ICC) evaluated the reproducibility of the two methods. The overall correlation between the two methods was r = 0.83 for boys and r = 0.84 for girls. The correlation was equal between the first and second assessments, and between the observer groups (r ≥ 0.82). There was an equally strong ICC for the assessment effect (ICC ≤ 0.4%) and observer effect (ICC ≤ 3%) for each method. There was no significant (p < 0.05) difference between the levels of experience. The two methods are equally reliable in assessing skeletal maturity. The olecranon method offers detailed information during the pubertal growth spurt, while the digital method is as accurate but less detailed, making it more useful after the pubertal growth spurt once the olecranon has ossified. ©2014 The British Editorial Society of Bone & Joint Surgery.

  14. Binarization of apodizers by adapted one-dimensional error diffusion method

    NASA Astrophysics Data System (ADS)

    Kowalczyk, Marek; Cichocki, Tomasz; Martinez-Corral, Manuel; Andres, Pedro

    1994-10-01

    Two novel algorithms for the binarization of continuous, rotationally symmetric, real positive pupil filters are presented. Both algorithms are based on the 1-D error diffusion concept. The original gray-tone apodizer is replaced by a set of transparent and opaque concentric annular zones. Depending on the algorithm, the resulting binary mask consists of either equal-width or equal-area zones. The diffractive behavior of the binary filters is evaluated. It is shown that the pupils with equal-width zones give a Fraunhofer diffraction pattern more similar to that of the original continuous-tone pupil than those with equal-area zones, assuming in both cases the same resolution limit of the printing device.
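
    The scalar core of 1-D error diffusion, here applied to a generic transmittance profile rather than the paper's annular-zone construction, can be sketched as follows: each sample is thresholded to 0 or 1, and the quantization error is carried forward so that the running average of the binary output tracks the continuous input.

```python
def binarize_error_diffusion(transmittance):
    """Quantize each sample of a continuous transmittance profile
    (values in [0, 1]) to 0 or 1, carrying the quantization error
    forward onto the next sample."""
    out, err = [], 0.0
    for t in transmittance:
        val = t + err
        bit = 1 if val >= 0.5 else 0
        err = val - bit        # leftover error diffused to the next zone
        out.append(bit)
    return out

bits = binarize_error_diffusion([0.8, 0.8, 0.8, 0.8, 0.8])
# -> [1, 1, 0, 1, 1]: four of five zones transparent, matching t = 0.8
```

In the paper's setting the "samples" would be annular zones of equal width or equal area, and the bits decide which zones are transparent or opaque.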

  15. Automatic latency equalization in VHDL-implemented complex pipelined systems

    NASA Astrophysics Data System (ADS)

    Zabołotny, Wojciech M.

    2016-09-01

    In pipelined data processing systems it is very important to ensure that parallel paths delay data by the same number of clock cycles. If that condition is not met, the processing blocks receive data that are not properly aligned in time and produce incorrect results. Manual equalization of latencies is tedious and error-prone work. This paper presents an automatic method of latency equalization in systems described in VHDL. The proposed method uses simulation to measure latencies and to verify the introduced correction. The solution is portable between different simulation and synthesis tools. The method does not increase the complexity of the synthesized design compared with a solution based on manual latency adjustment. An example implementation of the proposed methodology, together with a simple design demonstrating its use, is available as an open source project under the BSD license.
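
    Once the per-path latencies have been measured, the correction amounts to padding every parallel path up to the slowest one. A language-neutral sketch of that arithmetic (the path names are hypothetical; the paper itself measures and patches VHDL designs in simulation):

```python
def equalize_latencies(path_latencies):
    """Given measured latencies (in clock cycles) of parallel pipeline
    paths, return how many delay registers to insert on each path so
    that every path matches the slowest one."""
    worst = max(path_latencies.values())
    return {name: worst - lat for name, lat in path_latencies.items()}

delays = equalize_latencies({"fir": 7, "cic": 12, "bypass": 1})
# -> {'fir': 5, 'cic': 0, 'bypass': 11}
```

In the VHDL flow, each nonzero entry would become a shift-register of that depth inserted on the corresponding path.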

  16. A technique for locating function roots and for satisfying equality constraints in optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1991-01-01

    A new technique for locating simultaneous roots of a set of functions is described. The technique is based on the property of the Kreisselmeier-Steinhauser function which descends to a minimum at each root location. It is shown that the ensuing algorithm may be merged into any nonlinear programming method for solving optimization problems with equality constraints.
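
    The abstract does not spell out the construction, but the cited property is easy to demonstrate for a single function: the Kreisselmeier-Steinhauser (KS) envelope of {f, -f} is a smooth analogue of |f| and descends to a minimum at each root of f. A minimal sketch, with an illustrative f:

```python
import math

def ks(values, rho=50.0):
    """Kreisselmeier-Steinhauser envelope: a smooth, differentiable
    overestimate of max(values); larger rho tightens the envelope."""
    m = max(values)
    return m + math.log(sum(math.exp(rho * (v - m)) for v in values)) / rho

def root_indicator(f, x, rho=50.0):
    """KS envelope of {f(x), -f(x)}: a smooth analogue of |f(x)| that
    descends to a minimum at each root of f."""
    fx = f(x)
    return ks([fx, -fx], rho)

f = lambda x: x * x - 4.0                       # roots at x = -2 and x = 2
xs = [i / 100.0 for i in range(-300, 301)]      # coarse search grid
best = min(xs, key=lambda x: root_indicator(f, x))   # lands on a root
```

Because the indicator is smooth, a gradient-based optimizer (rather than the grid scan used here for illustration) can descend to the roots, which is what lets the technique be merged into a nonlinear programming method.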

  17. A technique for locating function roots and for satisfying equality constraints in optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.

    1992-01-01

    A new technique for locating simultaneous roots of a set of functions is described. The technique is based on the property of the Kreisselmeier-Steinhauser function which descends to a minimum at each root location. It is shown that the ensuing algorithm may be merged into any nonlinear programming method for solving optimization problems with equality constraints.

  18. Can the impact of gender equality on health be measured? a cross-sectional study comparing measures based on register data with individual survey-based data

    PubMed Central

    2012-01-01

    Background The aim of this study was to investigate potential associations between gender equality at work and self-rated health. Methods 2861 employees in 21 companies were invited to participate in a survey. The mean response rate was 49.2%. The questionnaire contained 65 questions, mainly on gender equality and health. Two logistic regression analyses were conducted to assess associations between (i) self-rated health and a register-based company gender equality index (OGGI), and (ii) self-rated health and self-rated gender equality at work. Results Even though no association was found between the OGGI and health, women who rated their company as “completely equal” or “quite equal” had higher odds of reporting “good health” compared to women who perceived their company as “not equal” (OR = 2.8, 95% CI = 1.4–5.5 and OR = 2.73, 95% CI = 1.6–4.6). Although not statistically significant, we observed the same trends in men. The results were adjusted for age, highest education level, income, full or part-time employment, and type of company based on the OGGI. Conclusions No association was found between gender equality in companies, measured by the register-based index (OGGI), and health. However, perceived gender equality at work positively affected women’s self-rated health but not men’s. Further investigations are necessary to determine whether the results are fully credible given the contemporary health patterns and positions in the labour market of women and men or whether the results are driven by selection patterns. PMID:22985388

  19. Iterative Frequency Domain Decision Feedback Equalization and Decoding for Underwater Acoustic Communications

    NASA Astrophysics Data System (ADS)

    Zhao, Liang; Ge, Jian-Hua

    2012-12-01

    Single-carrier (SC) transmission with frequency-domain equalization (FDE) is today recognized as an attractive alternative to orthogonal frequency-division multiplexing (OFDM) for communication applications affected by the inter-symbol interference (ISI) caused by multi-path propagation, especially in shallow water channels. In this paper, we investigate an iterative receiver based on a minimum mean square error (MMSE) decision feedback equalizer (DFE) with symbol-rate and fractional-rate sampling in the frequency domain (FD) and a serially concatenated trellis coded modulation (SCTCM) decoder. Based on sound speed profiles (SSP) measured in a lake and the finite-element ray tracing (Bellhop) method, a shallow water channel is constructed to evaluate the performance of the proposed iterative receiver. Performance results show that the proposed iterative receiver significantly improves performance and achieves better data transmission than FD linear and adaptive decision feedback equalizers, especially when fractional-rate sampling is adopted.

  20. A Summary Score for the Framingham Heart Study Neuropsychological Battery

    PubMed Central

    Downer, Brian; Fardo, David W.; Schmitt, Frederick A.

    2015-01-01

    Objective To calculate three summary scores of the Framingham Heart Study neuropsychological battery and determine which score best differentiates between subjects classified as having normal cognition, test-based impaired learning and memory, test-based multidomain impairment, and dementia. Method The final sample included 2,503 participants. Three summary scores were assessed: (a) composite score that provided equal weight to each subtest, (b) composite score that provided equal weight to each cognitive domain assessed by the neuropsychological battery, and (c) abbreviated score comprised of subtests for learning and memory. Receiver operating characteristic analysis was used to determine which summary score best differentiated between the four cognitive states. Results The summary score that provided equal weight to each subtest best differentiated between the four cognitive states. Discussion A summary score that provides equal weight to each subtest is an efficient way to utilize all of the cognitive data collected by a neuropsychological battery. PMID:25804903
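
    The difference between weighting schemes (a) and (b) is simple arithmetic over standardized subtest scores. A minimal sketch with hypothetical domain names and values (illustrative only, not Framingham data):

```python
# Hypothetical standardized subtest scores grouped by cognitive domain
# (domain names and values are illustrative, not Framingham data).
domains = {
    "memory":    [0.5, 0.7, 0.3],
    "attention": [-0.2],
    "language":  [0.1, 0.4],
}

subtests = [s for scores in domains.values() for s in scores]

# (a) equal weight per subtest: plain mean over all subtests
per_subtest = sum(subtests) / len(subtests)          # ~0.30

# (b) equal weight per domain: mean of the per-domain means
per_domain = sum(sum(s) / len(s) for s in domains.values()) / len(domains)
# ~0.18: the single-subtest "attention" domain now counts as much as memory
```

The contrast shows why the two composites can rank subjects differently: scheme (b) amplifies domains with few subtests, while scheme (a), the study's best performer, lets every subtest contribute equally.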

  1. Can Exosomes Induced by Breast Involution Be Markers for the Poor Prognosis and Prevention of Postpartum Breast Cancer?

    DTIC Science & Technology

    2014-07-01

    in rodents versus humans and whether the same isolation technique will yield exosomes that are equally useful in subsequent functional assays...exosome isolation by a PEG-based precipitation method: equal amounts of plasma and PEG 6000 were incubated overnight at 4°C on a rotating shaker, followed by...Blotting: equal amounts of protein from exosome samples (20 μg) were loaded onto 10% Tris gels. Gels were run at an initial voltage of 60 V through the...

  2. Learning Rate Updating Methods Applied to Adaptive Fuzzy Equalizers for Broadband Power Line Communications

    NASA Astrophysics Data System (ADS)

    Ribeiro, Moisés V.

    2004-12-01

    This paper introduces adaptive fuzzy equalizers with variable step size for broadband power line (PL) communications. Based on delta-bar-delta and local Lipschitz estimation updating rules, and on feedforward and decision feedback approaches, we propose singleton and nonsingleton fuzzy equalizers with variable step size to cope with the intersymbol interference (ISI) effects of PL channels and the severity of the impulse noise generated by appliances and nonlinear loads connected to low-voltage power grids. The computed results show that the convergence rates of the proposed equalizers are higher than those attained by the traditional adaptive fuzzy equalizers introduced by J. M. Mendel and his students. Additionally, the BER curves reveal that the proposed techniques are efficient at mitigating the above-mentioned impairments.

  3. Developing new scenarios for water allocation negotiations: a case study of the Euphrates River Basin

    NASA Astrophysics Data System (ADS)

    Jarkeh, Mohammad Reza; Mianabadi, Ameneh; Mianabadi, Hojjat

    2016-10-01

    Mismanagement and uneven distribution of water may create or intensify conflict among countries. Allocation of water among trans-boundary river neighbours is a key issue in the utilization of shared water resources. Bankruptcy theory provides cooperative game-theoretic methods for situations in which the total demand of the riparian states exceeds the available water. In this study, we survey the application of seven Classical Bankruptcy Rules (CBRs), namely Proportional (CBR-PRO), Adjusted Proportional (CBR-AP), Constrained Equal Awards (CBR-CEA), Constrained Equal Losses (CBR-CEL), Piniles (CBR-Piniles), Minimal Overlap (CBR-MO) and Talmud (CBR-Talmud), and four Sequential Sharing Rules (SSRs), namely Proportional (SSR-PRO), Constrained Equal Awards (SSR-CEA), Constrained Equal Losses (SSR-CEL) and Talmud (SSR-Talmud), to the allocation of the Euphrates River among three riparian countries: Turkey, Syria and Iraq. However, no established method exists for identifying the most equitable allocation rule. Therefore, in this paper a new method is proposed for choosing the allocation rule most likely to be perceived as equitable by the stakeholders. The results reveal that, based on the newly proposed model, CBR-AP appears to be the most equitable way to allocate the Euphrates River water among Turkey, Syria and Iraq.
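
    Of the classical rules surveyed, Constrained Equal Awards (CEA) is easy to sketch: every claimant receives min(claim, λ), with λ chosen so that the awards exactly exhaust the estate. The claims and estate below are illustrative numbers, not Euphrates data:

```python
def cea(claims, estate):
    """Constrained Equal Awards: each claimant gets min(claim, lam),
    with lam found by bisection so the awards exhaust the estate."""
    lo, hi = 0.0, max(claims)
    for _ in range(100):                 # bisection on the award cap lam
        lam = (lo + hi) / 2.0
        if sum(min(c, lam) for c in claims) > estate:
            hi = lam
        else:
            lo = lam
    return [min(c, lam) for c in claims]

awards = cea([10.0, 20.0, 30.0], 36.0)   # lam -> 13: awards [10, 13, 13]
```

CEA favours small claimants (the smallest claim is met in full first), whereas Constrained Equal Losses works symmetrically on the shortfalls; the paper's comparison of such rules is what its equity ranking is built on.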

  4. Pressure-equalizing PV assembly and method

    DOEpatents

    Dinwoodie, Thomas L.

    2004-10-26

    Each PV assembly of an array of PV assemblies comprises a base, a PV module and a support assembly securing the PV module to a position overlying the upper surface of the base. Vents are formed through the base. A pressure equalization path extends from the outer surface of the PV module, past the PV module, to and through at least one of the vents, and to the lower surface of the base to help reduce wind uplift forces on the PV assembly. The PV assemblies may be interengaged, such as by interengaging the bases of adjacent PV assemblies. The base may include a main portion and a cover and the bases of adjacent PV assemblies may be interengaged by securing the covers of adjacent bases together.

  5. Multipurpose contrast enhancement on epiphyseal plates and ossification centers for bone age assessment

    PubMed Central

    2013-01-01

    Background High variation in background luminance, low contrast and excessively enhanced contrast in hand bone radiographs often impede bone age assessment rating systems in evaluating the development of epiphyseal plates and ossification centers. Global Histogram Equalization (GHE) has been the most frequently adopted image contrast enhancement technique, but its performance is not satisfactory. A brightness- and detail-preserving histogram equalization method with a good contrast enhancement effect has been a goal of much recent research, yet producing a histogram-equalized radiograph well balanced in brightness preservation, detail preservation and contrast enhancement remains a daunting task. Method In this paper, we propose a novel histogram equalization framework that takes several desirable properties into account, the Multipurpose Beta Optimized Bi-Histogram Equalization (MBOBHE). The method optimizes both sub-histograms separately after segmenting the histogram at an optimized separating point determined by a regularization function constituted of three components. The result is then assessed by qualitative and quantitative analyses of the essential aspects of the equalized images, using a total of 160 hand radiographs acquired from an online hand bone database. Result From the qualitative analysis, we found that basic bi-histogram equalizations cannot display small image features because the separating point is selected incorrectly, focusing on a single metric without considering contrast enhancement and detail preservation. From the quantitative analysis, we found that MBOBHE correlates well with human visual perception, and this improvement shortens the time an inspector needs to assess bone age.
Conclusions The proposed MBOBHE outperforms the other existing methods in overall histogram equalization performance. All features pertinent to bone age assessment are more protruding than with other methods, which shortens the evaluation time required in manual bone age assessment using the TW method, while the accuracy remains unaffected or slightly better than with the unprocessed original image. The holistic properties of brightness preservation, detail preservation and contrast enhancement are taken into consideration simultaneously, so the visual effect is conducive to manual inspection. PMID:23565999
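
    For reference, the plain mean-split bi-histogram equalization that MBOBHE refines (the "basic" scheme whose fixed separating point the results criticize) can be sketched as follows; this is the generic mean-split idea, not the authors' optimized method:

```python
def equalize_to_range(values, lo, hi):
    """Histogram-equalize `values` into the output range [lo, hi]."""
    n = len(values)
    mapping, run = {}, 0
    for v in sorted(set(values)):
        run += values.count(v)                  # cumulative count (CDF)
        mapping[v] = lo + round((hi - lo) * run / n)
    return [mapping[v] for v in values]

def bbhe(values, levels=256):
    """Mean-split bi-histogram equalization: split at the mean, equalize
    each sub-histogram independently into its own half of the grey range,
    then stitch the results back together in input order."""
    m = round(sum(values) / len(values))
    low = [v for v in values if v <= m]
    high = [v for v in values if v > m]
    out_low = equalize_to_range(low, 0, m)
    out_high = equalize_to_range(high, m + 1, levels - 1) if high else []
    it_low, it_high = iter(out_low), iter(out_high)
    return [next(it_low) if v <= m else next(it_high) for v in values]

result = bbhe([10, 10, 20, 200, 220])   # -> [61, 61, 92, 174, 255]
```

Because each half stays on its own side of the mean, the global brightness is roughly preserved; MBOBHE replaces the fixed mean split with a separating point optimized against a three-component regularization function.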

  6. Fabrication of photonic crystal microprisms based on artificial opals

    NASA Astrophysics Data System (ADS)

    Fenollosa, Roberto; Ibisate, Marta; Rubio, Silvia; Lopez, Ceferino; Meseguer, Francisco; Sanchez-Dehesa, Jose

    2002-04-01

    This paper reports a new method for faceting artificial opals based on micromanipulation techniques. By this means it was possible to fabricate an opal prism in a single domain with different faces: (111), (110) and (100), which were characterized by Scanning Electron Microscopy and Optical Reflectance Spectroscopy. Their spectra exhibit different characteristics depending on the orientation of the facet. While the (111)-oriented face gives rise to a strong Bragg reflection peak at about a/λ = 0.66 (where a is the lattice parameter), the (110) and (100) faces show much less intense peaks corresponding to features in the band structure at a/λ = 1.12 and a/λ = 1.07, respectively. The peaks at higher energies have a less obvious explanation.

  7. Joint polarization tracking and channel equalization based on radius-directed linear Kalman filter

    NASA Astrophysics Data System (ADS)

    Zhang, Qun; Yang, Yanfu; Zhong, Kangping; Liu, Jie; Wu, Xiong; Yao, Yong

    2018-01-01

    We propose a joint polarization tracking and channel equalization scheme based on a radius-directed linear Kalman filter (RD-LKF), introducing a butterfly finite-impulse-response (FIR) filter into our previously proposed RD-LKF method. Along with fast polarization tracking, it can simultaneously compensate inter-symbol interference (ISI) effects, including residual chromatic dispersion and polarization mode dispersion. Compared with the conventional radius-directed equalizer (RDE) algorithm, it is demonstrated experimentally that three times faster convergence, an order of magnitude better tracking capability, and better BER performance are obtained in a polarization-division-multiplexed 16 quadrature amplitude modulation system. In addition, the influences of the algorithm parameters on the convergence and tracking performance are investigated by numerical simulation.

  8. [Subjectivity of nursing college students' awareness of gender equality: an application of Q-methodology].

    PubMed

    Yeun, Eun Ja; Kwon, Hye Jin; Kim, Hyun Jeong

    2012-06-01

    This study was done to identify the awareness of gender equality among nursing college students and to provide basic data for educational solutions and desirable directions. Q-methodology, which provides a method of analyzing the subjectivity of each item, was used. Thirty-four selected Q-statements from each of 20 female nursing college students were sorted into a quasi-normal distribution on a 9-point scale. Subjectivity regarding gender equality was analyzed with the pc-QUANL program. Four types of awareness of gender equality in nursing college students were identified: type I, 'pursuit of androgyny'; type II, 'difference-recognition'; type III, 'human-relationship emphasis'; and type IV, 'social-system emphasis'. The results of this study indicate that different approaches to educational programs on gender equality are recommended for nursing college students based on the four types of gender equality awareness.

  9. Aspects of Equality in Mandatory Partnerships – From the Perspective of Municipal Care in Norway

    PubMed Central

    Ljunggren, Birgitte

    2016-01-01

    Introduction: This paper raises questions about equality in partnerships, since imbalance in partnerships may affect collaboration outcomes in integrated care. We address aspects of equality in mandatory public-public partnerships from the perspective of municipal care. We have developed a questionnaire for which the Norwegian Coordination Reform is an illustrative example. The following research question is addressed: which equality dimensions are important for municipalities in mandatory partnerships with hospitals? Theory/methods: Since we did not find any instrument to measure equality in partnerships, an explorative design was chosen. The development of the instrument was based on partnership theory and knowledge of the field and context. A national online survey was sent to all 429 Norwegian municipalities in 2013. The overall response rate was 58 percent (n = 248). The data were mainly analysed using principal component analysis. Results: The two dimensions “learning and expertise equality” and “contractual equality” appear to collect reliable and valid data for measuring aspects of equality in partnerships. Discussion: Partnerships are usually based on voluntarism. The results indicate that mandatory partnerships, within a public health care system, can be appropriate for equalizing partnerships between health care providers at different care levels. PMID:27616962

  10. Adaptive sigmoid function bihistogram equalization for image contrast enhancement

    NASA Astrophysics Data System (ADS)

    Arriaga-Garcia, Edgar F.; Sanchez-Yanez, Raul E.; Ruiz-Pinales, Jose; Garcia-Hernandez, Ma. de Guadalupe

    2015-09-01

    Contrast enhancement plays a key role in a wide range of applications including consumer electronic applications, such as video surveillance, digital cameras, and televisions. The main goal of contrast enhancement is to increase the quality of images. However, most state-of-the-art methods induce different types of distortion such as intensity shift, wash-out, noise, intensity burn-out, and intensity saturation. In addition, in consumer electronics, simple and fast methods are required in order to be implemented in real time. A bihistogram equalization method based on adaptive sigmoid functions is proposed. It consists of splitting the image histogram into two parts that are equalized independently by using adaptive sigmoid functions. In order to preserve the mean brightness of the input image, the parameter of the sigmoid functions is chosen to minimize the absolute mean brightness metric. Experiments on the Berkeley database have shown that the proposed method improves the quality of images and preserves their mean brightness. An application to improve the colorfulness of images is also presented.

  11. Two-stage energy storage equalization system for lithium-ion battery pack

    NASA Astrophysics Data System (ADS)

    Chen, W.; Yang, Z. X.; Dong, G. Q.; Li, Y. B.; He, Q. Y.

    2017-11-01

    How to raise the efficiency of energy storage and maximize storage capacity is a core problem in current energy storage management. To address it, a two-stage energy storage equalization system, comprising a two-stage equalization topology and a control strategy based on a symmetric multi-winding transformer and a DC-DC (direct current-direct current) converter, is proposed using bidirectional active equalization theory. Its objective is to keep the voltages of lithium-ion battery packs, and of the cells inside each pack, consistent, using the range method. Modeling analysis demonstrates that the voltage dispersion of lithium-ion battery packs and of cells inside packs can be kept within 2 percent during charging and discharging. The equalization time was 0.5 ms, 33.3 percent shorter than with a DC-DC converter alone. Therefore, the proposed two-stage lithium-ion battery equalization system can achieve maximum storage capacity across battery packs and the cells inside them, while the efficiency of energy storage is significantly improved.
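
    The "range method" the abstract refers to is presumably the usual range statistic over cell or pack voltages; under that assumption, the dispersion check that drives equalization can be sketched as follows (the voltages are illustrative; the 2 percent limit follows the abstract):

```python
def voltage_dispersion(voltages):
    """Range-based dispersion: (max - min) / mean, as a percentage."""
    mean = sum(voltages) / len(voltages)
    return 100.0 * (max(voltages) - min(voltages)) / mean

def needs_equalization(voltages, limit_pct=2.0):
    """Trigger equalization when the dispersion exceeds the limit
    (the 2 percent figure follows the abstract)."""
    return voltage_dispersion(voltages) > limit_pct

d = voltage_dispersion([3.60, 3.66, 3.63])   # ~1.65 percent: within limit
```

The same check applies at both stages: across packs at the transformer level and across cells within a pack.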

  12. Comparison of image enhancement methods for the effective diagnosis in successive whole-body bone scans.

    PubMed

    Jeong, Chang Bu; Kim, Kwang Gi; Kim, Tae Sung; Kim, Seok Ki

    2011-06-01

    Whole-body bone scan is one of the most frequent diagnostic procedures in nuclear medicine. In particular, it plays a significant role in important procedures such as the diagnosis of osseous metastasis and the evaluation of osseous tumor response to chemotherapy and radiation therapy. It can also be used to monitor for any recurrence of the tumor. However, it is very time-consuming for radiologists to quantify subtle interval changes between successive whole-body bone scans because of many variations such as intensity, geometry, and morphology. In this paper, we identify the most effective histogram-based method of image enhancement, which may assist radiologists in interpreting successive whole-body bone scans effectively. Forty-eight successive whole-body bone scans from 10 patients were obtained and evaluated using six histogram-based image enhancement methods: histogram equalization, brightness-preserving bi-histogram equalization, contrast-limited adaptive histogram equalization, end-in search, histogram matching, and exact histogram matching (EHM). The results of the different methods were compared using three similarity measures: peak signal-to-noise ratio, histogram intersection, and structural similarity. Image enhancement of successive bone scans using EHM showed the best results of the six methods for all similarity measures. EHM is thus the best histogram-based method of image enhancement for diagnosing successive whole-body bone scans. The method has the potential to greatly help radiologists quantify interval changes more accurately and quickly by compensating for the variable nature of intensity information. Consequently, it can improve radiologists' diagnostic accuracy as well as reduce reading time for detecting interval changes.
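    Two of the three similarity measures are straightforward to compute; a sketch (hypothetical helper functions, assuming 8-bit grayscale images):

```python
import numpy as np

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio between two equally sized images, in dB."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def histogram_intersection(a, b, bins=256):
    """Normalised histogram intersection: 1.0 means identical histograms."""
    ha, _ = np.histogram(a, bins=bins, range=(0, 256))
    hb, _ = np.histogram(b, bins=bins, range=(0, 256))
    return np.minimum(ha, hb).sum() / ha.sum()
```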

  13. Pressure equalizing photovoltaic assembly and method

    DOEpatents

    Dinwoodie, Thomas L [Piedmont, CA

    2003-05-27

    Each PV assembly of an array of PV assemblies comprises a base, a PV module and a support assembly securing the PV module to a position overlying the upper surface of the base. Vents are formed through the base. A pressure equalization path extends from the outer surface of the PV module, past the peripheral edge of the PV module, to and through at least one of the vents, and to the lower surface of the base to help reduce wind uplift forces on the PV assembly. The PV assemblies may be interengaged, such as by interengaging the bases of adjacent PV assemblies. The base may include a main portion and a cover and the bases of adjacent PV assemblies may be interengaged by securing the covers of adjacent bases together.

  14. Chest CT window settings with multiscale adaptive histogram equalization: pilot study.

    PubMed

    Fayad, Laura M; Jin, Yinpeng; Laine, Andrew F; Berkmen, Yahya M; Pearson, Gregory D; Freedman, Benjamin; Van Heertum, Ronald

    2002-06-01

    Multiscale adaptive histogram equalization (MAHE), a wavelet-based algorithm, was investigated as a method of automatic simultaneous display of the full dynamic contrast range of a computed tomographic image. Interpretation times were significantly lower for MAHE-enhanced images compared with those for conventionally displayed images. Diagnostic accuracy, however, was insufficient in this pilot study to allow recommendation of MAHE as a replacement for conventional window display.

  15. 12 CFR 1261.1 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... includes guaranteed directorships and stock directorships. Method of equal proportions means the mathematical formula used by FHFA to allocate member directorships among the States in a Bank's district based...

  16. Test Scheduling for Core-Based SOCs Using Genetic Algorithm Based Heuristic Approach

    NASA Astrophysics Data System (ADS)

    Giri, Chandan; Sarkar, Soumojit; Chattopadhyay, Santanu

    This paper presents a Genetic Algorithm (GA) based solution that co-optimizes test scheduling and wrapper design for core-based SOCs. Core testing solutions are generated as a set of wrapper configurations, represented as rectangles whose width equals the number of TAM (Test Access Mechanism) channels and whose height equals the corresponding testing time. A locally optimal best-fit bin-packing heuristic is used to place the rectangles so as to minimize the overall test time, while the GA generates the sequence in which the rectangles are considered for placement. Experimental results on the ITC'02 benchmark SOCs show that the proposed method provides better solutions than recent works reported in the literature.
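    The rectangle-placement step can be sketched as a greedy per-channel scheduler (an illustrative simplification, not the paper's exact best-fit heuristic):

```python
def schedule(rects, total_channels):
    """Greedy placement sketch: each core test is a rectangle
    (width = number of TAM channels, height = test time).  For each
    rectangle, pick the w channels that become free earliest and start
    the test when all of them are free; return the overall makespan."""
    free = [0.0] * total_channels              # per-channel free time
    for w, h in rects:
        order = sorted(range(total_channels), key=lambda c: free[c])
        chosen = order[:w]                     # w earliest-free channels
        start = max(free[c] for c in chosen)   # wait until all are free
        for c in chosen:
            free[c] = start + h
    return max(free)                           # overall test time
```

    In the paper, the GA searches over the *sequence* of rectangles fed to a placement routine like this, since the greedy result depends strongly on ordering.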

  17. Epidemiologic research using probabilistic outcome definitions.

    PubMed

    Cai, Bing; Hennessy, Sean; Lo Re, Vincent; Small, Dylan S

    2015-01-01

    Epidemiologic studies using electronic healthcare data often define the presence or absence of binary clinical outcomes using algorithms with imperfect specificity, sensitivity, and positive predictive value. This results in misclassification and bias in study results. We describe and evaluate a new method, called probabilistic outcome definition (POD), that uses logistic regression to estimate the probability of a clinical outcome from multiple potential algorithms and then uses multiple imputation to make valid inferences about the risk ratio or other epidemiologic parameters of interest. We conducted a simulation to evaluate the performance of the POD method with two variables that can predict the true outcome and compared it with the conventional method. The simulation results showed that when the true risk ratio is equal to 1.0 (null), the conventional method based on a binary outcome provides unbiased estimates. However, when the risk ratio is not equal to 1.0, the traditional method, whether using one predictive variable or both to define the outcome, is biased when the positive predictive value is <100%, and the bias is severe when the sensitivity or positive predictive value is poor (less than 0.75 in our simulation). In contrast, the POD method provides unbiased estimates of the risk ratio whether or not this measure of effect is equal to 1.0. Even when the sensitivity and positive predictive value are low, the POD method continues to provide unbiased estimates of the risk ratio. The POD method provides an improved way to define outcomes in database research. It has a major advantage over the conventional method in that it provides unbiased estimates of risk ratios, and it is easy to use. Copyright © 2014 John Wiley & Sons, Ltd.
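    A toy simulation, separate from the POD method itself, of why a misclassified binary outcome biases the risk ratio toward the null (the sensitivity and specificity values are made up):

```python
import random

random.seed(0)
n = 200_000
rr_true = 2.0
p0 = 0.05                 # outcome risk in the unexposed
sens, spec = 0.75, 0.99   # imperfect outcome algorithm

def observed_risk(exposed):
    """Fraction flagged by the algorithm, given true risk p0 * rr."""
    p = p0 * (rr_true if exposed else 1.0)
    true = [random.random() < p for _ in range(n)]
    # Misclassified binary outcome, as in the conventional method
    obs = [(random.random() < sens) if y else (random.random() > spec)
           for y in true]
    return sum(obs) / n

rr_observed = observed_risk(True) / observed_risk(False)
# rr_observed comes out attenuated toward 1.0 relative to rr_true
```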

  18. Design and experimental study on Fresnel lens of the combination of equal-width and equal-height of grooves

    NASA Astrophysics Data System (ADS)

    Guo, Limin; Liu, Youqiang; Huang, Rui; Wang, Zhiyong

    2017-06-01

    High-concentration PV systems rely on large Fresnel lenses that must be precisely oriented toward the Sun to maintain a high concentration ratio. In this paper we propose a new Fresnel lens design method that combines equal-width and equal-height grooves, based on the principle of maximizing the energy in the focused spot. In the ring bands near the center of the Fresnel lens, the equal-width groove design is applied; once a given condition is reached, the equal-height groove design is used toward the edges of the lens, which ensures that all the lens grooves are planar. We build an example Fresnel lens model in SolidWorks and simulate it with ZEMAX. An experimental test platform was built, and the experiments confirm the simulation. The measured concentrating efficiency of this example is 69.3%, slightly lower than the simulated 75.1%.

  19. Alternative Asbestos Control Method and the Asbestos Releasability Research

    EPA Science Inventory

    The Alternative Asbestos Control Method shows promise in speed, cost, and efficiency if equally protective. ORD conducted a side-by-side test of the AACM vs. NESHAP on identical asbestos-containing buildings at Fort Chaffee. This abstract and presentation are based, at least in part, on pr...

  20. Thresholding histogram equalization.

    PubMed

    Chuang, K S; Chen, S; Hwang, I M

    2001-12-01

    The drawbacks of adaptive histogram equalization techniques are the loss of definition on the edges of the object and overenhancement of noise in the images. These drawbacks can be avoided if the noise is excluded in the equalization transformation function computation. A method has been developed to separate the histogram into zones, each with its own equalization transformation. This method can be used to suppress the nonanatomic noise and enhance only certain parts of the object. This method can be combined with other adaptive histogram equalization techniques. Preliminary results indicate that this method can produce images with superior contrast.
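    A minimal sketch of zone-wise equalization (hypothetical implementation; the paper's zone-selection and noise-suppression details are omitted):

```python
import numpy as np

def zoned_equalize(img, thresholds):
    """Equalize each histogram zone independently: pixels in a zone are
    remapped onto that zone's own intensity range, so values outside a
    zone (e.g. non-anatomic noise) are untouched by its transform."""
    edges = [0] + sorted(thresholds) + [256]
    out = img.copy()
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (img >= lo) & (img < hi)
        if not mask.any():
            continue
        vals = img[mask]
        hist, _ = np.histogram(vals, bins=hi - lo, range=(lo, hi))
        cdf = hist.cumsum() / hist.sum()
        out[mask] = (lo + cdf[vals - lo] * (hi - 1 - lo)).astype(img.dtype)
    return out
```

    To suppress a noise band entirely, one would simply skip the transform for that zone instead of equalizing it.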

  1. An EBSD Investigation of Ultrafine-Grain Titanium for Biomedical Applications

    DTIC Science & Technology

    2015-09-21

    The ultrafine-grain (UFG) material, produced for medical implants, was obtained by equal channel angular pressing (ECAP) using a Conform scheme followed by rod drawing. The microstructure was found to be bimodal, consisting of relatively coarse... [1-6]. The method is based on severe plastic deformation (SPD) and typically includes warm equal-channel angular pressing (ECAP) followed by either cold...

  2. Using ROC Curves to Choose Minimally Important Change Thresholds when Sensitivity and Specificity Are Valued Equally: The Forgotten Lesson of Pythagoras. Theoretical Considerations and an Example Application of Change in Health Status

    PubMed Central

    Froud, Robert; Abel, Gary

    2014-01-01

    Background Receiver Operating Characteristic (ROC) curves are being used to identify Minimally Important Change (MIC) thresholds on scales that measure a change in health status. In quasi-continuous patient-reported outcome measures, such as those that measure changes in chronic diseases with variable clinical trajectories, sensitivity and specificity are often valued equally. Although methodologists agree that the two should be valued equally, different approaches have been taken to estimating MIC thresholds using ROC curves. Aims and objectives We aimed to compare the approaches in use with a new approach, exploring the extent to which the methods choose different thresholds, and considering the effect of differences on conclusions in responder analyses. Methods Using graphical methods, hypothetical data, and data from a large randomised controlled trial of manual therapy for low back pain, we compared two existing approaches with a new approach based on the sum of squares of 1-sensitivity and 1-specificity. Results The thresholds chosen by different estimators can diverge. The cut-point selected by an estimator depends on the relationship between the cut-points in ROC space and the contour described by that estimator. In particular, asymmetry and the number of possible cut-points affect threshold selection. Conclusion The choice of MIC estimator is important. Different methods for choosing cut-points can lead to materially different MIC thresholds and thus affect the results of responder analyses and trial conclusions. An estimator based on the smallest sum of squares of 1-sensitivity and 1-specificity is preferable when sensitivity and specificity are valued equally. Unlike other methods currently in use, the cut-point chosen by the sum-of-squares method always and efficiently chooses the cut-point closest to the top-left corner of ROC space, regardless of the shape of the ROC curve. PMID:25474472
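    The sum-of-squares estimator amounts to picking the candidate closest to the top-left corner of ROC space; a minimal sketch with made-up candidate thresholds:

```python
def sum_of_squares_cutpoint(points):
    """Choose the ROC cut-point minimising (1-sens)^2 + (1-spec)^2,
    i.e. the point closest to the top-left corner of ROC space."""
    return min(points, key=lambda p: (1 - p[0]) ** 2 + (1 - p[1]) ** 2)

# (sensitivity, specificity) pairs for three candidate thresholds
candidates = [(0.95, 0.40), (0.80, 0.75), (0.60, 0.90)]
best = sum_of_squares_cutpoint(candidates)   # -> (0.80, 0.75)
```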

  3. Dispersive traveling wave solutions of the Equal-Width and Modified Equal-Width equations via mathematical methods and its applications

    NASA Astrophysics Data System (ADS)

    Lu, Dianchen; Seadawy, Aly R.; Ali, Asghar

    2018-06-01

    The Equal-Width and Modified Equal-Width equations are used as models, in the form of partial differential equations, for simulating one-dimensional wave transmission in nonlinear dispersive media. In this article we employ the extended simple equation method and the exp(-φ(ξ))-expansion method to construct exact traveling wave solutions of the Equal-Width and Modified Equal-Width equations. The obtained results are novel and have numerous applications in current areas of research in mathematical physics. Our methods, aided by symbolic computation, provide an effective and powerful mathematical tool for solving different kinds of nonlinear wave problems.
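    For reference, one common form of these model equations (coefficients and notation vary across the literature, so treat these as assumed rather than the paper's exact statement):

```latex
% Equal-Width (EW) equation
u_t + u\,u_x - \mu\,u_{xxt} = 0
% Modified Equal-Width (MEW) equation: the nonlinearity becomes cubic
u_t + u^2\,u_x - \mu\,u_{xxt} = 0
```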

  4. RIVERINE ASSESSMENT USING MACROINVERTEBRATES: ALL METHODS ARE NOT CREATED EQUAL

    EPA Science Inventory

    In 1999, we compared six benthic macroinvertebrate field sampling methods for nonwadeable streams based on those developed for three major programs (EMAP-SW, NAWQA, and Ohio EPA), at each of sixty sites across four tributaries to the Ohio River. Water chemistry samples and physi...

  5. A rational interpolation method to compute frequency response

    NASA Technical Reports Server (NTRS)

    Kenney, Charles; Stubberud, Stephen; Laub, Alan J.

    1993-01-01

    A rational interpolation method for approximating a frequency response is presented. The method is based on a product formulation of finite differences, thereby avoiding the numerical problems incurred by near-equal-valued subtraction. Also, resonant pole and zero cancellation schemes are developed that increase the accuracy and efficiency of the interpolation method. Selection techniques of interpolation points are also discussed.

  6. Automated retina identification based on multiscale elastic registration.

    PubMed

    Figueiredo, Isabel N; Moura, Susana; Neves, Júlio S; Pinto, Luís; Kumar, Sunil; Oliveira, Carlos M; Ramos, João D

    2016-12-01

    In this work we propose a novel method for identifying individuals based on retinal fundus image matching. The method is based on the image registration of retina blood vessels, since it is known that the retina vasculature of an individual is a signature, i.e., a distinctive pattern of the individual. The proposed image registration consists of a multiscale affine registration followed by a multiscale elastic registration. The major advantage of this particular two-step image registration procedure is that it is able to account for both rigid and non-rigid deformations either inherent to the retina tissues or as a result of the imaging process itself. Afterwards a decision identification measure, relying on a suitable normalized function, is defined to decide whether or not the pair of images belongs to the same individual. The method is tested on a data set of 21721 real pairs generated from a total of 946 retinal fundus images of 339 different individuals, consisting of patients followed in the context of different retinal diseases and also healthy patients. The evaluation of its performance reveals that it achieves a very low false rejection rate (FRR) at zero FAR (the false acceptance rate), equal to 0.084, as well as a low equal error rate (EER), equal to 0.053. Moreover, the tests performed by using only the multiscale affine registration, and discarding the multiscale elastic registration, clearly show the advantage of the proposed approach. The outcome of this study also indicates that the proposed method is reliable and competitive with other existing retinal identification methods, and forecasts its future appropriateness and applicability in real-life applications. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Unbiased nonorthogonal bases for tomographic reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sainz, Isabel; Klimov, Andrei B.; Roa, Luis

    2010-05-15

    We have developed a general method for constructing a set of nonorthogonal bases with equal separations between all different basis states in prime dimensions. It turns out that the corresponding biorthogonal counterparts are pairwise unbiased with respect to the components of the original bases. Using these bases, we derive an explicit expression for optimal tomography in nonorthogonal bases. A special two-dimensional case is analyzed separately.

  8. A feasible DY conjugate gradient method for linear equality constraints

    NASA Astrophysics Data System (ADS)

    LI, Can

    2017-09-01

    In this paper, we propose a feasible conjugate gradient method for solving linear equality constrained optimization problems. The method extends the Dai-Yuan conjugate gradient method to linear equality constrained optimization. It can be applied to large linear equality constrained problems owing to its low storage requirements. An attractive property of the method is that the generated direction is always a feasible descent direction. Under mild conditions, the global convergence of the proposed method with exact line search is established. Numerical experiments are also given which show the efficiency of the method.
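    A sketch of the underlying (unconstrained) Dai-Yuan update, with a fixed step in place of the exact line search and without the feasibility projection the paper adds for the constrained case:

```python
import numpy as np

def dai_yuan_cg(grad, x0, steps=200, lr=1e-2):
    """Unconstrained Dai-Yuan conjugate gradient sketch.
    beta_k = ||g_{k+1}||^2 / (d_k^T (g_{k+1} - g_k)),
    d_{k+1} = -g_{k+1} + beta_k d_k."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(steps):
        x = x + lr * d                  # fixed step instead of a line search
        g_new = grad(x)
        denom = d @ (g_new - g)
        beta = (g_new @ g_new) / denom if abs(denom) > 1e-12 else 0.0
        d = -g_new + beta * d
        g = g_new
    return x
```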

  9. Linear time-dependent reference intervals where there is measurement error in the time variable-a parametric approach.

    PubMed

    Gillard, Jonathan

    2015-12-01

    This article re-examines parametric methods for the calculation of time-specific reference intervals where there is measurement error present in the time covariate. Previously published work has commonly been based on the standard ordinary least squares approach, weighted where appropriate. This approach is in fact incorrect when measurement errors are present, and in this article we show that its use may, in certain cases, lead to referral patterns that vary with different values of the covariate. Thus, not all patients would be treated equally; some subjects would be more likely to be referred than others, violating the principle of equal treatment required by the International Federation for Clinical Chemistry. We show, by using measurement error models, that reference intervals are produced that satisfy the requirement of equal treatment for all subjects. © The Author(s) 2011.

  10. Properties of hypothesis testing techniques and (Bayesian) model selection for exploration-based and theory-based (order-restricted) hypotheses.

    PubMed

    Kuiper, Rebecca M; Nederhoff, Tim; Klugkist, Irene

    2015-05-01

    In this paper, the performance of six types of techniques for comparisons of means is examined. These six emerge from the distinction between the method employed (hypothesis testing, model selection using information criteria, or Bayesian model selection) and the set of hypotheses that is investigated (a classical, exploration-based set of hypotheses containing equality constraints on the means, or a theory-based limited set of hypotheses with equality and/or order restrictions). A simulation study is conducted to examine the performance of these techniques. We demonstrate that, if one has specific, a priori specified hypotheses, confirmation (i.e., investigating theory-based hypotheses) has advantages over exploration (i.e., examining all possible equality-constrained hypotheses). Furthermore, examining reasonable order-restricted hypotheses has more power to detect the true effect/non-null hypothesis than evaluating only equality restrictions. Additionally, when investigating more than one theory-based hypothesis, model selection is preferred over hypothesis testing. Because of the first two results, we further examine the techniques that are able to evaluate order restrictions in a confirmatory fashion by examining their performance when the homogeneity of variance assumption is violated. Results show that the techniques are robust to heterogeneity when the sample sizes are equal. When the sample sizes are unequal, the performance is affected by heterogeneity. The size and direction of the deviations from the baseline, where there is no heterogeneity, depend on the effect size (of the means) and on the trend in the group variances with respect to the ordering of the group sizes. Importantly, the deviations are less pronounced when the group variances and sizes exhibit the same trend (e.g., are both increasing with group number). © 2014 The British Psychological Society.

  11. Psychiatric Diagnostic Interviews for Children and Adolescents: A Comparative Study

    ERIC Educational Resources Information Center

    Angold, Adrian; Erkanli, Alaattin; Copeland, William; Goodman, Robert; Fisher, Prudence W.; Costello, E. Jane

    2012-01-01

    Objective: To compare examples of three styles of psychiatric interviews for youth: the Diagnostic Interview Schedule for Children (DISC) ("respondent-based"), the Child and Adolescent Psychiatric Assessment (CAPA) ("interviewer-based"), and the Development and Well-Being Assessment (DAWBA) ("expert judgment"). Method: Roughly equal numbers of…

  12. General hybrid projective complete dislocated synchronization with non-derivative and derivative coupling based on parameter identification in several chaotic and hyperchaotic systems

    NASA Astrophysics Data System (ADS)

    Sun, Jun-Wei; Shen, Yi; Zhang, Guo-Dong; Wang, Yan-Feng; Cui, Guang-Zhao

    2013-04-01

    According to the Lyapunov stability theorem, a new general hybrid projective complete dislocated synchronization scheme with non-derivative and derivative coupling based on parameter identification is proposed within the framework of drive-response systems. In previous hybrid synchronization schemes, every state variable of the response system equals the sum of the hybrid drive systems. In our method, by contrast, every state variable of the drive system equals the sum of the hybrid response systems as they evolve with time. Complete synchronization, hybrid dislocated synchronization, projective synchronization, non-derivative and derivative coupling, and parameter identification are included as special cases. The Lorenz chaotic system, Rössler chaotic system, memristor chaotic oscillator system, and hyperchaotic Lü system are discussed to show the effectiveness of the proposed methods.

  13. Video-based teleradiology for intraosseous lesions. A receiver operating characteristic analysis.

    PubMed

    Tyndall, D A; Boyd, K S; Matteson, S R; Dove, S B

    1995-11-01

    Private dental practitioners often lack immediate access to off-site expert diagnostic consultants regarding unusual radiographic findings or radiographic quality assurance issues. Teleradiology, a system for transmitting radiographic images, offers a potential solution to this problem. Although much research has been done to evaluate the feasibility and utilization of teleradiology systems in medical imaging, little research on dental applications has been performed. In this investigation, 47 panoramic films, with an equal distribution of images showing intraosseous jaw lesions and no disease, were viewed by a panel of observers using teleradiology and conventional viewing methods. The teleradiology system consisted of an analog video-based system simulating remote radiographic consultation between a general dentist and a dental imaging specialist. Conventional viewing consisted of traditional viewbox methods. Observers were asked to identify the presence or absence of 24 intraosseous lesions and to determine their locations. No statistically significant differences between modalities or among observers were identified at the 0.05 level. The results indicate that viewing intraosseous lesions on video-based panoramic images is equivalent to conventional light box viewing.

  14. Assessment of Physical Activity, Exercise Self-Efficacy, and Stages of Change in College Students Using a Street-Based Survey Method

    ERIC Educational Resources Information Center

    Leenders, Nicole Y. J. M.; Silver, Lorraine Wallace; White, Susan L.; Buckworth, Janet; Sherman, W. Michael

    2002-01-01

    This study assessed the level of physical activity, exercise self-efficacy, and stages of change for exercise behavior among college students at a large midwestern university using a street-based survey method. The 50% response rate produced 925 student responses comprising 95% as young (≤24 years of age), 53% female, and 79%…

  15. Improving the Accuracy of Quadrature Method Solutions of Fredholm Integral Equations That Arise from Nonlinear Two-Point Boundary Value Problems

    NASA Technical Reports Server (NTRS)

    Sidi, Avram; Pennline, James A.

    1999-01-01

    In this paper we are concerned with high-accuracy quadrature method solutions of nonlinear Fredholm integral equations of the form y(x) = r(x) + ∫₀¹ g(x,t) F(t, y(t)) dt, 0 ≤ x ≤ 1, where the kernel function g(x,t) is continuous, but its partial derivatives have finite jump discontinuities across x = t. Such integral equations arise, e.g., when one applies Green's function techniques to nonlinear two-point boundary value problems of the form y''(x) = f(x, y(x)), 0 ≤ x ≤ 1, with y(0) = y₀ and y(1) = y₁, or other linear boundary conditions. A quadrature method that is especially suitable and that has been employed for such equations is one based on the trapezoidal rule, which has low accuracy. By analyzing the corresponding Euler-Maclaurin expansion, we derive suitable correction terms that we add to the trapezoidal rule, thus obtaining new numerical quadrature formulas of arbitrarily high accuracy that we also use in defining quadrature methods for the integral equations above. We prove an existence and uniqueness theorem for the quadrature method solutions, and show that their accuracy is the same as that of the underlying quadrature formula. The solution of the nonlinear systems resulting from the quadrature methods is achieved through successive approximations whose convergence is also proved. The results are demonstrated with numerical examples.

  16. Improving the Accuracy of Quadrature Method Solutions of Fredholm Integral Equations that Arise from Nonlinear Two-Point Boundary Value Problems

    NASA Technical Reports Server (NTRS)

    Sidi, Avram; Pennline, James A.

    1999-01-01

    In this paper we are concerned with high-accuracy quadrature method solutions of nonlinear Fredholm integral equations of the form y(x) = r(x) + ∫₀¹ g(x,t) F(t, y(t)) dt, 0 ≤ x ≤ 1, where the kernel function g(x,t) is continuous, but its partial derivatives have finite jump discontinuities across x = t. Such integral equations arise, e.g., when one applies Green's function techniques to nonlinear two-point boundary value problems of the form y''(x) = f(x, y(x)), 0 ≤ x ≤ 1, with y(0) = y₀ and y(1) = y₁, or other linear boundary conditions. A quadrature method that is especially suitable and that has been employed for such equations is one based on the trapezoidal rule, which has low accuracy. By analyzing the corresponding Euler-Maclaurin expansion, we derive suitable correction terms that we add to the trapezoidal rule, thus obtaining new numerical quadrature formulas of arbitrarily high accuracy that we also use in defining quadrature methods for the integral equations above. We prove an existence and uniqueness theorem for the quadrature method solutions, and show that their accuracy is the same as that of the underlying quadrature formula. The solution of the nonlinear systems resulting from the quadrature methods is achieved through successive approximations whose convergence is also proved. The results are demonstrated with numerical examples.
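    The low-accuracy baseline that the correction terms improve on can be sketched as plain trapezoidal collocation with successive approximation (a smooth kernel is chosen for illustration, so no correction terms are needed here):

```python
import numpy as np

def solve_fredholm(r, g, F, n=64, iters=100):
    """Solve y(x) = r(x) + integral_0^1 g(x,t) F(t, y(t)) dt on an
    (n+1)-point grid using the trapezoidal rule and fixed-point
    (successive approximation) iteration."""
    x = np.linspace(0.0, 1.0, n + 1)
    w = np.full(n + 1, 1.0 / n)            # trapezoidal weights
    w[0] = w[-1] = 0.5 / n
    G = g(x[:, None], x[None, :])          # kernel matrix g(x_i, t_j)
    y = r(x)                               # initial guess
    for _ in range(iters):
        y = r(x) + G @ (w * F(x, y))
    return x, y
```

    With r(x) = 2x/3, g(x,t) = x·t and F(t,y) = y, the exact solution is y(x) = x, which the sketch reproduces up to the trapezoidal rule's O(1/n²) error.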

  17. Deflection corridors of abdomen and thorax in oblique side impacts using equal stress equal velocity approach: comparison with other normalization methods.

    PubMed

    Yoganandan, Narayan; Arun, Mike W J; Humm, John; Pintar, Frank A

    2014-10-01

    The first objective of the study was to determine the thorax and abdomen deflection time corridors using the equal stress equal velocity approach from oblique side impact sled tests with postmortem human surrogates fitted with chestbands. The second purpose of the study was to generate deflection time corridors using impulse momentum methods and determine which of these methods best suits the data. An anthropometry-specific load wall was used. Individual surrogate responses were normalized to standard midsize male anthropometry. Corridors from the equal stress equal velocity approach were very similar to those from impulse momentum methods, thus either method can be used for this data. Present mean and plus/minus one standard deviation abdomen and thorax deflection time corridors can be used to evaluate dummies and validate complex human body finite element models.

  18. Polarization independent thermally tunable erbium-doped fiber amplifier gain equalizer using a cascaded Mach-Zehnder coupler.

    PubMed

    Sahu, P P

    2008-02-10

    A thermally tunable erbium-doped fiber amplifier (EDFA) gain equalizer filter based on a compact point-symmetric cascaded Mach-Zehnder (CMZ) coupler is presented together with its mathematical model; the analysis shows the device to be polarization dependent owing to stress anisotropy caused by the local heating used for the thermo-optic phase change. A thermo-optic delay line structure with a stress-releasing groove is proposed and designed to reduce the polarization-dependent characteristics of the high-index-contrast point-symmetric delay line structure of the device. Thermal analysis using an implicit finite difference method shows that the temperature gradient of the proposed structure, which is mainly what releases the stress anisotropy, is approximately nine times larger than that of the conventional structure. It is also seen that the EDFA gain-equalized spectrum obtained with the point-symmetric CMZ device based on the proposed structure is almost polarization independent.

  19. Information granules in image histogram analysis.

    PubMed

    Wieclawek, Wojciech

    2018-04-01

    A concept of granular computing employed in intensity-based image enhancement is discussed. First, a weighted granular computing idea is introduced. Then, the implementation of this term in the image processing area is presented. Finally, multidimensional granular histogram analysis is introduced. The proposed approach is dedicated to digital images, especially to medical images acquired by Computed Tomography (CT). As the histogram equalization approach, this method is based on image histogram analysis. Yet, unlike the histogram equalization technique, it works on a selected range of the pixel intensity and is controlled by two parameters. Performance is tested on anonymous clinical CT series. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Research on Signature Verification Method Based on Discrete Fréchet Distance

    NASA Astrophysics Data System (ADS)

    Fang, J. L.; Wu, W.

    2018-05-01

    This paper proposes a multi-feature signature template based on the discrete Fréchet distance, which breaks through the limitation of traditional signature authentication based on a single signature feature. It addresses the heavy computational workload of extracting global-feature templates in online handwritten signature authentication, as well as the problem of unreasonable signature feature selection. In this experiment, the false acceptance rate (FAR) and false rejection rate (FRR) of the signatures are computed, and the average equal error rate (AEER) is calculated. The feasibility of the combined-template scheme is verified by comparing the average equal error rate of the combined template with that of the original template.
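    The discrete Fréchet distance itself has a classic dynamic-programming formulation (Eiter-Mannila); a sketch, independent of the paper's feature templates:

```python
from functools import lru_cache

def discrete_frechet(P, Q):
    """Discrete Frechet distance between two polylines given as lists of
    2-D points, via the Eiter-Mannila dynamic programme."""
    def d(i, j):
        return ((P[i][0] - Q[j][0]) ** 2 + (P[i][1] - Q[j][1]) ** 2) ** 0.5

    @lru_cache(maxsize=None)
    def c(i, j):
        # c(i, j): best achievable "leash length" coupling P[:i+1], Q[:j+1]
        if i == 0 and j == 0:
            return d(0, 0)
        if i == 0:
            return max(c(0, j - 1), d(0, j))
        if j == 0:
            return max(c(i - 1, 0), d(i, 0))
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d(i, j))

    return c(len(P) - 1, len(Q) - 1)
```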

  1. Comparing bias correction methods in downscaling meteorological variables for a hydrologic impact study in an arid area in China

    NASA Astrophysics Data System (ADS)

    Fang, G. H.; Yang, J.; Chen, Y. N.; Zammit, C.

    2015-06-01

    Water resources are essential to the ecosystem and social economy in the desert and oasis of the arid Tarim River basin, northwestern China, and expected to be vulnerable to climate change. It has been demonstrated that regional climate models (RCMs) provide more reliable results for a regional impact study of climate change (e.g., on water resources) than general circulation models (GCMs). However, due to their considerable bias it is still necessary to apply bias correction before they are used for water resources research. In this paper, after a sensitivity analysis on input meteorological variables based on the Sobol' method, we compared five precipitation correction methods and three temperature correction methods in downscaling RCM simulations applied over the Kaidu River basin, one of the headwaters of the Tarim River basin. Precipitation correction methods applied include linear scaling (LS), local intensity scaling (LOCI), power transformation (PT), distribution mapping (DM) and quantile mapping (QM), while temperature correction methods are LS, variance scaling (VARI) and DM. The corrected precipitation and temperature were compared to the observed meteorological data, prior to being used as meteorological inputs of a distributed hydrologic model to study their impacts on streamflow. 
The results show (1) streamflows are sensitive to precipitation, temperature and solar radiation but not to relative humidity and wind speed; (2) raw RCM simulations are heavily biased from observed meteorological data, and their use for streamflow simulations results in large biases from observed streamflow, while all bias correction methods effectively improved these simulations; (3) for precipitation, PT and QM methods performed equally best in correcting the frequency-based indices (e.g., standard deviation, percentile values) while the LOCI method performed best in terms of the time-series-based indices (e.g., Nash-Sutcliffe coefficient, R2); (4) for temperature, all correction methods performed equally well in correcting raw temperature; and (5) for simulated streamflow, precipitation correction methods have more significant influence than temperature correction methods, and the performances of streamflow simulations are consistent with those of corrected precipitation; i.e., the PT and QM methods performed equally best in correcting flow duration curve and peak flow while the LOCI method performed best in terms of the time-series-based indices. The case study is for an arid area in China based on a specific RCM and hydrologic model, but the methodology and some results can be applied to other areas and models.
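
    Two of the compared corrections can be sketched compactly: linear scaling (LS) and empirical quantile mapping (QM). The function names and the use of long-term reference statistics are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def linear_scaling(rcm, obs_mean, rcm_mean, variable="precip"):
    """Linear scaling (LS): multiplicative factor for precipitation,
    additive offset for temperature, derived from long-term means."""
    rcm = np.asarray(rcm, dtype=float)
    if variable == "precip":
        return rcm * (obs_mean / rcm_mean)
    return rcm + (obs_mean - rcm_mean)

def quantile_mapping(rcm, obs_ref, rcm_ref):
    """Empirical quantile mapping (QM): map each RCM value through the
    observed distribution at the same non-exceedance probability."""
    rcm = np.asarray(rcm, dtype=float)
    rcm_sorted = np.sort(np.asarray(rcm_ref, dtype=float))
    # Non-exceedance probability of each value under the RCM reference climate.
    p = np.searchsorted(rcm_sorted, rcm, side="right") / rcm_sorted.size
    return np.quantile(np.asarray(obs_ref, dtype=float), np.clip(p, 0.0, 1.0))
```

    LS preserves only the mean, which is why the study finds distribution-aware methods such as PT and QM better at frequency-based indices.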

  2. Fuzzy forecasting based on two-factors second-order fuzzy-trend logical relationship groups and the probabilities of trends of fuzzy logical relationships.

    PubMed

    Chen, Shyi-Ming; Chen, Shen-Wen

    2015-03-01

    In this paper, we present a new method for fuzzy forecasting based on two-factors second-order fuzzy-trend logical relationship groups and the probabilities of trends of fuzzy-trend logical relationships. Firstly, the proposed method fuzzifies the historical training data of the main factor and the secondary factor into fuzzy sets, respectively, to form two-factors second-order fuzzy logical relationships. Then, it groups the obtained two-factors second-order fuzzy logical relationships into two-factors second-order fuzzy-trend logical relationship groups. Then, it calculates the probability of the "down-trend," the probability of the "equal-trend" and the probability of the "up-trend" of the two-factors second-order fuzzy-trend logical relationships in each two-factors second-order fuzzy-trend logical relationship group, respectively. Finally, it performs the forecasting based on the probabilities of the down-trend, the equal-trend, and the up-trend of the two-factors second-order fuzzy-trend logical relationships in each two-factors second-order fuzzy-trend logical relationship group. We also apply the proposed method to forecast the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) and the NTD/USD exchange rates. The experimental results show that the proposed method outperforms the existing methods.
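
    The trend-probability step described above can be sketched with a small counting routine. The flat `(group, trend)` representation here is a simplification of the paper's two-factors second-order fuzzy-trend logical relationship groups:

```python
from collections import Counter, defaultdict

def trend_probabilities(relationships):
    """Estimate P(down), P(equal), P(up) within each fuzzy-trend group.

    `relationships` is a list of (group_key, trend) pairs, where trend is
    one of "down", "equal", "up" -- a simplified stand-in for the paper's
    grouped fuzzy-trend logical relationships.
    """
    counts = defaultdict(Counter)
    for group, trend in relationships:
        counts[group][trend] += 1
    probs = {}
    for group, c in counts.items():
        total = sum(c.values())
        probs[group] = {t: c[t] / total for t in ("down", "equal", "up")}
    return probs
```

    Forecasting then weights the candidate outcomes of a group by these probabilities rather than treating every observed relationship equally.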

  3. Research on Aircraft Target Detection Algorithm Based on Improved Radial Gradient Transformation

    NASA Astrophysics Data System (ADS)

    Zhao, Z. M.; Gao, X. M.; Jiang, D. N.; Zhang, Y. Q.

    2018-04-01

    Aiming at the problem that targets may appear with different orientations in unmanned aerial vehicle (UAV) images, the target detection algorithm based on rotation-invariant features is studied, and this paper proposes a RIFF (Rotation-Invariant Fast Features) method accelerated by a lookup table and polar coordinates for aircraft target detection. The experiments show that the detection performance of this method is basically equal to that of the original RIFF, while the operation efficiency is greatly improved.

  4. Optimal quantum cloning based on the maximin principle by using a priori information

    NASA Astrophysics Data System (ADS)

    Kang, Peng; Dai, Hong-Yi; Wei, Jia-Hua; Zhang, Ming

    2016-10-01

    We propose an optimal 1 →2 quantum cloning method based on the maximin principle by making full use of a priori information of amplitude and phase about the general cloned qubit input set, which is a simply connected region enclosed by a "longitude-latitude grid" on the Bloch sphere. Theoretically, the fidelity of the optimal quantum cloning machine derived from this method is the largest in terms of the maximin principle compared with that of any other machine. The problem solving is an optimization process that involves six unknown complex variables, six vectors in an uncertain-dimensional complex vector space, and four equality constraints. Moreover, by restricting the structure of the quantum cloning machine, the optimization problem is simplified as a three-real-parameter suboptimization problem with only one equality constraint. We obtain the explicit formula for a suboptimal quantum cloning machine. Additionally, the fidelity of our suboptimal quantum cloning machine is higher than or at least equal to that of universal quantum cloning machines and phase-covariant quantum cloning machines. It is also underlined that the suboptimal cloning machine outperforms the "belt quantum cloning machine" for some cases.

  5. A method of selecting forest sites for air pollution study

    Treesearch

    Sreedevi K. Bringi; Thomas A. Seliga; Leon S. Dochinger

    1981-01-01

    Presents a method of selecting suitable forested areas for meaningful assessments of air pollution effects. The approach is based on the premise that environmental influences can significantly affect the forest-air pollution relationship, and that it is, therefore, desirable to equalize such influences at different sites. From existing data on environmental factors and...

  6. Compilation of load spectrum of loader drive axle

    NASA Astrophysics Data System (ADS)

    Wei, Yongxiang; Zhu, Haoyue; Tang, Heng; Yuan, Qunwei

    2018-03-01

    In order to study the preparation method of the gear fatigue load spectrum for loaders, the load signals of four typical working conditions of a loader were collected. Signals that reflect the law of load change were obtained by preprocessing the original signals. The torque of the drive axle was calculated using the rain-flow counting method. According to the operating-time ratio of each working condition, a two-dimensional load spectrum based on the real working conditions of the loader drive axle was established by cycle extrapolation and synthesis. The two-dimensional load spectrum was converted into a one-dimensional load spectrum by applying the equal-damage method to the torque mean. The torque was amplified to include the maximum load torque of the main reduction gear. Based on the theory of equal damage, the accelerated cycles were calculated. In this way, a load spectrum that reflects the loading conditions of the loader drive axle was prepared. The load spectrum can provide a reference for fatigue life tests and life prediction of loader drive axles.

  7. Equalization filters for multiple-channel electromyogram arrays

    PubMed Central

    Clancy, Edward A.; Xia, Hongfang; Christie, Anita; Kamen, Gary

    2007-01-01

    Multiple channels of electromyogram activity are frequently transduced via electrodes, then combined electronically to form one electrophysiologic recording, e.g. bipolar, linear double difference and Laplacian montages. For high quality recordings, precise gain and frequency response matching of the individual electrode potentials is achieved in hardware (e.g., an instrumentation amplifier for bipolar recordings). This technique works well when the number of derived signals is small and the montages are pre-determined. However, for array electrodes employing a variety of montages, hardware channel matching can be expensive and tedious, and limits the number of derived signals monitored. This report describes a method for channel matching based on the concept of equalization filters. Monopolar potentials are recorded from each site without precise hardware matching. During a calibration phase, a time-varying linear chirp voltage is applied simultaneously to each site and recorded. Based on the calibration recording, each monopolar channel is digitally filtered to “correct” for (equalize) differences in the individual channels before any derived montages are created. In a hardware demonstration system, the common mode rejection ratio (at 60 Hz) of bipolar montages improved from 35.2 ± 5.0 dB (prior to channel equalization) to 69.0 ± 5.0 dB (after equalization). PMID:17614134
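
    A minimal sketch of the calibration step, assuming a least-squares FIR fit of each channel toward one reference recording of the shared chirp (the paper's exact filter design may differ):

```python
import numpy as np

def fit_equalizer(reference, channel, n_taps=16):
    """Least-squares FIR fit: find taps h so that (channel * h) approximates
    the reference recording of the same calibration chirp."""
    reference = np.asarray(reference, dtype=float)
    channel = np.asarray(channel, dtype=float)
    n = channel.size
    # Convolution matrix: column k holds the channel delayed by k samples.
    X = np.zeros((n, n_taps))
    for k in range(n_taps):
        X[k:, k] = channel[:n - k]
    h, *_ = np.linalg.lstsq(X, reference, rcond=None)
    return h
```

    After calibration, filtering every monopolar channel with its own `h` equalizes gain and frequency response, so any montage can be formed digitally from the corrected channels.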

  8. Reconstructing the Sky Location of Gravitational-Wave Detected Compact Binary Systems: Methodology for Testing and Comparison

    NASA Technical Reports Server (NTRS)

    Sidney, T.; Aylott, B.; Christensen, N.; Farr, B.; Farr, W.; Feroz, F.; Gair, J.; Grover, K.; Graff, P.; Hanna, C.

    2014-01-01

    The problem of reconstructing the sky position of compact binary coalescences detected via gravitational waves is a central one for future observations with the ground-based network of gravitational-wave laser interferometers, such as Advanced LIGO and Advanced Virgo. Different techniques for sky localization have been independently developed. They can be divided into two broad categories: fully coherent Bayesian techniques, which are high latency and aimed at in-depth studies of all the parameters of a source, including sky position, and "triangulation-based" techniques, which exploit the data products from the search stage of the analysis to provide an almost real-time approximation of the posterior probability density function of the sky location of a detection candidate. These techniques have previously been applied to data collected during the last science runs of gravitational-wave detectors operating in the so-called initial configuration. Here, we develop and analyze methods for assessing the self-consistency of parameter estimation methods and carrying out fair comparisons between different algorithms, addressing issues of efficiency and optimality. These methods are general and can be applied to parameter estimation problems other than sky localization. We apply these methods to two existing sky localization techniques representing the two above-mentioned categories, using a set of simulated inspiral-only signals from compact binary systems with a total mass of 20 solar masses or less and nonspinning components. We compare the relative advantages and costs of the two techniques and show that sky location uncertainties are on average a factor of approximately 20 smaller for fully coherent techniques than for the specific variant of the triangulation-based technique used during the last science runs, at the expense of a factor of approximately 1000 longer processing time.

  9. An anti-barotrauma system for preventing barotrauma during hyperbaric oxygen therapy.

    PubMed

    Song, Moon; Hoon, Se Jeon; Shin, Tae Min

    2018-01-01

    In the present study, a tympanometry-based anti-barotrauma (ABT) device was designed using eardrum admittance measurements to develop an objective method of preventing barotrauma that occurs during hyperbaric oxygen (HBO₂) therapy. The middle ear space requires active equalization, and barotrauma of these tissues during HBO₂ therapy constitutes the most common treatment-associated injury. Decongestant nasal sprays and nasal steroids are used, but their efficacy in preventing middle ear barotrauma (MEB) during HBO₂ treatment is questionable. Accordingly, a tympanometry-based ABT device was designed using eardrum admittance measurements to develop an objective method for preventing MEB, which causes pain and injury and represents one of the principal reasons for patients to stop treatment. This study was conducted to test a novel technology that can be used to measure transmembrane pressures and provide chamber attendants with real-time feedback regarding the patient's equalization status prior to the onset of pain or injury. Eardrum admittance values were measured according to pressure changes inside a hyperbaric oxygen chamber while the system was fitted to the subject. When the pressure increased to above 200 daPa, eardrum admittance decreased to 16.255% of prepressurization levels. After pressure equalization was achieved, eardrum admittance recovered to 95.595% of prepressurization levels. A one-way repeated measures analysis of variance contrast test was performed on eardrum admittance before pressurization versus during pressurization, and before pressurization versus after pressure equalization. The analysis revealed significant differences at all points during pressurization (P<0.001), but no significant difference after pressure equalization was achieved. This ABT device can provide objective feedback reflecting eardrum condition to the patient and the chamber operator during HBO₂ therapy. Copyright© Undersea and Hyperbaric Medical Society.

  10. Hydrometallurgical recovery of germanium from coal gasification fly ash: pilot plant scale evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arroyo, F.; Fernandez-Pereira, C.; Olivares, J.

    2009-04-15

    In this article, a hydrometallurgical method for the selective recovery of germanium from fly ash (FA) has been tested at pilot plant scale. The pilot plant flowsheet comprised a first stage of water leaching of FA and a subsequent selective recovery of the germanium from the leachate by a solvent extraction method. The solvent extraction method was based on Ge complexation with catechol in an aqueous solution, followed by the extraction of the Ge-catechol complex (Ge(C₆H₄O₂)₃²⁻) with an extracting organic reagent (trioctylamine) diluted in an organic solvent (kerosene), followed by the subsequent stripping of the organic extract. The process has been tested on a FA generated in an integrated gasification combined cycle (IGCC) process. The paper describes the designed 5 kg/h pilot plant and the tests performed on it. Under the operational conditions tested, approximately 50% of the germanium could be recovered from FA after a water extraction at room temperature. Regarding the solvent extraction method, the best operational conditions for obtaining a concentrated germanium-bearing solution practically free of impurities were as follows: extraction time equal to 20 min; aqueous phase/organic phase volumetric ratio equal to 5; stripping with 1 M NaOH; stripping time equal to 30 min; and stripping phase/organic phase volumetric ratio equal to 5. 95% of the germanium was recovered from the water leachates under those conditions.

  11. The Application of FIA-based Data to Wildlife Habitat Modeling: A Comparative Study

    Treesearch

    Thomas C., Jr. Edwards; Gretchen G. Moisen; Tracey S. Frescino; Randall J. Schultz

    2005-01-01

    We evaluated the capability of two types of models, one based on spatially explicit variables derived from FIA data and one using so-called traditional habitat evaluation methods, for predicting the presence of cavity-nesting bird habitat in Fishlake National Forest, Utah. Both models performed equally well, in measures of predictive accuracy, with the FIA-based model...

  12. A combined electronegativity equalization and electrostatic potential fit method for the determination of atomic point charges.

    PubMed

    Berente, Imre; Czinki, Eszter; Náray-Szabó, Gábor

    2007-09-01

    We report an approach for the determination of atomic monopoles of macromolecular systems using connectivity and geometry parameters alone. The method is appropriate also for the calculation of charge distributions based on the quantum mechanically determined wave function and does not suffer from the mathematical instability of other electrostatic potential fit methods. Copyright 2007 Wiley Periodicals, Inc.
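
    The equalization step at the heart of such methods reduces to a linear system: every atom's effective electronegativity is forced to a common value, subject to total-charge conservation. A minimal atom-only sketch with placeholder parameters (ABEEM additionally partitions bond regions, and the paper's calibrated parameter set is not reproduced here):

```python
import numpy as np

def eem_charges(chi, eta, coords, total_charge=0.0):
    """Solve the electronegativity equalization (EEM) linear system for
    atomic point charges.

    chi: atomic electronegativity parameters; eta: hardness parameters;
    coords: (n, 3) Cartesian coordinates. Values are placeholders; units
    are assumed consistent (e.g., atomic units).
    """
    chi = np.asarray(chi, dtype=float)
    eta = np.asarray(eta, dtype=float)
    coords = np.asarray(coords, dtype=float)
    n = chi.size
    A = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    for i in range(n):
        A[i, i] = 2.0 * eta[i]
        for j in range(n):
            if i != j:
                A[i, j] = 1.0 / np.linalg.norm(coords[i] - coords[j])
        A[i, n] = -1.0          # common equalized electronegativity (unknown)
        b[i] = -chi[i]
    A[n, :n] = 1.0              # charge conservation: sum(q) = total_charge
    b[n] = total_charge
    sol = np.linalg.solve(A, b)
    return sol[:n]              # charges; sol[n] is the molecular electronegativity
```

    For two atoms with equal hardness, charge flows toward the more electronegative one, as expected.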

  13. Impact of equalizing currents on losses and torque ripples in electrical machines with fractional slot concentrated windings

    NASA Astrophysics Data System (ADS)

    Toporkov, D. M.; Vialcev, G. B.

    2017-10-01

    The implementation of parallel branches is a commonly used manufacturing method for realizing fractional slot concentrated windings in electrical machines. If rotor eccentricity is present in a machine with parallel branches, equalizing currents can arise. A simulation approach for the equalizing currents in the parallel branches of an electrical machine winding, based on magnetic field calculation using the Finite Element Method, is discussed in the paper. The high accuracy of the model is provided by dynamically updating the inductances in the differential equation system describing the machine. Pre-computed tabulated flux-linkage functions are used for this purpose: they give the dependence of the flux linkage of the parallel branches on the branch currents and the rotor position angle, and they permit self-inductances and mutual inductances to be calculated by partial differentiation. Calculated results obtained for an electric machine specimen are presented. The results show that an adverse combination of design solutions and rotor eccentricity leads to high equalizing currents and winding heating. Additional torque ripples also arise; their harmonic content is not similar to that of the cogging torque or of the ripples caused by the rotor eccentricity.

  14. Dynamics of total electron content distribution during strong geomagnetic storms

    NASA Astrophysics Data System (ADS)

    Astafyeva, E. I.; Afraimovich, E. L.; Kosogorov, E. A.

    We have worked out a new method of mapping the displacement velocity of total electron content (TEC) equal lines. The method is based on the technique of global absolute vertical TEC mapping (Global Ionospheric Maps, GIM). GIM with 2-hour time resolution are available from the Internet (ftp://cddisa.gsfc.nasa.gov) in the standard IONEX file format. We determine the absolute value of the displacement velocity, as well as its wave vector orientation, from the increments of the TEC x and y derivatives and the TEC time derivative for each standard GIM cell (5° in longitude by 2.5° in latitude). Thus we not only observe the global traveling of TEC equal lines but can also estimate the velocity of this traveling. Using the new method we observed an anomalously rapid accumulation of ionospheric plasma in a confined area due to the depletion of ionization over other spacious territories. During the main phase of the geomagnetic storm on 29-30 October 2003, very large TEC enhancements appeared in the southwest of North America; the TEC value in that area reached up to 200 TECU (1 TECU = 10^16 m^-2). It was found that the maximal velocity of the TEC equal-line motion exceeded 1500 m/s, and the mean velocity was about 400 m/s. The azimuths of the wave vectors of the TEC equal lines were oriented toward the center of the region with anomalously high TEC values, the southwest of North America. It should be noted that the maximal TEC value during geomagnetically quiet conditions is about 60-80 TECU...
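
    The equal-line velocity estimate follows standard level-set kinematics, v = -(∂TEC/∂t)/|∇TEC|, directed along the spatial gradient. A sketch on a regular grid (map-projection handling of the GIM cells is omitted, and the function name is illustrative):

```python
import numpy as np

def equal_line_velocity(tec_t0, tec_t1, dt, dx, dy):
    """Displacement velocity of TEC equal (iso) lines on a regular grid:
    v = -(dTEC/dt) / |grad TEC|, directed along the spatial gradient."""
    tec_t0 = np.asarray(tec_t0, dtype=float)
    tec_t1 = np.asarray(tec_t1, dtype=float)
    dtec_dt = (tec_t1 - tec_t0) / dt
    # Spatial gradient of the time-centered map; axis 0 spacing dy, axis 1 dx.
    gy, gx = np.gradient(0.5 * (tec_t0 + tec_t1), dy, dx)
    grad_mag = np.hypot(gx, gy)
    speed = np.divide(-dtec_dt, grad_mag,
                      out=np.zeros_like(grad_mag), where=grad_mag > 0)
    azimuth = np.arctan2(gx, gy)  # wave-vector azimuth, clockwise from +y
    return speed, azimuth
```

    For a TEC pattern translating uniformly, every cell reports the translation speed, which mirrors how the equal-line maps above were derived from the GIM time series.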

  15. Integrated Force Method Solution to Indeterminate Structural Mechanics Problems

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Hopkins, Dale A.; Halford, Gary R.

    2004-01-01

    Strength of materials problems have been classified into determinate and indeterminate problems. Determinate analysis, primarily based on the equilibrium concept, is well understood. Solving indeterminate problems requires additional compatibility conditions, and their comprehension has not been complete. A solution to an indeterminate problem is generated by manipulating the equilibrium concept, either by rewriting it in the displacement variables or through the cutting and closing-gap technique of the redundant force method. Compatibility improvisation has made analysis cumbersome. The authors have researched and understood the compatibility theory. Solutions can be generated with equal emphasis on the equilibrium and compatibility concepts. This technique is called the Integrated Force Method (IFM). Forces are the primary unknowns of IFM; displacements are back-calculated from forces. IFM equations are manipulated to obtain the Dual Integrated Force Method (IFMD). Displacement is the primary variable of IFMD and force is back-calculated. The subject is introduced through response variables: force, deformation, displacement; and underlying concepts: equilibrium equation, force-deformation relation, deformation-displacement relation, and compatibility condition. Mechanical load, temperature variation, and support settling are equally emphasized. The basic theory is discussed. A set of examples illustrates the new concepts. IFM- and IFMD-based finite element methods are introduced for simple problems.

  16. Finger vein recognition based on finger crease location

    NASA Astrophysics Data System (ADS)

    Lu, Zhiying; Ding, Shumeng; Yin, Jing

    2016-07-01

    Finger vein recognition technology has significant advantages over other methods in terms of accuracy, uniqueness, and stability, and it has wide promising applications in the field of biometric recognition. We propose using finger creases to locate and extract an object region. Then we use linear fitting to overcome the problem of finger rotation in the plane. The method of modular adaptive histogram equalization (MAHE) is presented to enhance image contrast and reduce computational cost. To extract the finger vein features, we use a fusion method, which can obtain clear and distinguishable vein patterns under different conditions. We used the Hausdorff average distance algorithm to examine the recognition performance of the system. The experimental results demonstrate that MAHE can better balance the recognition accuracy and the expenditure of time compared with three other methods. Our resulting equal error rate throughout the total procedure was 3.268% in a database of 153 finger vein images.
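
    A bare-bones block-wise equalization conveys the "modular" idea; the paper's MAHE adapts per-module parameters and avoids the block artifacts this naive sketch produces (the function name and tile layout are illustrative):

```python
import numpy as np

def tile_equalize(img, tiles=(4, 4)):
    """Equalize each tile of an 8-bit image independently.

    A naive sketch of modular (block-wise) histogram equalization; it
    illustrates the per-module processing, not the adaptive part of MAHE.
    """
    out = np.empty_like(img)
    h_step = img.shape[0] // tiles[0]
    w_step = img.shape[1] // tiles[1]
    for i in range(tiles[0]):
        for j in range(tiles[1]):
            r0, c0 = i * h_step, j * w_step
            r1 = img.shape[0] if i == tiles[0] - 1 else r0 + h_step
            c1 = img.shape[1] if j == tiles[1] - 1 else c0 + w_step
            block = img[r0:r1, c0:c1]
            # Per-tile histogram, CDF, and remapping to [0, 255].
            hist, _ = np.histogram(block, bins=256, range=(0, 256))
            cdf = hist.cumsum() / block.size
            out[r0:r1, c0:c1] = (cdf[block] * 255).astype(img.dtype)
    return out
```

    Processing each module separately is what lets local vein contrast be enhanced without being dominated by the global intensity distribution.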

  17. Beam splitter and method for generating equal optical path length beams

    DOEpatents

    Qian, Shinan; Takacs, Peter

    2003-08-26

    The present invention is a beam splitter for splitting an incident beam into first and second beams so that the first and second beams have a fixed separation and are parallel upon exiting. The beam splitter includes a first prism, a second prism, and a film located between the prisms. The first prism is defined by a first thickness and a first perimeter which has a first major base. The second prism is defined by a second thickness and a second perimeter which has a second major base. The film is located between the first major base and the second major base for splitting the incident beam into the first and second beams. The first and second perimeters are right angle trapezoidal shaped. The beam splitter is configured for generating equal optical path length beams.

  18. Design of integrated all optical digital to analog converter (DAC) using 2D photonic crystals

    NASA Astrophysics Data System (ADS)

    Moniem, Tamer A.; El-Din, Eman S.

    2017-11-01

    A novel design of an all-optical 3-bit digital-to-analog converter (DAC) is presented in this paper, based on two-dimensional photonic crystals (PhC). The proposed structure is based on photonic crystal ring resonators (PCRR) combined with the nonlinear Kerr effect on the PCRR. The total size of the proposed optical 3-bit DAC is 44 μm × 37 μm on a 2D square lattice of silicon rods with refractive index equal to 3.4. The finite-difference time-domain (FDTD) and plane wave expansion (PWE) methods are used to verify the overall operation of the proposed optical DAC.

  19. ASSESSING SUSCEPTIBILITY FROM EARLY-LIFE EXPOSURE TO CARCINOGENS

    EPA Science Inventory

    Cancer risks from childhood exposures to chemicals are generally analyzed using methods based upon adult exposures, which assume chemicals are equally potent for inducing risks at these different lifestages. Published literature was evaluated to determine whether there was...

  20. Hydration of Li+ -ion in atom-bond electronegativity equalization method-7P water: a molecular dynamics simulation study.

    PubMed

    Li, Xin; Yang, Zhong-Zhi

    2005-02-22

    We have carried out molecular dynamics simulations of a Li(+) ion in water over a wide range of temperature (from 248 to 368 K). The simulations make use of the atom-bond electronegativity equalization method-7P water model, a seven-site flexible model with fluctuating charges, which has accurately reproduced many bulk water properties. The recently constructed Li(+)-water interaction potential through fitting to the experimental and ab initio gas-phase binding energies and to the measured structures for Li(+)-water clusters is adopted in the simulations. ABEEM was proposed and developed in terms of partitioning the electron density into atom and bond regions and using the electronegativity equalization method (EEM) and the density functional theory (DFT). Based on a combination of the atom-bond electronegativity equalization method and molecular mechanics (ABEEM/MM), a new set of water-water and Li(+)-water potentials, successfully applied to ionic clusters Li(+)(H(2)O)(n)(n=1-6,8), are further investigated in an aqueous solution of Li(+) in the present paper. Two points must be emphasized in the simulations: first, the model allows for the charges on the interacting sites fluctuating as a function of time; second, the ABEEM-7P model has applied the parameter k(lp,H)(R(lp,H)) to explicitly describe the short-range interaction of hydrogen bond in the hydrogen bond interaction region, and has a new description for the hydrogen bond. The static, dynamic, and thermodynamic properties have been studied in detail. In addition, at different temperatures, the structural properties such as radial distribution functions, and the dynamical properties such as diffusion coefficients and residence times of the water molecules in the first hydration shell of Li(+), are also simulated well. 
These simulation results show that the ABEEM/MM-based water-water and Li(+)-water potentials appear to be robust giving the overall characteristic hydration properties in excellent agreement with experiments and other molecular dynamics simulations on similar system.

  1. Visibility Equalizer Cutaway Visualization of Mesoscopic Biological Models.

    PubMed

    Le Muzic, M; Mindek, P; Sorger, J; Autin, L; Goodsell, D; Viola, I

    2016-06-01

    In scientific illustrations and visualization, cutaway views are often employed as an effective technique for occlusion management in densely packed scenes. We propose a novel method for authoring cutaway illustrations of mesoscopic biological models. In contrast to the existing cutaway algorithms, we take advantage of the specific nature of the biological models. These models consist of thousands of instances with a comparably smaller number of different types. Our method constitutes a two-stage process. In the first step, clipping objects are placed in the scene, creating a cutaway visualization of the model. During this process, a hierarchical list of stacked bars informs the user about the instance visibility distribution of each individual molecular type in the scene. In the second step, the visibility of each molecular type is fine-tuned through these bars, which at this point act as interactive visibility equalizers. An evaluation of our technique with domain experts confirmed that our equalizer-based approach for visibility specification was valuable and effective for both scientific and educational purposes.

  2. Visibility Equalizer Cutaway Visualization of Mesoscopic Biological Models

    PubMed Central

    Le Muzic, M.; Mindek, P.; Sorger, J.; Autin, L.; Goodsell, D.; Viola, I.

    2017-01-01

    In scientific illustrations and visualization, cutaway views are often employed as an effective technique for occlusion management in densely packed scenes. We propose a novel method for authoring cutaway illustrations of mesoscopic biological models. In contrast to the existing cutaway algorithms, we take advantage of the specific nature of the biological models. These models consist of thousands of instances with a comparably smaller number of different types. Our method constitutes a two-stage process. In the first step, clipping objects are placed in the scene, creating a cutaway visualization of the model. During this process, a hierarchical list of stacked bars informs the user about the instance visibility distribution of each individual molecular type in the scene. In the second step, the visibility of each molecular type is fine-tuned through these bars, which at this point act as interactive visibility equalizers. An evaluation of our technique with domain experts confirmed that our equalizer-based approach for visibility specification was valuable and effective for both scientific and educational purposes. PMID:28344374

  3. An efficient multilevel optimization method for engineering design

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.; Yang, Y. J.; Kim, D. S.

    1988-01-01

    An efficient multilevel design optimization technique is presented. The proposed method is based on the concept of providing linearized information between the system-level and subsystem-level optimization tasks. The advantages of the method are that it does not require optimum sensitivities, nonlinear equality constraints are not needed, and the method is relatively easy to use. The disadvantage is that the coupling between subsystems is not dealt with in a precise mathematical manner.

  4. Microsphere-based gradient implants for osteochondral regeneration: a long-term study in sheep

    PubMed Central

    Mohan, Neethu; Gupta, Vineet; Sridharan, Banu Priya; Mellott, Adam J; Easley, Jeremiah T; Palmer, Ross H; Galbraith, Richard A; Key, Vincent H; Berkland, Cory J; Detamore, Michael S

    2015-01-01

    Background: The microfracture technique for cartilage repair has limited ability to regenerate hyaline cartilage. Aim: The current study made a direct comparison between microfracture and an osteochondral approach with microsphere-based gradient plugs. Materials & methods: The PLGA-based scaffolds had opposing gradients of chondroitin sulfate and β-tricalcium phosphate. A 1-year repair study in sheep was conducted. Results: The repair tissues in the microfracture were mostly fibrous and had scattered fissures with degenerative changes. Cartilage regenerated with the gradient plugs had equal or superior mechanical properties; had lacunated cells and stable matrix as in hyaline cartilage. Conclusion: This first report of gradient scaffolds in a long-term, large animal, osteochondral defect demonstrated potential for equal or better cartilage repair than microfracture. PMID:26418471

  5. Shear, principal, and equivalent strains in equal-channel angular deformation

    NASA Astrophysics Data System (ADS)

    Xia, K.; Wang, J.

    2001-10-01

    The shear and principal strains involved in equal channel angular deformation (ECAD) were analyzed using a variety of methods. A general expression for the total shear strain calculated by integrating infinitesimal strain increments gave the same result as that from simple geometric considerations. The magnitude and direction of the accumulated principal strains were calculated based on a geometric and a matrix algebra method, respectively. For an intersecting angle of π/2, the maximum normal strain is 0.881 in the direction at π/8 (22.5 deg) from the longitudinal direction of the material in the exit channel. The direction of the maximum principal strain should be used as the direction of grain elongation. Since the principal direction of strain rotates during ECAD, the total shear strain and principal strains so calculated do not have the same meaning as those in a strain tensor. Consequently, the “equivalent” strain based on the second invariant of a strain tensor is no longer an invariant. Indeed, the equivalent strains calculated using the total shear strain and that using the total principal strains differed as the intensity of deformation increased. The method based on matrix algebra is potentially useful in mathematical analysis and computer calculation of ECAD.
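The matrix-algebra route the abstract describes can be checked numerically. The sketch below (illustrative, not the authors' code) builds the simple-shear deformation gradient for one pass with an intersecting angle of π/2 (total shear strain γ = 2) and recovers both quoted numbers: a maximum logarithmic principal strain of 0.881 directed at 22.5° (π/8) from the exit-channel axis.

```python
import numpy as np

# One ECAD/ECAP pass with channel intersection angle pi/2 is a simple shear
# with total shear strain gamma = 2.
gamma = 2.0
F = np.array([[1.0, gamma],
              [0.0, 1.0]])           # deformation gradient of simple shear

# Left Cauchy-Green tensor B = F F^T: eigenvalues are squared principal
# stretches, eigenvectors the principal directions in the deformed material.
B = F @ F.T
evals, evecs = np.linalg.eigh(B)     # ascending eigenvalues
lam_max = np.sqrt(evals[-1])         # largest principal stretch = 1 + sqrt(2)
eps_max = np.log(lam_max)            # true (logarithmic) principal strain

v = evecs[:, -1]
if v[0] < 0:                         # fix eigenvector sign for the angle
    v = -v
angle = np.degrees(np.arctan2(v[1], v[0]))  # w.r.t. exit-channel axis

print(round(float(eps_max), 3), round(float(angle), 1))  # 0.881 22.5
```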

  6. Randomized, Controlled Trial of CBT Training for PTSD Providers

    DTIC Science & Technology

    2016-10-01

implement and evaluate a cost-effective, web-based, self-paced training program to provide skills-oriented continuing education for mental health...professionals. The objective is to learn whether novel, internet-based training methods, with or without web-centered supervision, may provide an...condition: a) Web-based training plus web-centered supervision; b) Web-based training alone; and c) Training-as-usual control group. An equal number of

  7. Parental share in public and domestic spheres: a population study on gender equality, death, and sickness.

    PubMed

    Månsdotter, Anna; Lindholm, Lars; Lundberg, Michael; Winkvist, Anna; Ohman, Ann

    2006-07-01

Examine the relation between aspects of gender equality and population health, based on the premise that sex differences in health are mainly caused by the gender system. All Swedish couples (98 240 people) who had their first child together in 1978. The exposure of gender equality is shown by the parents' division of income and occupational position (public sphere), and parental leave and temporary child care (domestic sphere). People were classified by these indicators during 1978-1980 into different categories: those on an equal footing with their partner and those who were traditionally or untraditionally unequal. Health is measured by the outcomes of death during 1981-2001 and sickness absence during 1986-2000. Data are obtained by linking individual information from various national sources. The statistical method used is multiple logistic regression with odds ratios as estimates of relative risks. In the public sphere, traditionally unequal women have decreased health risks compared with equal women, while traditionally unequal men tend to have increased health risks compared with equal men. In the domestic sphere, both women and men run higher risks of death and sickness when traditionally unequal compared with equal. Understanding the relation between gender equality and health, which was found to depend on sex, life sphere, and inequality type, seems to require a combination of the hypotheses of convergence, stress, and expansion.

  8. Automated segmentation and isolation of touching cell nuclei in cytopathology smear images of pleural effusion using distance transform watershed method

    NASA Astrophysics Data System (ADS)

    Win, Khin Yadanar; Choomchuay, Somsak; Hamamoto, Kazuhiko

    2017-06-01

The automated segmentation of cell nuclei is an essential stage in the quantitative image analysis of cell nuclei extracted from smear cytology images of pleural fluid. Cell nuclei can indicate cancer, as their characteristics are associated with cell proliferation and malignancy in terms of size, shape, and stain color. Nevertheless, automatic nuclei segmentation has remained challenging due to artifacts caused by slide preparation, nuclei heterogeneity such as poor contrast and inconsistent stain color, cell variation, and cell overlap. In this paper, we propose a watershed-based method capable of segmenting the nuclei of a variety of cells in cytology pleural fluid smear images. First, the original image is converted to grayscale and enhanced by adjusting and equalizing the intensity using histogram equalization. Next, the cell nuclei are segmented by Otsu thresholding to obtain a binary image, and undesirable artifacts are eliminated using morphological operations. Finally, the distance-transform-based watershed method is applied to isolate touching and overlapping cell nuclei. The proposed method is tested on 25 Papanicolaou (Pap) stained pleural fluid images and achieves an accuracy of 92%. The method is relatively simple, and the results are very promising.
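The key step of the pipeline can be sketched on synthetic data (a minimal sketch, not the authors' implementation): two overlapping disks stand in for touching nuclei, an Otsu threshold produces the binary mask, and the distance transform exposes one peak per nucleus, which would seed the watershed in the full method.

```python
import numpy as np
from scipy import ndimage

# Synthetic "touching nuclei": two overlapping bright disks plus noise.
yy, xx = np.mgrid[0:80, 0:80]
img = np.zeros((80, 80))
img[(yy - 40) ** 2 + (xx - 28) ** 2 < 14 ** 2] = 200.0   # nucleus 1
img[(yy - 40) ** 2 + (xx - 52) ** 2 < 14 ** 2] = 200.0   # nucleus 2 (touching)
img += np.random.default_rng(0).normal(0, 5, img.shape)

# Otsu: choose the threshold maximizing between-class variance.
hist, edges = np.histogram(img, bins=256)
centers = (edges[:-1] + edges[1:]) / 2
w0 = np.cumsum(hist)                      # background pixel counts
w1 = hist.sum() - w0                      # foreground pixel counts
s0 = np.cumsum(hist * centers)
m0 = s0 / np.maximum(w0, 1)               # background class mean
m1 = (s0[-1] - s0) / np.maximum(w1, 1)    # foreground class mean
mask = img > centers[np.argmax(w0 * w1 * (m0 - m1) ** 2)]

# Touching nuclei form a single connected component...
n_components = ndimage.label(mask)[1]

# ...but the Euclidean distance transform has one peak per nucleus;
# thresholding near its maximum yields one watershed marker per nucleus.
dist = ndimage.distance_transform_edt(mask)
markers, n_markers = ndimage.label(dist > 0.7 * dist.max())
print(n_components, n_markers)            # 1 touching blob, 2 nuclei markers
```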

  9. Mitigating component performance variation

    DOEpatents

    Gara, Alan G.; Sylvester, Steve S.; Eastep, Jonathan M.; Nagappan, Ramkumar; Cantalupo, Christopher M.

    2018-01-09

Apparatus and methods may provide for characterizing a plurality of similar components of a distributed computing system based on a maximum safe operation level associated with each component, storing the characterization data in a database, and allocating non-uniform power to each similar component, based at least in part on the characterization data in the database, to substantially equalize performance of the components.
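A toy sketch of the idea (illustrative numbers and a simple linear performance model, not the patent's method): nominally identical components differ in efficiency, so uniform power yields unequal performance, while a non-uniform allocation from characterization data equalizes it.

```python
# Assumed linear model for illustration: perf = eff * power.
eff = [1.00, 0.90, 1.10, 0.95]         # characterized per-component efficiency
budget = 400.0                         # total watts to distribute

uniform = [budget / len(eff)] * len(eff)
perf_uniform = [e * p for e, p in zip(eff, uniform)]

# Allocating power proportional to 1/eff equalizes perf = eff * power,
# since each component then receives budget / sum(1/eff) of performance.
inv = [1 / e for e in eff]
nonuniform = [budget * i / sum(inv) for i in inv]
perf_equalized = [e * p for e, p in zip(eff, nonuniform)]

print(max(perf_uniform) - min(perf_uniform))      # spread under uniform power
print(max(perf_equalized) - min(perf_equalized))  # ~0 after equalization
```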

  10. Automatic Bayes Factors for Testing Equality- and Inequality-Constrained Hypotheses on Variances.

    PubMed

    Böing-Messing, Florian; Mulder, Joris

    2018-05-03

    In comparing characteristics of independent populations, researchers frequently expect a certain structure of the population variances. These expectations can be formulated as hypotheses with equality and/or inequality constraints on the variances. In this article, we consider the Bayes factor for testing such (in)equality-constrained hypotheses on variances. Application of Bayes factors requires specification of a prior under every hypothesis to be tested. However, specifying subjective priors for variances based on prior information is a difficult task. We therefore consider so-called automatic or default Bayes factors. These methods avoid the need for the user to specify priors by using information from the sample data. We present three automatic Bayes factors for testing variances. The first is a Bayes factor with equal priors on all variances, where the priors are specified automatically using a small share of the information in the sample data. The second is the fractional Bayes factor, where a fraction of the likelihood is used for automatic prior specification. The third is an adjustment of the fractional Bayes factor such that the parsimony of inequality-constrained hypotheses is properly taken into account. The Bayes factors are evaluated by investigating different properties such as information consistency and large sample consistency. Based on this evaluation, it is concluded that the adjusted fractional Bayes factor is generally recommendable for testing equality- and inequality-constrained hypotheses on variances.

  11. EQUALS Investigations: Growth Patterns.

    ERIC Educational Resources Information Center

    Mayfield, Karen; Whitlow, Robert

    EQUALS is a teacher education program that helps elementary and secondary educators acquire methods and materials to attract minority and female students to mathematics. The EQUALS program supports a problem-solving approach to mathematics which has students working in groups, uses active assessment methods, and incorporates a broad mathematics…

  12. Independent component analysis based channel equalization for 6 × 6 MIMO-OFDM transmission over few-mode fiber.

    PubMed

    He, Zhixue; Li, Xiang; Luo, Ming; Hu, Rong; Li, Cai; Qiu, Ying; Fu, Songnian; Yang, Qi; Yu, Shaohua

    2016-05-02

We propose and experimentally demonstrate two independent component analysis (ICA) based channel equalizers (CEs) for 6 × 6 MIMO-OFDM transmission over few-mode fiber. Compared with the conventional channel equalizer based on training symbols (TSs-CE), the two proposed ICA-based channel equalizers (ICA-CE-I and ICA-CE-II) achieve comparable performance while requiring far fewer training symbols. Consequently, the overhead for channel equalization can be substantially reduced from 13.7% to 0.4% and 2.6%, respectively. We also experimentally investigate the convergence speed of the proposed ICA-based CEs.
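The blind-separation principle behind ICA-based equalization can be illustrated with a toy NumPy sketch (not the paper's algorithms): two independent non-Gaussian sources pass through an unknown 2×2 mixing matrix, and they are recovered without training symbols by whitening and then searching the residual rotation for maximal non-Gaussianity (kurtosis).

```python
import numpy as np

rng = np.random.default_rng(1)
s = np.sign(rng.standard_normal((2, 4000)))      # two BPSK-like sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])           # unknown channel mixing
x = A @ s                                        # received mixed signals

# Whiten the mixtures: decorrelate and normalize to unit variance.
d, E = np.linalg.eigh(np.cov(x))
z = E @ np.diag(d ** -0.5) @ E.T @ x

def kurt(u):
    # Excess kurtosis; BPSK sources have kurt = -2 (sub-Gaussian).
    return np.mean(u ** 4) - 3 * np.mean(u ** 2) ** 2

def rot(t):
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

# The objective is pi/2-periodic, so a grid over [0, pi/2] suffices.
best = max(np.linspace(0, np.pi / 2, 181),
           key=lambda t: sum(abs(kurt(row)) for row in rot(t) @ z))
y = rot(best) @ z                                # separated sources

# Each output should match one source up to sign and permutation.
corr = np.abs(np.corrcoef(np.vstack([y, s]))[:2, 2:])
print(corr.round(2))
```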

  13. Method for enhancing signals transmitted over optical fibers

    DOEpatents

    Ogle, James W.; Lyons, Peter B.

    1983-01-01

A method for spectral equalization of high frequency spectrally broadband signals transmitted through an optical fiber. The broadband signal input is first dispersed by a grating. Narrow spectral components are collected into an array of equalizing fibers. The fibers serve as optical delay lines compensating for the material dispersion of each spectral component during transmission. The relative lengths of the individual equalizing fibers are selected to compensate for such prior dispersion. The output of the equalizing fibers couples the spectrally equalized light onto a suitable detector for subsequent electronic processing of the enhanced broadband signal.
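The length selection can be sketched numerically (all values here are illustrative assumptions, not from the patent): each narrow spectral slice accumulates a different group delay in the transmission fiber, and a matching extra length of delay-line fiber cancels the spread.

```python
c = 3.0e8                     # speed of light, m/s
n_g = 1.47                    # assumed group index of the delay fibers
D = 100e-12 / 1e-9 / 1e3      # assumed dispersion, 100 ps/(nm*km), in s/m/m
L_tx = 500.0                  # assumed transmission fiber length, m
lam0 = 820e-9                 # reference wavelength (assumed)
slices = [800e-9, 810e-9, 820e-9, 830e-9, 840e-9]   # slice centers, m

# Relative arrival-time error of each slice after the transmission fiber.
dt = [D * L_tx * (lam - lam0) for lam in slices]

# Extra delay-line length cancelling it: delay per metre is n_g / c.
raw = [-t * c / n_g for t in dt]
offset = -min(raw)                       # shift so all lengths are physical
lengths = [r + offset for r in raw]

for lam, L in zip(slices, lengths):
    print(f"{lam * 1e9:.0f} nm -> {L * 100:.2f} cm of equalizing fiber")
```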

  14. Evaluation of two outlier-detection-based methods for detecting tissue-selective genes from microarray data.

    PubMed

    Kadota, Koji; Konishi, Tomokazu; Shimizu, Kentaro

    2007-05-01

Large-scale expression profiling using DNA microarrays enables identification of tissue-selective genes for which expression is considerably higher and/or lower in some tissues than in others. Among numerous possible methods, only two outlier-detection-based methods (an AIC-based method and Sprent's non-parametric method) can treat the various types of selective patterns equally, but they produce substantially different results. We investigated the performance of these two methods for different parameter settings and for a reduced number of samples, focusing on their ability to detect selective expression patterns robustly. We applied them to public microarray data collected from 36 normal human tissue samples and analyzed the effects of both changing the parameter settings and reducing the number of samples. The AIC-based method was more robust in both cases. The findings confirm that the use of the AIC-based method in the recently proposed ROKU method for detecting tissue-selective expression patterns is correct and that Sprent's method is not suitable for ROKU.

  15. A more powerful test based on ratio distribution for retention noninferiority hypothesis.

    PubMed

    Deng, Ling; Chen, Gang

    2013-03-11

Rothmann et al. (2003) proposed a method for the statistical inference of the fraction retention noninferiority (NI) hypothesis. A fraction retention hypothesis is defined as a ratio of the new treatment effect versus the control effect in the context of a time-to-event endpoint. One major concern in using this method to design an NI trial is that, with a limited sample size, the power of the study is usually very low, which can make an NI trial impractical, particularly with a time-to-event endpoint. To improve power, Wang et al. (2006) proposed a ratio test based on asymptotic normality theory. Under a strong assumption (equal variance of the NI test statistic under the null and alternative hypotheses), the sample size using Wang's test was much smaller than that using Rothmann's test. In practice, however, the assumption of equal variance is generally questionable for an NI trial design. This assumption is removed in the ratio test proposed in this article, which is derived directly from a Cauchy-like ratio distribution. In addition, using this method, the fundamental assumption of Rothmann's test, that the observed control effect is always positive (that is, the observed hazard ratio for placebo over the control is greater than 1), is no longer necessary. Without assuming equal variance under the null and alternative hypotheses, the sample size required for an NI trial can be significantly reduced by using the proposed ratio test for a fraction retention NI hypothesis.

  16. Robust Fuzzy Controllers Using FPGAs

    NASA Technical Reports Server (NTRS)

Monroe, Gene S., Jr.

    2007-01-01

Electro-mechanical device controllers typically come in one of three forms: proportional (P), proportional-derivative (PD), and proportional-integral-derivative (PID). Two methods of control are discussed in this paper: (1) the classical technique that requires an in-depth mathematical use of poles and zeros, and (2) the fuzzy logic (FL) technique that is similar to the way humans think and make decisions. FL controllers are used in multiple industries; examples include control engineering, computer vision, pattern recognition, statistics, and data analysis. Presented is a study on the development of a PD motor controller written in VHSIC hardware description language (VHDL) and implemented in FL. Four distinct abstractions compose the FL controller: the fuzzifier, the rule-base, the fuzzy inference system (FIS), and the defuzzifier. FL is similar to, but different from, Boolean logic: an output value may equal 0 or 1, but it may also take any decimal value between them. This controller is unique because of its VHDL implementation, which uses integer mathematics. To compensate for VHDL's inability to synthesize floating-point numbers, a scale factor equal to 10^(N/4) is utilized, where N is the data word size. The scaling factor shifts the decimal digits to the left of the decimal point for increased precision. PD controllers are ideal for use with servo motors, where position control is effective. This paper discusses control methods for motion-base platforms where a constant velocity equivalent to a spectral resolution of 0.25 cm^-1 is required; however, the control capability of this controller extends to various other platforms.
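The integer-scaling trick can be sketched in a few lines (Python standing in for the VHDL integer datapath; the gains, errors, and N are illustrative): real values are multiplied up by the scale factor 10^(N/4), and products are rescaled back down, so the whole PD law stays in integer arithmetic.

```python
# Fixed-point sketch of the scale-factor idea: with data word size N,
# SCALE = 10**(N//4) turns real-valued gains and errors into integers.
N = 16
SCALE = 10 ** (N // 4)        # 10**4

def to_fixed(x):
    return int(round(x * SCALE))

def fixed_mul(a, b):
    # A product of two scaled values carries SCALE**2; rescale once.
    return (a * b) // SCALE

# PD control law u = Kp*e + Kd*de, entirely in scaled integers.
Kp, Kd = to_fixed(2.5), to_fixed(0.8)         # illustrative gains
e, de = to_fixed(0.1234), to_fixed(-0.05)     # position error and its rate
u = fixed_mul(Kp, e) + fixed_mul(Kd, de)

print(u, u / SCALE)   # scaled integer output and its real-valued equivalent
```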

17. Interfacial tension measurement of immiscible liquids using a capillary tube

    NASA Technical Reports Server (NTRS)

    Rashidnia, N.; Balasubramaniam, R.; Delsignore, D.

    1992-01-01

The interfacial tension of immiscible liquids is an important thermophysical property that governs the behavior of liquids both in microgravity (Martinez et al. (1987) and Karri and Mathur (1988)) and in enhanced oil recovery processes under normal gravity (Slattery (1974)). Many techniques are available for its measurement, such as the ring method, drop weight method, spinning drop method, and capillary height method (Adamson (1960) and Miller and Neogi (1985)). Karri and Mathur mention that many of these techniques use equations that contain a density difference term and are therefore inappropriate for equal-density liquids. They reported a new method that is suitable for both equal and unequal density liquids. In their method, a capillary tube forms one of the legs of a U-tube, and the interfacial tension is related to the heights of the liquids in the cups of the U-tube above the interface in the capillary. Our interest in this area arose from a need to measure a small interfacial tension (around 1 mN/m) for a vegetable oil/silicone oil system that was used in a thermocapillary drop migration experiment (Rashidnia and Balasubramaniam (1991)). In our attempts to duplicate the method proposed by Karri and Mathur, we found it quite difficult to anchor the interface inside the capillary tube; small differences in the liquid heights in the cups drove the interface out of the capillary. We present an alternative method using a capillary tube to measure the interfacial tension of liquids of equal or unequal density. The method is based on the combined capillary rises of both liquids in the tube.

  18. Visual Contrast Enhancement Algorithm Based on Histogram Equalization

    PubMed Central

    Ting, Chih-Chung; Wu, Bing-Fei; Chung, Meng-Liang; Chiu, Chung-Cheng; Wu, Ya-Ching

    2015-01-01

    Image enhancement techniques primarily improve the contrast of an image to lend it a better appearance. One of the popular enhancement methods is histogram equalization (HE) because of its simplicity and effectiveness. However, it is rarely applied to consumer electronics products because it can cause excessive contrast enhancement and feature loss problems. These problems make the images processed by HE look unnatural and introduce unwanted artifacts in them. In this study, a visual contrast enhancement algorithm (VCEA) based on HE is proposed. VCEA considers the requirements of the human visual perception in order to address the drawbacks of HE. It effectively solves the excessive contrast enhancement problem by adjusting the spaces between two adjacent gray values of the HE histogram. In addition, VCEA reduces the effects of the feature loss problem by using the obtained spaces. Furthermore, VCEA enhances the detailed textures of an image to generate an enhanced image with better visual quality. Experimental results show that images obtained by applying VCEA have higher contrast and are more suited to human visual perception than those processed by HE and other HE-based methods. PMID:26184219
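The baseline HE mapping that VCEA modifies can be sketched in a few lines of NumPy (a generic textbook implementation, not the paper's code): the normalized cumulative histogram becomes a lookup table, and the spacing between its adjacent output gray values is exactly what VCEA adjusts to avoid over-enhancement.

```python
import numpy as np

def hist_equalize(img, levels=256):
    # Classic histogram equalization for a uint8 image:
    # map each gray value through the normalized cumulative histogram.
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = np.cumsum(hist)
    cdf_min = cdf[cdf > 0][0]                      # first occupied bin
    mapping = (cdf - cdf_min) * (levels - 1) / (img.size - cdf_min)
    return np.clip(np.round(mapping), 0, levels - 1).astype(np.uint8)[img]

# A low-contrast test image occupying only gray values 100..139.
rng = np.random.default_rng(0)
low_contrast = rng.integers(100, 140, size=(64, 64), dtype=np.uint8)
out = hist_equalize(low_contrast)
print(low_contrast.min(), low_contrast.max(), "->", out.min(), out.max())
```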

  19. 101 Short Problems from EQUALS = 101 Problemas Cortos del programma EQUALS.

    ERIC Educational Resources Information Center

    Stenmark, Jean Kerr, Ed.

    EQUALS is a teacher advisory program that helps elementary and secondary educators acquire methods and materials to attract minority and female students to mathematics. The program supports a problem-solving approach to mathematics, including having students working in groups, using active assessment methods, and incorporating a broad mathematics…

  20. Using recurrent neural networks for adaptive communication channel equalization.

    PubMed

    Kechriotis, G; Zervas, E; Manolakos, E S

    1994-01-01

Nonlinear adaptive filters based on a variety of neural network models have been used successfully for system identification and noise cancellation in a wide class of applications. An important problem in data communications is channel equalization, i.e., the removal of interferences introduced by linear or nonlinear message-corrupting mechanisms, so that the originally transmitted symbols can be recovered correctly at the receiver. In this paper we introduce an adaptive recurrent neural network (RNN) based equalizer whose small size and high performance make it suitable for high-speed channel equalization. We propose RNN-based structures for both trained adaptation and blind equalization, and we evaluate their performance via extensive simulations for a variety of signal modulations and communication channel models. It is shown that the RNN equalizers have performance comparable to traditional linear-filter-based equalizers when the channel interferences are relatively mild, and that they outperform them by several orders of magnitude when either the channel's transfer function has spectral nulls or severe nonlinear distortion is present. In addition, the small-size RNN equalizers, being essentially generalized IIR filters, are shown to outperform multilayer perceptron equalizers of larger computational complexity in linear and nonlinear channel equalization cases.
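For reference, the traditional linear-filter equalizer used as the comparison baseline can be sketched as an LMS-adapted FIR filter on a mild linear channel (the channel, tap count, and step size here are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
s = rng.choice([-1.0, 1.0], size=5000)            # transmitted BPSK symbols
h = np.array([1.0, 0.4, 0.2])                     # mild minimum-phase channel
x = np.convolve(s, h)[: len(s)] + 0.01 * rng.standard_normal(len(s))

taps, mu, delay = 9, 0.01, 4                      # FIR length, step, decision delay
w = np.zeros(taps)
for n in range(taps, len(s)):
    u = x[n - taps + 1 : n + 1][::-1]             # filter input window
    e = s[n - delay] - w @ u                      # error vs. training symbol
    w += mu * e * u                               # LMS update

# Decide the last 1000 symbols with the trained filter.
errs = 0
for n in range(len(s) - 1000, len(s)):
    u = x[n - taps + 1 : n + 1][::-1]
    errs += np.sign(w @ u) != s[n - delay]
print("symbol errors:", errs)
```

On such a mild channel the linear equalizer recovers the symbols essentially error-free; the paper's point is that this baseline breaks down under spectral nulls or nonlinear distortion, where the RNN equalizers do not.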

  1. Method for solvent extraction with near-equal density solutions

    DOEpatents

    Birdwell, Joseph F.; Randolph, John D.; Singh, S. Paul

    2001-01-01

    Disclosed is a modified centrifugal contactor for separating solutions of near equal density. The modified contactor has a pressure differential establishing means that allows the application of a pressure differential across fluid in the rotor of the contactor. The pressure differential is such that it causes the boundary between solutions of near-equal density to shift, thereby facilitating separation of the phases. Also disclosed is a method of separating solutions of near-equal density.

  2. Arbitrary magnetic field gradient waveform correction using an impulse response based pre-equalization technique.

    PubMed

    Goora, Frédéric G; Colpitts, Bruce G; Balcom, Bruce J

    2014-01-01

    The time-varying magnetic fields used in magnetic resonance applications result in the induction of eddy currents on conductive structures in the vicinity of both the sample under investigation and the gradient coils. These eddy currents typically result in undesired degradations of image quality for MRI applications. Their ubiquitous nature has resulted in the development of various approaches to characterize and minimize their impact on image quality. This paper outlines a method that utilizes the magnetic field gradient waveform monitor method to directly measure the temporal evolution of the magnetic field gradient from a step-like input function and extracts the system impulse response. With the basic assumption that the gradient system is sufficiently linear and time invariant to permit system theory analysis, the impulse response is used to determine a pre-equalized (optimized) input waveform that provides a desired gradient response at the output of the system. An algorithm has been developed that calculates a pre-equalized waveform that may be accurately reproduced by the amplifier (is physically realizable) and accounts for system limitations including system bandwidth, amplifier slew rate capabilities, and noise inherent in the initial measurement. Significant improvements in magnetic field gradient waveform fidelity after pre-equalization have been realized and are summarized. Copyright © 2013 Elsevier Inc. All rights reserved.
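A toy LTI version of the pre-equalization idea (a sketch under the paper's linearity assumption, with an assumed low-pass impulse response; the real method also enforces amplifier slew-rate and bandwidth limits): measure the system impulse response h, then shape the input by regularized inverse filtering so the *output* matches the desired gradient waveform.

```python
import numpy as np

n = 256
t = np.arange(n)
h = np.exp(-t / 12.0); h /= h.sum()              # assumed low-pass response
desired = ((t > 40) & (t < 160)).astype(float)   # desired gradient pulse

# Regularized (Wiener-style) inverse filter in the frequency domain.
H = np.fft.rfft(h, n)
D = np.fft.rfft(desired, n)
eps = 1e-6                                       # regularization vs. noise
pre = np.fft.irfft(D * np.conj(H) / (np.abs(H) ** 2 + eps), n)

# Driving the system with `pre` reproduces `desired` far better than
# driving it with `desired` itself.
out_naive = np.fft.irfft(np.fft.rfft(desired, n) * H, n)
out_pre = np.fft.irfft(np.fft.rfft(pre, n) * H, n)
err_naive = np.abs(out_naive - desired).max()
err_pre = np.abs(out_pre - desired).max()
print(err_naive, err_pre)
```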

  3. The NLM Indexing Initiative.

    PubMed Central

    Aronson, A. R.; Bodenreider, O.; Chang, H. F.; Humphrey, S. M.; Mork, J. G.; Nelson, S. J.; Rindflesch, T. C.; Wilbur, W. J.

    2000-01-01

    The objective of NLM's Indexing Initiative (IND) is to investigate methods whereby automated indexing methods partially or completely substitute for current indexing practices. The project will be considered a success if methods can be designed and implemented that result in retrieval performance that is equal to or better than the retrieval performance of systems based principally on humanly assigned index terms. We describe the current state of the project and discuss our plans for the future. PMID:11079836

  4. One-way ANOVA based on interval information

    NASA Astrophysics Data System (ADS)

    Hesamian, Gholamreza

    2016-08-01

This paper deals with extending one-way analysis of variance (ANOVA) to the case where the observed data are represented by closed intervals rather than real numbers. In this approach, a notion of interval random variable is first introduced. In particular, a normal distribution with interval parameters is introduced to investigate hypotheses about the equality of interval means or to test the homogeneity-of-interval-variances assumption. Moreover, the least significant difference (LSD) method for multiple comparison of interval means is developed for when the null hypothesis about the equality of means is rejected. Then, at a given interval significance level, an index is applied to compare the interval test statistic and the related interval critical value as a criterion to accept or reject the null interval hypothesis of interest. Finally, the decision procedure yields degrees to which the interval hypotheses are accepted or rejected. An applied example is used to show the performance of this method.

  5. Conjugate gradient type methods for linear systems with complex symmetric coefficient matrices

    NASA Technical Reports Server (NTRS)

    Freund, Roland

    1989-01-01

We consider conjugate gradient type methods for the solution of large sparse linear systems Ax = b with complex symmetric coefficient matrices A = A^T. Such linear systems arise in important applications, such as the numerical solution of the complex Helmholtz equation. Furthermore, most complex non-Hermitian linear systems which occur in practice are actually complex symmetric. We investigate conjugate gradient type iterations which are based on a variant of the nonsymmetric Lanczos algorithm for complex symmetric matrices. We propose a new approach with iterates defined by a quasi-minimal residual property. The resulting algorithm presents several advantages over the standard biconjugate gradient method. We also include some remarks on the obvious approach to general complex linear systems by solving equivalent real linear systems for the real and imaginary parts of x. Finally, numerical experiments for linear systems arising from the complex Helmholtz equation are reported.
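A minimal sketch of the simplest method in this family, COCG (conjugate orthogonal CG) — a close relative of the Lanczos-based iterations discussed, not the paper's quasi-minimal residual variant: the only change from classical CG is using the unconjugated bilinear form xᵀy, which exploits A = Aᵀ.

```python
import numpy as np

def cocg(A, b, tol=1e-10, maxit=1000):
    # COCG for complex symmetric A: classical CG with every inner
    # product replaced by the unconjugated bilinear form x^T y.
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rho = r @ r                      # note: r @ r, not r.conj() @ r
    for _ in range(maxit):
        Ap = A @ p
        alpha = rho / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        rho, rho_old = r @ r, rho
        p = r + (rho / rho_old) * p
    return x

# Discrete 1D Helmholtz-like operator: complex symmetric, non-Hermitian.
n = 50
A = (np.diag(np.full(n, 2.0 + 0.5j)) +
     np.diag(np.full(n - 1, -1.0), 1) +
     np.diag(np.full(n - 1, -1.0), -1))
b = np.ones(n, dtype=complex)
x = cocg(A, b)
print(np.linalg.norm(A @ x - b))     # small residual
```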

  6. Structural and Network-based Methods for Knowledge-Based Systems

    DTIC Science & Technology

    2011-12-01

depth) provide important information about knowledge gaps in the KB. For example, if SuccessEstimate(causes-EventEvent, Typhoid-Fever, 1, 3) is...equal to 0, it points toward a lack of biological knowledge about Typhoid-Fever in our KB. Similar information can also be obtained from the...position of the consequent. Therefore, if Q does not contain Typhoid-Fever, then obtaining

  7. Research on power equalization using a low-loss DC-DC chopper for lithium-ion batteries in electric vehicle

    NASA Astrophysics Data System (ADS)

    Wei, Y. W.; Liu, G. T.; Xiong, S. N.; Cheng, J. Z.; Huang, Y. H.

    2017-01-01

In the near future, electric vehicles may well replace traditional cars thanks to their zero tailpipe emissions, low power consumption, and low noise. Lithium-ion batteries, which are lighter and offer larger capacity and longer life, are widely used in electric cars all over the world. One disadvantage of this energy storage device is the state-of-charge (SOC) difference among the cells in each series branch. If no equalization circuit is provided for series-connected batteries, their safety and lifetime decline due to unavoidable over-charge or over-discharge. In this paper, a novel modularized equalization circuit based on a DC-DC chopper is proposed that incurs zero loss in theory. The proposed circuit works as an equalizer while the lithium-ion battery pack is charging, discharging, or standing idle. Theoretical analysis and the control method are presented, and simulations and small-scale experiments verify the circuit's effect.
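What an ideal lossless equalizer accomplishes can be illustrated with a toy charge-shuttling simulation (not the paper's circuit or control law; SOC values and transfer rate are assumed): each step moves charge from the cell furthest above the pack mean to the one furthest below it, conserving total charge.

```python
soc = [0.92, 0.80, 0.85, 0.78]         # assumed cell states of charge
rate = 0.01                             # charge moved per step (fraction)

for _ in range(200):
    hi = max(range(len(soc)), key=soc.__getitem__)
    lo = min(range(len(soc)), key=soc.__getitem__)
    if soc[hi] - soc[lo] < 1e-3:
        break
    soc[hi] -= rate / 2                 # lossless transfer hi -> lo:
    soc[lo] += rate / 2                 # total stored charge is constant

print([round(s, 3) for s in soc])       # cells converge toward the mean 0.8375
```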

  8. Parental share in public and domestic spheres: a population study on gender equality, death, and sickness

    PubMed Central

    Månsdotter, Anna; Lindholm, Lars; Lundberg, Michael; Winkvist, Anna; Öhman, Ann

    2006-01-01

Study objective Examine the relation between aspects of gender equality and population health, based on the premise that sex differences in health are mainly caused by the gender system. Setting/participants All Swedish couples (98 240 people) who had their first child together in 1978. Design The exposure of gender equality is shown by the parents' division of income and occupational position (public sphere), and parental leave and temporary child care (domestic sphere). People were classified by these indicators during 1978–1980 into different categories: those on an equal footing with their partner and those who were traditionally or untraditionally unequal. Health is measured by the outcomes of death during 1981–2001 and sickness absence during 1986–2000. Data are obtained by linking individual information from various national sources. The statistical method used is multiple logistic regression with odds ratios as estimates of relative risks. Main results In the public sphere, traditionally unequal women have decreased health risks compared with equal women, while traditionally unequal men tend to have increased health risks compared with equal men. In the domestic sphere, both women and men run higher risks of death and sickness when traditionally unequal compared with equal. Conclusions Understanding the relation between gender equality and health, which was found to depend on sex, life sphere, and inequality type, seems to require a combination of the hypotheses of convergence, stress and expansion. PMID:16790834

  9. Method for enhancing signals transmitted over optical fibers

    DOEpatents

    Ogle, J.W.; Lyons, P.B.

    1981-02-11

A method for spectral equalization of high frequency spectrally broadband signals transmitted through an optical fiber is disclosed. The broadband signal input is first dispersed by a grating. Narrow spectral components are collected into an array of equalizing fibers. The fibers serve as optical delay lines compensating for the material dispersion of each spectral component during transmission. The relative lengths of the individual equalizing fibers are selected to compensate for such prior dispersion. The output of the equalizing fibers couples the spectrally equalized light onto a suitable detector for subsequent electronic processing of the enhanced broadband signal.

  10. The unbiasedness of a generalized mirage boundary correction method for Monte Carlo integration estimators of volume

    Treesearch

    Thomas B. Lynch; Jeffrey H. Gove

    2014-01-01

    The typical "double counting" application of the mirage method of boundary correction cannot be applied to sampling systems such as critical height sampling (CHS) that are based on a Monte Carlo sample of a tree (or debris) attribute because the critical height (or other random attribute) sampled from a mirage point is generally not equal to the critical...

  11. Image dehazing based on non-local saturation

    NASA Astrophysics Data System (ADS)

    Wang, Linlin; Zhang, Qian; Yang, Deyun; Hou, Yingkun; He, Xiaoting

    2018-04-01

In this paper, a method based on a non-local saturation algorithm is proposed to avoid the block and halo effects of single-image dehazing with the dark channel prior. First, we convert the original image from RGB to HSV color space, following the non-local approach. Image saturation is weighted equally within a fixed window whose size is set according to the image resolution. Second, we use the saturation to estimate the atmospheric light value and the transmission rate. Then, through the function of saturation and transmission, the haze-free image is recovered from the atmospheric scattering model. Compared with existing methods, our method restores image color and enhances contrast, and we support this with both quantitative and qualitative evaluation. Experiments show better visual results with high efficiency.
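The atmospheric scattering model that such methods invert can be shown in a few lines (generic dehazing math, not the authors' saturation-based estimators; the airlight and transmission values are assumed): a hazy image is I = J·t + A·(1 − t), so given estimates of A and t the haze-free image is J = (I − A)/t + A.

```python
import numpy as np

rng = np.random.default_rng(3)
J_true = rng.uniform(0.1, 0.9, size=(32, 32, 3))   # latent haze-free image
A = 0.95                                           # airlight (assumed known)
t = 0.6                                            # uniform transmission (toy)
I_hazy = J_true * t + A * (1 - t)                  # forward scattering model

t_est = max(t, 0.1)                                # floor t to avoid blow-up
J_rec = (I_hazy - A) / t_est + A                   # inverted model
print(np.abs(J_rec - J_true).max())                # ~0: exact inversion here
```

In a real dehazer t varies per pixel and must be estimated (here via non-local saturation), so the inversion is approximate rather than exact as in this toy.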

  12. Evolving cell models for systems and synthetic biology.

    PubMed

    Cao, Hongqing; Romero-Campero, Francisco J; Heeb, Stephan; Cámara, Miguel; Krasnogor, Natalio

    2010-03-01

    This paper proposes a new methodology for the automated design of cell models for systems and synthetic biology. Our modelling framework is based on P systems, a discrete, stochastic and modular formal modelling language. The automated design of biological models comprising the optimization of the model structure and its stochastic kinetic constants is performed using an evolutionary algorithm. The evolutionary algorithm evolves model structures by combining different modules taken from a predefined module library and then it fine-tunes the associated stochastic kinetic constants. We investigate four alternative objective functions for the fitness calculation within the evolutionary algorithm: (1) equally weighted sum method, (2) normalization method, (3) randomly weighted sum method, and (4) equally weighted product method. The effectiveness of the methodology is tested on four case studies of increasing complexity including negative and positive autoregulation as well as two gene networks implementing a pulse generator and a bandwidth detector. We provide a systematic analysis of the evolutionary algorithm's results as well as of the resulting evolved cell models.
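One plausible reading of the four fitness-aggregation schemes (hedged: the abstract names them but does not spell out the formulas; the error values and normalization ranges below are invented for illustration), applied to two per-objective errors:

```python
import random

random.seed(0)
errors = [0.4, 0.1]        # illustrative per-objective errors
ranges = [1.0, 0.5]        # assumed per-objective normalization ranges

# (1) equally weighted sum: uniform weights 1/k.
equally_weighted_sum = sum(e / len(errors) for e in errors)

# (2) normalization method: divide each objective by its range, then sum.
normalized_sum = sum(e / r for e, r in zip(errors, ranges))

# (3) randomly weighted sum: random weights normalized to sum to 1.
w = [random.random() for _ in errors]
randomly_weighted_sum = sum(wi / sum(w) * e for wi, e in zip(w, errors))

# (4) equally weighted product: multiply the objectives together.
product = 1.0
for e in errors:
    product *= e

print(equally_weighted_sum, normalized_sum, randomly_weighted_sum, product)
```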

  13. Continuous Progress Education: An Ideal that Works.

    ERIC Educational Resources Information Center

    Jenkins, John M.

    1982-01-01

    Continuous progress education (CP) provides for the individualization of all significant aspects of learning, including materials, content, objectives, methods, pacing, and student-teacher relationships. It is based on the proposition that no general prescriptions are equally appropriate for all students. A brief description of Hood River Valley…

  14. A laboratory method for precisely determining the micro-volume-magnitudes of liquid efflux

    NASA Technical Reports Server (NTRS)

    Cloutier, R. L.

    1969-01-01

    Micro-volumetric quantities of ejected liquid are made to produce equal volumetric displacements of a more dense material. Weight measurements are obtained on the displaced heavier liquid and used to calculate volumes based upon the known density of the heavy medium.

  15. A least-squares finite element method for incompressible Navier-Stokes problems

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan

    1992-01-01

    A least-squares finite element method, based on the velocity-pressure-vorticity formulation, is developed for solving steady incompressible Navier-Stokes problems. This method leads to a minimization problem rather than the saddle-point problem produced by the classic mixed method, and can thus accommodate equal-order interpolations. The method has no parameter to tune. The associated algebraic system is symmetric and positive definite. Numerical results for the cavity flow at Reynolds numbers up to 10,000 and the backward-facing step flow at Reynolds numbers up to 900 are presented.

  16. A dynamic gain equalizer based on holographic polymer dispersed liquid crystal gratings

    NASA Astrophysics Data System (ADS)

    Xin, Zhaohui; Cai, Jiguang; Shen, Guotu; Yang, Baocheng; Zheng, Jihong; Gu, Lingjuan; Zhuang, Songlin

    2006-12-01

    A dynamic gain equalizer consisting of gratings made of holographic polymer dispersed liquid crystal is explored, and its structure and principle are presented. The properties of the holographic polymer dispersed liquid crystal grating are analyzed in light of rigorous coupled-wave theory. An experimental study is also conducted, in which a beam of infrared laser light was incident on the grating sample and an alternating-current electric field was applied. The electro-optical properties of the grating and the influence of the applied field were observed, and the experimental results agree well with the theory. The design method for the dynamic gain equalizer, aided by numerical simulation, is also presented. The study shows that holographic polymer dispersed liquid crystal gratings have great potential to play a role in fiber-optic communication.

  17. Equal Protection and Due Process: Contrasting Methods of Review under Fourteenth Amendment Doctrine.

    ERIC Educational Resources Information Center

    Hughes, James A.

    1979-01-01

    Argues that the Court has, at times, confused equal protection and due process methods of review, primarily by employing interest balancing in certain equal protection cases that should have been subjected to due process analysis. Available from Harvard Civil Rights-Civil Liberties Law Review, Harvard Law School, Cambridge, MA 02138; sc $4.00.…

  18. Investigation of advanced pre- and post-equalization schemes in high-order CAP modulation based high-speed indoor VLC transmission system

    NASA Astrophysics Data System (ADS)

    Wang, Yiguang; Chi, Nan

    2016-10-01

    Light emitting diode (LED) based visible light communication (VLC) has been considered a promising technology for indoor high-speed wireless access due to its unique advantages, such as low cost, license-free operation and high security. To achieve high-speed VLC transmission, carrierless amplitude and phase (CAP) modulation has been utilized for its low complexity and high spectral efficiency. Moreover, to compensate for linear and nonlinear distortions such as frequency attenuation, sampling time offset and LED nonlinearity, a series of pre- and post-equalization schemes should be employed in high-speed VLC systems. In this paper, we investigate several advanced pre- and post-equalization schemes for high-order CAP modulation based VLC systems. We propose a weighted pre-equalization technique to compensate for the LED frequency attenuation. In post-equalization, a hybrid post-equalizer is proposed, which consists of a linear equalizer, a Volterra series based nonlinear equalizer, and a decision-directed least mean square (DD-LMS) equalizer. A modified cascaded multi-modulus algorithm (M-CMMA) is employed to update the weights of the linear and nonlinear equalizers, while DD-LMS further improves performance after the pre-convergence. Based on high-order CAP modulation and these equalization schemes, we have experimentally demonstrated 1.35-Gb/s, 4.5-Gb/s and 8-Gb/s high-speed indoor VLC transmission systems. The results show the benefit and feasibility of the proposed equalization schemes for high-speed VLC systems.
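    As a rough illustration of the adaptive-equalizer idea underlying these schemes, here is a minimal training-directed LMS sketch in Python. It is real-valued, uses an invented two-tap channel, and omits the paper's actual M-CMMA and Volterra updates, which are considerably more involved:

```python
import numpy as np

def lms_equalize(x, d, n_taps=7, mu=0.01):
    """Train a linear FIR equalizer with the LMS update w += mu * e * x_vec.

    x : received (channel-distorted) samples
    d : desired (transmitted) training symbols, aligned with x
    """
    w = np.zeros(n_taps)
    buf = np.zeros(n_taps)              # tap delay line
    err = np.empty(len(x))
    for n in range(len(x)):
        buf = np.roll(buf, 1)
        buf[0] = x[n]
        y = w @ buf                     # equalizer output
        e = d[n] - y                    # error against the training symbol
        w += mu * e * buf               # LMS weight update
        err[n] = e
    return w, err

rng = np.random.default_rng(0)
sym = rng.choice([-1.0, 1.0], size=4000)               # BPSK training symbols
rx = np.convolve(sym, [1.0, 0.4], mode="full")[:4000]  # toy linear channel
w, err = lms_equalize(rx, sym)
mse_head = np.mean(err[:200] ** 2)                     # error before convergence
mse_tail = np.mean(err[-200:] ** 2)                    # error after convergence
```

    A decision-directed (DD-LMS) variant would replace the training symbol d[n] with the hard decision on the equalizer output once the filter has pre-converged.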

  19. Novel Estimation of Pilot Performance Characteristics

    NASA Technical Reports Server (NTRS)

    Bachelder, Edward N.; Aponso, Bimal

    2017-01-01

    Two mechanisms internal to the pilot affect performance during a tracking task: 1) pilot equalization (i.e., lead/lag); and 2) pilot gain (i.e., sensitivity to the error signal). For some applications, McRuer's Crossover Model can be used to anticipate what equalization will be employed to control a vehicle's dynamics. McRuer also established approximate time delays associated with different types of equalization: the more cognitive processing the equalization requires, the larger the time delay. However, the Crossover Model does not predict what the pilot gain will be. A nonlinear pilot control technique, observed and coined by the authors as 'amplitude clipping', is shown to improve stability and performance and to reduce workload when employed with vehicle dynamics that require high lead compensation by the pilot. Combining linear and nonlinear methods, a novel approach is used to measure the pilot control parameters when amplitude clipping is present, allowing precise measurement of key pilot control parameters in real time. Based on the results of an experiment designed to probe the primary drivers of workload, a method is developed that estimates pilot spare capacity from readily observable measures and is tested for generality using multi-axis flight data. This paper documents the initial steps toward a novel, simple objective metric for assessing pilot workload and its variation over time across a wide variety of tasks. Additionally, it offers a tangible, easily implementable methodology for anticipating a pilot's operating parameters and workload, and an effective design tool. The model shows promise in precisely predicting the actual pilot settings and workload, and the observed tolerance of pilot parameter variation over the course of operation. Finally, an approach is proposed for generating Cooper-Harper ratings based on the workload and parameter estimation methodology.

  20. Troubling Gender Equality: Revisiting Gender Equality Work in the Famous Nordic Model Countries

    ERIC Educational Resources Information Center

    Edström, Charlotta; Brunila, Kristiina

    2016-01-01

    This article concerns gender equality work, that is, those educational and workplace activities that involve the promotion of gender equality. It is based on research conducted in Sweden and Finland, and focuses on the period during which the public sector has become more market-oriented and project-based all over the Nordic countries. The…

  1. Affective Teaching: A Method to Enhance Classroom Management

    ERIC Educational Resources Information Center

    Shechtman, Zipora; Leichtentritt, Judy

    2004-01-01

    The purpose of the study was to enhance classroom management in special education classrooms. "Affective teaching" was compared with "cognitive teaching" in 52 classrooms in Israel. Data were collected from observations of three 90-minute lessons, equally divided between the two types of instruction. Results of MANOVA…

  2. Diffusion Cartograms for the Display of Periodic Table Data

    ERIC Educational Resources Information Center

    Winter, Mark J.

    2011-01-01

    Mapping methods employed by geographers, known as diffusion cartograms (diffusion-based density-equalizing maps), are used to present visually interesting and informative plots for data such as income, health, voting patterns, and resource availability. The algorithm involves changing the sizes of geographic regions such as countries or provinces…

  3. Simultaneous Inference Procedures for Means.

    ERIC Educational Resources Information Center

    Krishnaiah, P. R.

    Some aspects of simultaneous tests for means are reviewed. Specifically, the comparison of univariate or multivariate normal populations based on the values of the means or mean vectors when the variances or covariance matrices are equal is discussed. Tukey's and Dunnett's tests for multiple comparisons of means, Scheffe's method of examining…

  4. Evaluation of Two Outlier-Detection-Based Methods for Detecting Tissue-Selective Genes from Microarray Data

    PubMed Central

    Kadota, Koji; Konishi, Tomokazu; Shimizu, Kentaro

    2007-01-01

    Large-scale expression profiling using DNA microarrays enables identification of tissue-selective genes for which expression is considerably higher and/or lower in some tissues than in others. Among numerous possible methods, only two outlier-detection-based methods (an AIC-based method and Sprent’s non-parametric method) can treat equally various types of selective patterns, but they produce substantially different results. We investigated the performance of these two methods for different parameter settings and for a reduced number of samples. We focused on their ability to detect selective expression patterns robustly. We applied them to public microarray data collected from 36 normal human tissue samples and analyzed the effects of both changing the parameter settings and reducing the number of samples. The AIC-based method was more robust in both cases. The findings confirm that the use of the AIC-based method in the recently proposed ROKU method for detecting tissue-selective expression patterns is correct and that Sprent’s method is not suitable for ROKU. PMID:19936074

  5. Comparison of two adaptive temperature-based replica exchange methods applied to a sharp phase transition of protein unfolding-folding.

    PubMed

    Lee, Michael S; Olson, Mark A

    2011-06-28

    Temperature-based replica exchange (T-ReX) enhances sampling of molecular dynamics simulations by autonomously heating and cooling simulation clients via a Metropolis exchange criterion. A pathological case for T-ReX can occur when a change in state (e.g., folding to unfolding of a protein) has a large energetic difference over a short temperature interval leading to insufficient exchanges amongst replica clients near the transition temperature. One solution is to allow the temperature set to dynamically adapt in the temperature space, thereby enriching the population of clients near the transition temperature. In this work, we evaluated two approaches for adapting the temperature set: a method that equalizes exchange rates over all neighbor temperature pairs and a method that attempts to induce clients to visit all temperatures (dubbed "current maximization") by positioning many clients at or near the transition temperature. As a test case, we simulated the 57-residue SH3 domain of alpha-spectrin. Exchange rate equalization yielded the same unfolding-folding transition temperature as fixed-temperature ReX with much smoother convergence of this value. Surprisingly, the current maximization method yielded a significantly lower transition temperature, in close agreement with experimental observation, likely due to more extensive sampling of the transition state.
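    The Metropolis exchange criterion that both adaptive temperature schemes build on can be sketched as follows. This is the textbook replica-exchange formulation, not code from the paper; k_B and the unit system are illustrative:

```python
import math

def rex_accept_prob(E_i, E_j, T_i, T_j, k_B=1.0):
    """Metropolis acceptance probability for swapping configurations between
    two replicas at temperatures T_i and T_j with potential energies E_i, E_j.

    Detailed balance gives P = min(1, exp[(beta_i - beta_j) * (E_i - E_j)]).
    """
    delta = (1.0 / (k_B * T_i) - 1.0 / (k_B * T_j)) * (E_i - E_j)
    return min(1.0, math.exp(delta))
```

    When a sharp transition makes E_i - E_j large over a small temperature gap, this probability collapses; the two adaptive methods in the paper respond by repositioning the temperature set near the transition.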

  6. Efficient sidelobe ASK based dual-function radar-communications

    NASA Astrophysics Data System (ADS)

    Hassanien, Aboulnasr; Amin, Moeness G.; Zhang, Yimin D.; Ahmad, Fauzia

    2016-05-01

    Recently, dual-function radar-communications (DFRC) has been proposed as a means to mitigate the spectrum congestion problem. Existing amplitude-shift keying (ASK) methods for information embedding do not take full advantage of the highest permissible sidelobe level. In this paper, a new ASK-based signaling strategy for enhancing the signal-to-noise ratio (SNR) at the communication receiver is proposed. The proposed method employs one reference waveform and simultaneously transmits a number of orthogonal waveforms equal to the number of 1's in the binary sequence being embedded. A 3 dB SNR gain is achieved using the proposed method compared to existing sidelobe ASK methods. The effectiveness of the proposed information embedding strategy is verified using simulation examples.

  7. Text extraction method for historical Tibetan document images based on block projections

    NASA Astrophysics Data System (ADS)

    Duan, Li-juan; Zhang, Xi-qun; Ma, Long-long; Wu, Jian

    2017-11-01

    Text extraction is an important initial step in digitizing the historical documents. In this paper, we present a text extraction method for historical Tibetan document images based on block projections. The task of text extraction is considered as text area detection and location problem. The images are divided equally into blocks and the blocks are filtered by the information of the categories of connected components and corner point density. By analyzing the filtered blocks' projections, the approximate text areas can be located, and the text regions are extracted. Experiments on the dataset of historical Tibetan documents demonstrate the effectiveness of the proposed method.
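    The block-division-and-projection step can be sketched in NumPy. The block counts and the toy "page" here are invented for illustration, and the paper's filtering by connected-component categories and corner-point density is omitted:

```python
import numpy as np

def block_projections(img, n_rows, n_cols):
    """Split a binary page image into an n_rows x n_cols grid of equal blocks
    and return each block's horizontal and vertical ink projections."""
    H, W = img.shape
    bh, bw = H // n_rows, W // n_cols
    img = img[:bh * n_rows, :bw * n_cols]          # drop remainder pixels
    blocks = img.reshape(n_rows, bh, n_cols, bw).swapaxes(1, 2)
    h_proj = blocks.sum(axis=3)                    # per-block row sums
    v_proj = blocks.sum(axis=2)                    # per-block column sums
    return h_proj, v_proj

page = np.zeros((8, 8), dtype=int)
page[2, :] = 1                                     # a horizontal "text line"
h, v = block_projections(page, 2, 2)
```

    Peaks in the per-block row projections mark candidate text lines, which is the cue the paper uses to locate the approximate text areas.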

  8. A novel load balanced energy conservation approach in WSN using biogeography based optimization

    NASA Astrophysics Data System (ADS)

    Kaushik, Ajay; Indu, S.; Gupta, Daya

    2017-09-01

    Clustering sensor nodes is an effective technique to reduce the energy consumption of the sensor nodes and maximize the lifetime of wireless sensor networks. Balancing the load of the cluster heads is an important factor in the long-run operation of WSNs. In this paper, we propose a novel load balancing approach using biogeography based optimization (LB-BBO). LB-BBO uses two separate fitness functions to perform load balancing of equal and unequal loads, respectively. The proposed method is simulated using MATLAB and compared with existing methods. The proposed method shows better performance than all previous works implemented for energy conservation in WSNs.

  9. A new family of stable elements for the Stokes problem based on a mixed Galerkin/least-squares finite element formulation

    NASA Technical Reports Server (NTRS)

    Franca, Leopoldo P.; Loula, Abimael F. D.; Hughes, Thomas J. R.; Miranda, Isidoro

    1989-01-01

    By adding a residual form of the equilibrium equation to the classical Hellinger-Reissner formulation, a new Galerkin/least-squares finite element method is derived. It fits within the framework of a mixed finite element method and is stable for rather general combinations of stress and velocity interpolations, including equal-order discontinuous stress and continuous velocity interpolations, which are unstable within the Galerkin approach. Error estimates are presented based on a generalization of the Babuska-Brezzi theory. Numerical results (not presented herein) have confirmed these estimates as well as the good accuracy and stability of the method.

  10. Equalization and detection for digital communication over nonlinear bandlimited satellite communication channels. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Gutierrez, Alberto, Jr.

    1995-01-01

    This dissertation evaluates receiver-based methods for mitigating the effects of the nonlinear bandlimited signal distortion present in high data rate satellite channels. The effects of the nonlinear bandlimited distortion are illustrated for digitally modulated signals. A lucid development of the low-pass Volterra discrete time model for a nonlinear communication channel is presented. In addition, finite-state machine models are explicitly developed for a nonlinear bandlimited satellite channel. A nonlinear fixed equalizer based on Volterra series has previously been studied for compensation of noiseless signal distortion due to a nonlinear satellite channel. This dissertation studies adaptive Volterra equalizers on a downlink-limited nonlinear bandlimited satellite channel. We employ performance in the mean-square error and probability-of-error senses as figures of merit. In addition, a receiver consisting of a fractionally-spaced equalizer (FSE) followed by a Volterra equalizer (FSE-Volterra) is found to give improvement beyond that gained by the Volterra equalizer. Significant probability of error performance improvement is found for multilevel modulation schemes. Also, probability of error improvement is more significant for modulation schemes, both constant-amplitude and multilevel, that require higher signal to noise ratios (i.e., higher modulation orders) for reliable operation. The maximum likelihood sequence detection (MLSD) receiver for a nonlinear satellite channel, a bank of matched filters followed by a Viterbi detector, serves as a probability of error lower bound for the Volterra and FSE-Volterra equalizers. However, this receiver had not previously been evaluated for a specific satellite channel. In this work, an MLSD receiver is evaluated for a specific downlink-limited satellite channel. Because of the bank of matched filters, the MLSD receiver may be high in complexity. Consequently, the probability of error performance of a more practical suboptimal MLSD receiver, requiring only a single receive filter, is evaluated.

  11. An error analysis of least-squares finite element method of velocity-pressure-vorticity formulation for Stokes problem

    NASA Technical Reports Server (NTRS)

    Chang, Ching L.; Jiang, Bo-Nan

    1990-01-01

    A theoretical proof of the optimal rate of convergence for the least-squares method is developed for the Stokes problem based on the velocity-pressure-vorticity formulation. The 2D Stokes problem is analyzed to define the product space and its inner product, and a priori estimates are derived for the finite-element approximation. The least-squares method is found to converge at the optimal rate for equal-order interpolation.

  12. Can the impact of gender equality on health be measured? A cross-sectional study comparing measures based on register data with individual survey-based data.

    PubMed

    Sörlin, Ann; Öhman, Ann; Ng, Nawi; Lindholm, Lars

    2012-09-17

    The aim of this study was to investigate potential associations between gender equality at work and self-rated health. 2861 employees in 21 companies were invited to participate in a survey; the mean response rate was 49.2%. The questionnaire contained 65 questions, mainly on gender equality and health. Two logistic regression analyses were conducted to assess associations between (i) self-rated health and a register-based company gender equality index (OGGI), and (ii) self-rated health and self-rated gender equality at work. Even though no association was found between the OGGI and health, women who rated their company as "completely equal" or "quite equal" had higher odds of reporting "good health" compared to women who perceived their company as "not equal" (OR = 2.8, 95% CI = 1.4-5.5 and OR = 2.73, 95% CI = 1.6-4.6). Although not statistically significant, we observed the same trends in men. The results were adjusted for age, highest education level, income, full- or part-time employment, and type of company based on the OGGI. No association was found between gender equality in companies, measured by the register-based index (OGGI), and health. However, perceived gender equality at work positively affected women's self-rated health but not men's. Further investigations are necessary to determine whether the results are fully credible given the contemporary health patterns and labour market positions of women and men, or whether the results are driven by selection patterns.

  13. Newer developments on self-modeling curve resolution implementing equality and unimodality constraints.

    PubMed

    Beyramysoltan, Samira; Abdollahi, Hamid; Rajkó, Róbert

    2014-05-27

    Analytical self-modeling curve resolution (SMCR) methods resolve data sets to a range of feasible solutions using only non-negativity constraints. The Lawton-Sylvestre method was the first direct method to analyze a two-component system; it was generalized as the Borgen plot for determining the feasible regions in three-component systems. A geometrical view seems to be required for considering curve resolution methods, because the complicated, purely algebraic treatment stalled the general study of Borgen's work for 20 years. Rajkó and István revised and elucidated the principles of the existing theory of SMCR methods and subsequently introduced computational geometry tools to develop an algorithm for drawing Borgen plots in three-component systems. These developments are theoretical inventions, and the formulations cannot always be given in closed form or regularized formalism, especially for geometric descriptions; this is why several algorithms had to be developed even for the theoretical deductions and determinations. In this study, analytical SMCR methods are revised and described using simple concepts. The details of a drawing algorithm for a developmental type of Borgen plot are given. Additionally, for the first time in the literature, equality and unimodality constraints are successfully implemented in the Lawton-Sylvestre method. To this end, a new state-of-the-art procedure is proposed to impose an equality constraint in Borgen plots. Two- and three-component HPLC-DAD data sets were simulated and analyzed by the new analytical curve resolution methods with and without additional constraints. Detailed descriptions and explanations are given based on the obtained abstract spaces. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Students’ misconception on equal sign

    NASA Astrophysics Data System (ADS)

    Kusuma, N. F.; Subanti, S.; Usodo, B.

    2018-04-01

    Equivalence is a very general relation in mathematics. The focus of this article is narrowed specifically to the equal sign in the context of equations. The equal sign is a symbol of mathematical equivalence. Studies have found that many students do not have a deep understanding of equivalence: students often misinterpret the equal sign as an operational symbol rather than as a symbol of mathematical equivalence. This misinterpretation of the equal sign is labeled a misconception. It is important to discuss and resolve it immediately because it can lead to problems in students' understanding. The purpose of this research is to describe students' misconceptions about the meaning of the equal sign in equal matrices. A descriptive method was used in this study, involving five students of a senior high school in Boyolali who were taking an equal-matrices course. The results of this study show that all of the students held the misconception about the meaning of the equal sign. They interpret the equal sign as an operational symbol rather than as a symbol of mathematical equivalence. Students solve problems in only a single, computational way, so they are stuck in a monotonous way of thinking and unable to develop their creativity.

  15. Parallel fast multipole boundary element method applied to computational homogenization

    NASA Astrophysics Data System (ADS)

    Ptaszny, Jacek

    2018-01-01

    In the present work, a fast multipole boundary element method (FMBEM) and a parallel computer code for 3D elasticity problems are developed and applied to the computational homogenization of a solid containing spherical voids. The system of equations is solved by using the GMRES iterative solver. The boundary of the body is discretized by using quadrilateral serendipity elements with adaptive numerical integration. Operations related to a single GMRES iteration, performed by traversing the corresponding tree structure upwards and downwards, are parallelized by using the OpenMP standard. The assignment of tasks to threads is based on the assumption that the tree nodes at which the moment transformations are initialized can be partitioned into disjoint sets of equal or approximately equal size and assigned to the threads. The achieved speedup as a function of the number of threads is examined.
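    The assignment of tree nodes to threads can be approximated with a simple longest-processing-time partitioning heuristic. This is a sketch only; the paper does not specify this exact algorithm, and the cost values are invented:

```python
def partition_nodes(costs, n_threads):
    """Greedily assign node costs to n_threads disjoint sets of approximately
    equal total cost (longest-processing-time heuristic)."""
    sets = [[] for _ in range(n_threads)]
    totals = [0.0] * n_threads
    # Place the most expensive nodes first, each into the least-loaded set.
    for idx in sorted(range(len(costs)), key=lambda i: -costs[i]):
        k = totals.index(min(totals))
        sets[k].append(idx)
        totals[k] += costs[idx]
    return sets, totals

sets, totals = partition_nodes([5, 3, 3, 2, 2, 1], 2)
```

    Balanced per-thread totals keep the OpenMP threads busy for roughly the same time during each upward and downward tree traversal, which is what the speedup measurement depends on.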

  16. A Novel GMM-Based Behavioral Modeling Approach for Smartwatch-Based Driver Authentication.

    PubMed

    Yang, Ching-Han; Chang, Chin-Chun; Liang, Deron

    2018-03-28

    All drivers have their own distinct driving habits, and usually hold and operate the steering wheel differently in different driving scenarios. In this study, we propose a novel Gaussian mixture model (GMM)-based method that improves on the traditional GMM in modeling driving behavior. This new method can be applied to build a better driver authentication system based on the accelerometer and orientation sensor of a smartwatch. To demonstrate the feasibility of the proposed method, we created an experimental system that analyzes driving behavior using the built-in sensors of a smartwatch. The experimental results for driver authentication, an equal error rate (EER) of 4.62% in the simulated environment and an EER of 7.86% in the real-traffic environment, confirm the feasibility of this approach.
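    The reported equal error rate (EER) is the operating point where the false accept rate and the false reject rate coincide. A minimal sketch of estimating it from score lists (toy scores, not the authors' evaluation code):

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Sweep thresholds over all observed scores and return the point where
    the false accept rate (impostor >= thr) and false reject rate
    (genuine < thr) are closest; report their mean as the EER estimate."""
    scores = np.sort(np.concatenate([genuine, impostor]))
    best = (1.0, 0.0)                     # (smallest |FAR - FRR|, EER)
    for thr in scores:
        far = np.mean(impostor >= thr)    # impostors wrongly accepted
        frr = np.mean(genuine < thr)      # genuine users wrongly rejected
        gap = abs(far - frr)
        if gap < best[0]:
            best = (gap, (far + frr) / 2.0)
    return best[1]

gen = np.array([0.9, 0.8, 0.7, 0.6])     # scores for the enrolled driver
imp = np.array([0.4, 0.3, 0.2, 0.1])     # scores for other drivers
eer = equal_error_rate(gen, imp)
```

    With perfectly separable toy scores the estimate is 0; overlapping score distributions, as in the paper's 4.62% and 7.86% results, yield a nonzero EER.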

  17. Fiber Bragg Grating Based System for Temperature Measurements

    NASA Astrophysics Data System (ADS)

    Tahir, Bashir Ahmed; Ali, Jalil; Abdul Rahman, Rosly

    In this study, a fiber Bragg grating (FBG) sensor for temperature measurement is proposed and experimentally demonstrated. In particular, we point out that the method is well-suited for monitoring temperature because it is able to withstand high-temperature environments where standard thermocouple methods fail. The interrogation technologies of the sensor system are all simple, low cost and effective. In the sensor system, the fiber grating was dipped into a water beaker placed on a hotplate to control the water temperature, and the temperature was raised in equal increments. The sensing principle is based on tracking the Bragg wavelength shifts caused by the temperature change, so the temperature is measured from the wavelength shifts of the FBG induced by the heated water. The fiber grating is a high-temperature-stable excimer-laser-induced grating and exhibits a linear wavelength-temperature relationship in the range of 0-285°C. A dynamic range of 0-285°C and a sensitivity of 0.0131 nm/°C, almost equal to that of a general FBG, have been obtained with this sensor system. Furthermore, the agreement between theoretical analysis and experimental results shows the capability and feasibility of the proposed technique.
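    With the reported linear response, temperature follows directly from the Bragg wavelength shift. A small sketch using the stated 0.0131 nm/°C sensitivity; the function name and reference temperature are illustrative:

```python
SENSITIVITY_NM_PER_C = 0.0131  # reported Bragg wavelength shift per degree C

def temperature_from_shift(delta_lambda_nm, t_ref_c=0.0):
    """Convert a measured Bragg wavelength shift (nm) into temperature,
    assuming the linear response reported over the 0-285 C range."""
    return t_ref_c + delta_lambda_nm / SENSITIVITY_NM_PER_C

# A 1.31 nm shift corresponds to a 100 C rise at this sensitivity.
t = temperature_from_shift(1.31)
```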

  18. Local Observability Analysis of Star Sensor Installation Errors in a SINS/CNS Integration System for Near-Earth Flight Vehicles.

    PubMed

    Yang, Yanqiang; Zhang, Chunxi; Lu, Jiazhen

    2017-01-16

    Strapdown inertial navigation system/celestial navigation system (SINS/CNS) integrated navigation is a fully autonomous and high-precision method, which has been widely used to improve the hitting accuracy and quick-reaction capability of near-Earth flight vehicles. The installation errors between the SINS and the star sensors have been one of the main factors restricting the actual accuracy of SINS/CNS. In this paper, an integration algorithm based on star vector observations is derived that takes the star sensor installation error into account. The star sensor installation error is then accurately estimated using Kalman filtering (KF). Meanwhile, a local observability analysis is performed on the rank of the observability matrix obtained from the linearized observation equation, and the observable conditions are presented and validated: the number of star vectors should be greater than or equal to 2, and the number of attitude adjustments should also be greater than or equal to 2. Simulations indicate that the star sensor installation error is readily observable under the maneuvering condition; moreover, the attitude errors of the SINS are less than 7 arc-seconds. This analysis method and conclusion are useful in the ballistic trajectory design of near-Earth flight vehicles.
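    The rank test on the observability matrix can be sketched as follows. This is a generic linear-systems check with a toy double-integrator model, not the paper's SINS/CNS equations:

```python
import numpy as np

def observability_rank(A, C):
    """Rank of the observability matrix [C; CA; CA^2; ...; CA^(n-1)] for an
    n-state linear(ized) system x' = Ax, y = Cx. Full rank (= n) means the
    state is locally observable from the measurements."""
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    O = np.vstack(blocks)
    return np.linalg.matrix_rank(O)

A = np.array([[0.0, 1.0], [0.0, 0.0]])  # toy double integrator
C = np.array([[1.0, 0.0]])              # position measurement only
r = observability_rank(A, C)
```

    In the paper, the analogous rank condition on the linearized observation equation is what yields the requirements of at least 2 star vectors and at least 2 attitude adjustments.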

  19. A new scheme of general hybrid projective complete dislocated synchronization

    NASA Astrophysics Data System (ADS)

    Chu, Yan-dong; Chang, Ying-Xiang; An, Xin-lei; Yu, Jian-Ning; Zhang, Jian-Gang

    2011-03-01

    Based on the Lyapunov stability theorem, a new type of chaos synchronization, general hybrid projective complete dislocated synchronization (GHPCDS), is proposed under the framework of drive-response systems. The difference between GHPCDS and complete synchronization is that, while evolving in time, each state variable of the drive system synchronizes not with its corresponding state variable of the response system but with a different one. GHPCDS includes complete dislocated synchronization, dislocated anti-synchronization and projective dislocated synchronization as special cases. As examples, the Lorenz chaotic system, the Rössler chaotic system, the hyperchaotic Chen system and the hyperchaotic Lü system are discussed. Numerical simulations are given to show the effectiveness of these methods.

  20. Noise Equalization for Ultrafast Plane Wave Microvessel Imaging.

    PubMed

    Song, Pengfei; Manduca, Armando; Trzasko, Joshua D; Chen, Shigao

    2017-11-01

    Ultrafast plane wave microvessel imaging significantly improves ultrasound Doppler sensitivity by increasing the number of Doppler ensembles that can be collected within a short period of time. The rich spatiotemporal plane wave data also enable more robust clutter filtering based on singular value decomposition. However, due to the lack of transmit focusing, plane wave microvessel imaging is very susceptible to noise. This study was designed to: 1) examine the relationship between ultrasound system noise (primarily induced by time gain compensation) and the microvessel blood flow signal; and 2) propose an adaptive and computationally cost-effective noise equalization method, independent of hardware or software imaging settings, to improve microvessel image quality.

  1. Equality in Sport for Women.

    ERIC Educational Resources Information Center

    Geadelmann, Patricia L.; And Others

    Essays concerning multiple aspects of integrating the concept of professional equality between the sexes into the field of sport are presented. The abstract idea of sexual equality is examined, and methods for determining the degree of equality present in given working situations are set forth. A discussion of the laws, enforcing agencies, and…

  2. Multigrid contact detection method

    NASA Astrophysics Data System (ADS)

    He, Kejing; Dong, Shoubin; Zhou, Zhaoyao

    2007-03-01

    Contact detection is a general problem in many physical simulations. This work presents an O(N) multigrid method for general contact detection problems (MGCD). The multigrid idea is integrated with contact detection problems: both the time complexity and the memory consumption of the MGCD are O(N). Unlike other methods, whose efficiencies are influenced strongly by the object size distribution, the performance of the MGCD is insensitive to the object size distribution. We compare the MGCD with the no binary search (NBS) method and the multilevel boxing method in three dimensions for both time complexity and memory consumption. For objects of similar size, the MGCD is as good as the NBS method, and both outperform the multilevel boxing method regarding memory consumption. For objects of diverse size, the MGCD outperforms both the NBS method and the multilevel boxing method. We use the MGCD to solve the contact detection problem for a granular simulation system based on the discrete element method. From this granular simulation, we obtain the density properties of monosize packing and of binary packing with a size ratio equal to 10. The packing density for monosize particles is 0.636. For binary packing with a size ratio equal to 10, when the number of small particles is 300 times the number of big particles, a maximal packing density of 0.824 is achieved.
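    The grid-hashing idea behind O(N) contact detection for similar-size objects can be sketched as follows. This is a simplified uniform-grid version, not the multigrid algorithm itself, and the names and inputs are invented:

```python
import numpy as np
from collections import defaultdict

def grid_contacts(centers, radius):
    """Detect contacts between equal-size spheres by hashing centers into a
    grid with cell size 2*radius and testing only neighbouring cells."""
    cell = 2.0 * radius
    grid = defaultdict(list)
    for i, c in enumerate(centers):
        grid[tuple((np.asarray(c) // cell).astype(int))].append(i)
    contacts = set()
    for key, members in grid.items():
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                for dk in (-1, 0, 1):
                    nb = (key[0] + di, key[1] + dj, key[2] + dk)
                    for i in members:
                        for j in grid.get(nb, []):
                            if i < j and np.linalg.norm(
                                    np.asarray(centers[i]) - centers[j]) < 2 * radius:
                                contacts.add((i, j))
    return contacts

pts = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (10.0, 10.0, 10.0)]
cts = grid_contacts(pts, 1.0)
```

    A single-level grid degrades when object sizes are diverse, because the cell size must match the largest object; the multigrid approach of the paper avoids exactly this sensitivity by using a hierarchy of grids.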

  3. Voltage equalization of an ultracapacitor module by cell grouping using number partitioning algorithm

    NASA Astrophysics Data System (ADS)

    Oyarbide, E.; Bernal, C.; Molina, P.; Jiménez, L. A.; Gálvez, R.; Martínez, A.

    2016-01-01

    Ultracapacitors are low voltage devices and therefore, for practical applications, they need to be used in modules of series-connected cells. Because of the inherent manufacturing tolerance of the capacitance parameter of each cell, and as the maximum voltage value cannot be exceeded, the module requires inter-cell voltage equalization. If the intended application suffers repeated fast charging/discharging cycles, active equalization circuits must be rated to full power, and thus the module becomes expensive. Previous work shows that a series connection of several sets of paralleled ultracapacitors minimizes the dispersion of equivalent capacitance values, and also the voltage differences between capacitors. Thus the overall life expectancy is improved. This paper proposes a method to distribute ultracapacitors with a number partitioning-based strategy to reduce the dispersion between equivalent submodule capacitances. Thereafter, the total amount of stored energy and/or the life expectancy of the device can be considerably improved.
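    Grouping cells so that the equivalent submodule capacitances match is a balanced number-partitioning problem. A greedy largest-value-first sketch is shown below (this is a standard heuristic, not necessarily the paper's exact partitioning algorithm; the capacitance values are hypothetical):

```python
def greedy_partition(values, k):
    """Assign each value (largest first) to the group with the smallest sum."""
    groups = [[] for _ in range(k)]
    sums = [0.0] * k
    for v in sorted(values, reverse=True):
        i = sums.index(min(sums))
        groups[i].append(v)
        sums[i] += v
    return groups, sums

# Hypothetical cell capacitances (farads) spread by manufacturing tolerance
caps = [102, 99, 101, 95, 105, 98]
groups, sums = greedy_partition(caps, 3)
```

Since paralleled capacitances add, each group's sum is its equivalent submodule capacitance; minimizing the spread of these sums minimizes the inter-submodule voltage dispersion.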

  4. A comparative study on preprocessing techniques in diabetic retinopathy retinal images: illumination correction and contrast enhancement.

    PubMed

    Rasta, Seyed Hossein; Partovi, Mahsa Eisazadeh; Seyedarabi, Hadi; Javadzadeh, Alireza

    2015-01-01

    To investigate the effect of preprocessing techniques, including contrast enhancement and illumination correction, on retinal image quality, a comparative study was carried out. We studied and implemented several illumination correction and contrast enhancement techniques on color retinal images to find the best technique for optimum image enhancement. To compare and choose the best illumination correction technique, we analyzed the corrected red and green components of the color retinal images statistically and visually. The two contrast enhancement techniques were analyzed using a vessel segmentation algorithm by calculating sensitivity and specificity. The statistical evaluation of the illumination correction techniques was carried out by calculating coefficients of variation. The dividing method, using a median filter to estimate background illumination, showed the lowest coefficients of variation in the red component. The quotient and homomorphic filtering methods, after the dividing method, also presented good results based on their low coefficients of variation. Contrast limited adaptive histogram equalization (CLAHE) increased the sensitivity of the vessel segmentation algorithm by up to 5% at the same level of accuracy, and has a higher sensitivity than the polynomial transformation operator as a contrast enhancement technique for vessel segmentation. Three techniques, the dividing method using a median filter to estimate background, the quotient-based method, and homomorphic filtering, were found to be effective illumination correction techniques based on the statistical evaluation. Applying a local contrast enhancement technique such as CLAHE to fundus images showed good potential for enhancing vasculature segmentation.
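    CLAHE operates tile-by-tile with histogram clipping; as a minimal baseline illustrating the underlying operation, plain global histogram equalization of an 8-bit image can be sketched as follows (the low-contrast ramp image is a toy example):

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # first nonzero CDF value maps to 0
    lut = np.round(
        np.clip(cdf - cdf_min, 0, None) / (cdf[-1] - cdf_min) * 255
    ).astype(np.uint8)
    return lut[img]  # apply the lookup table pixel-wise

img = np.tile(np.arange(64, 192, dtype=np.uint8), (8, 1))  # low-contrast ramp
out = hist_equalize(img)
```

The equalized image spans the full 0-255 range; CLAHE differs by computing such lookup tables per tile, clipping each histogram to bound local contrast amplification, and interpolating between tiles.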

  5. Middle-School Students' Understanding of the Equal Sign: The Books They Read Can't Help

    ERIC Educational Resources Information Center

    McNeil, Nicole M.; Grandau, Laura; Knuth, Eric J.; Alibali, Martha W.; Stephens, Ana C.; Hattikudur, Shanta; Krill, Daniel E.

    2006-01-01

    This study examined how 4 middle school textbook series (2 skills-based, 2 Standards-based) present equal signs. Equal signs were often presented in standard operations equals answer contexts (e.g., 3 + 4 = 7) and were rarely presented in nonstandard operations on both sides contexts (e.g., 3 + 4 = 5 + 2). They were, however, presented in other…

  6. MO-DE-207A-02: A Feature-Preserving Image Reconstruction Method for Improved Pancreatic Lesion Classification in Diagnostic CT Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, J; Tsui, B; Noo, F

    Purpose: To develop a feature-preserving model-based image reconstruction (MBIR) method that improves performance in pancreatic lesion classification at equal or reduced radiation dose. Methods: A set of pancreatic lesion models was created with both benign and premalignant lesion types. These two classes of lesions are distinguished by their fine internal structures; their delineation is therefore crucial to the task of pancreatic lesion classification. To reduce image noise while preserving the features of the lesions, we developed an MBIR method with curvature-based regularization. The novel regularization encourages formation of smooth surfaces that model both the exterior shape and the internal features of pancreatic lesions. Given that the curvature depends on the unknown image, image reconstruction or denoising becomes a non-convex optimization problem; to address this issue, an iterative-reweighting scheme was used to calculate and update the curvature using the image from the previous iteration. Evaluation was carried out by inserting the lesion models into the pancreas of a patient CT image. Results: Visual inspection was used to compare conventional TV regularization with our curvature-based regularization. Several penalty strengths were considered for TV regularization, all of which resulted in erasing portions of the septation (thin partition) in a premalignant lesion. At matched noise variance (50% noise reduction in the patient stomach region), the connectivity of the septation was well preserved using the proposed curvature-based method. Conclusion: The curvature-based regularization is able to reduce image noise while simultaneously preserving the lesion features. This method could potentially improve task performance for pancreatic lesion classification at equal or reduced radiation dose. The result is of high significance for longitudinal surveillance studies of patients with pancreatic cysts, which may develop into pancreatic cancer. The senior author receives financial support from Siemens GmbH Healthcare.

  7. Rapid prediction of chemical metabolism by human UDP-glucuronosyltransferase isoforms using quantum chemical descriptors derived with the electronegativity equalization method.

    PubMed

    Sorich, Michael J; McKinnon, Ross A; Miners, John O; Winkler, David A; Smith, Paul A

    2004-10-07

    This study aimed to evaluate in silico models based on quantum chemical (QC) descriptors derived using the electronegativity equalization method (EEM) and to assess the use of QC properties to predict chemical metabolism by human UDP-glucuronosyltransferase (UGT) isoforms. Various EEM-derived QC molecular descriptors were calculated for known UGT substrates and nonsubstrates. Classification models were developed using support vector machine and partial least squares discriminant analysis. In general, the most predictive models were generated with the support vector machine. Combining QC and 2D descriptors (from previous work) using a consensus approach resulted in a statistically significant improvement in predictivity (to 84%) over both the QC and 2D models and the other methods of combining the descriptors. EEM-derived QC descriptors were shown to be both highly predictive and computationally efficient. It is likely that EEM-derived QC properties will be generally useful for predicting ADMET and physicochemical properties during drug discovery.
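    EEM obtains atomic partial charges by solving a small linear system in which every atom's effective electronegativity is equalized, subject to a total-charge constraint. A sketch is below; the χ/η parameters and geometry are hypothetical (real applications use calibrated parameter sets):

```python
import numpy as np

def eem_charges(chi, eta, R, Q=0.0):
    """Solve the EEM system: chi_i + 2*eta_i*q_i + sum_j q_j/R_ij = chi_bar,
    with sum_i q_i = Q. Unknowns are the n charges plus chi_bar."""
    n = len(chi)
    A = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    for i in range(n):
        A[i, i] = 2.0 * eta[i]
        for j in range(n):
            if i != j:
                A[i, j] = 1.0 / R[i, j]  # Coulomb coupling between charges
        A[i, n] = -1.0                   # unknown equalized electronegativity
        b[i] = -chi[i]
    A[n, :n] = 1.0                       # total-charge constraint row
    b[n] = Q
    sol = np.linalg.solve(A, b)
    return sol[:n], sol[n]

# Hypothetical diatomic: atom 0 more electronegative, separation 2.0 a.u.
chi = np.array([1.0, 0.5])
eta = np.array([1.0, 1.0])
R = np.array([[0.0, 2.0], [2.0, 0.0]])
q, chi_eq = eem_charges(chi, eta, R)
```

The more electronegative atom acquires the negative charge, and because only one linear solve is needed, EEM descriptors are cheap enough for large-scale screening, which is the computational advantage the study exploits.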

  8. Utility of Combining a Simulation-Based Method With a Lecture-Based Method for Fundoscopy Training in Neurology Residency.

    PubMed

    Gupta, Deepak K; Khandker, Namir; Stacy, Kristin; Tatsuoka, Curtis M; Preston, David C

    2017-10-01

    Fundoscopic examination is an essential component of the neurologic examination. Competence in its performance is mandated as a required clinical skill for neurology residents by the American Council of Graduate Medical Education. Government and private insurance agencies require its performance and documentation for moderate- and high-level neurologic evaluations. Traditionally, assessment and teaching of this key clinical examination technique have been difficult in neurology residency training. To evaluate the utility of a simulation-based method and the traditional lecture-based method for assessment and teaching of fundoscopy to neurology residents. This study was a prospective, single-blinded, education research study of 48 neurology residents recruited from July 1, 2015, through June 30, 2016, at a large neurology residency training program. Participants were equally divided into control and intervention groups after stratification by training year. Baseline and postintervention assessments were performed using questionnaire, survey, and fundoscopy simulators. After baseline assessment, both groups initially received lecture-based training, which covered fundamental knowledge on the components of fundoscopy and key neurologic findings observed on fundoscopic examination. The intervention group additionally received simulation-based training, which consisted of an instructor-led, hands-on workshop that covered practical skills of performing fundoscopic examination and identifying neurologically relevant findings on another fundoscopy simulator. The primary outcome measures were the postintervention changes in fundoscopy knowledge, skills, and total scores. A total of 30 men and 18 women were equally distributed between the 2 groups. The intervention group had significantly higher mean (SD) increases in skills (2.5 [2.3] vs 0.8 [1.8], P = .01) and total (9.3 [4.3] vs 5.3 [5.8], P = .02) scores compared with the control group. 
Knowledge scores (6.8 [3.3] vs 4.5 [4.9], P = .11) increased nonsignificantly in both groups. This study supports the use of a simulation-based method as a supplementary tool to the lecture-based method in the assessment and teaching of fundoscopic examination in neurology residency.

  9. Performance analysis of adaptive equalization for coherent acoustic communications in the time-varying ocean environment.

    PubMed

    Preisig, James C

    2005-07-01

    Equations are derived for analyzing the performance of channel-estimate-based equalizers. The performance is characterized in terms of the mean squared soft decision error (σs²) of each equalizer. This error is decomposed into two components: the minimum achievable error (σ0²) and the excess error (σe²). The former is the soft decision error that would be realized by the equalizer if the filter coefficient calculation were based upon perfect knowledge of the channel impulse response and the statistics of the interfering noise field. The latter is the additional soft decision error that is realized due to errors in the estimates of these channel parameters. These expressions accurately predict the equalizer errors observed in the processing of experimental data by a channel-estimate-based decision feedback equalizer (DFE) and a passive time-reversal equalizer. Further expressions are presented that allow equalizer performance to be predicted given the scattering function of the acoustic channel. The analysis using these expressions yields insights into the features of surface scattering that most significantly impact equalizer performance in shallow water environments, and motivates the implementation of a DFE that is robust with respect to channel estimation errors.

  10. Financing Rural and Small Schools: Issues of Adequacy and Equity.

    ERIC Educational Resources Information Center

    Honeyman, David S.; And Others

    This monograph investigates issues related to the financial support of rural schools. The first section describes various state formulas and the methods used to distribute funds to rural schools. It considers questions about the adequacy of funding adjustments based on sparsity and the relationship of such adjustments to equal educational…

  11. Computing the Envelope for Stepwise Constant Resource Allocations

    NASA Technical Reports Server (NTRS)

    Muscettola, Nicola; Clancy, Daniel (Technical Monitor)

    2001-01-01

    Estimating tight resource-level bounds is a fundamental problem in the construction of flexible plans with resource utilization. In this paper we describe an efficient algorithm that builds a resource envelope, the tightest possible such bound. The algorithm is based on transforming the temporal network of resource-consuming and resource-producing events into a flow network with nodes equal to the events and edges equal to the necessary predecessor links between events. The incremental solution of a staged maximum flow problem on the network is then used to compute the time of occurrence and the height of each step of the resource envelope profile. The staged algorithm has the same computational complexity as solving a maximum flow problem on the entire flow network. This makes the method computationally feasible for use in the inner loop of search-based scheduling algorithms.
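    The staged computation rests on ordinary maximum flow over the event network. A generic Edmonds–Karp sketch is below (in the paper, nodes are events and edges are predecessor links; the four-node graph here is a toy example):

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp maximum flow; cap[u][v] holds residual edge capacity."""
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # Recover the path, find its bottleneck, then augment
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= aug
            cap[v].setdefault(u, 0)
            cap[v][u] += aug   # residual (reverse) edge
        flow += aug

cap = {'s': {'a': 3, 'b': 2}, 'a': {'t': 2}, 'b': {'t': 3}, 't': {}}
val = max_flow(cap, 's', 't')
```

The staged algorithm's efficiency comes from reusing the residual network across stages rather than re-solving from scratch, so the total work matches one max-flow computation on the whole network.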

  12. 77 FR 43498 - Federal Sector Equal Employment Opportunity

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-25

    ... on the basis of age; the Equal Pay Act of 1963, which prohibits sex-based wage discrimination; and..., Age discrimination, Equal employment opportunity, Government employees, Individuals with disabilities... EQUAL EMPLOYMENT OPPORTUNITY COMMISSION 29 CFR Part 1614 RIN Number 3046-AA73 Federal Sector Equal...

  13. Progress in multirate digital control system design

    NASA Technical Reports Server (NTRS)

    Berg, Martin C.; Mason, Gregory S.

    1991-01-01

    A new methodology for multirate sampled-data control design based on a new generalized control law structure, two new parameter-optimization-based control law synthesis methods, and a new singular-value-based robustness analysis method are described. The control law structure can represent multirate sampled-data control laws of arbitrary structure and dynamic order, with arbitrarily prescribed sampling rates for all sensors and update rates for all processor states and actuators. The two control law synthesis methods employ numerical optimization to determine values for the control law parameters. The robustness analysis method is based on the multivariable Nyquist criterion applied to the loop transfer function for the sampling period equal to the period of repetition of the system's complete sampling/update schedule. The complete methodology is demonstrated by application to the design of a combination yaw damper and modal suppression system for a commercial aircraft.

  14. Method for optimizing channelized quadratic observers for binary classification of large-dimensional image datasets

    PubMed Central

    Kupinski, M. K.; Clarkson, E.

    2015-01-01

    We present a new method for computing optimized channels for channelized quadratic observers (CQO) that is feasible for high-dimensional image data. The method for calculating channels is applicable in general and optimal for Gaussian distributed image data. Gradient-based algorithms for determining the channels are presented for five different information-based figures of merit (FOMs). Analytic solutions for the optimum channels for each of the five FOMs are derived for the case of equal mean data for both classes. The optimum channels for three of the FOMs under the equal mean condition are shown to be the same. This result is critical since some of the FOMs are much easier to compute. Implementing the CQO requires a set of channels and the first- and second-order statistics of channelized image data from both classes. The dimensionality reduction from M measurements to L channels is a critical advantage of CQO since estimating image statistics from channelized data requires smaller sample sizes and inverting a smaller covariance matrix is easier. In a simulation study we compare the performance of ideal and Hotelling observers to CQO. The optimal CQO channels are calculated using both eigenanalysis and a new gradient-based algorithm for maximizing Jeffrey's divergence (J). Optimal channel selection without eigenanalysis makes the J-CQO on large-dimensional image data feasible. PMID:26366764
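    For context, the Hotelling observer the study compares against is the optimal linear discriminant for Gaussian data; a minimal sketch with toy class statistics (not the paper's image data) is:

```python
import numpy as np

def hotelling_template(mean0, mean1, cov):
    """Hotelling observer template: w = S^{-1} (m1 - m0)."""
    return np.linalg.solve(cov, mean1 - mean0)

def hotelling_stat(w, g):
    """Scalar test statistic for a feature vector g; threshold to classify."""
    return float(np.dot(w, g))

# Toy two-feature classes with identity covariance
mean0 = np.array([0.0, 0.0])
mean1 = np.array([1.0, 0.0])
cov = np.eye(2)
w = hotelling_template(mean0, mean1, cov)
```

The CQO generalizes this to a quadratic statistic on channelized data, which is where the dimensionality reduction from M measurements to L channels pays off: the covariance to invert is L x L rather than M x M.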

  15. Using small-area variations to inform health care service planning: what do we 'need' to know?

    PubMed

    Mercuri, Mathew; Birch, Stephen; Gafni, Amiram

    2013-12-01

    Allocating resources on the basis of population need is a health care policy goal in many countries. Thus, resources must be allocated in accordance with need if stakeholders are to achieve policy goals. Small area methods have been presented as a means for revealing important information that can assist stakeholders in meeting policy goals. The purpose of this review is to examine the extent to which small area methods provide information relevant to meeting the goals of a needs-based health care policy. We present a conceptual framework explaining the terms 'demand', 'need', 'use' and 'supply', as commonly used in the literature. We critically review the literature on small area methods through the lens of this framework. 'Use' cannot be used as a proxy or surrogate of 'need'. Thus, if the goal of health care policy is to provide equal access for equal need, then traditional small area methods are inadequate because they measure small area variations in use of services in different populations, independent of the levels of need in those populations. Small area methods can be modified by incorporating direct measures of relative population need from population health surveys or by adjusting population size for levels of health risks in populations such as the prevalence of smoking and low birth weight. This might improve what can be learned from studies employing small area methods if they are to inform needs-based health care policies. © 2013 John Wiley & Sons Ltd.

  16. A complex valued radial basis function network for equalization of fast time varying channels.

    PubMed

    Gan, Q; Saratchandran, P; Sundararajan, N; Subramanian, K R

    1999-01-01

    This paper presents a complex valued radial basis function (RBF) network for equalization of fast time varying channels. A new method for calculating the centers of the RBF network is given. The method fixes the number of RBF centers even as the equalizer order is increased, so that good performance is obtained by a high-order RBF equalizer with a small number of centers. Simulations are performed on time varying channels, using a Rayleigh fading channel model, to compare the performance of the RBF equalizer with an adaptive maximum-likelihood sequence estimator (MLSE) consisting of a channel estimator and an MLSE implemented by the Viterbi algorithm. The results show that the RBF equalizer achieves superior performance with less computational complexity.
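    The forward pass of such an equalizer maps a window of received samples through Gaussian kernels at complex-valued centers and combines them with complex output weights. A minimal sketch follows (the centers and weights here are illustrative placeholders, not produced by the paper's center-calculation method):

```python
import numpy as np

def rbf_equalizer_output(x, centers, weights, sigma):
    """Complex RBF network output for one tap vector x.
    centers: (M, taps) complex, weights: (M,) complex."""
    d2 = np.sum(np.abs(x - centers) ** 2, axis=1)   # squared complex distances
    phi = np.exp(-d2 / (2.0 * sigma ** 2))           # Gaussian kernel responses
    return np.dot(weights, phi)                      # complex decision variable

# Toy setup: two centers over a 2-tap window
centers = np.array([[1 + 0j, 0 + 0j], [0 + 0j, 0 + 1j]])
weights = np.array([1 + 1j, 0 + 0j])
x = np.array([1 + 0j, 0 + 0j])
y = rbf_equalizer_output(x, centers, weights, sigma=0.5)
```

When the tap vector coincides with a center, that kernel dominates and the output approaches the corresponding complex weight, which is how the network approximates the channel's decision regions.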

  17. Comparison of different methods to include recycling in LCAs of aluminium cans and disposable polystyrene cups.

    PubMed

    van der Harst, Eugenie; Potting, José; Kroeze, Carolien

    2016-02-01

    Many methods have been reported and used to include recycling in life cycle assessments (LCAs). This paper evaluates six widely used methods: three substitution methods (i.e. substitution based on equal quality, a correction factor, and alternative material), allocation based on the number of recycling loops, the recycled-content method, and the equal-share method. These six methods were first compared, with an assumed hypothetical 100% recycling rate, for an aluminium can and a disposable polystyrene (PS) cup. The substitution and recycled-content method were next applied with actual rates for recycling, incineration and landfilling for both product systems in selected countries. The six methods differ in their approaches to credit recycling. The three substitution methods stimulate the recyclability of the product and assign credits for the obtained recycled material. The choice to either apply a correction factor, or to account for alternative substituted material has a considerable influence on the LCA results, and is debatable. Nevertheless, we prefer incorporating quality reduction of the recycled material by either a correction factor or an alternative substituted material over simply ignoring quality loss. The allocation-on-number-of-recycling-loops method focusses on the life expectancy of material itself, rather than on a specific separate product. The recycled-content method stimulates the use of recycled material, i.e. credits the use of recycled material in products and ignores the recyclability of the products. The equal-share method is a compromise between the substitution methods and the recycled-content method. The results for the aluminium can follow the underlying philosophies of the methods. 
The results for the PS cup are additionally influenced by the correction factor or the credits for the alternative material accounting for the drop in PS quality, by the waste treatment management (recycling, incineration and landfilling rates), and by the source of avoided electricity in the case of waste incineration. The results for the PS cup, which are less dominated by production of virgin material than those for the aluminium can, furthermore depend on the environmental impact categories. This stresses the importance of considering impact categories beyond the most commonly used global warming impact. The multitude of available methods complicates the choice of an appropriate method for the LCA practitioner. New guidelines keep appearing, and industries also suggest their own preferred methods. Unambiguous ISO guidelines, particularly related to sensitivity analysis, would be a great step forward in making LCAs more robust. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Direct Optimal Control of Duffing Dynamics

    NASA Technical Reports Server (NTRS)

    Oz, Hayrani; Ramsey, John K.

    2002-01-01

    The "direct control method" is a novel concept that is an attractive alternative and competitor to the differential-equation-based methods. The direct method is equally well applicable to nonlinear, linear, time-varying, and time-invariant systems. For all such systems, the method yields explicit closed-form control laws based on minimization of a quadratic control performance measure. We present an application of the direct method to the dynamics and optimal control of the Duffing system where the control performance measure is not restricted to a quadratic form and hence may include a quartic energy term. The results we present in this report also constitute further generalizations of our earlier work in "direct optimal control methodology." The approach is demonstrated for the optimal control of the Duffing equation with a softening nonlinear stiffness.

  19. Blind Channel Equalization Using Constrained Generalized Pattern Search Optimization and Reinitialization Strategy

    NASA Astrophysics Data System (ADS)

    Zaouche, Abdelouahib; Dayoub, Iyad; Rouvaen, Jean Michel; Tatkeu, Charles

    2008-12-01

    We propose a globally convergent baud-spaced blind equalization method in this paper. The method is based on the application of both generalized pattern search optimization and channel surfing reinitialization. The cost function, which is potentially unimodal, relies on higher-order statistics, and its optimization is achieved using a pattern search algorithm. Since convergence to the global minimum is not unconditionally guaranteed, we make use of a channel surfing reinitialization (CSR) strategy to find the right global minimum. The proposed algorithm is analyzed, and simulation results using a severely frequency-selective propagation channel are given. Detailed comparisons with the constant modulus algorithm (CMA) are highlighted. The performance of the proposed algorithm is evaluated in terms of intersymbol interference, normalized received signal constellations, and root mean square error vector magnitude. For nonconstant modulus input signals, our algorithm significantly outperforms the CMA with a full channel surfing reinitialization strategy; comparable performance is obtained for constant modulus signals.

  20. Protein Equalizer Technology: the quest for a "democratic proteome".

    PubMed

    Righetti, Pier Giorgio; Boschetti, Egisto; Lomas, Lee; Citterio, Attilio

    2006-07-01

    No proteome can be considered "democratic", but rather "oligarchic", since a few proteins dominate the landscape and often obliterate the signal of the rare ones. This is the reason why most scientists lament that, in proteome analysis, the same set of abundant proteins is seen again and again. A host of pre-fractionation techniques have been described, but all of them, one way or another, are besieged by problems, in that they are based on a "depletion principle", i.e. getting rid of the unwanted species. Yet "democracy" calls not for killing the enemy, but for giving "equal rights" to all people. One way to achieve that would be the use of "Protein Equalizer Technology" for reducing protein concentration differences. This comprises a diverse library of combinatorial ligands coupled to spherical porous beads. When these beads come into contact with complex proteomes (e.g. human urine and serum, egg white, and any cell lysate, for that matter) of widely differing protein composition and relative abundances, they are able to "equalize" the protein population, by sharply reducing the concentration of the most abundant components, while simultaneously enhancing the concentration of the most dilute species. It is felt that this novel method could offer a strong step forward in bringing the "unseen proteome" (due to either low abundance and/or presence of interference) within the detection capabilities of current proteomics detection methods. Examples are given of equalization of human urine and serum samples, resulting in the discovery of a host of proteins never reported before. Additionally, these beads can be used to remove host cell proteins from purified recombinant proteins or protein purified from natural sources that are intended for human consumption. These proteins typically reach purities of the order of 98%: higher purities often become prohibitively expensive. 
Yet, if incubated with "equalizer beads", these last impurities can be effectively removed at a small cost and with minute losses of the main, valuable product.

  1. Validation of space-based polarization measurements by use of a single-scattering approximation, with application to the global ozone monitoring experiment.

    PubMed

    Aben, Ilse; Tanzi, Cristina P; Hartmann, Wouter; Stam, Daphne M; Stammes, Piet

    2003-06-20

    A method is presented for in-flight validation of space-based polarization measurements based on approximation of the direction of polarization of scattered sunlight by the Rayleigh single-scattering value. This approximation is verified by simulations of radiative transfer calculations for various atmospheric conditions. The simulations show locations along an orbit where the scattering geometries are such that the intensities of the parallel and orthogonal polarization components of the light are equal, regardless of the observed atmosphere and surface. The method can be applied to any space-based instrument that measures the polarization of reflected solar light. We successfully applied the method to validate the Global Ozone Monitoring Experiment (GOME) polarization measurements. The error in the GOME's three broadband polarization measurements appears to be approximately 1%.

  2. An evaluation of HEMT potential for millimeter-wave signal sources using interpolation and harmonic balance techniques

    NASA Technical Reports Server (NTRS)

    Kwon, Youngwoo; Pavlidis, Dimitris; Tutt, Marcel N.

    1991-01-01

    A large-signal analysis method based on a harmonic balance technique and a 2-D cubic spline interpolation function has been developed and applied to the prediction of InP-based HEMT oscillator performance for frequencies extending up to the submillimeter-wave range. The large-signal analysis method uses a limited number of DC and small-signal S-parameter data and allows the accurate characterization of HEMT large-signal behavior. The method has been validated experimentally using load-pull measurements. Oscillation frequency, power performance, and load requirements are discussed, with an operation capability of 300 GHz predicted using state-of-the-art devices (fmax approximately equal to 450 GHz).

  3. Methods of making alkyl esters

    DOEpatents

    Elliott, Brian

    2010-08-03

    A method comprising contacting an alcohol, a feed comprising one or more glycerides and equal to or greater than 2 wt % of one or more free fatty acids, and a solid acid catalyst, a nanostructured polymer catalyst, or a sulfated zirconia catalyst in one or more reactors, and recovering from the one or more reactors an effluent comprising equal to or greater than about 75 wt % alkyl ester and equal to or less than about 5 wt % glyceride.

  4. Equality in the Workplace. An Equal Opportunities Handbook for Trainers. Human Resource Management in Action Series.

    ERIC Educational Resources Information Center

    Collins, Helen

    This workbook, which is intended as a practical guide for human resource managers, trainers, and others concerned with developing and implementing equal opportunities training programs in British workplaces, examines issues in and methods for equal opportunities training. The introduction gives an overview of current training trends and issues.…

  5. Promoting an Equality Agenda in Adult Literacy Practice Using Non-Text/Creative Methodologies

    ERIC Educational Resources Information Center

    Mark, Rob

    2007-01-01

    This paper examines the relationship between literacy, equality and creativity and the relevance for adult literacy practices. It looks in particular at how literacy tutors can use creative non-text methods to promote an understanding of equality in learners' lives. Through an examination of the findings from the Literacy and Equality in Irish…

  6. Evaluating CMA equalization of SOQPSK-TG data for aeronautical telemetry

    NASA Astrophysics Data System (ADS)

    Cole-Rhodes, Arlene; KoneDossongui, Serge; Umuolo, Henry; Rice, Michael

    2015-05-01

    This paper presents the results of using a constant modulus algorithm (CMA) to recover shaped offset quadrature-phase shift keying (SOQPSK)-TG modulated data, which has been transmitted using the iNET data packet structure. This standard is defined and used for aeronautical telemetry. Based on the iNET-packet structure, the adaptive block processing CMA equalizer can be initialized using the minimum mean square error (MMSE) equalizer [3]. This CMA equalizer is being evaluated for use on iNET structured data, with initial tests being conducted on measured data which has been received in a controlled laboratory environment. Thus the CMA equalizer is applied at the receiver to data packets which have been experimentally generated in order to determine the feasibility of our equalization approach, and its performance is compared to that of the MMSE equalizer. Performance evaluation is based on computed bit error rate (BER) counts for these equalizers.
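    The CMA cost penalizes deviation of the equalizer output modulus from a constant R. A stochastic-gradient sketch is below (the paper uses an adaptive block-processing variant initialized from the MMSE equalizer; the center-spike initialization here is a common simplification):

```python
import numpy as np

def cma_equalize(x, num_taps=7, mu=1e-3, R=1.0):
    """Stochastic-gradient CMA: w <- w - mu*(|y|^2 - R)*y*conj(x_window)."""
    w = np.zeros(num_taps, dtype=complex)
    w[num_taps // 2] = 1.0                      # center-spike initialization
    for n in range(num_taps, len(x) + 1):
        xn = x[n - num_taps:n][::-1]            # tap-delay-line window
        y = np.dot(w, xn)                       # equalizer output
        w -= mu * (np.abs(y) ** 2 - R) * y * np.conj(xn)
    return w

# Sanity check: a signal that already has constant modulus R yields zero update,
# so the taps are unchanged (|y|^2 - R == 0 at every step)
bpsk = np.ones(40, dtype=complex)
bpsk[::2] = -1.0
w = cma_equalize(bpsk)
```

Because the cost depends only on the output modulus, CMA needs no training symbols; for a distorting channel the taps adapt until the output constellation's modulus is restored, after which carrier-phase recovery is still required.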

  7. PubMed-supported clinical term weighting approach for improving inter-patient similarity measure in diagnosis prediction.

    PubMed

    Chan, Lawrence Wc; Liu, Ying; Chan, Tao; Law, Helen Kw; Wong, S C Cesar; Yeung, Andy Ph; Lo, K F; Yeung, S W; Kwok, K Y; Chan, William Yl; Lau, Thomas Yh; Shyu, Chi-Ren

    2015-06-02

Similarity-based retrieval of Electronic Health Records (EHRs) from large clinical information systems provides physicians with evidence to support making diagnoses or referring examinations for suspected cases. Clinical terms in EHRs represent high-level conceptual information, and a similarity measure established on these terms reflects the chance of inter-patient disease co-occurrence. The assumption that clinical terms are equally relevant to a disease is unrealistic and reduces prediction accuracy. Here we propose a term weighting approach supported by the PubMed search engine to address this issue. We collected and studied 112 abdominal computed tomography imaging examination reports from four hospitals in Hong Kong. Clinical terms, which are the image findings related to hepatocellular carcinoma (HCC), were extracted from the reports. Through two systematic PubMed search methods, the generic and specific term weightings were established by estimating the conditional probabilities of clinical terms given HCC. Each report was characterized by an ontological feature vector, and there were 6216 vector pairs in total. We optimized the modified direction cosine (mDC) with respect to a regularization constant embedded into the feature vector. Equal, generic and specific term weighting approaches were applied to measure the similarity of each pair, and their performances for predicting inter-patient co-occurrence of HCC diagnoses were compared using Receiver Operating Characteristic (ROC) analysis. The areas under the curves (AUROCs) of similarity scores based on equal, generic and specific term weighting approaches were 0.735, 0.728 and 0.743 respectively (p < 0.01). In comparison with equal term weighting, the performance was significantly improved by specific term weighting (p < 0.01) but not by generic term weighting. 
The clinical terms "Dysplastic nodule", "nodule of liver" and "equal density (isodense) lesion" were found to be the top three image findings associated with HCC in PubMed. Our findings suggest that the optimized similarity measure with specific term weighting applied to EHRs can significantly improve the accuracy of predicting inter-patient co-occurrence of diagnoses when compared with equal and generic term weighting approaches.
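The term-weighted similarity idea can be sketched as a weighted direction cosine. This is a hedged illustration only: the weight vector stands in for the PubMed-derived conditional probabilities P(term | HCC), and appending a constant to the feature vectors is merely our guess at how the mDC's regularization constant is embedded:

```python
import numpy as np

def weighted_cosine(u, v, weights, reg=0.1):
    """Term-weighted direction-cosine similarity (illustrative sketch).

    u, v    : ontological feature vectors (e.g., binary term indicators)
    weights : per-term relevance weights (stand-in for P(term | disease))
    reg     : regularization constant appended to both vectors (assumed form)
    """
    u = np.append(np.asarray(u, float), reg)
    v = np.append(np.asarray(v, float), reg)
    w = np.append(np.asarray(weights, float), 1.0)
    num = np.sum(w * u * v)
    den = np.sqrt(np.sum(w * u ** 2) * np.sum(w * v ** 2))
    return num / den
```

With equal weights this reduces to the ordinary cosine similarity; unequal weights let highly disease-relevant terms dominate the score.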

  8. Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.

    PubMed

    You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary

    2011-02-01

The statistical power of cluster randomized trials depends on two sample size components, the number of clusters per group and the number of individuals within clusters (cluster size). Variable cluster sizes are common, and this variation alone may have a significant impact on study power. Previous approaches have taken this into account by either adjusting the total sample size using a designated design effect or adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using the t-test, use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes, and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for a trial with unequal cluster sizes to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret. It connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment in mean cluster size alone or simultaneous adjustment in mean cluster size and number of clusters, and is a flexible alternative to and a useful complement to existing methods. Comparisons indicated that the relative efficiency we define is greater than the relative efficiency in the literature under some conditions; equivalently, the measure in the literature can be smaller than ours under some conditions, underestimating the relative efficiency. The relative efficiency of unequal versus equal cluster sizes defined using the noncentrality parameter thus suggests a flexible and useful sample size approach.
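For context, the variance-based relative efficiency from the literature (the measure the authors compare their noncentrality-based definition against) can be computed directly from cluster sizes and the intracluster correlation. The sketch below uses the standard effective-sample-size formula, not the authors' noncentrality-parameter definition:

```python
import numpy as np

def effective_n(cluster_sizes, icc):
    """Effective sample size per arm: sum of m_i / (1 + (m_i - 1) * rho)."""
    m = np.asarray(cluster_sizes, float)
    return np.sum(m / (1.0 + (m - 1.0) * icc))

def relative_efficiency(cluster_sizes, icc):
    """Efficiency of the observed (possibly unequal) cluster sizes relative to
    equal clusters with the same number of clusters and same mean size."""
    m = np.asarray(cluster_sizes, float)
    mbar = m.mean()
    n_equal = len(m) * mbar / (1.0 + (mbar - 1.0) * icc)
    return effective_n(m, icc) / n_equal
```

Equal cluster sizes (or an ICC of zero) give a relative efficiency of 1; size variation pushes it below 1, which is what motivates inflating the required mean cluster size.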

  9. Gender equality in couples and self-rated health - A survey study evaluating measurements of gender equality and its impact on health

    PubMed Central

    2011-01-01

Background Men and women have different patterns of health. These differences between the sexes present a challenge to the field of public health. The question why women experience more health problems than men despite their longevity has been discussed extensively, with both social and biological theories being offered as plausible explanations. In this article, we focus on how gender equality in a partnership might be associated with the respondents' perceptions of health. Methods This study was a cross-sectional survey with 1400 respondents. We measured gender equality using two different measures: 1) a self-reported gender equality index, and 2) a self-perceived gender equality question. The aim of comparing the self-reported gender equality index with the self-perceived gender equality question was to reveal possible disagreements between the normative discourse on gender equality and daily practice in couple relationships. We then evaluated the association with health, measured as self-rated health (SRH). With SRH dichotomized into 'good' and 'poor', logistic regression was used to assess factors associated with the outcome. For the comparison between the self-reported gender equality index and self-perceived gender equality, kappa statistics were used. Results Associations between gender equality and health found in this study vary with the type of gender equality measurement. Overall, we found little agreement between the self-reported gender equality index and self-perceived gender equality. Further, the patterns of agreement between self-perceived and self-reported gender equality were quite different for men and women: men perceived greater gender equality than they reported in the index, while women perceived less gender equality than they reported. The associations with health depended on the gender equality measurement used. Conclusions Men and women perceive and report gender equality differently. 
This means that it is necessary not only to be conscious of the methods and measurements used to quantify men's and women's opinions of gender equality, but also to be aware of the implications for health outcomes. PMID:21871087

  10. Notes on testing equality and interval estimation in Poisson frequency data under a three-treatment three-period crossover trial.

    PubMed

    Lui, Kung-Jong; Chang, Kuang-Chao

    2016-10-01

When the frequency of event occurrences follows a Poisson distribution, we develop procedures for testing the equality of treatments and interval estimators for the ratio of mean frequencies between treatments under a three-treatment three-period crossover design. Using Monte Carlo simulations, we evaluate the performance of these test procedures and interval estimators in various situations. We note that all test procedures developed here can perform well with respect to Type I error even when the number of patients per group is moderate. We further note that the two weighted-least-squares (WLS) test procedures derived here are generally preferable to the other two commonly used test procedures in contingency table analysis. We also demonstrate that both the interval estimators based on the WLS method and the interval estimators based on the Mantel-Haenszel (MH) approach can perform well, and are essentially of equal precision with respect to average length. We use a double-blind randomized three-treatment three-period crossover trial comparing salbutamol and salmeterol with a placebo with respect to the number of exacerbations of asthma to illustrate the use of these test procedures and estimators. © The Author(s) 2014.

  11. 29 CFR 1620.25 - Equalization of rates.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 4 2010-07-01 2010-07-01 false Equalization of rates. 1620.25 Section 1620.25 Labor Regulations Relating to Labor (Continued) EQUAL EMPLOYMENT OPPORTUNITY COMMISSION THE EQUAL PAY ACT § 1620.25 Equalization of rates. Under the express terms of the EPA, when a prohibited sex-based wage differential has...

  12. Equality of Education and Citizenship: Challenges of European Integration

    ERIC Educational Resources Information Center

    Follesdal, Andreas

    2008-01-01

What kind of equality among Europeans does equal citizenship require, especially regarding education? In particular, is there good reason to insist on equality of education among Europeans--and if so, equality of what? To what extent should the same knowledge base and citizenship norms be taught across state borders and religious and other…

  13. Thermal degradation of ternary blend films containing PVA/chitosan/vanillin

    NASA Astrophysics Data System (ADS)

    Kasai, Deepak; Chougale, Ravindra; Masti, Saraswati; Narasgoudar, Shivayogi

    2018-05-01

The ternary chitosan/poly(vinyl alcohol)/vanillin blend films were prepared by the solution casting method. The influence of equal weight percents of poly(vinyl alcohol) and vanillin on the thermal stability of the chitosan blend films was investigated using thermogravimetric analysis (TGA). The kinetic parameters, such as enthalpy (ΔH*), entropy (ΔS*), and Gibbs free energy (ΔG*), for the first and second decomposition steps were calculated from the thermogravimetric data. The thermal stabilities of the blend films were assessed from the activation energies, which indicated that increasing the equal weight percent of PVA/vanillin decreased the thermal stability of the chitosan film.

  14. The principles of quality-associated costing: derivation from clinical transfusion practice.

    PubMed

    Trenchard, P M; Dixon, R

    1997-01-01

    As clinical transfusion practice works towards achieving cost-effectiveness, prescribers of blood and its derivatives must be certain that the prices of such products are based on real manufacturing costs and not market forces. Using clinical cost-benefit analysis as the context for the costing and pricing of blood products, this article identifies the following two principles: (1) the product price must equal the product cost (the "price = cost" rule) and (2) the product cost must equal the real cost of product manufacture. In addition, the article describes a new method of blood product costing, quality-associated costing (QAC), that will enable valid cost-benefit analysis of blood products.

  15. Letters: Noise Equalization for Ultrafast Plane Wave Microvessel Imaging

    PubMed Central

    Song, Pengfei; Manduca, Armando; Trzasko, Joshua D.

    2017-01-01

    Ultrafast plane wave microvessel imaging significantly improves ultrasound Doppler sensitivity by increasing the number of Doppler ensembles that can be collected within a short period of time. The rich spatiotemporal plane wave data also enables more robust clutter filtering based on singular value decomposition (SVD). However, due to the lack of transmit focusing, plane wave microvessel imaging is very susceptible to noise. This study was designed to: 1) study the relationship between ultrasound system noise (primarily time gain compensation-induced) and microvessel blood flow signal; 2) propose an adaptive and computationally cost-effective noise equalization method that is independent of hardware or software imaging settings to improve microvessel image quality. PMID:28880169

  16. [Risk factors for the spine: nursing assessment and care].

    PubMed

    Bringuente, M E; de Castro, I S; de Jesus, J C; Luciano, L dos S

    1997-01-01

The present work aimed at studying the risk factors that affect people with back pain, identifying them, and implementing an intervention proposal for a health education program based on self-care teaching, an existential-humanist philosophical framework, a stress-equalization approach, skeletal-muscle reintegration activities, and basic techniques of stress equalization and massage. It was developed with a population of 42 (forty-two) clients. Two instruments that integrate the nursing consultation protocol were used in data collection. The results showed the existence of associated risk factors that are changeable through health education programs. The assessment process contributed to focusing the therapeutic measures, using non-conventional care methods that provided an improvement in these clients' quality of life.

  17. A motion deblurring method with long/short exposure image pairs

    NASA Astrophysics Data System (ADS)

    Cui, Guangmang; Hua, Weiping; Zhao, Jufeng; Gong, Xiaoli; Zhu, Liyao

    2018-01-01

In this paper, a motion deblurring method with long/short exposure image pairs is presented. The long/short exposure image pairs are captured for the same scene under different exposure times. The image pairs are treated as the input of the deblurring method, so more information can be used to obtain a deblurring result with high image quality. First, a luminance equalization process is applied to the short-exposure image, and the blur kernel is estimated from the image pair under the maximum a posteriori (MAP) framework using a conjugate gradient algorithm. Then an L0-image-smoothing-based denoising method is applied to the luminance-equalized image, and the final deblurring result is obtained by a gain-controlled residual image deconvolution process with the edge map as the gain map. Furthermore, a real experimental optical system is built to capture the image pairs in order to demonstrate the effectiveness of the proposed deblurring framework. The long/short image pairs are obtained under different exposure times and camera gain control. Experimental results show that the proposed method provides a superior deblurring result in both subjective and objective assessment compared with other deblurring approaches.
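The luminance-equalization step can be sketched as classic histogram matching of the short-exposure image to the long-exposure reference. This is a hedged stand-in: the paper does not specify this exact mapping, and the CDF-matching form below is our assumption:

```python
import numpy as np

def match_luminance(short_img, long_img):
    """Map the short-exposure image's intensity CDF onto the long-exposure
    image's CDF (standard histogram matching; illustrative sketch)."""
    s = np.asarray(short_img, float).ravel()
    t = np.asarray(long_img, float).ravel()
    s_vals, s_idx, s_cnt = np.unique(s, return_inverse=True, return_counts=True)
    t_vals, t_cnt = np.unique(t, return_counts=True)
    s_cdf = np.cumsum(s_cnt) / s.size
    t_cdf = np.cumsum(t_cnt) / t.size
    # for each source intensity level, pick the target level with the same CDF value
    mapped = np.interp(s_cdf, t_cdf, t_vals)
    return mapped[s_idx].reshape(np.shape(short_img))
```

When the short exposure is a monotone transform of the long one (e.g., uniformly darker), this mapping recovers the long-exposure luminance exactly.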

  18. Americans misperceive racial economic equality.

    PubMed

    Kraus, Michael W; Rucker, Julian M; Richeson, Jennifer A

    2017-09-26

The present research documents the widespread misperception of race-based economic equality in the United States. Across four studies (n = 1,377) sampling White and Black Americans from the top and bottom of the national income distribution, participants overestimated progress toward Black-White economic equality, largely driven by estimates of greater current equality than actually exists according to national statistics. Overestimates of current levels of racial economic equality, on average, outstripped reality by roughly 25% and were predicted by greater belief in a just world and social network racial diversity (among Black participants). Whereas high-income White respondents tended to overestimate racial economic equality in the past, Black respondents, on average, underestimated the degree of past racial economic equality. Two follow-up experiments further revealed that making societal racial discrimination salient increased the accuracy of Whites' estimates of Black-White economic equality, whereas encouraging Whites to anchor their estimates on their own circumstances increased their tendency to overestimate current racial economic equality. Overall, these findings suggest a profound misperception of, and unfounded optimism regarding, societal race-based economic equality, a misperception that is likely to have any number of important policy implications.

  19. On-line and real-time diagnosis method for proton exchange membrane fuel cell (PEMFC) stack by the superposition principle

    NASA Astrophysics Data System (ADS)

    Lee, Young-Hyun; Kim, Jonghyeon; Yoo, Seungyeol

    2016-09-01

A critical cell voltage drop in a stack can be followed by a stack defect. One method of detecting a defective cell is cell voltage monitoring; other methods are based on the nonlinear frequency response. In this paper, the superposition principle for the diagnosis of a PEMFC stack is introduced. If critical cell voltage drops exist, the stack behaves as a nonlinear system. This nonlinearity can explicitly appear in the ohmic overpotential region of a voltage-current curve. To detect a critical cell voltage drop, the stack is excited by two direct test currents that have smaller amplitudes than the operating stack current and lie at an equal distance on either side of the operating current. If the difference between the voltage excited by one test current and the voltage excited by the load current is not equal to the difference between the other voltage response and the voltage excited by the load current, the stack system acts as a nonlinear system. This means that there is a critical cell voltage drop. The deviation of this difference from zero reflects the degree of the system's nonlinearity. A simulation model for the stack diagnosis is developed based on the superposition principle and experimentally validated.
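The two-test-current idea can be illustrated with a toy polarization curve: excite the stack at two currents equidistant from the operating point and compare the two voltage deviations. The `v_of_i` characteristics below are hypothetical stand-ins, not the paper's stack model:

```python
def superposition_residual(v_of_i, i_op, delta):
    """Superposition check around an operating current i_op.

    v_of_i : callable giving stack voltage as a function of current
    delta  : offset of the two test currents from i_op
    Returns the residual between the two voltage deviations; it is zero
    when V(I) is locally linear around i_op, and nonzero under the kind of
    nonlinearity that (per the paper) flags a critical cell voltage drop.
    """
    d_low = v_of_i(i_op - delta) - v_of_i(i_op)    # deviation below the operating point
    d_high = v_of_i(i_op) - v_of_i(i_op + delta)   # deviation above the operating point
    return d_low - d_high
```

For a linear characteristic V = a - bI both deviations equal b*delta and the residual vanishes; a quadratic term -cI^2 leaves a residual of -2c*delta^2, growing with the nonlinearity.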

  20. Spatially Correlated Sparse MIMO Channel Path Delay Estimation in Scattering Environments Based on Signal Subspace Tracking

    PubMed Central

    Chargé, Pascal; Bazzi, Oussama; Ding, Yuehua

    2018-01-01

    A parametric scheme for spatially correlated sparse multiple-input multiple-output (MIMO) channel path delay estimation in scattering environments is presented in this paper. In MIMO outdoor communication scenarios, channel impulse responses (CIRs) of different transmit–receive antenna pairs are often supposed to be sparse due to a few significant scatterers, and share a common sparse pattern, such that path delays are assumed to be equal for every transmit–receive antenna pair. In some existing works, an exact common support condition is exploited, where the path delays are considered equal for every transmit–receive antenna pair, meanwhile ignoring the influence of scattering. A more realistic channel model is proposed in this paper, where due to scatterers in the environment, the received signals are modeled as clusters of multi-rays around a nominal or mean time delay at different antenna elements, resulting in a non-strictly exact common support phenomenon. A method for estimating the channel mean path delays is then derived based on the subspace approach, and the tracking of the effective dimension of the signal subspace that changes due to the wireless environment. The proposed method shows an improved channel mean path delays estimation performance in comparison with the conventional estimation methods. PMID:29734797

  1. Pesticide data for selected Wyoming streams, 1976-78

    USGS Publications Warehouse

    Butler, David L.

    1987-01-01

In 1976, the U.S. Geological Survey, in cooperation with the Wyoming Department of Agriculture, started a monitoring program to determine pesticide concentrations in Wyoming streams. This program was incorporated into the water-quality data-collection system already in operation. Samples were collected at 20 sites for analysis of various insecticides, herbicides, polychlorinated biphenyls, and polychlorinated naphthalenes. The results through 1978 revealed small concentrations of pesticides; the compounds most commonly found in water and bottom-material samples were DDE (39 percent of the concentrations equal to or greater than the minimum reported concentrations of the analytical methods), DDD (20 percent), dieldrin (21 percent), and polychlorinated biphenyls (29 percent). The herbicides most commonly found in water samples were 2,4-D (29 percent of the concentrations equal to or greater than the minimum reported concentration of the analytical method) and picloram (23 percent). Most concentrations were significantly less than concentrations thought to be harmful to freshwater aquatic life based on available toxicity data. However, for some pesticides, U.S. Environmental Protection Agency water-quality criteria for freshwater aquatic life are based on bioaccumulation factors that result in criteria concentrations less than the minimum reported concentrations of the analytical methods. It is not known if certain pesticides were present at concentrations less than the minimum reported concentrations that exceeded these criteria.

  2. Spatially Correlated Sparse MIMO Channel Path Delay Estimation in Scattering Environments Based on Signal Subspace Tracking.

    PubMed

    Mohydeen, Ali; Chargé, Pascal; Wang, Yide; Bazzi, Oussama; Ding, Yuehua

    2018-05-06

A parametric scheme for spatially correlated sparse multiple-input multiple-output (MIMO) channel path delay estimation in scattering environments is presented in this paper. In MIMO outdoor communication scenarios, channel impulse responses (CIRs) of different transmit–receive antenna pairs are often supposed to be sparse due to a few significant scatterers, and share a common sparse pattern, such that path delays are assumed to be equal for every transmit–receive antenna pair. In some existing works, an exact common support condition is exploited, where the path delays are considered equal for every transmit–receive antenna pair, meanwhile ignoring the influence of scattering. A more realistic channel model is proposed in this paper, where due to scatterers in the environment, the received signals are modeled as clusters of multi-rays around a nominal or mean time delay at different antenna elements, resulting in a non-strictly exact common support phenomenon. A method for estimating the channel mean path delays is then derived based on the subspace approach, and the tracking of the effective dimension of the signal subspace that changes due to the wireless environment. The proposed method shows an improved channel mean path delays estimation performance in comparison with the conventional estimation methods.

  3. The covariance matrix for the solution vector of an equality-constrained least-squares problem

    NASA Technical Reports Server (NTRS)

    Lawson, C. L.

    1976-01-01

    Methods are given for computing the covariance matrix for the solution vector of an equality-constrained least squares problem. The methods are matched to the solution algorithms given in the book, 'Solving Least Squares Problems.'

  4. Method for the simultaneous preparation of radon-211, xenon-125, xenon-123, astatine-211, iodine-125 and iodine-123

    DOEpatents

    Mirzadeh, S.; Lambrecht, R.M.

    1985-07-01

The invention relates to a practical method for commercially producing radiopharmaceutical activities and, more particularly, to a method for the preparation of about equal amounts of radon-211 (²¹¹Rn) and xenon-125 (¹²⁵Xe), including a one-step chemical procedure following an irradiation procedure in which a selected target of thorium (²³²Th) or uranium (²³⁸U) is irradiated. The disclosed method is also effective for the preparation, in a one-step chemical procedure, of substantially equal amounts of high-purity ¹²³I and ²¹¹At. In one preferred arrangement of the invention, almost equal quantities of ²¹¹Rn and ¹²⁵Xe are prepared using a one-step chemical procedure in which a suitably irradiated fertile target material, such as thorium-232 or uranium-238, is treated to extract those radionuclides from it. In the same one-step chemical procedure, about equal quantities of ²¹¹At and ¹²³I are prepared and stored for subsequent use. In a modified arrangement of the method of the invention, it is practiced to separate and store about equal amounts of only ²¹¹Rn and ¹²⁵Xe, while preventing the extraction or storage of the radionuclides ²¹¹At and ¹²³I.

  5. Radiative lifetimes and transition probabilities for electric-dipole delta n equals zero transitions in highly stripped sulfur ions

    NASA Technical Reports Server (NTRS)

    Pegg, D. J.; Elston, S. B.; Griffin, P. M.; Forester, J. P.; Thoe, R. S.; Peterson, R. S.; Sellin, I. A.; Hayden, H. C.

    1976-01-01

    The beam-foil time-of-flight method has been used to investigate radiative lifetimes and transition rates involving allowed intrashell transitions within the L shell of highly ionized sulfur. The results for these transitions, which can be particularly correlation-sensitive, are compared with current calculations based upon multiconfigurational models.

  6. Incidence, Type and Intensity of Abuse in Street Children in India

    ERIC Educational Resources Information Center

    Mathur, Meena; Rathore, Prachi; Mathur, Monika

    2009-01-01

    Objective: The aims of this cross-sectional survey were to examine the prevalence, type and intensity of abuse in street children in Jaipur city, India. Method: Based on purposive random sampling, 200 street children, inclusive of equal number of boys and girls, were selected from the streets of Jaipur city, India, and administered an in-depth…

  7. Method of preparing thin film polymeric gel electrolytes

    DOEpatents

    Derzon, D.K.; Arnold, C. Jr.

    1997-11-25

Novel hybrid thin film electrolytes are described, based on an organonitrile solvent system, which are compositionally stable and environmentally safe, can be produced efficiently in large quantity, and which, because of their high conductivities (≈10⁻³ Ω⁻¹ cm⁻¹), are useful as electrolytes for rechargeable lithium batteries. 1 fig.

  8. Comparison of interpretation methods of thermocouple psychrometer readouts

    NASA Astrophysics Data System (ADS)

    Guz, Łukasz; Majerek, Dariusz; Sobczuk, Henryk; Guz, Ewa; Połednik, Bernard

    2017-07-01

Thermocouple psychrometers allow determination of the water potential, which can easily be recalculated into the relative humidity of air in the cavities of porous materials. The typical measuring range of the probe is very narrow. The lower limit of water potential measurements is about -200 kPa, while the upper limit is approximately equal to -7000 kPa and depends on many factors. This paper presents a comparison of two interpretation methods of the thermocouple microvolt output, based on: i) the amplitude of the voltage during wet-bulb temperature depression, and ii) the field under the microvolt output curve. Previous experimental results indicate that there is a robust correlation between water potential and the field under the microvolt output curve. In order to obtain correct results for water potential, each probe should be calibrated. A range of NaCl salt solutions with molality from 0.75 M to 2.25 M was used for calibration, yielding osmotic potentials from -3377 kPa to -10865 kPa. During measurements, a 5 mA heating current was applied for 5 s and a 5 mA cooling current for 30 s. The conducted study shows that the interpretation method based on the field under the microvolt output achieves an approximately 1000 kPa wider range of water potential. The average root mean square error (RMSE) of this interpretation method is 1199 kPa, while the voltage-amplitude-based method yields an average RMSE of 1378 kPa during calibration under conditions without temperature stabilization.
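The field-under-curve interpretation amounts to integrating the microvolt output over time and fitting a calibration from that area to the known osmotic potentials of the NaCl solutions. The sketch below assumes a linear calibration form and illustrative variable names; the paper's actual calibration function is not specified here:

```python
import numpy as np

def area_under_output(t, microvolts):
    """Field under the thermocouple microvolt output curve (trapezoidal rule)."""
    t = np.asarray(t, float)
    mv = np.asarray(microvolts, float)
    return 0.5 * np.sum((mv[1:] + mv[:-1]) * np.diff(t))

def calibrate(areas, potentials_kpa):
    """Least-squares linear calibration mapping area to water potential (kPa).
    Returns (slope, intercept) of potential = slope * area + intercept."""
    slope, intercept = np.polyfit(areas, potentials_kpa, 1)
    return slope, intercept
```

Once calibrated per probe, an unknown sample's water potential is read off as `slope * area + intercept` from its integrated microvolt trace.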

  9. Yield and depth Estimation of Selected NTS Nuclear and SPE Chemical Explosions Using Source Equalization by modeling Local and Regional Seismograms (Invited)

    NASA Astrophysics Data System (ADS)

    Saikia, C. K.; Roman-nieves, J. I.; Woods, M. T.

    2013-12-01

Source parameters of nuclear and chemical explosions are often estimated by matching either the corner frequency and spectral level of a single event, or the spectral ratio when spectra from two events are available with known source parameters for one. In this study, we propose an alternative method in which waveforms from two or more events can be simultaneously equalized by setting the differential of the processed seismograms at one station from any two individual events to zero. The method involves convolving the equivalent Mueller-Murphy displacement source time function (MMDSTF) of one event with the seismogram of the second event and vice-versa, and then computing their difference seismogram. The MMDSTF is computed at the elastic radius including both near- and far-field terms. For this method to yield accurate source parameters, an inherent assumption is that the Green's functions for any paired events from the source to a receiver are the same. In the frequency limit of the seismic data, this is a reasonable assumption, concluded based on the comparison of Green's functions computed for flat-earth models at various source depths ranging from 100 m to 1 km. Frequency-domain analysis of the initial P wave is, however, sensitive to the depth-phase interaction and, if tracked meticulously, can help estimate the event depth. We applied this method to the local waveforms recorded from the three SPE shots and precisely determined their yields. These high-frequency seismograms exhibit significant lateral path effects in spectrogram analysis and 3D numerical computations, but the source equalization technique is independent of such variation as long as the instrument characteristics are well preserved. We are currently estimating the uncertainty in the derived source parameters assuming the yields of the SPE shots as unknown. We also collected regional waveforms from 95 NTS explosions at regional stations ALQ, ANMO, CMB, COR, JAS, LON, PAS, PFO and RSSD. We are currently employing a station-based analysis using the equalization technique to estimate depths and yields of many events relative to those of the announced explosions, and to develop their relationship with the Mw and Mo for the NTS explosions.

  10. Blind I/Q imbalance and nonlinear ISI mitigation in Nyquist-SCM direct detection system with cascaded widely linear and Volterra equalizer

    NASA Astrophysics Data System (ADS)

    Liu, Na; Ju, Cheng

    2018-02-01

    Nyquist-SCM signal after fiber transmission, direct detection (DD), and analog down-conversion suffers from linear ISI, nonlinear ISI, and I/Q imbalance, simultaneously. Theoretical analysis based on widely linear (WL) and Volterra series is given to explain the relationship and interaction of these three interferences. A blind equalization algorithm, cascaded WL and Volterra equalizer, is designed to mitigate these three interferences. Furthermore, the feasibility of the proposed cascaded algorithm is experimentally demonstrated based on a 40-Gbps data rate 16-quadrature amplitude modulation (QAM) virtual single sideband (VSSB) Nyquist-SCM DD system over 100-km standard single mode fiber (SSMF) transmission. In addition, the performances of conventional strictly linear equalizer, WL equalizer, Volterra equalizer, and cascaded WL and Volterra equalizer are experimentally evaluated, respectively.

  11. 40 CFR 1054.740 - What special provisions apply for generating and using emission credits?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... Calculate the value of transitional emission credits as described in § 1054.705, based on setting STD equal... enduring credits as described in § 1054.705, based on setting STD equal to 10.0 g/kW-hr and FEL to the... transitional emission credits as described in § 1054.705, based on setting STD equal to 11.0 g/kW-hr and FEL...

  12. Diagnosis of cervical cells based on fractal and Euclidian geometrical measurements: Intrinsic Geometric Cellular Organization

    PubMed Central

    2014-01-01

    Background Fractal geometry has been the basis for the development of a diagnosis of preneoplastic and neoplastic cells that resolves the indeterminacy of the atypical squamous cells of undetermined significance (ASCUS). Methods Pictures of 40 cervical cytology samples diagnosed with conventional parameters were taken. A blind study was conducted in which the clinical diagnosis of 10 normal cells, 10 ASCUS, 10 L-SIL and 10 H-SIL was masked. Cellular nucleus and cytoplasm were evaluated in the generalized Box-Counting space, calculating the fractal dimension and the number of spaces occupied by the frontier of each object. Further, the number of pixels occupied by the surface of each object was calculated. The mathematical features of the measures were then studied to establish differences or equalities useful for diagnostic application. Finally, the sensitivity, specificity, negative likelihood ratio and diagnostic concordance with the Kappa coefficient were calculated. Results Simultaneous measures of the nuclear surface and of the subtraction between the boundaries of cytoplasm and nucleus differentiate normality, L-SIL and H-SIL. Normality shows values less than or equal to 735 in nucleus surface and values greater than or equal to 161 in the cytoplasm-nucleus subtraction. L-SIL cells exhibit a nucleus surface with values greater than or equal to 972 and a cytoplasm-nucleus subtraction greater than 130. H-SIL cells show cytoplasm-nucleus values less than 120. The range between 120 and 130 in the cytoplasm-nucleus subtraction corresponds to the evolution from L-SIL to H-SIL. Sensitivity and specificity values were 100%, the negative likelihood ratio was zero and the Kappa coefficient was equal to 1. Conclusions A new clinically applicable diagnostic methodology based on fractal and Euclidean geometry was developed, which is useful for the evaluation of cervical cytology. PMID:24742118
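
The box-counting measure used above can be sketched for a 2-D point set: count the boxes occupied at several scales and fit the slope of log N(eps) against log(1/eps). The point set and scales below are illustrative; a straight segment should come out with a dimension near 1 (slightly below, because of finite box counts):

```python
import math

def box_count(points, eps):
    """Number of eps-sized grid boxes occupied by the point set."""
    return len({(int(x // eps), int(y // eps)) for x, y in points})

def fractal_dimension(points, scales):
    """Least-squares slope of log N(eps) versus log(1/eps)."""
    xs = [math.log(1.0 / e) for e in scales]
    ys = [math.log(box_count(points, e)) for e in scales]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# A densely sampled straight segment: box-counting dimension near 1.
line = [(t / 10000.0, t / 10000.0) for t in range(10001)]
d = fractal_dimension(line, scales=[1 / 8, 1 / 16, 1 / 32, 1 / 64])
assert 0.8 < d < 1.1
```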

  13. Tax-Based Educational Equity: A New Approach to School Finance Reform.

    ERIC Educational Resources Information Center

    Cooper, Bruce S.; And Others

    A new argument is made for school finance equalization, based not on "equal protection" or "equal educational opportunity," but on constitutional requirements for tax equity in New Hampshire. Since inequalities in school finance are a taxation problem, they call for tax reform. The analyses rest on four points: (1) that…

  14. A controlled phantom study of a noise equalization algorithm for detecting microcalcifications in digital mammograms.

    PubMed

    Gürün, O O; Fatouros, P P; Kuhn, G M; de Paredes, E S

    2001-04-01

    We report on some extensions and further developments of a well-known microcalcification detection algorithm based on adaptive noise equalization. Tissue equivalent phantom images with and without labeled microcalcifications were subjected to this algorithm, and analyses of results revealed some shortcomings in the approach. Particularly, it was observed that the method of estimating the width of distributions in the feature space was based on assumptions which resulted in the loss of similarity preservation characteristics. A modification involving a change of estimator statistic was made, and the modified approach was tested on the same phantom images. Other modifications for improving detectability such as downsampling and use of alternate local contrast filters were also tested. The results indicate that these modifications yield improvements in detectability, while extending the generality of the approach. Extensions to real mammograms and further directions of research are discussed.

  15. Model atmospheres for M (sub)dwarf stars. 1: The base model grid

    NASA Technical Reports Server (NTRS)

    Allard, France; Hauschildt, Peter H.

    1995-01-01

    We have calculated a grid of more than 700 model atmospheres valid for a wide range of parameters encompassing the coolest known M dwarfs, M subdwarfs, and brown dwarf candidates: 1500 less than or equal to T(sub eff) less than or equal to 4000 K, 3.5 less than or equal to log g less than or equal to 5.5, and -4.0 less than or equal to (M/H) less than or equal to +0.5. Our equation of state includes 105 molecules and up to 27 ionization stages of 39 elements. In the calculations of the base grid of model atmospheres presented here, we include over 300 molecular bands of four molecules (TiO, VO, CaH, FeH) in the JOLA approximation, the water opacity of Ludwig (1971), collision-induced opacities, b-f and f-f atomic processes, as well as about 2 million spectral lines selected from a list with more than 42 million atomic and 24 million molecular (H2, CH, NH, OH, MgH, SiH, C2, CN, CO, SiO) lines. High-resolution synthetic spectra are obtained using an opacity sampling method. The model atmospheres and spectra are calculated with the generalized stellar atmosphere code PHOENIX, assuming LTE, plane-parallel geometry, energy (radiative plus convective) conservation, and hydrostatic equilibrium. The model spectra give close agreement with observations of M dwarfs across a wide spectral range from the blue to the near-IR, with one notable exception: the fit to the water bands. We discuss several practical applications of our model grid, e.g., broadband colors derived from the synthetic spectra. In light of current efforts to identify genuine brown dwarfs, we also show how low-resolution spectra of cool dwarfs vary with surface gravity, and how the high-resolution line profile of the Li I resonance doublet depends on the Li abundance.

  16. Linear methods for reducing EMG contamination in peripheral nerve motor decodes.

    PubMed

    Kagan, Zachary B; Wendelken, Suzanne; Page, David M; Davis, Tyler; Hutchinson, Douglas T; Clark, Gregory A; Warren, David J

    2016-08-01

    Signals recorded from the peripheral nervous system (PNS) with high channel count penetrating microelectrode arrays, such as the Utah Slanted Electrode Array (USEA), often have electromyographic (EMG) signals contaminating the neural signal. This common-mode signal source may prevent single neural units from being successfully detected, thus hindering motor decode algorithms. Reducing this EMG contamination may lead to more accurate motor decode performance. A virtual reference (VR), created by a weighted linear combination of signals from a subset of all available channels, can be used to reduce this EMG contamination. Four methods of determining individual channel weights and six different methods of selecting subsets of channels were investigated (24 different VR types in total). The methods of determining individual channel weights were equal weighting, regression-based weighting, and two different proximity-based weightings. The subsets of channels were selected by a radius-based criterion, such that a channel was included if it was within a particular radius of inclusion from the target channel. These six radii of inclusion were 1.5, 2.9, 3.2, 5, 8.4, and 12.8 electrode-distances; the 12.8 electrode radius includes all USEA electrodes. We found that application of a VR improves the detectability of neural events by increasing the SNR, but we found no statistically meaningful difference amongst the VR types we examined. The computational complexity of implementation varies with respect to the method of determining channel weights and the number of channels in a subset, but does not correlate with VR performance. Hence, we examined the computational costs of calculating and applying the VR, and based on these criteria we recommend an equal weighting method of assigning weights with a 3.2 electrode-distance radius of inclusion. Further, we found empirically that application of the recommended VR will require less than 1 ms for 33.3 ms of data from one USEA.
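
The equal-weighting virtual reference recommended above can be sketched on a toy electrode grid: average the channels within the radius of inclusion and subtract that average from the target channel, which removes a shared common-mode (EMG-like) term while preserving a local event. Geometry and signals below are made up, not USEA recordings:

```python
import math

# 4x4 electrode grid with unit spacing; channel index -> (row, col).
coords = [(r, c) for r in range(4) for c in range(4)]

def neighbors(target, radius):
    """Channels within `radius` electrode-distances, excluding the target."""
    tr, tc = coords[target]
    return [i for i, (r, c) in enumerate(coords)
            if i != target and math.hypot(r - tr, c - tc) <= radius]

# Shared EMG-like contamination on every channel, plus one local "spike"
# on channel 5 at sample 3.
T = 8
emg = [math.sin(0.7 * t) for t in range(T)]
data = [[emg[t] for t in range(T)] for _ in coords]
data[5][3] += 1.0   # local neural event on channel 5

# Equal-weighted VR: mean of neighboring channels, subtracted per sample.
nbrs = neighbors(5, radius=3.2)
vr = [sum(data[i][t] for i in nbrs) / len(nbrs) for t in range(T)]
cleaned = [data[5][t] - vr[t] for t in range(T)]

assert max(abs(x) for x in cleaned[:3] + cleaned[4:]) < 1e-12  # EMG removed
assert abs(cleaned[3] - 1.0) < 1e-12                           # spike kept
```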

  17. A Comparative Study on Preprocessing Techniques in Diabetic Retinopathy Retinal Images: Illumination Correction and Contrast Enhancement

    PubMed Central

    Rasta, Seyed Hossein; Partovi, Mahsa Eisazadeh; Seyedarabi, Hadi; Javadzadeh, Alireza

    2015-01-01

    To investigate the effect of preprocessing techniques, including contrast enhancement and illumination correction, on retinal image quality, a comparative study was carried out. We studied and implemented several illumination correction and contrast enhancement techniques on color retinal images to find the best technique for optimum image enhancement. To compare and choose the best illumination correction technique, we analyzed the corrected red and green components of color retinal images statistically and visually. The two contrast enhancement techniques were analyzed using a vessel segmentation algorithm by calculating the sensitivity and specificity. The statistical evaluation of the illumination correction techniques was carried out by calculating the coefficients of variation. The dividing method, using the median filter to estimate background illumination, showed the lowest coefficient of variation in the red component. The quotient and homomorphic filtering methods, after the dividing method, presented good results based on their low coefficients of variation. Contrast limited adaptive histogram equalization (CLAHE) increased the sensitivity of the vessel segmentation algorithm by up to 5% at the same level of accuracy, and has a higher sensitivity than the polynomial transformation operator as a contrast enhancement technique for vessel segmentation. Three techniques, the dividing method using the median filter to estimate background, the quotient-based method, and homomorphic filtering, were found to be effective illumination correction techniques based on the statistical evaluation. Applying a local contrast enhancement technique such as CLAHE to fundus images showed good potential for enhancing vasculature segmentation. PMID:25709940
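
For context, plain global histogram equalization (the baseline that CLAHE localizes and clip-limits) can be sketched in a few lines. This operates on a flat list of 8-bit values rather than a real fundus image:

```python
# Global histogram equalization: map each pixel through the normalized
# cumulative histogram so the output uses the full dynamic range.

def equalize_hist(pixels, levels=256):
    hist = [0] * levels
    for v in pixels:
        hist[v] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = min(c for c in cdf if c > 0)   # CDF of the lowest used level
    n = len(pixels)
    return [round((cdf[v] - cdf_min) / (n - cdf_min) * (levels - 1))
            for v in pixels]

# Low-contrast "image": values packed into 100..103 get spread over 0..255.
img = [100, 100, 100, 100, 101, 101, 102, 103]
out = equalize_hist(img)
assert min(out) == 0 and max(out) == 255      # full dynamic range used
assert sorted(set(out)) == [0, 128, 191, 255]  # monotone remapping
```

CLAHE applies the same idea per tile with a clipped histogram, which is what limits noise amplification in nearly uniform fundus regions.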

  18. Study of lithium cation in water clusters: based on atom-bond electronegativity equalization method fused into molecular mechanics.

    PubMed

    Li, Xin; Yang, Zhong-Zhi

    2005-05-12

    We present a potential model for Li(+)-water clusters based on a combination of the atom-bond electronegativity equalization method and molecular mechanics (ABEEM/MM), which takes the ABEEM charges of the cation and of all atoms, bonds, and lone pairs of the water molecules into the intermolecular electrostatic interaction term of molecular mechanics. The model allows the point charges on the cationic site and on the seven sites of an ABEEM-7P water molecule to fluctuate in response to the cluster geometry. The water molecules in the first shell of Li(+) are strongly structured, and there is obvious charge transfer between the cation and these water molecules; therefore, the charge constraints on the ionic cluster comprise a charge constraint on the Li(+) together with the first-shell water molecules, and a charge-neutrality constraint on each water molecule in the outer hydration shells. The newly constructed ABEEM/MM potential model is first applied to ionic clusters and reproduces gas-phase properties of Li(+)(H(2)O)(n) (n = 1-6 and 8), including optimized geometries, ABEEM charges, binding energies, and frequencies, in fair agreement with those measured by available experiments and calculated by ab initio methods. Prospects and benefits of this potential model are pointed out.
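
The electronegativity-equalization idea itself, charges settling so that every site's effective electronegativity is equal, can be sketched as a small constrained quadratic problem solved through its KKT system. All parameters below are illustrative, not ABEEM's fitted values:

```python
# Charge equilibration sketch: minimize
#   E = sum_i chi_i*q_i + 1/2 * sum_ij H_ij*q_i*q_j
# subject to sum_i q_i = Q. At the optimum every site's effective
# electronegativity chi_i + sum_j H_ij*q_j equals a common value mu.

def solve(A, b):
    """Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

chi = [4.0, 7.0, 7.0]        # illustrative site electronegativities
H = [[6.0, 1.0, 1.0],        # illustrative hardness/Coulomb matrix
     [1.0, 8.0, 1.5],
     [1.0, 1.5, 8.0]]
Q = 1.0                      # total charge constraint

n = len(chi)
# KKT system: [H  -1; 1^T  0] [q; mu] = [-chi; Q]
A = [H[i] + [-1.0] for i in range(n)] + [[1.0] * n + [0.0]]
b = [-c for c in chi] + [Q]
sol = solve(A, b)
q, mu = sol[:n], sol[n]

assert abs(sum(q) - Q) < 1e-10
eff = [chi[i] + sum(H[i][j] * q[j] for j in range(n)) for i in range(n)]
assert max(abs(e - mu) for e in eff) < 1e-10   # electronegativities equalized
```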

  19. Local Observability Analysis of Star Sensor Installation Errors in a SINS/CNS Integration System for Near-Earth Flight Vehicles

    PubMed Central

    Yang, Yanqiang; Zhang, Chunxi; Lu, Jiazhen

    2017-01-01

    Strapdown inertial navigation system/celestial navigation system (SINS/CNS) integrated navigation is a fully autonomous and high-precision method, which has been widely used to improve the hitting accuracy and quick-reaction capability of near-Earth flight vehicles. The installation errors between the SINS and the star sensors are one of the main factors restricting the actual accuracy of SINS/CNS. In this paper, an integration algorithm based on star vector observations is derived considering the star sensor installation error. The star sensor installation error is then accurately estimated via Kalman filtering (KF). Meanwhile, a local observability analysis is performed on the rank of the observability matrix obtained from the linearized observation equation, and the observability conditions are presented and validated: the number of star vectors should be greater than or equal to 2, and the number of attitude adjustments should also be greater than or equal to 2. Simulations indicate that the star sensor installation error is readily observable under the maneuvering condition; moreover, the attitude errors of the SINS are less than 7 arc-seconds. This analysis method and conclusion are useful in the ballistic trajectory design of near-Earth flight vehicles. PMID:28275211
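
The rank test behind such an observability analysis can be sketched for a tiny linear system: stack C, CA, ..., CA^(n-1) and check for full column rank. The 2-state model below is illustrative, not the SINS/CNS error model from the paper:

```python
# Observability rank test: the pair (A, C) is observable iff the matrix
# O = [C; CA; ...; CA^{n-1}] has full column rank n.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def rank(M, tol=1e-9):
    """Matrix rank via Gaussian elimination with partial pivoting."""
    M = [row[:] for row in M]
    rows, cols, r = len(M), len(M[0]), 0
    for c in range(cols):
        p = max(range(r, rows), key=lambda i: abs(M[i][c]), default=None)
        if p is None or abs(M[p][c]) < tol:
            continue
        M[r], M[p] = M[p], M[r]
        for i in range(r + 1, rows):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def observability_matrix(A, C):
    n = len(A)
    blocks, Ck = [], C
    for _ in range(n):
        blocks.extend(Ck)
        Ck = matmul(Ck, A)
    return blocks

A = [[1.0, 1.0], [0.0, 1.0]]  # toy constant-velocity dynamics
assert rank(observability_matrix(A, [[1.0, 0.0]])) == 2  # position measured: observable
assert rank(observability_matrix(A, [[0.0, 1.0]])) == 1  # velocity only: unobservable
```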

  20. Predicting patchy particle crystals: variable box shape simulations and evolutionary algorithms.

    PubMed

    Bianchi, Emanuela; Doppelbauer, Günther; Filion, Laura; Dijkstra, Marjolein; Kahl, Gerhard

    2012-06-07

    We consider several patchy particle models that have been proposed in the literature, and we investigate their candidate crystal structures in a systematic way. We compare two different algorithms for predicting crystal structures: (i) an approach based on Monte Carlo simulations in the isobaric-isothermal ensemble and (ii) an optimization technique based on ideas of evolutionary algorithms. We show that the two methods are equally successful and provide consistent results on the crystalline phases of patchy particle systems.

  1. PARTIAL RESTRAINING FORCE INTRODUCTION METHOD FOR DESIGNING CONSTRUCTION COUNTERMEASURE ON ΔB METHOD

    NASA Astrophysics Data System (ADS)

    Nishiyama, Taku; Imanishi, Hajime; Chiba, Noriyuki; Ito, Takao

    Landslide or slope failure is a three-dimensional movement phenomenon, so a three-dimensional treatment makes it easier to understand stability. The ΔB method (simplified three-dimensional slope stability analysis method) is based on the limit equilibrium method and is equivalent to an approximate three-dimensional slope stability analysis that extends two-dimensional cross-section stability analysis results to assess stability. This analysis can be conducted using conventional spreadsheets or two-dimensional slope stability computational software. This paper describes the concept of the partial restraining force introduction method for designing construction countermeasures using the distribution of the restraining force found along survey lines, which is based on the distribution of survey-line safety factors derived from the above-stated analysis. This paper also presents the transverse distributive method of restraining force used for planning ground stabilization, on the basis of an example analysis.

  2. Ellipsoidal terrain correction based on multi-cylindrical equal-area map projection of the reference ellipsoid

    NASA Astrophysics Data System (ADS)

    Ardalan, A. A.; Safari, A.

    2004-09-01

    An operational algorithm for computation of terrain correction (or local gravity field modeling) based on application of the closed-form solution of the Newton integral in terms of Cartesian coordinates in a multi-cylindrical equal-area map projection of the reference ellipsoid is presented. The multi-cylindrical equal-area map projection of the reference ellipsoid has been derived and is described in detail for the first time. Ellipsoidal mass elements with various sizes on the surface of the reference ellipsoid are selected, and the gravitational potential and vector of gravitational intensity (i.e. gravitational acceleration) of the mass elements are computed via numerical solution of the Newton integral in terms of geodetic coordinates {λ,ϕ,h}. Four base-edge points of the ellipsoidal mass elements are transformed into the multi-cylindrical equal-area map projection surface to build Cartesian mass elements by associating the height of the corresponding ellipsoidal mass elements to the transformed area elements. Using the closed-form solution of the Newton integral in terms of Cartesian coordinates, the gravitational potential and vector of gravitational intensity of the transformed Cartesian mass elements are computed and compared with those of the numerical solution of the Newton integral for the ellipsoidal mass elements in terms of geodetic coordinates. Numerical tests indicate that the difference between the two computations, i.e. the numerical solution of the Newton integral for ellipsoidal mass elements in terms of geodetic coordinates and the closed-form solution of the Newton integral in terms of Cartesian coordinates in a multi-cylindrical equal-area map projection, is less than 1.6×10^-8 m^2/s^2 for a mass element with a cross-section area of 10 m × 10 m and a height of 10,000 m. For a mass element with a cross-section area of 1 km × 1 km and a height of 10,000 m, the difference is less than 1.5×10^-4 m^2/s^2. Since 1.5×10^-4 m^2/s^2 is equivalent to 1.5×10^-5 m in the vertical direction, it can be concluded that a method for terrain correction (or local gravity field modeling) based on the closed-form solution of the Newton integral in terms of Cartesian coordinates of a multi-cylindrical equal-area map projection of the reference ellipsoid has been developed which has the accuracy of terrain correction (or local gravity field modeling) based on the Newton integral in terms of ellipsoidal coordinates.

  3. Numerical study of a matrix-free trust-region SQP method for equality constrained optimization.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heinkenschloss, Matthias; Ridzal, Denis; Aguilo, Miguel Antonio

    2011-12-01

    This is a companion publication to the paper 'A Matrix-Free Trust-Region SQP Algorithm for Equality Constrained Optimization' [11]. In [11], we develop and analyze a trust-region sequential quadratic programming (SQP) method that supports the matrix-free (iterative, inexact) solution of linear systems. In this report, we document the numerical behavior of the algorithm applied to a variety of equality constrained optimization problems, with constraints given by partial differential equations (PDEs).

  4. 16QAM Blind Equalization via Maximum Entropy Density Approximation Technique and Nonlinear Lagrange Multipliers

    PubMed Central

    Mauda, R.; Pinchas, M.

    2014-01-01

    Recently a new blind equalization method was proposed for the 16QAM constellation input inspired by the maximum entropy density approximation technique with improved equalization performance compared to the maximum entropy approach, Godard's algorithm, and others. In addition, an approximated expression for the minimum mean square error (MSE) was obtained. The idea was to find those Lagrange multipliers that bring the approximated MSE to minimum. Since the derivation of the obtained MSE with respect to the Lagrange multipliers leads to a nonlinear equation for the Lagrange multipliers, the part in the MSE expression that caused the nonlinearity in the equation for the Lagrange multipliers was ignored. Thus, the obtained Lagrange multipliers were not those Lagrange multipliers that bring the approximated MSE to minimum. In this paper, we derive a new set of Lagrange multipliers based on the nonlinear expression for the Lagrange multipliers obtained from minimizing the approximated MSE with respect to the Lagrange multipliers. Simulation results indicate that for the high signal to noise ratio (SNR) case, a faster convergence rate is obtained for a channel causing a high initial intersymbol interference (ISI) while the same equalization performance is obtained for an easy channel (initial ISI low). PMID:24723813

  5. EQUALS Investigations: Remote Rulers.

    ERIC Educational Resources Information Center

    Mayfield, Karen; Whitlow, Robert

    EQUALS is a teacher education program that helps elementary and secondary educators acquire methods and materials to attract minority and female students to mathematics. It supports a problem-solving approach to mathematics which has students working in groups, uses active assessment methods, and incorporates a broad mathematics curriculum…

  6. Early Understanding of Equality

    ERIC Educational Resources Information Center

    Leavy, Aisling; Hourigan, Mairéad; McMahon, Áine

    2013-01-01

    Quite a bit of the arithmetic in elementary school contains elements of algebraic reasoning. After researching and testing a number of instructional strategies with Irish third graders, these authors found effective methods for cultivating a relational concept of equality in third-grade students. Understanding equality is fundamental to algebraic…

  7. Equalizer system and method for series connected energy storing devices

    DOEpatents

    Rouillard, Jean; Comte, Christophe; Hagen, Ronald A.; Knudson, Orlin B.; Morin, Andre; Ross, Guy

    1999-01-01

    An apparatus and method for regulating the charge voltage of a number of electrochemical cells connected in series is disclosed. Equalization circuitry is provided to control the amount of charge current supplied to individual electrochemical cells included within the series string of electrochemical cells without interrupting the flow of charge current through the series string. The equalization circuitry balances the potential of each of the electrochemical cells to within a pre-determined voltage setpoint tolerance during charging, and, if necessary, prior to initiating charging. Equalization of cell potentials may be effected toward the end of a charge cycle or throughout the charge cycle. Overcharge protection is also provided for each of the electrochemical cells coupled to the series connection. During a discharge mode of operation in accordance with one embodiment, the equalization circuitry is substantially non-conductive with respect to the flow of discharge current from the series string of electrochemical cells. In accordance with another embodiment, equalization of the series string of cells is effected during a discharge cycle.

  8. Independent component analysis based digital signal processing in coherent optical fiber communication systems

    NASA Astrophysics Data System (ADS)

    Li, Xiang; Luo, Ming; Qiu, Ying; Alphones, Arokiaswami; Zhong, Wen-De; Yu, Changyuan; Yang, Qi

    2018-02-01

    In this paper, channel equalization techniques for coherent optical fiber transmission systems based on independent component analysis (ICA) are reviewed. The principle of ICA for blind source separation is introduced. The ICA based channel equalization after both single-mode fiber and few-mode fiber transmission for single-carrier and orthogonal frequency division multiplexing (OFDM) modulation formats are investigated, respectively. The performance comparisons with conventional channel equalization techniques are discussed.

  9. A Place-Oriented, Mixed-Level Regionalization Method for Constructing Geographic Areas in Health Data Dissemination and Analysis

    PubMed Central

    Mu, Lan; Wang, Fahui; Chen, Vivien W.; Wu, Xiao-Cheng

    2015-01-01

    Similar geographic areas often have great variations in population size. In health data management and analysis, it is desirable to obtain regions of comparable population by decomposing areas of large population (to gain more spatial variability) and merging areas of small population (to mask privacy of data). Based on the Peano curve algorithm and modified scale-space clustering, this research proposes a mixed-level regionalization (MLR) method to construct geographic areas with comparable population. The method accounts for spatial connectivity and compactness, attributive homogeneity, and exogenous criteria such as minimum (and approximately equal) population or disease counts. A case study using Louisiana cancer data illustrates the MLR method and its strengths and limitations. A major benefit of the method is that most upper level geographic boundaries can be preserved to increase familiarity of constructed areas. Therefore, the MLR method is more human-oriented and place-based than computer-oriented and space-based. PMID:26251551
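
One ingredient of such a regionalization, merging consecutive units along a space-filling-curve order until a minimum-population threshold is met, can be sketched greedily. Populations below are made up, and the real MLR method also weighs compactness and attribute homogeneity:

```python
# Greedy merge of consecutive areal units (already ordered along a
# Peano-like space-filling curve) into regions of at least min_pop people.

def merge_along_order(populations, min_pop):
    """Group consecutive units so every region has at least min_pop people."""
    regions, current, total = [], [], 0
    for p in populations:
        current.append(p)
        total += p
        if total >= min_pop:
            regions.append(current)
            current, total = [], 0
    if current:                  # leftover tail: fold into the last region
        regions[-1].extend(current)
    return regions

tracts = [100, 200, 50, 700, 80, 90, 400]    # hypothetical tract populations
regions = merge_along_order(tracts, min_pop=300)
assert all(sum(r) >= 300 for r in regions)   # privacy threshold met
assert [p for r in regions for p in r] == tracts  # curve order preserved
```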

  10. Improvement of lateral resolution of spectral domain optical coherence tomography images in out-of-focus regions with holographic data processing techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moiseev, A A; Gelikonov, G V; Terpelov, D A

    2014-08-31

    An analogy between spectral-domain optical coherence tomography (SD OCT) data and broadband digital holography data is considered. Based on this analogy, a method for processing SD OCT data, which makes it possible to construct images with a lateral resolution in the whole investigated volume equal to the resolution in the in-focus region, is developed. Several issues concerning practical application of the proposed method are discussed. (laser biophotonics)

  11. Americans misperceive racial economic equality

    PubMed Central

    Kraus, Michael W.; Rucker, Julian M.; Richeson, Jennifer A.

    2017-01-01

    The present research documents the widespread misperception of race-based economic equality in the United States. Across four studies (n = 1,377) sampling White and Black Americans from the top and bottom of the national income distribution, participants overestimated progress toward Black–White economic equality, largely driven by estimates of greater current equality than actually exists according to national statistics. Overestimates of current levels of racial economic equality, on average, outstripped reality by roughly 25% and were predicted by greater belief in a just world and social network racial diversity (among Black participants). Whereas high-income White respondents tended to overestimate racial economic equality in the past, Black respondents, on average, underestimated the degree of past racial economic equality. Two follow-up experiments further revealed that making societal racial discrimination salient increased the accuracy of Whites’ estimates of Black–White economic equality, whereas encouraging Whites to anchor their estimates on their own circumstances increased their tendency to overestimate current racial economic equality. Overall, these findings suggest a profound misperception of and unfounded optimism regarding societal race-based economic equality—a misperception that is likely to have any number of important policy implications. PMID:28923915

  12. Entropy-based goodness-of-fit test: Application to the Pareto distribution

    NASA Astrophysics Data System (ADS)

    Lequesne, Justine

    2013-08-01

    Goodness-of-fit tests based on entropy have been introduced in [13] for testing normality. The maximum entropy distribution in a class of probability distributions defined by linear constraints induces a Pythagorean equality between the Kullback-Leibler information and an entropy difference. This allows one to propose a goodness-of-fit test for maximum entropy parametric distributions which is based on the Kullback-Leibler information. We will focus on the application of the method to the Pareto distribution. The power of the proposed test is computed through Monte Carlo simulation.
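
A common nonparametric ingredient of entropy-based goodness-of-fit tests is Vasicek's m-spacing entropy estimator, sketched here on an evenly spaced stand-in for a Uniform(0,1) sample, whose true differential entropy is log(1) = 0. This illustrates the estimator only, not the paper's Pareto test statistic:

```python
import math

def vasicek_entropy(sample, m):
    """Vasicek's m-spacing estimator of differential entropy."""
    x = sorted(sample)
    n = len(x)
    total = 0.0
    for i in range(n):
        lo = x[max(i - m, 0)]        # clip the window at the sample edges
        hi = x[min(i + m, n - 1)]
        total += math.log(n * (hi - lo) / (2 * m))
    return total / n

# Equally spaced points in (0, 1) mimic a Uniform(0,1) sample; the
# estimate should be near the true entropy of 0.
n = 100
grid = [i / (n + 1) for i in range(1, n + 1)]
h = vasicek_entropy(grid, m=5)
assert abs(h) < 0.1
```

A test would compare such an estimate against the maximum entropy of the hypothesized family and reject when the Kullback-Leibler-type gap is large.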

  13. Checking transfer efficiency and equal loading via qualitative optical way in western blotting.

    PubMed

    Gong, Jun-Hua; Gong, Jian-Ping; Zheng, Kai-Wen

    2017-11-01

    The ability to confirm that successful transfer and equal loading have occurred before probing with primary antibodies is important, and total protein staining is commonly used to check transfer efficiency and for normalization, both of which play a crucial role in western blotting. Ponceau S and Coomassie blue are commonly used for this purpose, but their disadvantages have been reported in recent years. We were therefore interested in finding another method that is cheap, easy, and fast. The protein-binding regions of a PVDF membrane remain hydrophilic after the carbinol (methanol) evaporates, whereas the non-protein-binding regions become hydrophobic again. This difference in wettability between the protein-binding and non-protein-binding regions of a polyvinylidene difluoride membrane can be used to check transfer efficiency and equal loading in western blotting. Based on this principle, we describe an optical approach by which an experimenter can observe, within minutes and without any staining, that the proteins have been transferred to the membrane. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Improved FFT-based numerical inversion of Laplace transforms via fast Hartley transform algorithm

    NASA Technical Reports Server (NTRS)

    Hwang, Chyi; Lu, Ming-Jeng; Shieh, Leang S.

    1991-01-01

    The disadvantages of numerical inversion of the Laplace transform via the conventional fast Fourier transform (FFT) are identified and an improved method is presented to remedy them. The improved method is based on introducing a new integration step length Δω = π/(mT) for the trapezoidal-rule approximation of the Bromwich integral, in which a new parameter, m, is introduced for controlling the accuracy of the numerical integration. Naturally, this method leads to multiple sets of complex FFT computations. A new inversion formula is derived such that N equally spaced samples of the inverse Laplace transform function can be obtained by (m/2) + 1 sets of N-point complex FFT computations or by m sets of real fast Hartley transform (FHT) computations.
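
The trapezoidal-rule approximation of the Bromwich integral that the FFT/FHT machinery accelerates can be written down directly. The transform F(s) = 1/(s+1), the contour abscissa, and the truncation limits below are illustrative choices, not the paper's parameters:

```python
import cmath
import math

def bromwich_trapezoid(F, t, a=1.0, omega_max=500.0, steps=100_000):
    """Trapezoidal-rule approximation of the Bromwich inversion integral
    f(t) = (e^{a t} / (2 pi)) * integral of F(a + i w) e^{i w t} dw,
    truncated to |w| <= omega_max. FFT methods evaluate such sums fast."""
    dw = 2.0 * omega_max / steps
    acc = 0.0 + 0.0j
    for k in range(steps + 1):
        w = -omega_max + k * dw
        weight = 0.5 if k in (0, steps) else 1.0   # trapezoid end weights
        acc += weight * F(complex(a, w)) * cmath.exp(1j * w * t)
    return (math.exp(a * t) / (2.0 * math.pi)) * (acc * dw).real

# F(s) = 1/(s+1) has the known inverse f(t) = e^{-t}.
approx = bromwich_trapezoid(lambda s: 1.0 / (s + 1.0), t=1.0)
assert abs(approx - math.exp(-1.0)) < 1e-2
```

The direct sum costs O(steps) per output time; sampling the same sum at N equally spaced times via the FFT is what reduces the overall cost.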

  15. Performance of Koyna dam based on static and dynamic analysis

    NASA Astrophysics Data System (ADS)

    Azizan, Nik Zainab Nik; Majid, Taksiah A.; Nazri, Fadzli Mohamed; Maity, Damodar

    2017-10-01

    This paper discusses the performance of the Koyna dam based on static pushover analysis (SPO) and incremental dynamic analysis (IDA). The SPO in this study considered two types of lateral load: inertial load and hydrodynamic load. The structure was analysed until damage appeared on the dam body. The IDA curves were developed based on 7 ground motions selected so that: (i) the distance from the epicenter is less than 15 km, (ii) the magnitude is equal to or greater than 5.5, and (iii) the PGA is equal to or greater than 0.15 g. All the ground motions were converted to response spectra and scaled according to the developed elastic response spectrum, in order to match the characteristics of the ground motions to the soil type. The elastic response spectrum was developed for soil type B by using Eurocode 8. The SPO and IDA methods make it possible to determine the limit states of the dam. The limit states proposed in this study are the yielding and ultimate states, identified based on the crack patterns formed on the structural model. The maximum crest displacements from both methods were compared to define the limit states of the dam. The displacement at the yielding state for the Koyna dam is 23.84 mm, and 44.91 mm at the ultimate state. The results can be used as a guideline for monitoring the Koyna dam under seismic loading, considering both static and dynamic analyses.

  16. EQUALS Investigations: Telling Someone Where To Go.

    ERIC Educational Resources Information Center

    Mayfield, Karen; Whitlow, Robert

    EQUALS is a teacher education program that helps elementary and secondary educators acquire methods and materials to attract minority and female students to mathematics. It supports a problem-solving approach to mathematics which has students working in groups, uses active assessment methods, and incorporates a broad mathematics curriculum…

  17. A long-term target detection approach in infrared image sequence

    NASA Astrophysics Data System (ADS)

    Li, Hang; Zhang, Qi; Li, Yuanyuan; Wang, Liqiang

    2015-12-01

    An automatic target detection method for long-term infrared (IR) image sequences from a moving platform is proposed. First, based on non-linear histogram equalization, target candidates are segmented coarse-to-fine using two self-adaptive thresholds generated in the intensity space. The real target is then captured via two different selection approaches. At the beginning of the image sequence, the genuine target, which has little texture, is discriminated from other candidates using a contrast-based confidence measure. Later, when the target becomes larger, an online EM method iteratively estimates and updates the distributions of the target's size and position from prior detection results, and then recognizes the genuine target as the one satisfying both the size and position constraints. Experimental results demonstrate that the presented method is accurate, robust and efficient.

  18. Reduction of Racial Disparities in Prostate Cancer

    DTIC Science & Technology

    2005-12-01

    erectile dysfunction, and female sexual dysfunction). Wherever possible, the questions and scales employed on BACH were selected from published...Methods. A racially and ethnically diverse community-based survey of adults aged 30-79 years in Boston, Massachusetts. The BACH survey has...recruited adults in three racial/ethnic groups: Latino, African American, and White using a stratified cluster sample. The target sample size is equally

  19. Big plans.

    PubMed

    Fitch, Kevin F; Doyle, James F

    2005-09-01

    In Elmhurst Memorial Healthcare's capital planning method: Future replacement costs of assets are estimated by inflating their historical cost over their lives. A balanced model is created initially based on the assumption that rates of revenue growth, inflation, investment income, and interest expense are all equal. Numbers then can be adjusted to account for possible variations, such as excesses or shortages in investment or debt balances.

  20. Bas-relief generation using adaptive histogram equalization.

    PubMed

    Sun, Xianfang; Rosin, Paul L; Martin, Ralph R; Langbein, Frank C

    2009-01-01

    An algorithm is presented to automatically generate bas-reliefs based on adaptive histogram equalization (AHE), starting from an input height field. A mesh model may alternatively be provided, in which case a height field is first created via orthogonal or perspective projection. The height field is regularly gridded and treated as an image, enabling a modified AHE method to be used to generate a bas-relief with a user-chosen height range. We modify the original image-contrast-enhancement AHE method to also use gradient weights, enhancing the shape features of the bas-relief. To effectively compress the height field, we limit the height-dependent scaling factors used to compute relative height variations in the output from height variations in the input; this prevents any height differences from having too great an effect. Results of AHE over different neighborhood sizes are averaged to preserve information at different scales in the resulting bas-relief. Compared to previous approaches, the proposed algorithm is simple and yet largely preserves original shape features. Experiments show that our results are, in general, comparable to and in some cases better than the best previously published methods.
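The height-compression step can be illustrated with a plain (global) histogram equalization over a list of height samples. This is only a minimal sketch of the idea: the paper's method additionally uses gradient weights, local neighborhoods, and averaging over neighborhood sizes, none of which are reproduced here; the function name and output range are illustrative.

```python
def equalize_heights(heights, out_range=(0.0, 1.0)):
    """Map height values to out_range via their empirical CDF
    (plain histogram equalization of a 1-D height list)."""
    lo, hi = out_range
    order = sorted(heights)
    n = len(order)
    # For duplicate heights the last rank wins; fine for a sketch.
    rank = {v: i for i, v in enumerate(order)}
    return [lo + (hi - lo) * rank[h] / (n - 1) for h in heights]

# An extreme outlier height is compressed into the target range:
flat = equalize_heights([0.0, 10.0, 20.0, 1000.0])
```

After equalization the ranks, not the raw magnitudes, determine the output heights, which is why a dominant peak no longer swamps the relief.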

  1. Quantum-secret-sharing scheme based on local distinguishability of orthogonal multiqudit entangled states

    NASA Astrophysics Data System (ADS)

    Wang, Jingtao; Li, Lixiang; Peng, Haipeng; Yang, Yixian

    2017-02-01

    In this study, we propose the concept of judgment space to investigate the quantum-secret-sharing scheme based on local distinguishability (called LOCC-QSS). With this concept, the properties of orthogonal multiqudit entangled states under restricted local operation and classical communication (LOCC) can be described more clearly. According to these properties, we reveal that, in the previous (k, n)-threshold LOCC-QSS scheme, there are two required conditions for the selected quantum states to resist the unambiguous attack: (i) their k-level judgment spaces are orthogonal, and (ii) their (k-1)-level judgment spaces are equal. Practically, if k

  2. Simultaneous quantification of actin monomer and filament dynamics with modeling-assisted analysis of photoactivation

    PubMed Central

    Kapustina, Maryna; Read, Tracy-Ann

    2016-01-01

    ABSTRACT Photoactivation allows one to pulse-label molecules and obtain quantitative data about their behavior. We have devised a new modeling-based analysis for photoactivatable actin experiments that simultaneously measures properties of monomeric and filamentous actin in a three-dimensional cellular environment. We use this method to determine differences in the dynamic behavior of β- and γ-actin isoforms, showing that both inhabit filaments that depolymerize at equal rates but that β-actin exists in a higher monomer-to-filament ratio. We also demonstrate that cofilin (cofilin 1) equally accelerates depolymerization of filaments made from both isoforms, but is only required to maintain the β-actin monomer pool. Finally, we used modeling-based analysis to assess actin dynamics in axon-like projections of differentiating neuroblastoma cells, showing that the actin monomer concentration is significantly depleted as the axon develops. Importantly, these results would not have been obtained using traditional half-time analysis. Given that parameters of the publicly available modeling platform can be adjusted to suit the experimental system of the user, this method can easily be used to quantify actin dynamics in many different cell types and subcellular compartments. PMID:27831495

  3. Multi-atlas based segmentation using probabilistic label fusion with adaptive weighting of image similarity measures.

    PubMed

    Sjöberg, C; Ahnesjö, A

    2013-06-01

    Label fusion multi-atlas approaches for image segmentation can give better segmentation results than single atlas methods. We present a multi-atlas label fusion strategy based on probabilistic weighting of distance maps. Relationships between image similarities and segmentation similarities are estimated in a learning phase and used to derive fusion weights that are proportional to the probability for each atlas to improve the segmentation result. The method was tested using a leave-one-out strategy on a database of 21 pre-segmented prostate patients for different image registrations combined with different image similarity scorings. The probabilistic weighting yields results that are equal or better compared to both fusion with equal weights and results using the STAPLE algorithm. Results from the experiments demonstrate that label fusion by weighted distance maps is feasible, and that probabilistic weighted fusion improves segmentation quality more the stronger the individual atlas segmentation quality depends on the corresponding registered image similarity. The regions used for evaluation of the image similarity measures were found to be more important than the choice of similarity measure. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
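A minimal sketch of label fusion by weighted distance maps, under the assumption that each atlas supplies a signed distance map (negative inside the structure) and that the per-atlas weights have already been derived from the learned image-similarity relationship; the names and flattened-array representation are illustrative.

```python
def fuse_labels(distance_maps, weights, threshold=0.0):
    """Weighted average of per-atlas signed distance maps; voxels whose
    fused distance falls below the threshold are labeled foreground."""
    total = sum(weights)
    fused = [sum(w * d[i] for w, d in zip(weights, distance_maps)) / total
             for i in range(len(distance_maps[0]))]
    return [1 if f < threshold else 0 for f in fused]

# Two disagreeing atlases; the higher-weighted one dominates:
labels = fuse_labels([[-1.0, 1.0], [1.0, -1.0]], weights=[3.0, 1.0])
```

With equal weights this reduces to majority voting on the distance maps; the probabilistic weighting is what lets a better-registered atlas outvote the rest.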

  4. Video-Based Fingerprint Verification

    PubMed Central

    Qin, Wei; Yin, Yilong; Liu, Lili

    2013-01-01

    Conventional fingerprint verification systems use only static information. In this paper, fingerprint videos, which contain dynamic information, are utilized for verification. Fingerprint videos are acquired by the same capture device that acquires conventional fingerprint images, and the user experience of providing a fingerprint video is the same as that of providing a single impression. After preprocessing and aligning processes, “inside similarity” and “outside similarity” are defined and calculated to take advantage of both dynamic and static information contained in fingerprint videos. Match scores between two matching fingerprint videos are then calculated by combining the two kinds of similarity. Experimental results show that the proposed video-based method leads to a relative reduction of 60 percent in the equal error rate (EER) in comparison to the conventional single impression-based method. We also analyze the time complexity of our method when different combinations of strategies are used. Our method still outperforms the conventional method, even if both methods have the same time complexity. Finally, experimental results demonstrate that the proposed video-based method can lead to better accuracy than the multiple impressions fusion method, and the proposed method has a much lower false acceptance rate (FAR) when the false rejection rate (FRR) is quite low. PMID:24008283

  5. 12 CFR 268.103 - Complaints of discrimination covered by this part.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... disability), or the Equal Pay Act (sex-based wage discrimination) shall be processed in accordance with this... for employment. (c) This part does not apply to Equal Pay Act complaints of employees whose services... OF THE FEDERAL RESERVE SYSTEM RULES REGARDING EQUAL OPPORTUNITY Board Program To Promote Equal...

  6. Advanced digital signal processing for short-haul and access network

    NASA Astrophysics Data System (ADS)

    Zhang, Junwen; Yu, Jianjun; Chi, Nan

    2016-02-01

    Digital signal processing (DSP) has recently proved to be a successful technology in high-speed, high-spectrum-efficiency optical short-haul and access networks, enabling high performance through digital equalization and compensation. In this paper, we investigate advanced DSP at the transmitter and receiver sides for signal pre-equalization and post-equalization in an optical access network. A novel DSP-based digital and optical pre-equalization scheme is proposed for bandwidth-limited high-speed short-distance communication systems, based on the feedback of receiver-side adaptive equalizers such as the least-mean-squares (LMS) algorithm and the constant- or multi-modulus algorithms (CMA, MMA). Based on this scheme, we experimentally demonstrate 400GE on a single optical carrier using the highest ETDM 120-GBaud PDM-PAM-4 signal, one external modulator, and coherent detection. A line rate of 480 Gb/s is achieved, which accommodates a 20% forward-error correction (FEC) overhead while keeping the 400-Gb/s net information rate. The performance after fiber transmission shows a large margin for both short-range and metro/regional networks. We also extend the advanced DSP to short-haul optical access networks by using high-order QAMs. We propose and demonstrate a high-speed multi-band CAP-WDM-PON system with intensity modulation, direct detection and digital equalization. A hybrid modified cascaded MMA post-equalization scheme is used to equalize the multi-band CAP-mQAM signals. Using this scheme, we successfully demonstrate a 550-Gb/s high-capacity WDM-PON system with 11 WDM channels, 55 sub-bands, and 10 Gb/s per user in the downstream over 40-km SMF.
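The receiver-side adaptive equalization that drives the pre-equalizer feedback can be sketched with a textbook least-mean-squares (LMS) FIR filter. This is a generic, real-valued training-mode sketch, not the paper's implementation; the tap count, step size, and toy channel are illustrative.

```python
import random

def lms_equalize(received, desired, taps=5, mu=0.1):
    """Adaptive FIR equalizer trained by the LMS rule
    w <- w + mu * e * x, where e is the error against training symbols."""
    w = [0.0] * taps
    out = []
    for n in range(len(received)):
        # Current input window (zero-padded at the start).
        x = [received[n - k] if n - k >= 0 else 0.0 for k in range(taps)]
        y = sum(wi * xi for wi, xi in zip(w, x))        # equalizer output
        e = desired[n] - y                              # training error
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]  # LMS update
        out.append(y)
    return out, w

# Toy channel: flat 0.5 attenuation of a random binary training sequence.
random.seed(0)
sent = [random.choice([1.0, -1.0]) for _ in range(500)]
received = [0.5 * s for s in sent]
out, w = lms_equalize(received, sent)
```

For this memoryless channel the filter converges to an inverse gain of about 2 on the first tap; blind variants such as CMA/MMA replace the training error with a modulus-based error term.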

  7. Multiple Interacting Risk Factors: On Methods for Allocating Risk Factor Interactions.

    PubMed

    Price, Bertram; MacNicoll, Michael

    2015-05-01

    A persistent problem in health risk analysis where it is known that a disease may occur as a consequence of multiple risk factors with interactions is allocating the total risk of the disease among the individual risk factors. This problem, referred to here as risk apportionment, arises in various venues, including: (i) public health management, (ii) government programs for compensating injured individuals, and (iii) litigation. Two methods have been described in the risk analysis and epidemiology literature for allocating total risk among individual risk factors. One method uses weights to allocate interactions among the individual risk factors. The other method is based on risk accounting axioms and finding an optimal and unique allocation that satisfies the axioms using a procedure borrowed from game theory. Where relative risk or attributable risk is the risk measure, we find that the game-theory-determined allocation is the same as the allocation where risk factor interactions are apportioned to individual risk factors using equal weights. Therefore, the apportionment problem becomes one of selecting a meaningful set of weights for allocating interactions among the individual risk factors. Equal weights and weights proportional to the risks of the individual risk factors are discussed. © 2015 Society for Risk Analysis.
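The game-theoretic allocation described above is the Shapley value. A brute-force sketch for a small number of risk factors (the factor names and risk numbers are illustrative, not from the paper):

```python
import math
from itertools import permutations

def shapley_allocation(factors, risk):
    """Apportion total risk via the Shapley value: average each factor's
    marginal contribution over all orderings. `risk` maps a frozenset of
    present factors to their joint excess risk; the empty set carries 0."""
    phi = {f: 0.0 for f in factors}
    for order in permutations(factors):
        coalition = set()
        for f in order:
            before = risk.get(frozenset(coalition), 0.0)
            coalition.add(f)
            after = risk.get(frozenset(coalition), 0.0)
            phi[f] += after - before
    n_orderings = math.factorial(len(factors))
    return {f: v / n_orderings for f, v in phi.items()}

# Individual risks 1 and 2 plus an interaction of 2 (total 5):
alloc = shapley_allocation(
    ["A", "B"],
    {frozenset(["A"]): 1.0, frozenset(["B"]): 2.0, frozenset(["A", "B"]): 5.0},
)
```

Here each factor receives its individual risk plus half the interaction (A: 1 + 1 = 2, B: 2 + 1 = 3), illustrating the paper's observation that the game-theoretic allocation coincides with equal-weight splitting of the interaction term.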

  8. Lower-upper-threshold correlation for underwater range-gated imaging self-adaptive enhancement.

    PubMed

    Sun, Liang; Wang, Xinwei; Liu, Xiaoquan; Ren, Pengdao; Lei, Pingshun; He, Jun; Fan, Songtao; Zhou, Yan; Liu, Yuliang

    2016-10-10

    In underwater range-gated imaging (URGI), enhancement of low-brightness and low-contrast images is critical for human observation. Traditional histogram equalizations over-enhance images, with the result of details being lost. To compress over-enhancement, a lower-upper-threshold correlation method is proposed for underwater range-gated imaging self-adaptive enhancement based on double-plateau histogram equalization. The lower threshold determines image details and compresses over-enhancement. It is correlated with the upper threshold. First, the upper threshold is updated by searching for the local maximum in real time, and then the lower threshold is calculated by the upper threshold and the number of nonzero units selected from a filtered histogram. With this method, the backgrounds of underwater images are constrained with enhanced details. Finally, the proof experiments are performed. Peak signal-to-noise-ratio, variance, contrast, and human visual properties are used to evaluate the objective quality of the global and regions of interest images. The evaluation results demonstrate that the proposed method adaptively selects the proper upper and lower thresholds under different conditions. The proposed method contributes to URGI with effective image enhancement for human eyes.
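The double-plateau histogram equalization underlying the method can be sketched as clipping the histogram between a lower and an upper plateau before building the equalization mapping. This sketch fixes both thresholds by hand, whereas the paper derives the lower threshold from the upper one and a filtered histogram; all names and values are illustrative.

```python
def double_plateau_equalize(hist, t_low, t_up):
    """Clip nonzero histogram bins to [t_low, t_up], then map gray levels
    through the cumulative clipped histogram (standard HE on the clipped
    histogram). The upper plateau limits over-enhancement of large flat
    backgrounds; the lower plateau protects sparse detail bins."""
    clipped = [min(max(h, t_low) if h > 0 else 0, t_up) for h in hist]
    total = sum(clipped)
    levels = len(hist)
    cdf, acc = [], 0
    for c in clipped:
        acc += c
        cdf.append(acc)
    return [round((levels - 1) * c / total) for c in cdf]

# One dominant background bin plus two sparse detail bins:
mapping = double_plateau_equalize([100, 0, 1, 1], t_low=2, t_up=10)
```

Plain HE would give the 100-count background bin almost the entire output range; clipping it to the upper plateau leaves room for the detail bins.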

  9. Hybrid time-frequency domain equalization for LED nonlinearity mitigation in OFDM-based VLC systems.

    PubMed

    Li, Jianfeng; Huang, Zhitong; Liu, Xiaoshuang; Ji, Yuefeng

    2015-01-12

    A novel hybrid time-frequency domain equalization scheme is proposed and experimentally demonstrated to mitigate the white light emitting diode (LED) nonlinearity in visible light communication (VLC) systems based on orthogonal frequency division multiplexing (OFDM). We handle the linear and nonlinear distortion separately in a nonlinear OFDM system. The linear part is equalized in frequency domain and the nonlinear part is compensated by an adaptive nonlinear time domain equalizer (N-TDE). The experimental results show that with only a small number of parameters the nonlinear equalizer can efficiently mitigate the LED nonlinearity. With the N-TDE the modulation index (MI) and BER performance can be significantly enhanced.

  10. Differential change in integrative psychotherapy: a re-analysis of a change-factor based RCT in a naturalistic setting.

    PubMed

    Holtforth, Martin Grosse; Wilm, Katharina; Beyermann, Stefanie; Rhode, Annemarie; Trost, Stephanie; Steyer, Rolf

    2011-11-01

    General Psychotherapy (GPT; Grawe, 1997) is a research-informed psychotherapy that combines cognitive-behavioral and process-experiential techniques and that assumes motivational clarification and problem mastery as central mechanisms of change. To isolate the effect of motivational clarification, GPT was compared to a treatment that proscribed motivational clarification (General Psychotherapy Minus Clarification, GPT-C) in a randomized-controlled trial with 67 diagnostically heterogeneous outpatients. Previous analyses demonstrated equal outcomes and some superiority for highly avoidant patients in GPT. Re-analyses using causal-analytic methods confirmed equal changes, but also showed superior effects for GPT in highly symptomatic patients. Results are discussed regarding theory, methodological limitations, and implications for research and practice.

  11. Equalization for a page-oriented optical memory system

    NASA Astrophysics Data System (ADS)

    Trelewicz, Jennifer Q.; Capone, Jeffrey

    1999-11-01

    In this work, a method of decision-feedback equalization is developed for a digital holographic channel that experiences moderate-to-severe imaging errors. Decision feedback is utilized, not only where the channel is well-behaved, but also near the edges of the camera grid that are subject to a high degree of imaging error. In addition to these effects, the channel is worsened by typical problems of holographic channels, including non-uniform illumination, dropouts, and stuck bits. The approach described in this paper builds on established methods for performing trained and blind equalization on time-varying channels. The approach is tested on experimental data sets. On most of these data sets, the method of equalization described in this work delivers at least an order of magnitude improvement in bit-error rate (BER) before error-correction coding (ECC). When ECC is introduced, the approach is able to recover stored data with no errors for many of the tested data sets. Furthermore, a low BER was maintained even over a range of small alignment perturbations in the system. It is believed that this equalization method can allow cost reductions to be made in page-memory systems, by allowing for a larger image area per page or less complex imaging components, without sacrificing the low BER required by data storage applications.

  12. Detection of atheroma using Photofrin IIr and laser-induced fluorescence spectroscopy

    NASA Astrophysics Data System (ADS)

    Vari, Sandor G.; Papazoglou, Theodore G.; van der Veen, Maurits J.; Papaioannou, Thanassis; Fishbein, Michael C.; Chandra, Mudjianto; Beeder, Clain; Shi, Wei-Qiang; Grundfest, Warren S.

    1991-06-01

    The goal of this study was to investigate laser-induced fluorescence spectroscopy (LIFS) as a method of localizing atherosclerotic lesions not visible by angiography, using Photofrin IIr-enhanced fluorescence. Twenty-four New Zealand White rabbits divided into six groups varying in type of arterial wall lesion and Photofrin IIr administration time (i.v.) were used. Aortic wall fluorescence signals were acquired from the aortic arch to the iliac bifurcation. The output of a He-Cd laser (442 nm, 17 mW) was directed at the arterial wall through a 400 micron fiber. The fluorescence signal created in the arterial wall was collected via the same fiber and analyzed by an optical multi-channel analyzer (OMA). The ratio of fluorescence intensities at 630 nm (Photofrin IIr) and 540 nm (autofluorescence of the artery wall) was analyzed (I630nm/I540nm). Twenty-four hours after administration of Photofrin IIr, the intensity ratio was 0.30 ± 0.14 (n = 3) in normal artery wall, 0.91 ± 0.65 (n = 2) in mechanically damaged wall, and 0.88 ± 0.54 (n = 4) in atheromatous tissue. The intensity ratio of atheromatous tissue without Photofrin IIr was 0.23 ± 0.04 (n = 7). These results suggest that the use of Photofrin IIr allows in vivo atheroma detection by LIFS because of its ability to accumulate in atheroma. In addition, accumulation of Photofrin IIr was found in artery walls traumatized by balloon catheter intervention. Using this method, a catheter-based LIFS system may be developed for atheroma detection.

  13. Stochastic system identification in structural dynamics

    USGS Publications Warehouse

    Safak, Erdal

    1988-01-01

    Recently, new identification methods have been developed by using the concept of optimal-recursive filtering and stochastic approximation. These methods, known as stochastic identification, are based on the statistical properties of the signal and noise, and do not require the assumptions of current methods. The criterion for stochastic system identification is that the difference between the recorded output and the output from the identified system (i.e., the residual of the identification) should be equal to white noise. In this paper, first a brief review of the theory is given. Then, an application of the method is presented by using ambient vibration data from a nine-story building.
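The identification criterion above, that the residual should be white noise, can be checked with a generic sample-autocorrelation whiteness test. This is not the specific procedure of the paper; the confidence multiplier and names are illustrative.

```python
import random

def residual_is_white(residual, max_lag=10, z=1.96):
    """Crude whiteness check: for white noise of length N, normalized
    sample autocorrelations at nonzero lags should stay within about
    +/- z / sqrt(N)."""
    n = len(residual)
    mean = sum(residual) / n
    r0 = sum((x - mean) ** 2 for x in residual) / n
    bound = z / n ** 0.5
    for lag in range(1, max_lag + 1):
        rk = sum((residual[i] - mean) * (residual[i - lag] - mean)
                 for i in range(lag, n)) / n
        if abs(rk / r0) > bound:
            return False
    return True
```

If the residual fails the test, the identified system has not captured all the signal structure and the model order or parameters need revisiting.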

  14. A Synthetic Quadrature Phase Detector/Demodulator for Fourier Transform Spectrometers

    NASA Technical Reports Server (NTRS)

    Campbell, Joel

    2008-01-01

    A method is developed to demodulate (velocity correct) Fourier transform spectrometer (FTS) data acquired with an analog-to-digital converter that samples at equal time intervals. This method makes it possible to use simple, low-cost, high-resolution audio digitizers to record high-quality data without the need for an event timer or quadrature laser hardware, and makes it possible to use a metrology laser of any wavelength. The reduced parts count and simple implementation make it an attractive alternative for space-based applications when compared to previous methods such as the Brault algorithm.

  15. Person identification by using 3D palmprint data

    NASA Astrophysics Data System (ADS)

    Bai, Xuefei; Huang, Shujun; Gao, Nan; Zhang, Zonghua

    2016-11-01

    Person identification based on biometrics is drawing more and more attention in identity verification and information security. This paper presents a biometric system that identifies a person using 3D palmprint data, comprising a non-contact system that captures the 3D palmprint quickly and a method that identifies the 3D palmprint fast. To reduce the effect of slight shaking of the palm on data accuracy, a DLP (Digital Light Processing) projector is used to trigger a CCD camera, and 3D palmprint data can be gathered within 1 second based on structured-light and triangulation measurement. Using the obtained database and the PolyU 3D palmprint database, a feature extraction and matching method is presented based on MCI (Mean Curvature Image), Gabor filters and binary code lists. Experimental results show that the proposed method can identify a person within 240 ms in the case of 4000 samples. Compared with traditional 3D palmprint recognition methods, the proposed method has high accuracy, a low EER (Equal Error Rate), small storage space, and fast identification speed.
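After feature extraction, binary code lists are typically compared by normalized Hamming similarity. The following sketches that matching step only; the MCI and Gabor feature extraction is not reproduced, and the names are illustrative.

```python
def hamming_score(code_a, code_b):
    """Fraction of matching bits between two equal-length binary codes;
    1.0 means identical codes, 0.0 means every bit differs."""
    assert len(code_a) == len(code_b), "codes must be the same length"
    same = sum(1 for a, b in zip(code_a, code_b) if a == b)
    return same / len(code_a)

score = hamming_score([1, 0, 1, 1], [1, 1, 1, 0])
```

A decision threshold on this score trades false accepts against false rejects; the EER quoted in the abstract is the operating point where the two error rates are equal.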

  16. District nursing workforce planning: a review of the methods.

    PubMed

    Reid, Bernie; Kane, Kay; Curran, Carol

    2008-11-01

    District nursing services in Northern Ireland face increasing demands and challenges, which may be met by effective and efficient workforce planning and development. The aim of this paper is to critically analyse district nursing workforce planning and development methods, in an attempt to find a suitable method for Northern Ireland. A systematic analysis of the literature reveals four methods: professional judgement; population-based health needs; caseload analysis; and dependency-acuity. Each method has strengths and weaknesses. Professional judgement offers a 'belt and braces' approach but lacks sensitivity to fluctuating patient numbers. Population-based health needs methods develop staffing algorithms that reflect deprivation and geographical spread, but are poorly understood by district nurses. Caseload analysis promotes equitable workloads, but poorly performing district nursing localities may continue if benchmarking processes only consider local data. Dependency-acuity methods provide a means of equalizing and prioritizing workload but are prone to district nurses overstating factors in patient dependency or understating carers' capability. In summary, a mixed-method approach is advocated to evaluate and adjust the size and mix of district nursing teams using empirically determined patient dependency and activity-based variables based on the population's health needs.

  17. Bayes factors based on robust TDT-type tests for family trio design.

    PubMed

    Yuan, Min; Pan, Xiaoqing; Yang, Yaning

    2015-06-01

    Adaptive transmission disequilibrium test (aTDT) and the MAX3 test are two robust-efficient association tests for case-parent family trio data. Both tests incorporate information from the common genetic models, including recessive, additive and dominant models, and are efficient in power and robust to genetic model specification. The aTDT uses information on departure from Hardy-Weinberg disequilibrium to identify the potential genetic model underlying the data and then applies the corresponding TDT-type test, and the MAX3 test is defined as the maximum of the absolute values of the three TDT-type tests under the three common genetic models. In this article, we propose three robust Bayes procedures, the aTDT-based Bayes factor, the MAX3-based Bayes factor and Bayes model averaging (BMA), for association analysis with the case-parent trio design. The asymptotic distributions of aTDT under the null and alternative hypotheses are derived in order to calculate its Bayes factor. Extensive simulations show that the Bayes factors and the p-values of the corresponding tests are generally consistent, and that these Bayes factors are robust to genetic model specification, especially so when the priors on the genetic models are equal. When equal priors are used for the underlying genetic models, the Bayes factor method based on aTDT is more powerful than those based on MAX3 and Bayes model averaging. When the prior places a small (large) probability on the true model, the Bayes factor based on aTDT (BMA) is more powerful. Analysis of simulated data on RA from GAW15 is presented to illustrate applications of the proposed methods.

  18. Aspects of Equality in Mandatory Partnerships - From the Perspective of Municipal Care in Norway.

    PubMed

    Kirchhoff, Ralf; Ljunggren, Birgitte

    2016-05-18

    This paper raises questions about equality in partnerships, since imbalance in partnerships may affect collaboration outcomes in integrated care. We address aspects of equality in mandatory, public-public partnerships from the perspective of municipal care. We have developed a questionnaire for which the Norwegian Coordination Reform is an illustrative example. The following research question is addressed: Which equality dimensions are important for municipalities in mandatory partnerships with hospitals? Since we did not find any instrument to measure equality in partnerships, an explorative design was chosen. The development of the instrument was based on partnership theory and knowledge of the field and context. A national online survey was sent to all 429 Norwegian municipalities in 2013. The response rate was 58 percent in total (n = 248). The data were mainly analysed using principal component analysis. The two dimensions "learning and expertise equality" and "contractual equality" appear to yield reliable and valid data for measuring aspects of equality in partnerships. Partnerships are usually based on voluntarism. The results indicate that mandatory partnerships within a public health care system can be appropriate for equalizing partnerships between health care providers at different care levels.

  19. Halving It All: How Equally Shared Parenting Works.

    ERIC Educational Resources Information Center

    Deutsch, Francine M.

    Noting that details of everyday life contribute to parental equality or inequality, this qualitative study focused on how couples transformed parental roles to create truly equal families. Participating in the study were 88 couples in 4 categories, based on division of parental responsibilities: equal sharers, 60-40 couples, 75-25 couples, and…

  20. Comparison of three Bayesian methods to estimate posttest probability in patients undergoing exercise stress testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morise, A.P.; Duval, R.D.

    To determine whether recent refinements in Bayesian methods have led to improved diagnostic ability, 3 methods using Bayes' theorem and the independence assumption for estimating posttest probability after exercise stress testing were compared. Each method differed in the number of variables considered in the posttest probability estimate (method A = 5, method B = 6 and method C = 15). Method C is better known as CADENZA. There were 436 patients (250 men and 186 women) who underwent stress testing (135 had concurrent thallium scintigraphy) followed within 2 months by coronary arteriography. Coronary artery disease (CAD; at least 1 vessel with greater than or equal to 50% diameter narrowing) was seen in 169 (38%). Mean pretest probabilities using each method were not different. However, the mean posttest probabilities for CADENZA were significantly greater than those for method A or B (p less than 0.0001). Each decile of posttest probability was compared to the actual prevalence of CAD in that decile. At posttest probabilities less than or equal to 20%, there was underestimation of CAD. However, at posttest probabilities greater than or equal to 60%, there was overestimation of CAD by all methods, especially CADENZA. Comparison of sensitivity and specificity at every fifth percentile of posttest probability revealed that CADENZA was significantly more sensitive and less specific than methods A and B. Therefore, at lower probability thresholds, CADENZA was a better screening method. However, methods A or B still had merit as a means to confirm higher probabilities generated by CADENZA (especially greater than or equal to 60%).
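The serial form of Bayes' theorem under the independence assumption, common to all three methods, can be sketched as an odds-likelihood-ratio update; the likelihood-ratio values below are illustrative, not taken from the study.

```python
def posttest_probability(pretest, likelihood_ratios):
    """Serial Bayes update under the independence assumption: convert
    pretest probability to odds, multiply by the likelihood ratio of
    each test variable, and convert back to a probability."""
    odds = pretest / (1.0 - pretest)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# Pretest probability 0.38 (the CAD prevalence in the study population),
# updated by two hypothetical test results:
p = posttest_probability(0.38, [3.5, 0.8])
```

The independence assumption is exactly what lets the likelihood ratios multiply; correlated test variables (as with the 15-variable CADENZA) can push the product, and hence the posttest estimate, too high.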

  1. Altitudinal patterns of plant diversity on the Jade Dragon Snow Mountain, southwestern China.

    PubMed

    Xu, Xiang; Zhang, Huayong; Tian, Wang; Zeng, Xiaoqiang; Huang, Hai

    2016-01-01

    Understanding altitudinal patterns of biological diversity and their underlying mechanisms is critically important for biodiversity conservation in mountainous regions. The contribution of area to plant diversity patterns is widely acknowledged and may mask the effects of other determinant factors. In this context, it is important to examine altitudinal patterns of corrected taxon richness by eliminating the area effect. Here we adopt two methods to correct observed taxon richness: a power-law relationship between richness and area, hereafter "method 1"; and richness counted in equal-area altitudinal bands, hereafter "method 2". We compare these two methods on the Jade Dragon Snow Mountain, which is the nearest large-scale altitudinal gradient to the Equator in the Northern Hemisphere. We find that seed plant species richness, genus richness, family richness, and species richness of trees, shrubs, herbs and Groups I-III (species with elevational range size <150, between 150 and 500, and >500 m, respectively) display distinct hump-shaped patterns along the equal-elevation altitudinal gradient. The corrected taxon richness based on method 2 (TRcor2) also shows hump-shaped patterns for all plant groups, while the one based on method 1 (TRcor1) does not. As for the abiotic factors influencing the patterns, mean annual temperature, mean annual precipitation, and mid-domain effect explain a larger part of the variation in TRcor2 than in TRcor1. In conclusion, for biodiversity patterns on the Jade Dragon Snow Mountain, method 2 preserves the significant influences of abiotic factors to the greatest degree while eliminating the area effect. Our results thus reveal that although the classical method 1 has earned more attention and approval in previous research, method 2 can perform better under certain circumstances. We not only confirm the essential contribution of method 1 in community ecology, but also highlight the significant role of method 2 in eliminating the area effect, and call for more application of method 2 in further macroecological studies.
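"Method 1" — correcting richness through a power-law richness-area relationship — can be sketched as dividing observed richness by the A^z expectation. The exponent here is illustrative; in the study it would be estimated from the data (e.g., by log-log regression across bands).

```python
def area_corrected_richness(richness, area, z=0.25):
    """Divide observed richness by the species-area expectation S ~ c * A^z,
    leaving a relative richness with the area effect removed (the constant
    c cancels when comparing bands)."""
    return [s / (a ** z) for s, a in zip(richness, area)]

# Two altitudinal bands with equal observed richness but a 16-fold
# difference in area: the larger band's richness is discounted.
corrected = area_corrected_richness([10.0, 10.0], [1.0, 16.0])
```

"Method 2" sidesteps this model entirely by re-binning the gradient into equal-area bands, which is why it needs no fitted exponent.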

  2. Computer-Aided Evaluation of Blood Vessel Geometry From Acoustic Images.

    PubMed

    Lindström, Stefan B; Uhlin, Fredrik; Bjarnegård, Niclas; Gylling, Micael; Nilsson, Kamilla; Svensson, Christina; Yngman-Uhlin, Pia; Länne, Toste

    2018-04-01

    A method for computer-aided assessment of blood vessel geometries based on shape-fitting algorithms from metric vision was evaluated. Acoustic images of cross sections of the radial artery and cephalic vein were acquired, and medical practitioners used a computer application to measure the wall thickness and nominal diameter of these blood vessels with a caliper method and the shape-fitting method. The methods performed equally well for wall thickness measurements. The shape-fitting method was preferable for measuring the diameter, since it reduced systematic errors by up to 63% in the case of the cephalic vein because of its eccentricity. © 2017 by the American Institute of Ultrasound in Medicine.
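
    The shape-fitting idea can be illustrated with an algebraic least-squares circle fit. The paper does not specify its algorithm; the Kasa fit below is one common choice from metric vision, and the contour points are synthetic.

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit: with c = r^2 - a^2 - b^2,
    each point satisfies x^2 + y^2 = 2*a*x + 2*b*y + c, which is linear
    in the unknowns (a, b, c)."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + a**2 + b**2)
    return a, b, r  # centre (a, b) and radius r; nominal diameter is 2*r

# Synthetic noise-free contour points on a circle of radius 2 centred at (1, -1)
t = np.linspace(0.0, 2.0 * np.pi, 50, endpoint=False)
cx, cy, r = fit_circle(1.0 + 2.0 * np.cos(t), -1.0 + 2.0 * np.sin(t))
```

    Unlike a two-point caliper measurement, a fit over the whole contour averages out eccentricity, which is consistent with the reported error reduction for the eccentric cephalic vein.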

  3. Real-Time PCR-Based Quantitation Method for the Genetically Modified Soybean Line GTS 40-3-2.

    PubMed

    Kitta, Kazumi; Takabatake, Reona; Mano, Junichi

    2016-01-01

    This chapter describes a real-time PCR-based method for quantitation of the relative amount of genetically modified (GM) soybean line GTS 40-3-2 [Roundup Ready(®) soybean (RRS)] contained in a batch. The method targets a taxon-specific soybean gene (lectin gene, Le1) and the specific DNA construct junction region between the Petunia hybrida chloroplast transit peptide sequence and the Agrobacterium 5-enolpyruvylshikimate-3-phosphate synthase gene (epsps) sequence present in GTS 40-3-2. The method employs plasmid pMulSL2 as a reference material in order to quantify the relative amount of GTS 40-3-2 in soybean samples using a conversion factor (Cf) equal to the ratio of the RRS-specific DNA to the taxon-specific DNA in representative genuine GTS 40-3-2 seeds.
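
    The conversion-factor arithmetic can be sketched as follows; the copy numbers and the Cf value are hypothetical, not values from the chapter.

```python
def gm_percentage(construct_copies, taxon_copies, cf):
    """Relative GM amount (%): the construct/taxon copy-number ratio in the
    sample, normalised by the conversion factor Cf (the same ratio measured
    in representative 100% GTS 40-3-2 seeds)."""
    return (construct_copies / taxon_copies) / cf * 100.0

# Hypothetical copy numbers from one real-time PCR run, with a made-up Cf
pct = gm_percentage(construct_copies=1200.0, taxon_copies=60000.0, cf=0.40)
```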

  4. A New Nonparametric Levene Test for Equal Variances

    ERIC Educational Resources Information Center

    Nordstokke, David W.; Zumbo, Bruno D.

    2010-01-01

    Tests of the equality of variances are sometimes used on their own to compare variability across groups of experimental or non-experimental conditions but they are most often used alongside other methods to support assumptions made about variances. A new nonparametric test of equality of variances is described and compared to current "gold…

  5. Equalizing secondary path effects using the periodicity of fMRI acoustic noise.

    PubMed

    Kannan, Govind; Milani, Ali A; Panahi, Issa; Briggs, Richard

    2008-01-01

    A non-minimum-phase secondary path has a direct effect on achieving a desired noise attenuation level in active noise control (ANC) systems. The adaptive noise canceling filter is often a causal FIR filter, which may not be able to sufficiently equalize the effect of a non-minimum-phase secondary path, since in theory only a non-causal filter can equalize it. However, a non-causal stable filter can be found to equalize the non-minimum-phase effect of the secondary path. Realization of non-causal stable filters requires knowledge of future values of the input signal. In this paper we develop methods for equalizing the non-minimum-phase property of the secondary path and improving the performance of an ANC system by exploiting the periodicity of fMRI acoustic noise. It has been shown that the scanner noise component is highly periodic and hence predictable, which enables easy realization of non-causal filtering. Improvement in performance due to the proposed methods (with and without the equalizer) is shown for periodic fMRI acoustic noise.
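
    The trick that makes the non-causal filter realizable can be sketched in one line: for a P-periodic signal, a "future" sample is available one period in the past. The synthetic noise and the period are assumptions for illustration.

```python
import numpy as np

def pseudo_future(x, n, d, period):
    """For a signal with period P samples, approximate the future sample
    x[n + d] by the sample one period earlier, x[n + d - P], using only
    already-observed data (requires d < P and n + d - P >= 0)."""
    return x[n + d - period]

# Synthetic periodic 'scanner noise' with period P = 100 samples
P = 100
n = np.arange(1000)
noise = np.sin(2 * np.pi * n / P) + 0.5 * np.sin(6 * np.pi * n / P)

# Predict 10 samples into the future at n = 500 and compare with the truth
prediction_error = abs(pseudo_future(noise, 500, 10, P) - noise[510])
```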

  6. Spectrophotometric total reducing sugars assay based on cupric reduction.

    PubMed

    Başkan, Kevser Sözgen; Tütem, Esma; Akyüz, Esin; Özen, Seda; Apak, Reşat

    2016-01-15

    As the concentration of reducing sugars (RS) is controlled by European legislation for certain specific food and beverages, a simple and sensitive spectrophotometric method for the determination of RS in various food products is proposed. The method is based on the reduction of Cu(II) to Cu(I) with reducing sugars in alkaline medium in the presence of 2,9-dimethyl-1,10-phenanthroline (neocuproine: Nc), followed by the formation of a colored Cu(I)-Nc charge-transfer complex. All simple sugars tested gave linear regression equations with almost equal slope values. The proposed method was successfully applied to fresh apple juice, commercial fruit juices, milk, honey and onion juice. The interference effect of phenolic compounds in plant samples was eliminated by a solid phase extraction (SPE) clean-up process. The method was proven to have higher sensitivity and precision than the widely used dinitrosalicylic acid (DNS) colorimetric method. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Rapid calculation of accurate atomic charges for proteins via the electronegativity equalization method.

    PubMed

    Ionescu, Crina-Maria; Geidl, Stanislav; Svobodová Vařeková, Radka; Koča, Jaroslav

    2013-10-28

    We focused on the parametrization and evaluation of empirical models for fast and accurate calculation of conformationally dependent atomic charges in proteins. The models were based on the electronegativity equalization method (EEM), and the parametrization procedure was tailored to proteins. We used large protein fragments as reference structures and fitted the EEM model parameters using atomic charges computed by three population analyses (Mulliken, Natural, iterative Hirshfeld), at the Hartree-Fock level with two basis sets (6-31G*, 6-31G**) and in two environments (gas phase, implicit solvation). We parametrized and successfully validated 24 EEM models. When tested on insulin and ubiquitin, all models reproduced quantum mechanics level charges well and were consistent with respect to population analysis and basis set. Specifically, the models showed on average a correlation of 0.961, RMSD 0.097 e, and average absolute error per atom 0.072 e. The EEM models can be used with the freely available EEM implementation EEM_SOLVER.
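
    EEM charge calculation amounts to solving a small linear system. The sketch below uses a generic EEM formulation with made-up two-atom parameters (A, B, kappa, distances); it is not one of the 24 parametrized models and is not EEM_SOLVER itself.

```python
import numpy as np

def eem_charges(A, B, R, kappa, total_charge=0.0):
    """Solve the EEM linear system: for every atom i,
        A_i + B_i*q_i + kappa * sum_{j != i} q_j / R_ij = chi_bar,
    together with sum_i q_i = Q, for the charges q and the equalized
    electronegativity chi_bar."""
    n = len(A)
    M = np.zeros((n + 1, n + 1))
    rhs = np.zeros(n + 1)
    for i in range(n):
        M[i, i] = B[i]
        for j in range(n):
            if j != i:
                M[i, j] = kappa / R[i, j]
        M[i, n] = -1.0          # coefficient of the unknown chi_bar
        rhs[i] = -A[i]
    M[n, :n] = 1.0              # total-charge constraint sum(q) = Q
    rhs[n] = total_charge
    sol = np.linalg.solve(M, rhs)
    return sol[:n], sol[n]

# Two-atom toy molecule with made-up parameters, atoms 1.0 apart
A = np.array([1.0, 0.5])        # per-atom electronegativity parameters
B = np.array([2.0, 2.0])        # per-atom hardness parameters
R = np.array([[0.0, 1.0], [1.0, 0.0]])
q, chi_bar = eem_charges(A, B, R, kappa=1.0)
```

    Because only one (n+1)-dimensional linear solve is required per conformation, charges respond to geometry through R at a tiny fraction of the cost of a quantum-mechanical calculation, which is the speed advantage the paper exploits.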

  8. Underwater Equal-Latency Contours of a Harbor Porpoise (Phocoena phocoena) for Tonal Signals Between 0.5 and 125 kHz.

    PubMed

    Wensveen, Paul J; Huijser, Léonie A E; Hoek, Lean; Kastelein, Ronald A

    2016-01-01

    Loudness perception can be studied based on the assumption that sounds of equal loudness elicit equal reaction time (RT; or "response latency"). We measured the underwater RTs of a harbor porpoise to narrowband frequency-modulated sounds and constructed six equal-latency contours. The contours paralleled the audiogram at low sensation levels (high RTs). At high-sensation levels, contours flattened between 0.5 and 31.5 kHz but dropped substantially (RTs shortened) beyond those frequencies. This study suggests that equal-latency-based frequency weighting can emulate noise perception in porpoises for low and middle frequencies but that the RT-loudness correlation is relatively weak for very high frequencies.

  9. A Universally Applicable and Rapid Method for Measuring the Growth of Streptomyces and Other Filamentous Microorganisms by Methylene Blue Adsorption-Desorption

    PubMed Central

    Fischer, Marco

    2013-01-01

    Quantitative assessment of growth of filamentous microorganisms, such as streptomycetes, is generally restricted to determination of dry weight. Here, we describe a straightforward methylene blue-based sorption assay to monitor microbial growth quantitatively, simply, and rapidly. The assay is equally applicable to unicellular and filamentous bacterial and eukaryotic microorganisms. PMID:23666340

  10. Cognitive Profile in Young Adults Born Preterm at Very Low Birthweight

    ERIC Educational Resources Information Center

    Lohaugen, Gro C. C.; Gramstad, Arne; Evensen, Kari Anne I.; Martinussen, Marit; Lindqvist, Susanne; Indredavik, Marit; Vik, Torstein; Brubakk, Ann-Mari; Skranes, Jon

    2010-01-01

    Aim: The aim of this study was to assess cognitive function at the age of 19 years in individuals of very low birthweight (VLBW; less than or equal to 1500g) and in term-born comparison individuals. Method: In this hospital-based follow-up study, 55 VLBW participants (30 males, 25 females; mean birthweight 1217g, SD 233g; mean gestational age…

  11. Feasibility, Reliability and Validity of the Dutch Translation of the Anxiety, Depression and Mood Scale in Older Adults with Intellectual Disabilities

    ERIC Educational Resources Information Center

    Hermans, Heidi; Jelluma, Naftha; van der Pas, Femke H.; Evenhuis, Heleen M.

    2012-01-01

    Background: The informant-based Anxiety, Depression And Mood Scale was translated into Dutch and its feasibility, reliability and validity in older adults (aged greater than or equal to 50 years) with intellectual disabilities (ID) was studied. Method: Test-retest (n = 93) and interrater reliability (n = 83), and convergent (n = 202 and n = 787),…

  12. Efficient Iris Recognition Based on Optimal Subfeature Selection and Weighted Subregion Fusion

    PubMed Central

    Deng, Ning

    2014-01-01

    In this paper, we propose three discriminative feature selection strategies and a weighted subregion matching method to improve the performance of an iris recognition system. Firstly, we introduce the process of feature extraction and representation based on scale invariant feature transformation (SIFT) in detail. Secondly, three strategies are described: an orientation probability distribution function (OPDF) based strategy to delete redundant feature keypoints, a magnitude probability distribution function (MPDF) based strategy to reduce the dimensionality of feature elements, and a compounded strategy combining OPDF and MPDF to further select an optimal subfeature. Thirdly, to make matching more effective, this paper proposes a novel matching method based on weighted subregion matching fusion. Particle swarm optimization is utilized to efficiently obtain the different subregions' weights, and the weighted subregion matching scores are then combined to generate the final decision. The experimental results, on three public and renowned iris databases (CASIA-V3 Interval, Lamp, and MMU-V1), demonstrate that our proposed methods outperform some of the existing methods in terms of correct recognition rate, equal error rate, and computation complexity. PMID:24683317

  13. Efficient iris recognition based on optimal subfeature selection and weighted subregion fusion.

    PubMed

    Chen, Ying; Liu, Yuanning; Zhu, Xiaodong; He, Fei; Wang, Hongye; Deng, Ning

    2014-01-01

    In this paper, we propose three discriminative feature selection strategies and a weighted subregion matching method to improve the performance of an iris recognition system. Firstly, we introduce the process of feature extraction and representation based on scale invariant feature transformation (SIFT) in detail. Secondly, three strategies are described: an orientation probability distribution function (OPDF) based strategy to delete redundant feature keypoints, a magnitude probability distribution function (MPDF) based strategy to reduce the dimensionality of feature elements, and a compounded strategy combining OPDF and MPDF to further select an optimal subfeature. Thirdly, to make matching more effective, this paper proposes a novel matching method based on weighted subregion matching fusion. Particle swarm optimization is utilized to efficiently obtain the different subregions' weights, and the weighted subregion matching scores are then combined to generate the final decision. The experimental results, on three public and renowned iris databases (CASIA-V3 Interval, Lamp, and MMU-V1), demonstrate that our proposed methods outperform some of the existing methods in terms of correct recognition rate, equal error rate, and computation complexity.

  14. Surgical gesture classification from video and kinematic data.

    PubMed

    Zappella, Luca; Béjar, Benjamín; Hager, Gregory; Vidal, René

    2013-10-01

    Much of the existing work on automatic classification of gestures and skill in robotic surgery is based on dynamic cues (e.g., time to completion, speed, forces, torque) or kinematic data (e.g., robot trajectories and velocities). While videos could be equally or more discriminative (e.g., videos contain semantic information not present in kinematic data), they are typically not used because of the difficulties associated with automatic video interpretation. In this paper, we propose several methods for automatic surgical gesture classification from video data. We assume that the video of a surgical task (e.g., suturing) has been segmented into video clips corresponding to a single gesture (e.g., grabbing the needle, passing the needle) and propose three methods to classify the gesture of each video clip. In the first one, we model each video clip as the output of a linear dynamical system (LDS) and use metrics in the space of LDSs to classify new video clips. In the second one, we use spatio-temporal features extracted from each video clip to learn a dictionary of spatio-temporal words, and use a bag-of-features (BoF) approach to classify new video clips. In the third one, we use multiple kernel learning (MKL) to combine the LDS and BoF approaches. Since the LDS approach is also applicable to kinematic data, we use MKL to combine both types of data in order to exploit their complementarity. Our experiments on a typical surgical training setup show that methods based on video data perform as well as, if not better than, state-of-the-art approaches based on kinematic data. In turn, the combination of both kinematic and video data outperforms any other algorithm based on one type of data alone. Copyright © 2013 Elsevier B.V. All rights reserved.

  15. Folksonomical P2P File Sharing Networks Using Vectorized KANSEI Information as Search Tags

    NASA Astrophysics Data System (ADS)

    Ohnishi, Kei; Yoshida, Kaori; Oie, Yuji

    We present the concept of folksonomical peer-to-peer (P2P) file sharing networks that allow participants (peers) to freely assign structured search tags to files. These networks are similar to folksonomies in the present Web from the point of view that users assign search tags to information distributed over a network. As a concrete example, we consider an unstructured P2P network using vectorized Kansei (human sensitivity) information as structured search tags for file search. Vectorized Kansei information as search tags indicates what participants feel about their files and is assigned by the participant to each of their files. A search query also has the same form of search tags and indicates what participants want to feel about files that they will eventually obtain. A method that enables file search using vectorized Kansei information is the Kansei query-forwarding method, which probabilistically propagates a search query to peers that are likely to hold more files having search tags that are similar to the query. The similarity between the search query and the search tags is measured in terms of their dot product. The simulation experiments examine whether the Kansei query-forwarding method can provide equal search performance for all peers in a network in which only the Kansei information and the tendency with respect to file collection are different among all of the peers. The simulation results show that the Kansei query-forwarding method and a random-walk-based query forwarding method, for comparison, work effectively in different situations and are complementary. Furthermore, the Kansei query-forwarding method is shown, through simulations, to be superior or equal to the random-walk-based one in terms of search speed.
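
    The dot-product-based probabilistic propagation can be sketched as follows; the clipping of negative similarities, the normalisation, and the tag vectors are illustrative assumptions rather than the paper's exact scheme.

```python
def forward_probabilities(query, peer_tags):
    """Forwarding probabilities proportional to the dot product between the
    Kansei query vector and each neighbour's tag vector; negative similarities
    are clipped to zero and the result is normalised to sum to one."""
    sims = [max(0.0, sum(q * t for q, t in zip(query, tags)))
            for tags in peer_tags]
    total = sum(sims)
    if total == 0.0:
        return [1.0 / len(peer_tags)] * len(peer_tags)  # fall back to uniform
    return [s / total for s in sims]

# Three neighbours with hypothetical 3-dimensional Kansei tag vectors
probs = forward_probabilities(
    query=[0.9, 0.1, 0.0],
    peer_tags=[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.5, 0.5, 0.0]],
)
```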

  16. Using biomarker data to adjust estimates of the distribution of usual intakes for misreporting: application to energy intake in the US population.

    PubMed

    Yanetz, Rivka; Kipnis, Victor; Carroll, Raymond J; Dodd, Kevin W; Subar, Amy F; Schatzkin, Arthur; Freedman, Laurence S

    2008-03-01

    It is now well-established that individuals misreport their dietary intake. We propose a new method (National Research Council-Biomarker [NRC-B]) for estimating population distributions of usual dietary intake from national survey 24-hour recall data, using additional biomarker data from an external study to adjust for such dietary misreporting. NRC-B is an extension of the NRC method and is based upon two assumptions: the ratio of the mean of true intake to that of reported intake is equal in the survey and the external biomarker study; and the ratio of the variance of true intake to that of reported intake is equal in these two studies. NRC-B adjusts the usual intake distribution both for within-person variation and for the bias (underreporting) that occurs with 24-hour recall reports. Using doubly labeled water ((2)H(2)(18)O) measurements from the Observing Protein and Energy Nutrition study, we applied NRC-B to data on energy intake for adults aged 40 to 69 years from two national surveys, the Continuing Survey of Food Intakes by Individuals and the National Health and Nutrition Examination Survey. We compared the results with the NRC and traditional methods that used only the survey data to estimate dietary intake distributions. Estimated distributions from NRC-B and NRC were much narrower and less skewed than those from the traditional method. However, unlike NRC, the median of the NRC-B-based distribution was 8% to 16% higher than that of the traditional method in our examples. The proposed method adjusts for the well-documented problem of underreporting of energy intake.
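
    The two ratio assumptions can be illustrated with a deliberately crude sketch: reported intakes are rescaled so that their mean and variance change by biomarker-derived ratios. The full NRC-B model also removes within-person variation, which is omitted here, and the intakes and ratios are made up.

```python
import statistics

def nrcb_adjust(reported, mean_ratio, var_ratio):
    """Rescale reported intakes so that their mean is multiplied by
    mean_ratio (true/reported mean from the biomarker study) and their
    variance by var_ratio (true/reported variance)."""
    m = statistics.mean(reported)
    sd_scale = var_ratio ** 0.5
    return [m * mean_ratio + (x - m) * sd_scale for x in reported]

# Made-up reported energy intakes (kcal/day) and biomarker-derived ratios
adjusted = nrcb_adjust([1800.0, 2200.0, 2000.0], mean_ratio=1.15, var_ratio=0.8)
```

    A mean ratio above 1 shifts the whole distribution upward, consistent with the reported 8% to 16% increase in the median relative to the traditional method.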

  17. Hand biometric recognition based on fused hand geometry and vascular patterns.

    PubMed

    Park, GiTae; Kim, Soowon

    2013-02-28

    A hand biometric authentication method based on measurements of the user's hand geometry and vascular pattern is proposed. To acquire the hand geometry, the thickness of the side view of the hand, the K-curvature with a hand-shaped chain code, the lengths and angles of the finger valleys, and the lengths and profiles of the fingers were used; for the vascular pattern, the direction-based vascular-pattern extraction method was used. Thus, a new multimodal biometric approach is proposed. The proposed multimodal biometric system uses only one image to extract the feature points, so the system can be configured for low-cost devices. Our multimodal approach fuses hand geometry (the side view and the back of the hand) and vascular-pattern recognition at the score level. The results of our study showed that the equal error rate of the proposed system was 0.06%.

  18. Hand Biometric Recognition Based on Fused Hand Geometry and Vascular Patterns

    PubMed Central

    Park, GiTae; Kim, Soowon

    2013-01-01

    A hand biometric authentication method based on measurements of the user's hand geometry and vascular pattern is proposed. To acquire the hand geometry, the thickness of the side view of the hand, the K-curvature with a hand-shaped chain code, the lengths and angles of the finger valleys, and the lengths and profiles of the fingers were used; for the vascular pattern, the direction-based vascular-pattern extraction method was used. Thus, a new multimodal biometric approach is proposed. The proposed multimodal biometric system uses only one image to extract the feature points, so the system can be configured for low-cost devices. Our multimodal approach fuses hand geometry (the side view and the back of the hand) and vascular-pattern recognition at the score level. The results of our study showed that the equal error rate of the proposed system was 0.06%. PMID:23449119

  19. Powder Metallurgy Reconditioning of Food and Processing Equipment Components

    NASA Astrophysics Data System (ADS)

    Nafikov, M. Z.; Aipov, R. S.; Konnov, A. Yu.

    2017-12-01

    A powder metallurgy method is developed to recondition the worn surfaces of food and processing equipment components. A combined additive is composed to minimize the powder losses in sintering. A technique is constructed to determine the powder consumption as a function of the required metallic coating thickness. A rapid method is developed to determine the porosity of the coating. The proposed technology is used to fabricate a wear-resistant defectless metallic coating with favorable residual stresses, and the adhesive strength of this coating is equal to the strength of the base metal.

  20. Estimation of lean and fat composition of pork ham using image processing measurements

    NASA Astrophysics Data System (ADS)

    Jia, Jiancheng; Schinckel, Allan P.; Forrest, John C.

    1995-01-01

    This paper presents a method of estimating the lean and fat composition in pork ham from cross-sectional area measurements using image processing technology. The relationship between the quantity of ham lean and fat mass and the ham lean and fat areas was studied. Prediction equations for pork ham composition based on the ham cross-sectional area measurements were developed. The results show that ham lean weight was related to the ham lean area (r equals .75, P < .0001), while ham fat weight was related to the ham fat area (r equals .79, P equals .0001). Ham lean weight was highly related to the product of ham total weight times percentage ham lean area (r equals .96, P < .0001). Ham fat weight was highly related to the product of ham total weight times percentage ham fat area (r equals .88, P < .0001). The best combination of independent variables for estimating ham lean weight was trimmed wholesale ham weight and percentage ham fat area, with a coefficient of determination of 92%. The best combination of independent variables for estimating ham fat weight was trimmed wholesale ham weight and percentage ham fat area, with a coefficient of determination of 78%. Prediction equations with either two or three independent variables did not significantly increase the accuracy of prediction. The results of this study indicate that the weight of ham lean and fat could be predicted from ham cross-sectional area measurements using image analysis in combination with wholesale ham weight.
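
    The two-variable prediction equations are ordinary least-squares fits, which can be sketched as below. The data here are synthetic and the coefficients are invented for illustration; they are not the paper's fitted equations.

```python
import numpy as np

def fit_linear(X, y):
    """Ordinary least squares with an intercept: returns
    [intercept, b1, b2, ...] for the model y ~ 1 + X."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# Synthetic data generated from a known rule: lean = 0.2 + 0.8*w - 0.05*f
rng = np.random.default_rng(0)
w = rng.uniform(8.0, 12.0, 30)    # trimmed wholesale ham weight
f = rng.uniform(10.0, 30.0, 30)   # percentage ham fat area
lean = 0.2 + 0.8 * w - 0.05 * f
coef = fit_linear(np.column_stack([w, f]), lean)
```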

  1. Symbolic and Nonsymbolic Equivalence Tasks: The Influence of Symbols on Students with Mathematics Difficulty

    ERIC Educational Resources Information Center

    Driver, Melissa K.; Powell, Sarah R.

    2015-01-01

    Students often experience difficulty with attaching meaning to mathematics symbols. Many students react to symbols, such as the equal sign, as a command to "do something" or "write an answer" without reflecting upon the proper relational meaning of the equal sign. One method for assessing equal-sign understanding is through…

  2. Appropriate Statistical Analysis for Two Independent Groups of Likert-Type Data

    ERIC Educational Resources Information Center

    Warachan, Boonyasit

    2011-01-01

    The objective of this research was to determine the robustness and statistical power of three different methods for testing the hypothesis that ordinal samples of five and seven Likert categories come from equal populations. The three methods are the two sample t-test with equal variances, the Mann-Whitney test, and the Kolmogorov-Smirnov test. In…

  3. Multi-party quantum private comparison based on the entanglement swapping of d-level cat states and d-level Bell states

    NASA Astrophysics Data System (ADS)

    Zhao-Xu, Ji; Tian-Yu, Ye

    2017-07-01

    In this paper, a novel multi-party quantum private comparison protocol with a semi-honest third party (TP) is proposed based on the entanglement swapping of d-level cat states and d-level Bell states. Here, TP is allowed to misbehave on his own but will not conspire with any party. In our protocol, n parties employ unitary operations to encode their private secrets and can compare the equality of their private secrets within a single execution of the protocol. Our protocol can withstand both outside attacks and participant attacks, even though none of the QKD methods is adopted to generate keys for security. One party cannot obtain the other parties' secrets except for the case that their secrets are identical. The semi-honest TP cannot learn any information about these parties' secrets except the final comparison result on whether all private secrets from the n parties are equal.

  4. Anisotropic surface acoustic waves in tungsten/lithium niobate phononic crystals

    NASA Astrophysics Data System (ADS)

    Sun, Jia-Hong; Yu, Yuan-Hai

    2018-02-01

    Phononic crystals (PnCs) are known for exhibiting band gaps for different acoustic waves, and they have already been applied in surface acoustic wave (SAW) devices as reflective gratings based on these band gaps. In this paper, another important property of PnCs, anisotropic propagation, is studied. PnCs made of circular tungsten films on a lithium niobate substrate were analyzed by the finite element method. Dispersion curves and equal frequency contours of surface acoustic waves in PnCs of various dimensions were calculated to study the anisotropy. Non-circular equal frequency contours and negative refraction of the group velocity were observed. The PnC was then applied as an acoustic lens based on the anisotropic propagation. The trajectory of a SAW passing through the PnC lens was calculated, and the transmission of the SAW was optimized by selecting a proper number of lens layers and applying a tapered PnC. The results showed that the PnC lens can effectively suppress the diffraction of surface waves and improve the performance of SAW devices.

  5. An Efficient Augmented Lagrangian Method with Applications to Total Variation Minimization

    DTIC Science & Technology

    2012-08-17

    Based on the classic augmented Lagrangian multiplier method, we propose, analyze and test an algorithm for solving a class of equality-constrained non-smooth optimization problems (chiefly but not…), significantly outperforming several state-of-the-art solvers on most tested problems. The resulting MATLAB solver, called TVAL3, has been posted online [23].

  6. Optical study of Erbium-doped-porous silicon based planar waveguides

    NASA Astrophysics Data System (ADS)

    Najar, A.; Ajlani, H.; Charrier, J.; Lorrain, N.; Haesaert, S.; Oueslati, M.; Haji, L.

    2007-06-01

    Planar waveguides were formed from porous silicon layers obtained on P+ substrates. These waveguides were then doped with erbium using an electrochemical method. Erbium concentration in the range 2.2-2.5 at% was determined by energy dispersive X-ray (EDX) analysis performed on SEM cross sections. The refractive index of the layers was studied before and after doping and thermal treatments. The photoluminescence of Er3+ ions in the IR range and the decay curve of the 1.53 μm emission peak were studied as a function of the excitation power. The excited Er density was equal to 0.07%. Optical loss contributions were analyzed on these waveguides, and the losses were equal to 1.1 dB/cm at 1.55 μm after doping.

  7. Non-binary LDPC-coded modulation for high-speed optical metro networks with backpropagation

    NASA Astrophysics Data System (ADS)

    Arabaci, Murat; Djordjevic, Ivan B.; Saunders, Ross; Marcoccia, Roberto M.

    2010-01-01

    To simultaneously mitigate the linear and nonlinear channel impairments in high-speed optical communications, we propose the use of non-binary low-density-parity-check-coded modulation in combination with a coarse backpropagation method. By employing backpropagation, we reduce the memory in the channel and in return obtain significant reductions in the complexity of the channel equalizer, which is exponentially proportional to the channel memory. We then compensate for the remaining channel distortions using forward error correction based on non-binary LDPC codes. We propose the non-binary-LDPC-coded modulation scheme because, compared to a bit-interleaved binary-LDPC-coded modulation scheme employing turbo equalization, the proposed scheme lowers the computational complexity and latency of the overall system while providing impressively larger coding gains.

  8. Aren't We There Yet? Why Re-Invigorating the Equality Agenda Is an Institutional Priority

    ERIC Educational Resources Information Center

    Ruebain, David

    2012-01-01

    Perhaps more than any other country in Europe, the UK has well-established equality law and practice, originating with the Race Relations Act of 1965, but based on a longer history of struggle for equality. In 2011 public bodies, including higher education institutions (HEIs), were required to respond to the implementation of the Equality Act…

  9. Terrain Correction on the moving equal area cylindrical map projection of the surface of a reference ellipsoid

    NASA Astrophysics Data System (ADS)

    Ardalan, A.; Safari, A.; Grafarend, E.

    2003-04-01

    An operational algorithm has been developed for computing the ellipsoidal terrain correction, based on the closed-form solution of the Newton integral in terms of Cartesian coordinates on the cylindrical equal-area map projection of the surface of a reference ellipsoid. As the first step, the mapping of points on the surface of a reference ellipsoid onto the cylindrical equal-area projection of a cylinder tangent to a point on the ellipsoid surface is closely studied and the map projection formulas are derived. Ellipsoidal mass elements of various sizes on the surface of the reference ellipsoid are considered, and the gravitational potential and the gravitational intensity vector of these mass elements are computed via the solution of the Newton integral in terms of ellipsoidal coordinates. The geographical cross-section areas of the selected ellipsoidal mass elements are transferred into the cylindrical equal-area map projection, and based on the transformed area elements, Cartesian mass elements with the same height as the ellipsoidal mass elements are constructed. Using the closed-form solution of the Newton integral in terms of Cartesian coordinates, the potential of the Cartesian mass elements is computed and compared with the corresponding results based on the ellipsoidal Newton integral over the ellipsoidal mass elements. The numerical computations show that the difference between the computed gravitational potential of the ellipsoidal mass elements and that of the Cartesian mass elements in the cylindrical equal-area map projection is of the order of 1.6 × 10^-8 m^2/s^2 for a mass element with a cross-section size of 10 km × 10 km and a height of 1000 m. For a 1 km × 1 km mass element with the same height, this difference is less than 1.5 × 10^-4 m^2/s^2.
The results of the numerical computations indicate that a new method for computing the terrain correction, based on the closed-form solution of the Newton integral in terms of Cartesian coordinates yet with the accuracy of the ellipsoidal terrain correction, has been achieved. In this way one can enjoy the simplicity of the solution of the Newton integral in terms of Cartesian coordinates and, at the same time, the accuracy of the ellipsoidal terrain correction, which is needed for the modern theory of geoid computation.

  10. Finger vein verification system based on sparse representation.

    PubMed

    Xin, Yang; Liu, Zhi; Zhang, Haixia; Zhang, Hong

    2012-09-01

    Finger vein verification is a promising biometric pattern for personal identification in terms of security and convenience. The recognition performance of this technology heavily relies on the quality of finger vein images and on the recognition algorithm. To achieve efficient recognition performance, a special finger vein imaging device is developed, and a finger vein recognition method based on sparse representation is proposed. The motivation for the proposed method is that finger vein images exhibit a sparse property. In the proposed system, the regions of interest (ROIs) in the finger vein images are segmented and enhanced. Sparse representation and sparsity preserving projection on ROIs are performed to obtain the features. Finally, the features are measured for recognition. An equal error rate of 0.017% was achieved based on the finger vein image database, which contains images that were captured by using the near-IR imaging device that was developed in this study. The experimental results demonstrate that the proposed method is faster and more robust than previous methods.

  11. Using SEM to Analyze Complex Survey Data: A Comparison between Design-Based Single-Level and Model-Based Multilevel Approaches

    ERIC Educational Resources Information Center

    Wu, Jiun-Yu; Kwok, Oi-man

    2012-01-01

    Both ad-hoc robust sandwich standard error estimators (design-based approach) and multilevel analysis (model-based approach) are commonly used for analyzing complex survey data with nonindependent observations. Although these 2 approaches perform equally well on analyzing complex survey data with equal between- and within-level model structures…

  12. Characterisation of structure-borne sound source using reception plate method.

    PubMed

    Putra, A; Saari, N F; Bakri, H; Ramlan, R; Dan, R M

    2013-01-01

    A laboratory-based experimental procedure of the reception plate method for structure-borne sound source characterisation is reported in this paper. The method rests on the assumption that the input power from the source installed on the plate is equal to the power dissipated by the plate. In this experiment, rectangular plates having high and low mobility relative to that of the source were used as the reception plates, and a small electric fan motor acted as the structure-borne source. The data representing the source characteristics, namely the free velocity and the source mobility, were obtained and compared with those from direct measurement. The assumptions and constraints involved in employing this method are discussed.
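The stated power balance is commonly written P = ω·η·M·⟨v²⟩, with η the plate loss factor, M the plate mass, and ⟨v²⟩ the spatially averaged mean-square plate velocity. A minimal sketch (the quantities passed in would come from measurement; any example numbers are hypothetical):

```python
import numpy as np

def dissipated_power(freq_hz, loss_factor, plate_mass_kg, velocities_ms):
    """Reception-plate power balance: the structure-borne input power is
    taken equal to the power dissipated by the plate,
        P = omega * eta * M * <v^2>,
    where <v^2> is the spatial average of the mean-square velocity
    measured at several points on the plate."""
    omega = 2.0 * np.pi * freq_hz
    v_sq_avg = np.mean(np.abs(velocities_ms) ** 2)
    return omega * loss_factor * plate_mass_kg * v_sq_avg
```

In practice η is obtained separately (e.g. from decay measurements), and the velocities are band-averaged over a grid of accelerometer positions.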

  13. Mitigating effect on turbulent scintillation using non-coherent multi-beam overlapped illumination

    NASA Astrophysics Data System (ADS)

    Zhou, Lu; Tian, Yuzhen; Wang, Rui; Wang, Tingfeng; Sun, Tao; Wang, Canjin; Yang, Xiaotian

    2017-12-01

    In order to find an effective method of mitigating turbulent scintillation for applications involving laser propagation through the atmosphere, we demonstrate a model using non-coherent multi-beam overlapped illumination. Based on the lognormal distribution and the statistical moments of the overlapped field, the reduction of turbulent scintillation by this method is discussed and tested against numerical wave-optics simulations and laboratory experiments with phase plates. Our analysis shows that the best mitigating effect, in which the scintillation index of the overlapped field is reduced to 1/N of that obtained with single-beam illumination, is achieved when the intensities of the N emitted beams are equal.
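For N statistically independent beams of equal mean intensity, var(ΣI)/⟨ΣI⟩² is exactly 1/N of the single-beam scintillation index; a quick Monte Carlo check with arbitrary lognormal parameters (not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def scintillation_index(I):
    """sigma_I^2 = var(I) / mean(I)^2."""
    return float(np.var(I) / np.mean(I) ** 2)

# N mutually incoherent beams, each with lognormal intensity statistics;
# sigma = 0.3 is an arbitrary illustrative turbulence strength.
n_samples, N = 200_000, 8
I = np.exp(rng.normal(0.0, 0.3, size=(n_samples, N)))

s_single = scintillation_index(I[:, 0])      # one beam
s_overlap = scintillation_index(I.sum(axis=1))  # N overlapped beams
```

Since the beams are i.i.d., the variance of the sum grows as N while the squared mean grows as N², so `s_overlap` lands near `s_single / N`; correlated beams would degrade this ideal 1/N reduction.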

  14. A method for froth flotation condition recognition based on adaptive feature weighting

    NASA Astrophysics Data System (ADS)

    Wang, Jieran; Zhang, Jun; Tian, Jinwen; Zhang, Daimeng; Liu, Xiaomao

    2018-03-01

    The fusion of froth characteristics can play a complementary role in expressing the content of a froth image, and the weighting of the individual features is the key to exploiting the relationships between them. In this paper, an adaptive feature-weighted method for froth flotation condition recognition is proposed. Froth features without and with weights are both classified using a support vector machine (SVM). The classification accuracy under each ore grade is used to assess the adaptive feature-weighting algorithm, and the effectiveness of the adaptive weighted method is demonstrated.

  15. Nonlinear spline wavefront reconstruction through moment-based Shack-Hartmann sensor measurements.

    PubMed

    Viegers, M; Brunner, E; Soloviev, O; de Visser, C C; Verhaegen, M

    2017-05-15

    We propose a spline-based aberration reconstruction method through moment measurements (SABRE-M). The method uses first- and second-moment information from the focal spots of the SH sensor to reconstruct the wavefront with bivariate simplex B-spline basis functions. Because it provides higher-order local wavefront estimates with quadratic and cubic basis functions, the proposed method can achieve the same accuracy with SH arrays that have a reduced number of subapertures and, correspondingly, larger lenses, which can be beneficial for applications in low-light conditions. In numerical experiments the performance of SABRE-M is compared to that of the first-moment method SABRE for aberrations of different spatial orders and for different sizes of the SH array. The results show that SABRE-M is superior to SABRE, in particular for the higher-order aberrations, and that SABRE-M can deliver performance equal to that of SABRE on an SH grid of halved sampling.

  16. A Monte Carlo simulation based inverse propagation method for stochastic model updating

    NASA Astrophysics Data System (ADS)

    Bao, Nuo; Wang, Chunjie

    2015-08-01

    This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters are selected by means of F-test evaluation and design of experiments, and an incomplete fourth-order polynomial response surface model (RSM) is then developed. Exploiting the RSM combined with Monte Carlo simulation (MCS) reduces the computational effort and makes rapid random sampling possible. The inverse uncertainty propagation is formulated as the equally weighted sum of mean and covariance matrix objective functions. The mean and covariance of the parameters are estimated simultaneously by minimizing the weighted objective function through a hybrid particle-swarm and Nelder-Mead simplex optimization method, so that better correlation between simulation and test is achieved. Numerical examples of a three-degree-of-freedom mass-spring system under different conditions and of the GARTEUR assembly structure validate the feasibility and effectiveness of the proposed method.

  17. Cut set-based risk and reliability analysis for arbitrarily interconnected networks

    DOEpatents

    Wyss, Gregory D.

    2000-01-01

    Method for computing all-terminal reliability for arbitrarily interconnected networks such as the United States public switched telephone network. The method includes an efficient search algorithm to generate minimal cut sets for nonhierarchical networks directly from the network connectivity diagram. Efficiency of the search algorithm stems in part from its basis on only link failures. The method also includes a novel quantification scheme that likewise reduces computational effort associated with assessing network reliability based on traditional risk importance measures. Vast reductions in computational effort are realized since combinatorial expansion and subsequent Boolean reduction steps are eliminated through analysis of network segmentations using a technique of assuming node failures to occur on only one side of a break in the network, and repeating the technique for all minimal cut sets generated with the search algorithm. The method functions equally well for planar and non-planar networks.
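As a toy illustration of generating minimal cut sets from link failures only, here is a brute-force enumeration on a small made-up ring network; the patented algorithm is far more efficient, and this sketch shares only the definition of a minimal cut set:

```python
from itertools import combinations

def connected(nodes, edges):
    """Check connectivity of an undirected graph by flood fill."""
    if not nodes:
        return True
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        n = stack.pop()
        if n in seen:
            continue
        seen.add(n)
        for a, b in edges:
            if a == n and b not in seen:
                stack.append(b)
            elif b == n and a not in seen:
                stack.append(a)
    return seen == set(nodes)

def minimal_link_cut_sets(nodes, edges):
    """Brute-force minimal cut sets based on link failures only: a set
    of links whose removal disconnects the network, containing no
    smaller cut set. Searching by increasing size lets supersets of
    already-found cuts be pruned, so only minimal cuts are kept."""
    cuts = []
    for r in range(1, len(edges) + 1):
        for combo in combinations(edges, r):
            removed = set(combo)
            if any(c <= removed for c in cuts):
                continue  # contains a smaller cut set -> not minimal
            rest = [e for e in edges if e not in removed]
            if not connected(nodes, rest):
                cuts.append(frozenset(combo))
    return cuts

# Demo: a 4-node ring. No single link failure disconnects it, and any
# two simultaneous link failures do, so there are C(4,2) = 6 minimal cuts.
nodes = {"A", "B", "C", "D"}
ring = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")]
cuts = minimal_link_cut_sets(nodes, ring)
```

The exponential enumeration here is exactly the cost the patented search algorithm and its quantification scheme are designed to avoid on networks the size of the public switched telephone network.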

  18. Do Low-Income Students Have Equal Access to the Highest-Performing Teachers? Technical Appendix. NCEE 2011-4016

    ERIC Educational Resources Information Center

    Glazerman, Steven; Max, Jeffrey

    2011-01-01

    This appendix describes the methods and provides further detail to support the evaluation brief, "Do Low-Income Students Have Equal Access to the Highest-Performing Teachers?" (Contains 8 figures, 6 tables and 5 footnotes.) [For the main report, "Do Low-Income Students Have Equal Access to the Highest-Performing Teachers? NCEE…

  19. Determination of dopamine hydrochloride by host-guest interaction based on water-soluble pillar[5]arene

    NASA Astrophysics Data System (ADS)

    Xiao, Xue-Dong; Shi, Lin; Guo, Li-Hui; Wang, Jun-Wen; Zhang, Xiang

    2017-02-01

    The supramolecular interaction between a water-soluble pillar[5]arene (WP[5]) as host and dopamine hydrochloride (DH) as guest was studied by spectrofluorometry. The fluorescence intensity of DH gradually decreased with increasing WP[5] concentration, and the possible interaction mechanism between WP[5] and DH was confirmed by 1H NMR, 2D NOESY, and molecular modelling. Based on this significant change in DH fluorescence, a highly sensitive and selective method for DH determination was developed for the first time. The fluorescence intensity was measured at 312 nm, with excitation at 285 nm. The effects of pH, temperature, and reaction time on the fluorescence spectra of the WP[5]-DH complex were investigated. A linear relationship between fluorescence intensity and DH concentration was obtained in the range of 0.07-6.2 μg mL^-1. The corresponding linear regression equation is ΔF = 25.76C + 13.56 (where C denotes the concentration in μg mL^-1), with a limit of detection of 0.03 μg mL^-1 and a correlation coefficient of 0.9996. This method can be used for the determination of dopamine in injection and urine samples. In addition, the WP[5]-DH complex has potential applications in fluorescent sensing and in pharmacokinetic studies of DH.
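Using the reported calibration in practice amounts to inverting the regression line; a trivial sketch (only meaningful inside the stated 0.07-6.2 μg mL^-1 linear range):

```python
# Calibration reported in the abstract: dF = 25.76 * C + 13.56,
# with C the dopamine hydrochloride concentration in ug/mL.
SLOPE, INTERCEPT = 25.76, 13.56

def concentration_from_dF(dF):
    """Invert the linear calibration to recover the concentration in
    ug/mL. Only valid within the reported 0.07-6.2 ug/mL linear range."""
    return (dF - INTERCEPT) / SLOPE
```

For example, a measured ΔF of 65.08 maps back to a concentration of 2.0 μg/mL.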

  20. Multiple point least squares equalization in a room

    NASA Technical Reports Server (NTRS)

    Elliott, S. J.; Nelson, P. A.

    1988-01-01

    Equalization filters designed to minimize the mean square error between a delayed version of the original electrical signal and the equalized response at a single point in a room have previously been investigated. In general, such a strategy degrades the response at positions in the room away from the equalization point. A method is presented for designing an equalization filter by adjusting the filter coefficients to minimize the sum of the squared errors between the equalized responses at multiple points in the room and delayed versions of the original electrical signal. Such an equalization filter can give a more uniform frequency response over a greater volume of the enclosure than the single-point equalizer above. Computer simulation results are presented for equalizing the frequency responses from a loudspeaker to various typical ear positions, in a room with dimensions and acoustic damping typical of a car interior, using the two approaches outlined above. Adaptive filter algorithms, which can automatically adjust the coefficients of a digital equalization filter to achieve this minimization, are also discussed.
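The multiple-point design reduces to stacking one convolution matrix per measurement point and solving a single linear least-squares problem against delayed impulses. A minimal numpy sketch with hypothetical two-tap "room" responses standing in for measured ones:

```python
import numpy as np

def multipoint_equalizer(impulse_responses, n_taps, delay):
    """Least-squares FIR equalizer: choose one set of filter taps that
    minimizes the summed squared error between each equalized room
    response and a delayed unit impulse (the delayed original signal)."""
    blocks, targets = [], []
    for h in impulse_responses:
        n_out = len(h) + n_taps - 1
        # Convolution matrix: H @ w == np.convolve(h, w)
        H = np.zeros((n_out, n_taps))
        for k in range(n_taps):
            H[k:k + len(h), k] = h
        d = np.zeros(n_out)
        d[delay] = 1.0        # target: delayed unit impulse
        blocks.append(H)
        targets.append(d)
    A = np.vstack(blocks)
    b = np.concatenate(targets)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

# Two hypothetical (similar) responses at nearby ear positions.
h1 = np.array([1.0, 0.5])
h2 = np.array([1.0, 0.45])
w = multipoint_equalizer([h1, h2], n_taps=16, delay=2)

def total_error(taps):
    """Summed squared error over both points for a given filter."""
    err = 0.0
    for h in (h1, h2):
        d = np.zeros(len(h) + len(taps) - 1)
        d[2] = 1.0
        err += float(np.sum((np.convolve(h, taps) - d) ** 2))
    return err

# Baseline "no equalization" filter: a pure delay of two samples.
passthrough = np.zeros(16)
passthrough[2] = 1.0
```

With very dissimilar responses a single filter cannot equalize all points well, which is exactly the uniformity-versus-accuracy trade-off the paper studies.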

  1. Comprehensive Fractal Description of Porosity of Coal of Different Ranks

    PubMed Central

    Ren, Jiangang; Zhang, Guocheng; Song, Zhimin; Liu, Gaofeng; Li, Bing

    2014-01-01

    We selected, as the objects of our research, lignite from the Beizao Mine, gas coal from the Caiyuan Mine, coking coal from the Xiqu Mine, and anthracite from the Guhanshan Mine. We used the mercury intrusion method and the low-temperature liquid nitrogen adsorption method to analyze the structure and shape of the coal pores and calculated the fractal dimensions of different aperture segments in the coal. The experimental results show that the fractal dimension of the aperture segment of lignite, gas coal, and coking coal with an aperture of greater than or equal to 10 nm, as well as the fractal dimension of the aperture segment of anthracite with an aperture of greater than or equal to 100 nm, can be calculated using the mercury intrusion method; the fractal dimension of the coal pore, with an aperture range between 2.03 nm and 361.14 nm, can be calculated using the liquid nitrogen adsorption method, of which the fractal dimensions bounded by apertures of 10 nm and 100 nm are different. Based on these findings, we defined and calculated the comprehensive fractal dimensions of the coal pores and achieved the unity of fractal dimensions for the full range of apertures of coal pores, thereby facilitating overall characterization of the heterogeneity of the coal pore structure. PMID:24955407

  2. Quantifying Abdominal Adipose Tissue and Thigh Muscle Volume and Hepatic Proton Density Fat Fraction: Repeatability and Accuracy of an MR Imaging-based, Semiautomated Analysis Method.

    PubMed

    Middleton, Michael S; Haufe, William; Hooker, Jonathan; Borga, Magnus; Dahlqvist Leinhard, Olof; Romu, Thobias; Tunón, Patrik; Hamilton, Gavin; Wolfson, Tanya; Gamst, Anthony; Loomba, Rohit; Sirlin, Claude B

    2017-05-01

    Purpose To determine the repeatability and accuracy of a commercially available magnetic resonance (MR) imaging-based, semiautomated method to quantify abdominal adipose tissue and thigh muscle volume and hepatic proton density fat fraction (PDFF). Materials and Methods This prospective study was institutional review board-approved and HIPAA compliant. All subjects provided written informed consent. Inclusion criteria were age of 18 years or older and willingness to participate. The exclusion criterion was contraindication to MR imaging. Three-dimensional T1-weighted dual-echo body-coil images were acquired three times. Source images were reconstructed to generate water and calibrated fat images. Abdominal adipose tissue and thigh muscle were segmented, and their volumes were estimated by using a semiautomated method and, as a reference standard, a manual method. Hepatic PDFF was estimated by using a confounder-corrected chemical shift-encoded MR imaging method with hybrid complex-magnitude reconstruction and, as a reference standard, MR spectroscopy. Tissue volume and hepatic PDFF intra- and interexamination repeatability were assessed by using intraclass correlation and coefficient of variation analysis. Tissue volume and hepatic PDFF accuracy were assessed by means of linear regression against the respective reference standards. Results Adipose and thigh muscle tissue volumes of 20 subjects (18 women; age range, 25-76 years; body mass index range, 19.3-43.9 kg/m^2) were estimated by using the semiautomated method. Intra- and interexamination intraclass correlation coefficients were 0.996-0.998, and coefficients of variation were 1.5%-3.6%. For hepatic MR imaging PDFF, intra- and interexamination intraclass correlation coefficients were greater than or equal to 0.994, and coefficients of variation were less than or equal to 7.3%.
In the regression analyses of manual versus semiautomated volumes and of spectroscopy versus MR imaging PDFF, slopes and intercepts were close to the identity line, and coefficients of determination (R^2) ranged from 0.744 to 0.994. Conclusion This MR imaging-based, semiautomated method provides high repeatability and accuracy for estimating abdominal adipose tissue and thigh muscle volumes and hepatic PDFF. © RSNA, 2017.
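As an illustration of the repeatability metric, a within-subject coefficient of variation can be computed as below; the exact CV convention used in the paper is not stated, so this particular definition is an assumption:

```python
import numpy as np

def repeatability_cv(measurements):
    """Within-subject coefficient of variation (%) across repeated
    examinations: per-subject sample SD divided by per-subject mean,
    averaged over subjects. One common convention among several."""
    m = np.asarray(measurements, dtype=float)   # shape (subjects, repeats)
    cv = m.std(axis=1, ddof=1) / m.mean(axis=1)
    return float(100.0 * cv.mean())
```

Identical repeated measurements give 0%, and a subject measured at 9 and then 11 gives 100·√2/10 ≈ 14.1%.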

  3. The neural bases for valuing social equality.

    PubMed

    Aoki, Ryuta; Yomogida, Yukihito; Matsumoto, Kenji

    2015-01-01

    The neural basis of how humans value and pursue social equality has become a major topic in social neuroscience research. Although recent studies have identified a set of brain regions and possible mechanisms that are involved in the neural processing of equality of outcome between individuals, how the human brain processes equality of opportunity remains unknown. In this review article, first we describe the importance of the distinction between equality of outcome and equality of opportunity, which has been emphasized in philosophy and economics. Next, we discuss possible approaches for empirical characterization of human valuation of equality of opportunity vs. equality of outcome. Understanding how these two concepts are distinct and interact with each other may provide a better explanation of complex human behaviors concerning fairness and social equality. Copyright © 2014 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.

  4. Comparison of NRZ and duo-binary format in adaptive equalization assisted 10G-optics based 25G-EPON

    NASA Astrophysics Data System (ADS)

    Xia, Junqi; Li, Zhengxuan; Li, Yingchun; Xu, Tingting; Chen, Jian; Song, Yingxiong; Wang, Min

    2018-03-01

    We investigate and compare the requirements of FFE/DFE-based adaptive equalization techniques for NRZ and duobinary 25-Gb/s transmission, two of the most promising schemes for 25G-EPON. A 25-Gb/s transmission system based on 10G optical transceivers is demonstrated, and the performance of FFE alone and of combined FFE and DFE with different numbers of taps is compared for the two modulation formats. The FFE/DFE-based duobinary receiver shows better performance than the NRZ receiver. For the duobinary receiver, only a 13-tap FFE is needed in the back-to-back case, and the combination of a 17-tap FFE and a 5-tap DFE achieves a sensitivity of -23.45 dBm over 25 km transmission, which is ∼0.6 dB better than the best performance of NRZ equalization. In addition, the training sequence length required for FFE/DFE-based adaptive equalization is verified. Experimental results show that a training length of 400 symbols is optimal for both modulation formats, which permits a small packet preamble in upstream burst-mode transmission.
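A feed-forward equalizer of the kind compared here is typically adapted with the LMS rule on a known training sequence. A minimal baseband sketch; the 3-tap ISI channel, step size, and training length are illustrative assumptions, not the paper's measured link:

```python
import numpy as np

def lms_ffe(received, training, n_taps=13, mu=0.01):
    """Train a feed-forward equalizer (FFE) with the LMS update rule
    against a known training sequence."""
    w = np.zeros(n_taps)
    pad = np.concatenate([np.zeros(n_taps - 1), received])
    for n in range(len(training)):
        x = pad[n:n + n_taps][::-1]     # current tap delay line
        e = training[n] - w @ x         # error vs known symbol
        w += mu * e * x                 # LMS coefficient update
    return w

rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=4000)    # NRZ training symbols
channel = np.array([1.0, 0.45, 0.2])            # hypothetical ISI channel
received = np.convolve(symbols, channel)[:len(symbols)]

w = lms_ffe(received, symbols)                  # 13-tap FFE, as in the BtB case

# Re-run the trained filter over the whole sequence.
pad = np.concatenate([np.zeros(12), received])
equalized = np.array([w @ pad[n:n + 13][::-1] for n in range(len(symbols))])
```

After convergence the equalized output should track the transmitted symbols far more closely than the raw ISI-distorted waveform; a DFE stage would additionally feed back past decisions.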

  5. The effect of teaching medical ethics on medical students' moral reasoning.

    PubMed

    Self, D J; Wolinsky, F D; Baldwin, D C

    1989-12-01

    A study assessed the effect of incorporating medical ethics into the medical curriculum and the relative effects of two methods of implementing that curriculum, namely, lecture and case-study discussions. Results indicate a statistically significant increase (p less than or equal to .0001) in the level of moral reasoning of students exposed to the medical ethics course, regardless of format. Moreover, the unadjusted posttest scores indicated that the case-study method was significantly (p less than or equal to .03) more effective than the lecture method in increasing students' level of moral reasoning. When adjustments were made for the pretest scores, however, this difference was not statistically significant (p less than or equal to .18). Regression analysis by linear panel techniques revealed that age, gender, undergraduate grade-point average, and scores on the Medical College Admission Test were not related to the changes in moral-reasoning scores. All of the variance that could be explained was due to the students' being in one of the two experimental groups. In comparison with the control group, the change associated with each experimental format was statistically significant (lecture, p less than or equal to .004; case study, p less than or equal to .0001). Various explanations for these findings and their implications are given.

  6. SU-E-J-16: Automatic Image Contrast Enhancement Based On Automatic Parameter Optimization for Radiation Therapy Setup Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qiu, J; Washington University in St Louis, St Louis, MO; Li, H. Harlod

    Purpose: In RT patient setup 2D images, tissues often cannot be seen well due to the lack of image contrast. Contrast enhancement features provided by image reviewing software, e.g. Mosaiq and ARIA, require manual selection of the image processing filters and parameters, and are thus inefficient and cannot be automated. In this work, we developed a novel method to automatically enhance 2D RT image contrast to allow automatic verification of patient daily setups as a prerequisite step of automatic patient safety assurance. Methods: The new method is based on contrast limited adaptive histogram equalization (CLAHE) and high-pass filtering algorithms. The most important innovation is to automatically select the optimal parameters by optimizing the image contrast. The image processing procedure includes the following steps: 1) background and noise removal, 2) high-pass filtering by subtracting the Gaussian-smoothed image, and 3) histogram equalization using the CLAHE algorithm. Three parameters were determined through an iterative optimization based on the interior-point constrained optimization algorithm: the Gaussian smoothing weighting factor, and the CLAHE algorithm block size and clip-limit parameters. The goal of the optimization is to maximize the entropy of the processed image. Results: A total of 42 RT images were processed. The results were visually evaluated by RT physicians and physicists. About 48% of the images processed by the new method were ranked as excellent. In comparison, only 29% and 18% of the images processed by the basic CLAHE algorithm and by basic window-level adjustment, respectively, were ranked as excellent. Conclusion: This new image contrast enhancement method is robust and automatic, and significantly outperforms the basic CLAHE algorithm and the manual window-level adjustment process that are currently used in clinical 2D image review software tools.
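As context for the baseline the authors improve upon, plain global histogram equalization can be sketched in a few lines; CLAHE adds tiling, clip limits, and (here) automatic parameter optimization on top of this idea. The image below is synthetic:

```python
import numpy as np

def equalize_histogram(img, n_bins=256):
    """Plain global histogram equalization: map each gray level through
    the normalized cumulative histogram, spreading a narrow intensity
    range over the full [0, 255] scale."""
    hist, bin_edges = np.histogram(img.ravel(), bins=n_bins, range=(0, 255))
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalize to [0, 1]
    out = np.interp(img.ravel(), bin_edges[:-1], 255.0 * cdf)
    return out.reshape(img.shape)

# Synthetic low-contrast image: gray values confined to [100, 120].
rng = np.random.default_rng(2)
img = rng.integers(100, 121, size=(32, 32)).astype(float)
out = equalize_histogram(img)
```

The equalized output occupies most of the display range, which is exactly why an unconstrained equalization can also amplify noise, motivating CLAHE's clip limit.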

  7. Equalizing Si photodetectors fabricated in standard CMOS processes

    NASA Astrophysics Data System (ADS)

    Guerrero, E.; Aguirre, J.; Sánchez-Azqueta, C.; Royo, G.; Gimeno, C.; Celma, S.

    2017-05-01

    This work presents a new continuous-time equalization approach to overcome the limited bandwidth of integrated CMOS photodetectors. It is based on a split-path topology that features completely decoupled controls for boosting and gain; this capability allows better tuning of the equalizer in comparison with other architectures based on the degenerated differential pair, which is particularly helpful for achieving a proper calibration of the system. The equalizer is intended to enhance the bandwidth of CMOS standard n-well/p-bulk differential photodiodes (DPDs), which falls below 10 MHz and represents a bottleneck in fully integrated optoelectronic interfaces that must meet the low-cost requirements of modern smart sensors. The proposed equalizer has been simulated in a 65 nm CMOS process and biased with a single supply voltage of 1 V; the bandwidth of the DPD has been increased up to 3 GHz.

  8. Gender equality in India hit by illiteracy, child marriages and violence: a hurdle for sustainable development

    PubMed Central

    Brahmapurkar, Kishor Parashramji

    2017-01-01

    Introduction Gender equality is fundamental to accelerating sustainable development. It is necessary to conduct gender analyses to identify sex- and gender-based differences in health risks. This study aimed to assess gender equality in terms of illiteracy, child marriages, and spousal violence among women, based on data from the National Family Health Survey 2015-16 (NFHS-4). Methods This was a descriptive analysis of secondary data on ever-married women of reproductive age from 15 states and 3 UTs in India covered in the first phase of NFHS-4. The gender gap related to literacy and child marriage was compared between urban and rural areas. Results In rural areas, all states except Meghalaya and Sikkim had a significantly higher percentage of illiteracy among women than among men. Bihar and Madhya Pradesh had more illiterate women, 53.7% and 48.6%, compared with men, 24.7% and 21.5%, respectively (P < 0.000). Child marriages were significantly more common in rural areas than in urban areas in the four most populated states. Conclusion There is a gender gap in illiteracy, with women more affected in rural areas, together with a higher prevalence of child marriages and poor utilization of maternal health services. Also, violence against women shows an upward trend, with a declining sex ratio at birth. PMID:29541324

  9. An adaptive enhancement algorithm for infrared video based on modified k-means clustering

    NASA Astrophysics Data System (ADS)

    Zhang, Linze; Wang, Jingqi; Wu, Wen

    2016-09-01

    In this paper, we propose a video enhancement algorithm to improve the output of an infrared camera. Video obtained by an infrared camera is sometimes very dark when there is no clear target. In this case, the infrared video is divided into frame images by frame extraction so that image enhancement can be carried out. The first frame image is divided into k sub-images by K-means clustering according to the gray intervals they occupy, and each sub-image is histogram-equalized according to the amount of information it contains; we use a method to solve the problem, arising in some cases, of final cluster centers lying close to each other. For the other frame images, the initial cluster centers are determined from the final cluster centers of the previous frame, and histogram equalization of each sub-image is carried out after image segmentation based on K-means clustering. Histogram equalization spreads the gray values of the image over the whole gray-level range, and the gray-level range of each sub-image is determined by its ratio of pixels to the frame image. Experimental results show that the algorithm can improve the contrast of infrared video in which a dim night scene lacks an obvious target, and can adaptively reduce, within a certain range, the negative effect of overexposed pixels.
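The per-frame step (1-D k-means on gray levels with evenly spread initial centers, then histogram equalization inside each cluster's gray interval) can be sketched as follows. This is a simplified stand-in using rank-based equalization and a toy image, not the authors' implementation:

```python
import numpy as np

def segment_and_equalize(img, k=3, n_iter=30):
    """Cluster gray levels with 1-D k-means (centers initialized evenly
    over the occupied gray range, which helps keep final centers apart),
    then histogram-equalize each cluster within its own gray interval."""
    v = img.ravel().astype(float)
    centers = np.linspace(v.min(), v.max(), k)
    for _ in range(n_iter):
        labels = np.argmin(np.abs(v[:, None] - centers[None, :]), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = v[labels == c].mean()
    out = np.empty_like(v)
    for c in range(k):
        sel = labels == c
        if not np.any(sel):
            continue
        lo, hi = v[sel].min(), v[sel].max()
        ranks = np.argsort(np.argsort(v[sel]))     # empirical CDF via ranks
        cdf = ranks / max(len(ranks) - 1, 1)
        out[sel] = lo + cdf * (hi - lo)            # stretch within the interval
    return out.reshape(img.shape), centers

# Toy frame with a dark region and a bright region (two gray intervals).
img = np.array([[10.0, 12.0, 11.0, 13.0],
                [200.0, 210.0, 205.0, 220.0]])
out, centers = segment_and_equalize(img, k=2)
```

Keeping each cluster's output inside the gray interval it occupies is what prevents the dark and bright regions from bleeding into each other after equalization.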

  10. Could gender equality in parental leave harm off-springs' mental health? a registry study of the Swedish parental/child cohort of 1988/89

    PubMed Central

    2012-01-01

    Introduction Mental ill-health among children and young adults is a growing public health problem and research into causes involves consideration of family life and gender practice. This study aimed at exploring the association between parents' degree of gender equality in childcare and children's mental ill-health. Methods The population consisted of Swedish parents and their firstborn child in 1988-1989 (N = 118 595 family units) and the statistical method was multiple logistic regression. Gender equality of childcare was indicated by the division of parental leave (1988-1990), and child mental ill-health was indicated by outpatient mental care (2001-2006) and drug prescription (2005-2008), for anxiety and depression. Results The overall finding was that boys with gender traditional parents (mother dominance in childcare) have lower risk of depression measured by outpatient mental care than boys with gender-equal parents, while girls with gender traditional and gender untraditional parents (father dominance in childcare) have lower risk of anxiety measured by drug prescription than girls with gender-equal parents. Conclusions This study suggests that unequal parenting regarding early childcare, whether traditional or untraditional, is more beneficial for offspring's mental health than equal parenting. However, further research is required to confirm our findings and to explore the pathways through which increased gender equality may influence child health. PMID:22463683

  11. Dynamic Mesh Adaptation for Front Evolution Using Discontinuous Galerkin Based Weighted Condition Number Mesh Relaxation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greene, Patrick T.; Schofield, Samuel P.; Nourgaliev, Robert

    2016-06-21

    A new mesh smoothing method designed to cluster mesh cells near a dynamically evolving interface is presented. The method is based on weighted condition number mesh relaxation, with the weight function computed from a level set representation of the interface. The weight function is expressed as a Taylor-series-based discontinuous Galerkin projection, which makes the computation of the derivatives of the weight function needed during the condition number optimization process a trivial matter. For cases when a level set is not available, a fast method for generating a low-order level set from discrete cell-centered fields, such as a volume fraction or index function, is provided. Results show that the low-order level set works equally well for the weight function as the actual level set. Meshes generated for a number of interface geometries are presented, including cases with multiple level sets. Dynamic cases for moving interfaces are presented to demonstrate the method's potential usefulness to arbitrary Lagrangian-Eulerian (ALE) methods.

  12. Assessing network scale-up estimates for groups most at risk of HIV/AIDS: evidence from a multiple-method study of heavy drug users in Curitiba, Brazil.

    PubMed

    Salganik, Matthew J; Fazito, Dimitri; Bertoni, Neilane; Abdo, Alexandre H; Mello, Maeve B; Bastos, Francisco I

    2011-11-15

    One of the many challenges hindering the global response to the human immunodeficiency virus (HIV)/acquired immunodeficiency syndrome (AIDS) epidemic is the difficulty of collecting reliable information about the populations most at risk for the disease. Thus, the authors empirically assessed a promising new method for estimating the sizes of most at-risk populations: the network scale-up method. Using 4 different data sources, 2 of which were from other researchers, the authors produced 5 estimates of the number of heavy drug users in Curitiba, Brazil. The authors found that the network scale-up and generalized network scale-up estimators produced estimates 5-10 times higher than estimates made using standard methods (the multiplier method and the direct estimation method using data from 2004 and 2010). Given that equally plausible methods produced such a wide range of results, the authors recommend that additional studies be undertaken to compare estimates based on the scale-up method with those made using other methods. If scale-up-based methods routinely produce higher estimates, this would suggest that scale-up-based methods are inappropriate for populations most at risk of HIV/AIDS or that standard methods may tend to underestimate the sizes of these populations.
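The basic network scale-up arithmetic the comparison rests on takes the form N_H = N · Σy_i / Σd_i; a minimal sketch with made-up toy numbers (the study's generalized estimator adds corrections beyond this basic form):

```python
import numpy as np

def scale_up_estimate(known_in_group, network_sizes, population_size):
    """Basic network scale-up estimator: the hidden-population size is
    the total number of hidden-group members respondents report knowing,
    scaled up by the fraction of the whole population covered by their
    personal networks:
        N_H = N * (sum of y_i) / (sum of d_i)."""
    y = np.sum(known_in_group)   # reported alters in the hidden group
    d = np.sum(network_sizes)    # respondents' total personal network sizes
    return float(population_size * y / d)

# Toy illustration: 4 respondents in a city of 1,000,000.
estimate = scale_up_estimate([1, 0, 2, 1], [200, 250, 300, 250], 1_000_000)
```

Estimating each respondent's network size d_i is itself the hard part in practice, which is one source of the disagreement with multiplier-based estimates discussed above.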

  13. Digital Image Restoration Under a Regression Model - The Unconstrained, Linear Equality and Inequality Constrained Approaches

    DTIC Science & Technology

    1974-01-01

    Digital Image Restoration under a Regression Model: The Unconstrained, Linear Equality and Inequality Constrained Approaches. Nelson Delfino d'Avila Mascarenhas, Report 520, January 1974. A two-dimensional form adequately describes the linear model; a discretization is performed by using quadrature methods.

  14. A Capabilities Based Critique of Gutmann's Democratic Interpretation of Equal Educational Opportunity

    ERIC Educational Resources Information Center

    DeCesare, Tony

    2016-01-01

    One of Amy Gutmann's important achievements in "Democratic Education" is her development of a "democratic interpretation of equal educational opportunity." This standard of equality demands that "all educable children learn enough to participate effectively in the democratic process." In other words, Gutmann demands…

  15. Hyper-Ramsey spectroscopy of optical clock transitions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yudin, V. I.; Taichenachev, A. V.; Oates, C. W.

    2010-07-15

    We present nonstandard optical Ramsey schemes that use pulses individually tailored in duration, phase, and frequency to cancel spurious frequency shifts related to the excitation itself. In particular, the field shifts and their uncertainties can be radically suppressed (by two to four orders of magnitude) in comparison with the usual Ramsey method (using two equal pulses) as well as with single-pulse Rabi spectroscopy. Atom interferometers and optical clocks based on two-photon transitions, heavily forbidden transitions, or magnetically induced spectroscopy could significantly benefit from this method. In the latter case, these frequency shifts can be suppressed considerably below a fractional level of 10^-17. Moreover, our approach opens the door for high-precision optical clocks based on direct frequency comb spectroscopy.

  16. Sense and Avoid Safety Analysis for Remotely Operated Unmanned Aircraft in the National Airspace System. Version 5

    NASA Technical Reports Server (NTRS)

    Carreno, Victor

    2006-01-01

    This document describes a method to demonstrate that a UAS operating in the NAS can avoid collisions with an equivalent level of safety compared to a manned aircraft. The method is based on the calculation of a collision probability for a UAS, the calculation of a collision probability for a baseline manned aircraft, and the calculation of a risk ratio given by: Risk Ratio = P(collision_UAS)/P(collision_manned). A UAS achieves an equivalent level of safety for collision risk if the Risk Ratio is less than or equal to one. The probabilities of collision for the UAS and the manned aircraft are calculated by means of event/fault trees.

  17. Design of a 50/50 splitting ratio non-polarizing beam splitter based on the modal method with fused-silica transmission gratings

    NASA Astrophysics Data System (ADS)

    Zhao, Huajun; Yuan, Dairong; Ming, Hai

    2011-04-01

    The optical design of a beam splitter that has a 50/50 splitting ratio regardless of the polarization is presented. The non-polarizing beam splitter (NPBS) is based on fused-silica rectangular transmission gratings with high intensity tolerance. The modal method has been used to estimate the effective indices of the modes excited in the grating region for TE and TM polarizations. If the phase difference between the first two modes (modes 0 and 1) equals an odd multiple of π/2, the incident light is diffracted into the 0 and -1 orders with about 50% diffraction efficiency each, for both TM and TE polarizations.
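
    In the modal method, the phase difference accumulated by the two lowest modes over a groove depth d is commonly written as 2π(n0 − n1)d/λ; a small sketch of solving the odd-multiple-of-π/2 condition for d (the effective indices used below are illustrative assumptions, not values from the paper):

```python
import math

def groove_depth(n0: float, n1: float, wavelength: float, m: int = 0) -> float:
    """Depth d for which 2*pi*(n0 - n1)*d / wavelength = (2m + 1)*pi/2,
    i.e. the phase difference equals an odd multiple of pi/2."""
    return (2 * m + 1) * wavelength / (4.0 * (n0 - n1))

# Illustrative effective indices and wavelength (micrometers), not from the paper.
for m in range(3):
    print(round(groove_depth(1.45, 0.95, 0.8, m), 3))
```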

  18. Assessment of Commitment to Equal Opportunity Goals in the Military

    DTIC Science & Technology

    1988-09-30

    AN ASSESSMENT OF COMMITMENT TO EQUAL OPPORTUNITY GOALS IN THE MILITARY by Carl A. Bartling, Ph.D., Department of Psychology, Arkansas College, Batesville...Arkansas, for The Defense Equal Opportunity Management Institute, Patrick Air Force Base, Florida. United States Navy-ASEE 1988 Summer Faculty Research...Commitment to Equal Opportunity Goals in the Military (UNCLASSIFIED) 12. PERSONAL AUTHOR(S) Carl A. Bartling 13. TYPE OF REPORT 13b. TIME COVERED

  19. Proceedings of the Ship Production Symposium Held in Williamsburg, Virginia on November 1-4, 1993

    DTIC Science & Technology

    1993-11-01

    June 17, 1993. ‘FORAN V30, The Way to CIM from Conceptual Design in Shipbuilding,’ Senermar, Sener Sistemas Marines, S. A., Madrid, Spain. Welsh, M., J...by Crews and Hardrath (7) in the “Companion Specimen Method - Equal Deformation Equal Life Concept.” The main assumption in their method is that the... “Companion Specimen Method,” Experimental Mechanics, V. 23, pp. 313-320, 1966. ASTM E606-80, “Standard Recommended Practice for Constant-Amplitude Low Cycle

  20. Effects of exposure equalization on image signal-to-noise ratios in digital mammography: A simulation study with an anthropomorphic breast phantom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu Xinming; Lai Chaojen; Whitman, Gary J.

    Purpose: The scan equalization digital mammography (SEDM) technique combines slot scanning and exposure equalization to improve the low-contrast performance of digital mammography in dense tissue areas. In this study, full-field digital mammography (FFDM) images of an anthropomorphic breast phantom acquired with an anti-scatter grid at various exposure levels were superimposed to simulate SEDM images and investigate the improvement of low-contrast performance as quantified by primary signal-to-noise ratios (PSNRs). Methods: We imaged an anthropomorphic breast phantom (Gammex 169 "Rachel," Gammex RMI, Middleton, WI) at various exposure levels using a FFDM system (Senographe 2000D, GE Medical Systems, Milwaukee, WI). The exposure equalization factors were computed based on a standard FFDM image acquired in the automatic exposure control (AEC) mode. The equalized image was simulated and constructed by superimposing a selected set of FFDM images acquired at 2, 1, 1/2, 1/4, 1/8, 1/16, and 1/32 times the exposure level of the standard AEC-timed technique (125 mAs), using the equalization factors computed for each region. Finally, the equalized image was renormalized regionally with the exposure equalization factors to result in an appearance similar to that of standard digital mammography. Two sets of FFDM images were acquired to allow two identically, but independently, formed equalized images to be subtracted from each other to estimate the noise levels. Similarly, two identically but independently acquired standard FFDM images were subtracted to estimate the noise levels. Corrections were applied to remove the excess system noise accumulated during image superimposition in forming the equalized image. PSNRs over the compressed area of the breast phantom were computed and used to quantitatively study the effects of exposure equalization on low-contrast performance in digital mammography.
Results: We found that the highest achievable PSNR improvement factor was 1.89 for the anthropomorphic breast phantom used in this study. The overall PSNRs were measured to be 79.6 for FFDM imaging and 107.6 for the simulated SEDM imaging on average in the compressed area of the breast phantom, resulting in an average PSNR improvement of ~35% with exposure equalization. We also found that the PSNRs appeared to be largely uniform with exposure equalization; the standard deviations of the PSNRs were estimated to be 10.3 and 7.9 for FFDM imaging and the simulated SEDM imaging, respectively. The average glandular dose for SEDM was estimated to be 212.5 mrad, ~34% lower than that of standard AEC-timed FFDM (323.8 mrad) as a result of exposure equalization over the entire breast phantom. Conclusions: Exposure equalization was found to substantially improve image PSNRs in dense tissue regions and to result in more uniform image PSNRs. This improvement may lead to better low-contrast performance in detecting and visualizing soft tissue masses and microcalcifications in dense tissue areas for breast imaging tasks.
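
    The SNR gain from superimposing independently acquired exposures follows Poisson counting statistics; a toy simulation (not the authors' code, and the count levels are arbitrary) illustrating that summing k independent exposures improves SNR by roughly √k:

```python
import numpy as np

rng = np.random.default_rng(0)
mean_counts = 100.0    # mean detected counts per pixel in one exposure (arbitrary)
n_pixels = 100_000

def snr_of_sum(n_exposures: int) -> float:
    # Superimpose n independent Poisson-noise exposures of the same flat region.
    image = rng.poisson(mean_counts, size=(n_exposures, n_pixels)).sum(axis=0)
    return float(image.mean() / image.std())

# For Poisson noise, SNR = sqrt(mean counts), so summing k exposures gains sqrt(k).
print(round(snr_of_sum(1)))   # ~10
print(round(snr_of_sum(4)))   # ~20
```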

  1. Estimating the Effect of Changes in Criterion Score Reliability on the Power of the "F" Test of Equality of Means

    ERIC Educational Resources Information Center

    Feldt, Leonard S.

    2011-01-01

    This article presents a simple, computer-assisted method of determining the extent to which increases in reliability increase the power of the "F" test of equality of means. The method uses a derived formula that relates the changes in the reliability coefficient to changes in the noncentrality of the relevant "F" distribution. A readily available…

  2. [Interlaboratory Study on Evaporation Residue Test for Food Contact Products (Report 1)].

    PubMed

    Ohno, Hiroyuki; Mutsuga, Motoh; Abe, Tomoyuki; Abe, Yutaka; Amano, Homare; Ishihara, Kinuyo; Ohsaka, Ikue; Ohno, Haruka; Ohno, Yuichiro; Ozaki, Asako; Kakihara, Yoshiteru; Kobayashi, Hisashi; Sakuragi, Hiroshi; Shibata, Hiroshi; Shirono, Katsuhiro; Sekido, Haruko; Takasaka, Noriko; Takenaka, Yu; Tajima, Yoshiyasu; Tanaka, Aoi; Tanaka, Hideyuki; Tonooka, Hiroyuki; Nakanishi, Toru; Nomura, Chie; Haneishi, Nahoko; Hayakawa, Masato; Miura, Toshihiko; Yamaguchi, Miku; Watanabe, Kazunari; Sato, Kyoko

    2018-01-01

    An interlaboratory study was performed to evaluate the equivalence between an official method and a modified method of evaporation residue test using three food-simulating solvents (water, 4% acetic acid and 20% ethanol), based on the Japanese Food Sanitation Law for food contact products. Twenty-three laboratories participated, and tested the evaporation residues of nine test solutions as blind duplicates. For evaporation, a water bath was used in the official method, and a hot plate in the modified method. In most laboratories, the test solutions were heated until just prior to evaporation to dryness, and then allowed to dry under residual heat. Statistical analysis revealed that there was no significant difference between the two methods, regardless of the heating equipment used. Accordingly, the modified method provides performance equal to the official method, and is available as an alternative method.

  3. Absolute quantification of DNA methylation using microfluidic chip-based digital PCR.

    PubMed

    Wu, Zhenhua; Bai, Yanan; Cheng, Zule; Liu, Fangming; Wang, Ping; Yang, Dawei; Li, Gang; Jin, Qinghui; Mao, Hongju; Zhao, Jianlong

    2017-10-15

    Hypermethylation of CpG islands in the promoter region of many tumor suppressor genes downregulates their expression and as a result promotes tumorigenesis. Therefore, detection of DNA methylation status is a convenient diagnostic tool for cancer detection. Here, we report a novel method for the integrative detection of methylation by microfluidic chip-based digital PCR. This method relies on the methylation-sensitive restriction enzyme HpaII, which cleaves unmethylated DNA strands while keeping methylated ones intact. After HpaII treatment, the DNA methylation level is determined quantitatively by microfluidic chip-based digital PCR, with a lower limit of detection equal to 0.52%. To validate the applicability of this method, promoter methylation of two tumor suppressor genes (PCDHGB6 and HOXA9) was tested in 10 samples of early stage lung adenocarcinoma and their adjacent non-tumorous tissues. Consistency was observed between the analyses of these samples using our method and conventional bisulfite pyrosequencing. Combining high sensitivity and low cost, the microfluidic chip-based digital PCR method might provide a promising alternative for the detection of DNA methylation and early diagnosis of epigenetics-related diseases. Copyright © 2017 Elsevier B.V. All rights reserved.
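
    The quantitative step in digital PCR is typically a Poisson correction of the positive-partition fraction; a hedged sketch of how a methylation level could be derived from digested and undigested aliquots (the function names, partition counts, and workflow details are illustrative assumptions, not the authors' protocol):

```python
import math

def copies_per_partition(n_positive: int, n_total: int) -> float:
    """Poisson-corrected mean target copies per partition in digital PCR."""
    return -math.log(1.0 - n_positive / n_total)

def methylation_level(pos_treated: int, pos_untreated: int,
                      n_partitions: int) -> float:
    # After HpaII digestion only methylated strands remain amplifiable, so the
    # treated/untreated concentration ratio estimates the methylation fraction.
    return (copies_per_partition(pos_treated, n_partitions)
            / copies_per_partition(pos_untreated, n_partitions))

# Hypothetical partition counts: 60 of 10,000 positive after digestion,
# 9,000 of 10,000 positive without digestion.
print(f"{methylation_level(60, 9000, 10000):.2%}")  # ~0.26%
```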

  4. 29 CFR 1620.6 - Coverage is not based on amount of covered activity.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 4 2010-07-01 2010-07-01 false Coverage is not based on amount of covered activity. 1620.6 Section 1620.6 Labor Regulations Relating to Labor (Continued) EQUAL EMPLOYMENT OPPORTUNITY COMMISSION THE EQUAL PAY ACT § 1620.6 Coverage is not based on amount of covered activity. The FLSA makes no...

  5. 29 CFR 1620.20 - Pay differentials claimed to be based on extra duties.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 4 2010-07-01 2010-07-01 false Pay differentials claimed to be based on extra duties. 1620.20 Section 1620.20 Labor Regulations Relating to Labor (Continued) EQUAL EMPLOYMENT OPPORTUNITY COMMISSION THE EQUAL PAY ACT § 1620.20 Pay differentials claimed to be based on extra duties. Additional...

  6. Displacement sensor based on intra-cavity tuning of dual-frequency gas laser

    NASA Astrophysics Data System (ADS)

    Niu, Haisha; Niu, Yanxiong; Liu, Ning; Li, Jiyang

    2018-01-01

    A nanometer-resolution displacement measurement instrument based on a tunable-cavity frequency-splitting method is presented. One beam is split into two orthogonally polarized beams when an anisotropic element is inserted in the cavity. The two beams, with a fixed frequency difference, are modulated by the movement of the reflection mirror. The behavior of the power tuning curves relating the total output to the two orthogonally polarized beams is studied, and based on it a method is proposed that splits one tuning cycle into four equal parts, each part corresponding to one-eighth wavelength of displacement. A laser feedback interferometer (LFI) and a piezoelectric ceramic are connected in series to the sensor head to calibrate displacements of less than one-eighth wavelength. The displacement sensor affords a measurement range of 20 mm with a resolution of 6.93 nm.
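
    The coarse eighth-wavelength counting logic can be sketched as follows (a He-Ne wavelength of 632.8 nm is assumed for the dual-frequency gas laser; the fine calibration path via the LFI is not modeled):

```python
HE_NE_WAVELENGTH_NM = 632.8   # assumed lasing wavelength of the gas laser

def coarse_displacement_nm(parts_counted: int) -> float:
    # One tuning cycle is split into four equal parts; each part corresponds
    # to one-eighth wavelength of mirror displacement (lambda/2 per full cycle).
    return parts_counted * HE_NE_WAVELENGTH_NM / 8.0

# 253 counted parts correspond to roughly 20 um of travel.
print(round(coarse_displacement_nm(253), 1))  # 20012.3 (nm)
```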

  7. Choosing a reliability inspection plan for interval censored data

    DOE PAGES

    Lu, Lu; Anderson-Cook, Christine Michaela

    2017-04-19

    Reliability test plans are important for producing precise and accurate assessments of reliability characteristics. This paper explores different strategies for choosing between possible inspection plans for interval censored data given a fixed testing timeframe and budget. A new general cost structure is proposed for guiding precise quantification of the total cost of an inspection test plan. Multiple summaries of reliability are considered and compared as the criteria for choosing the best plans using an easily adapted method. Different cost structures and representative true underlying reliability curves demonstrate how to assess different strategies given the logistical constraints and nature of the problem. Results show several general patterns exist across a wide variety of scenarios. Given a fixed total cost, plans that inspect more units with less frequency based on equally spaced time points are favored due to the ease of implementation and consistently good performance across a large number of case study scenarios. Plans with inspection times chosen based on equally spaced probabilities offer improved reliability estimates for the shape of the distribution, mean lifetime, and failure time for a small fraction of the population, but only for applications with high infant mortality rates. The paper uses a Monte Carlo simulation based approach in addition to the common evaluation based on the asymptotic variance and offers comparisons and recommendations for different applications with different objectives. Additionally, the paper outlines a variety of different reliability metrics to use as criteria for optimization, presents a general method for evaluating different alternatives, and provides case study results for different common scenarios.
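
    The two inspection-time spacings compared in the paper can be illustrated with an assumed Weibull lifetime model (the scale and shape values below are placeholders, not fitted parameters from the study):

```python
import math

ETA, BETA = 1000.0, 1.5    # assumed Weibull scale/shape, not fitted values

def weibull_quantile(p: float) -> float:
    """Time t satisfying F(t) = p for the assumed Weibull lifetime model."""
    return ETA * (-math.log(1.0 - p)) ** (1.0 / BETA)

def equally_spaced_times(t_end: float, k: int) -> list[float]:
    return [t_end * i / k for i in range(1, k + 1)]

def equally_spaced_probabilities(t_end: float, k: int) -> list[float]:
    # Place inspections at quantiles that split the failure probability
    # accumulated by t_end into k equal increments.
    p_end = 1.0 - math.exp(-((t_end / ETA) ** BETA))
    return [weibull_quantile(p_end * i / k) for i in range(1, k + 1)]

print([round(t) for t in equally_spaced_times(800.0, 4)])          # [200, 400, 600, 800]
print([round(t) for t in equally_spaced_probabilities(800.0, 4)])  # first inspection later
```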


  9. Differentiation of constrictive pericarditis from restrictive cardiomyopathy: the case for high-resolution dynamic tomographic imaging

    NASA Astrophysics Data System (ADS)

    Weiss, Robert M.; Otoadese, Eramosele A.; Oren, Ron M.

    1995-05-01

    The syndrome of constrictive pericarditis (CP) presents a diagnostic challenge to the clinician. This study was undertaken to determine whether cine computed tomography (CT), a cardiac imaging technique with excellent temporal and spatial resolution, can reliably demonstrate the unique abnormalities of pericardial anatomy and ventricular physiology present in patients with this condition. A second goal of this study was to determine whether the presence of diseased thickened pericardium, by itself, imparts cardiac impairment due to abnormalities of ventricular diastolic function. Methods: Twelve patients with CP suspected clinically, in whom invasive hemodynamic study was consistent with the diagnosis of CP, underwent cine CT. They were subdivided into Group 1 (CP, N = 5) and Group 2 (no CP, N = 7) based on histopathologic evaluation of tissue obtained at the time of surgery or autopsy. A third group consisted of asymptomatic patients with incidentally discovered thickened pericardium at the time of cine CT scanning: Group 3 (ThP, N = 7). Group 4 (Nl, N = 7) consisted of healthy volunteer subjects. Results: Pericardial thickness measurements with cine CT clearly distinguished Group 1 (mean = 10 ± 2 mm) from Group 2 (mean = 2 ± 1 mm), with a diagnostic accuracy of 100% compared to histopathological findings. In addition, patients in Group 1 had significantly more brisk early diastolic filling of both left and right ventricles than those in Group 2, which clearly distinguished all patients with, from all patients without, CP. Patients in Group 3 had pericardial thicknesses similar to those in Group 1 (mean = 9 ± 1 mm, p = NS), but had patterns of diastolic ventricular filling that were nearly identical to Group 4 (Nl). Conclusions: The abnormalities of anatomy and ventricular function present in the syndrome of constrictive pericarditis are clearly and decisively identified by cine CT. 
This allows a reliable distinction between patients with constrictive pericarditis and those with cardiomyopathy. The presence of diseased thickened pericardium does not by itself impart impairment of ventricular diastolic function. Thus, definitive diagnosis of constrictive pericarditis requires demonstration of both abnormal anatomy and physiology.

  10. Adaptive frequency-domain equalization in digital coherent optical receivers.

    PubMed

    Faruk, Md Saifuddin; Kikuchi, Kazuro

    2011-06-20

    We propose a novel frequency-domain adaptive equalizer in digital coherent optical receivers, which can reduce the computational complexity of the conventional time-domain adaptive equalizer based on finite-impulse-response (FIR) filters. The proposed equalizer can operate on the input sequence sampled by free-running analog-to-digital converters (ADCs) at the rate of two samples per symbol; therefore, the arbitrary initial sampling phase of the ADCs can be adjusted so that the best symbol-spaced sequence is produced. The equalizer can also be configured in the butterfly structure, which enables demultiplexing of polarization tributaries apart from equalization of linear transmission impairments. The performance of the proposed equalization scheme is verified by 40-Gbit/s dual-polarization quadrature phase-shift keying (QPSK) transmission experiments.
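
    A minimal single-polarization sketch of the core idea, frequency-domain equalization with one complex tap per bin, assuming a known channel and a circular-convolution model (the paper's equalizer is adaptive and uses a butterfly structure for polarization demultiplexing, which this sketch omits):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
constellation = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)  # QPSK
symbols = rng.choice(constellation, size=n)

h = np.array([0.9, 0.3 + 0.2j, 0.1])             # illustrative channel impulse response
H = np.fft.fft(h, n)
received = np.fft.ifft(np.fft.fft(symbols) * H)  # circular-convolution channel model

# One complex tap per frequency bin (zero-forcing inverse); an adaptive
# version would instead update these per-bin taps with an LMS-style rule.
equalized = np.fft.ifft(np.fft.fft(received) / H)
print(np.allclose(equalized, symbols))  # True
```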

  11. METHOD OF OPERATING NUCLEAR REACTORS

    DOEpatents

    Untermyer, S.

    1958-10-14

    A method is presented for obtaining enhanced utilization of natural uranium in heavy-water-moderated nuclear reactors by charging the reactor with an equal number of fuel elements formed of natural uranium and of fuel elements formed of uranium depleted in U-235 to the extent that the combination will just support a chain reaction. The reactor is operated until the rate of burnup of plutonium equals its rate of production, the fuel elements are processed to recover plutonium, the depleted uranium is discarded, and the remaining uranium is formed into fuel elements. These fuel elements are charged into a reactor along with an equal number of fuel elements formed of uranium depleted in U-235 to the extent that the combination will just support a chain reaction, and reuse of the uranium is continued as aforesaid until it will no longer support a chain reaction when combined with an equal quantity of natural uranium.

  12. Scandinavian Approaches to Gender Equality in Academia: A Comparative Study

    ERIC Educational Resources Information Center

    Nielsen, Mathias Wullum

    2017-01-01

    This study investigates how Denmark, Norway, and Sweden approach issues of gender equality in research differently. Based on a comparative document analysis of gender equality activities in six Scandinavian universities, together with an examination of the legislative and political frameworks surrounding these activities, the article provides new…

  13. Integrating the ACR Appropriateness Criteria Into the Radiology Clerkship: Comparison of Didactic Format and Group-Based Learning.

    PubMed

    Stein, Marjorie W; Frank, Susan J; Roberts, Jeffrey H; Finkelstein, Malka; Heo, Moonseong

    2016-05-01

    The aim of this study was to determine whether group-based or didactic teaching is more effective to teach ACR Appropriateness Criteria to medical students. An identical pretest, posttest, and delayed multiple-choice test was used to evaluate the efficacy of the two teaching methods. Descriptive statistics comparing test scores were obtained. On the posttest, the didactic group gained 12.5 points (P < .0001), and the group-based learning students gained 16.3 points (P < .0001). On the delayed test, the didactic group gained 14.4 points (P < .0001), and the group-based learning students gained 11.8 points (P < .001). The gains in scores on both tests were statistically significant for both groups. However, the differences in scores were not statistically significant comparing the two educational methods. Compared with didactic lectures, group-based learning is more enjoyable, time efficient, and equally efficacious. The choice of educational method can be individualized for each institution on the basis of group size, time constraints, and faculty availability. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  14. Finger vein recognition based on personalized weight maps.

    PubMed

    Yang, Gongping; Xiao, Rongyang; Yin, Yilong; Yang, Lu

    2013-09-10

    Finger vein recognition is a promising biometric recognition technology, which verifies identities via the vein patterns in the fingers. Binary pattern based methods were thoroughly studied in order to cope with the difficulties of extracting the blood vessel network. However, current binary pattern based finger vein matching methods treat every bit of the feature codes derived from different images of various individuals as equally important and assign the same weight value to them. In this paper, we propose a finger vein recognition method based on personalized weight maps (PWMs). The different bits have different weight values according to their stabilities in a certain number of training samples from an individual. First, we present the concept of the PWM, and then propose the finger vein recognition framework, which mainly consists of preprocessing, feature extraction, and matching. Finally, we design extensive experiments to evaluate the effectiveness of our proposal. Experimental results show that PWM achieves not only better performance, but also high robustness and reliability. In addition, PWM can be used as a general framework for binary pattern based recognition.
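
    The abstract does not give the weighting formula; one plausible realization of a stability-based personalized weight map and a weighted matching distance (the weighting rule and all names below are assumptions for illustration, not the authors' definitions):

```python
import numpy as np

def weight_map(training_codes: np.ndarray) -> np.ndarray:
    # Bit stability across one individual's training samples: bits that agree
    # in most samples get weights near 1, unstable bits get weights near 0.
    p_one = training_codes.mean(axis=0)
    return np.abs(2.0 * p_one - 1.0)

def weighted_distance(code_a: np.ndarray, code_b: np.ndarray,
                      weights: np.ndarray) -> float:
    """Normalized weighted Hamming distance between two binary feature codes."""
    return float(np.sum(weights * (code_a != code_b)) / np.sum(weights))

rng = np.random.default_rng(2)
train = rng.integers(0, 2, size=(10, 256))   # 10 training codes, 256 bits each
w = weight_map(train)
probe = train[0]
print(weighted_distance(probe, probe, w))      # 0.0
print(weighted_distance(probe, 1 - probe, w))  # 1.0
```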


  16. Improving the convergence rate in affine registration of PET and SPECT brain images using histogram equalization.

    PubMed

    Salas-Gonzalez, D; Górriz, J M; Ramírez, J; Padilla, P; Illán, I A

    2013-01-01

    A procedure to improve the convergence rate of affine registration methods for medical brain images when the images differ greatly from the template is presented. The methodology is based on histogram matching of the source images with respect to the reference brain template before proceeding with the affine registration. The preprocessed source brain images are spatially normalized to a template using a general affine model with 12 parameters. A sum of squared differences between the source images and the template is considered as the objective function, and a Gauss-Newton optimization algorithm is used to find the minimum of the cost function. Using histogram equalization as a preprocessing step improves the convergence rate of the affine registration algorithm for brain images, as we show in this work using SPECT and PET brain images.
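
    A self-contained sketch of the histogram-matching preprocessing step (quantile mapping of source intensities onto the template's distribution; the random arrays below merely stand in for brain images):

```python
import numpy as np

def match_histogram(source: np.ndarray, template: np.ndarray) -> np.ndarray:
    """Map source intensities so their empirical CDF matches the template's."""
    s_values, s_idx, s_counts = np.unique(source.ravel(),
                                          return_inverse=True, return_counts=True)
    t_values, t_counts = np.unique(template.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    t_cdf = np.cumsum(t_counts) / template.size
    # For each source quantile, look up the template intensity at that quantile.
    matched = np.interp(s_cdf, t_cdf, t_values)
    return matched[s_idx].reshape(source.shape)

rng = np.random.default_rng(3)
src = rng.normal(40.0, 5.0, size=(64, 64))    # stands in for a source brain image
tpl = rng.normal(120.0, 20.0, size=(64, 64))  # stands in for the reference template
out = match_histogram(src, tpl)
print(f"matched mean {out.mean():.1f} vs template mean {tpl.mean():.1f}")
```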

  17. Computing the Envelope for Stepwise-Constant Resource Allocations

    NASA Technical Reports Server (NTRS)

    Muscettola, Nicola; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Computing tight resource-level bounds is a fundamental problem in the construction of flexible plans with resource utilization. In this paper we describe an efficient algorithm that builds a resource envelope, the tightest possible such bound. The algorithm is based on transforming the temporal network of resource consuming and producing events into a flow network with nodes equal to the events and edges equal to the necessary predecessor links between events. A staged maximum flow problem on the network is then used to compute the time of occurrence and the height of each step of the resource envelope profile. Each stage has the same computational complexity of solving a maximum flow problem on the entire flow network. This makes this method computationally feasible and promising for use in the inner loop of flexible-time scheduling algorithms.
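
    The staged computation rests on a maximum-flow primitive over the event network; below is a compact Edmonds-Karp implementation run on a toy producer/consumer network (the network and capacities are illustrative, and this is only the flow primitive, not the full envelope algorithm):

```python
from collections import deque

def max_flow(capacity: dict, source, sink) -> int:
    """Edmonds-Karp maximum flow; capacity[u][v] is the capacity of edge u->v."""
    # Residual capacities, including zero-capacity reverse edges.
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow
        # Find the bottleneck along the path, then push flow along it.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

# Toy network: producer events feed consumer events through predecessor links.
caps = {"s": {"p1": 3, "p2": 2}, "p1": {"c1": 2, "c2": 2},
        "p2": {"c2": 2}, "c1": {"t": 2}, "c2": {"t": 3}}
print(max_flow(caps, "s", "t"))  # 5
```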

  18. The importance of activity-based methods in radiology and the technology that now makes this possible.

    PubMed

    Monge, Paul

    2006-01-01

    Activity-based methods serve as a dynamic process that has allowed many other industries to reduce and control their costs, increase productivity, and streamline their processes while improving product quality and service. The method could serve the healthcare industry in an equally beneficial way. Activity-based methods encompass both activity-based costing (ABC) and activity-based management (ABM). ABC is a cost management approach that links resource consumption to activities that an enterprise performs, and then assigns those activities and their associated costs to customers, products, or product lines. ABM uses the resource assignments derived in ABC so that operation managers can improve their departmental processes and workflows. There are three fundamental problems with traditional cost systems. First, traditional systems fail to reflect the underlying diversity of work taking place within an enterprise. Second, they use allocations that are, for the most part, arbitrary: single-step allocations fail to reflect the real work - the activities being performed and the associated resources actually consumed. Third, they provide only a cost number that, standing alone, does not offer any guidance on how to improve performance by lowering cost or enhancing throughput.

  19. Computerized In Vitro Test for Chemical Toxicity Based on Tetrahymena Swimming Patterns

    NASA Technical Reports Server (NTRS)

    Noever, David A.; Matsos, Helen C.; Cronise, Raymond J.; Looger, Loren L.; Relwani, Rachna A.; Johnson, Jacqueline U.

    1994-01-01

    An apparatus and a method for rapidly determining chemical toxicity have been evaluated as an alternative to the rabbit eye irritancy test (Draize). The toxicity monitor includes automated scoring of how motile biological cells (Tetrahymena pyriformis) slow down or otherwise change their swimming patterns in a hostile chemical environment. The method, called the motility assay (MA), runs for 30 s and was tested on 20 aqueous samples containing trace organics and salts to determine their chemical toxicity. With equal or better detection limits, the results compare favorably to in vivo animal tests of eye irritancy.

  20. Parameter analysis of a photonic crystal fiber with raised-core index profile based on effective index method

    NASA Astrophysics Data System (ADS)

    Seraji, Faramarz E.; Rashidi, Mahnaz; Khasheie, Vajieh

    2006-08-01

    Photonic crystal fibers (PCFs) with a stepped raised-core profile and one layer of equally spaced holes in the cladding are analyzed. Using the effective index method and considering a raised step refractive-index difference between the index of the core and the effective index of the cladding, we improve characteristic parameters such as the numerical aperture and V-parameter, and reduce the bending loss to about one tenth of that of a conventional PCF. Implementing such a structure in PCFs may be one step forward toward achieving low-loss PCFs for communication applications.

  1. Fine modeling of reinforced thermoplastic filament winding container

    NASA Astrophysics Data System (ADS)

    Duan, Chenghong; Huang, Jinhao; Wu, Liang; Luo, Xiangpeng

    2018-05-01

    Reinforced thermoplastic containers have been widely used because of their corrosion resistance and fatigue resistance. The characteristics of the liner and wound-layer materials and the different winding methods mean that a model obtained with the ordinary pressure-vessel modeling method does not reflect the actual situation of a reinforced thermoplastic container. In this paper, the thickness of the stratified winding was calculated based on the principle of constant total fiber volume and equal cross-sectional area. The ANSYS ACP module was used to build a refined model of the fully wound container and provide a reference for engineering simulation.

  2. The coupled three-dimensional wave packet approach to reactive scattering

    NASA Astrophysics Data System (ADS)

    Marković, Nikola; Billing, Gert D.

    1994-01-01

    A recently developed scheme for time-dependent reactive scattering calculations using three-dimensional wave packets is applied to the D+H2 system. The present method is an extension of a previously published semiclassical formulation of the scattering problem and is based on the use of hyperspherical coordinates. The convergence requirements are investigated by detailed calculations for total angular momentum J equal to zero and the general applicability of the method is demonstrated by solving the J=1 problem. The inclusion of the geometric phase is also discussed and its effect on the reaction probability is demonstrated.

  3. Calculation of transmission probability by solving an eigenvalue problem

    NASA Astrophysics Data System (ADS)

    Bubin, Sergiy; Varga, Kálmán

    2010-11-01

    The electron transmission probability in nanodevices is calculated by solving an eigenvalue problem. The eigenvalues are the transmission probabilities and the number of nonzero eigenvalues is equal to the number of open quantum transmission eigenchannels. The number of open eigenchannels is typically a few dozen at most, thus the computational cost amounts to the calculation of a few outer eigenvalues of a complex Hermitian matrix (the transmission matrix). The method is implemented on a real space grid basis providing an alternative to localized atomic orbital based quantum transport calculations. Numerical examples are presented to illustrate the efficiency of the method.
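
    In the Landauer picture this amounts to diagonalizing t†t, whose eigenvalues are the per-channel transmission probabilities; a small numerical sketch with a randomly generated (non-physical) transmission matrix standing in for a real device:

```python
import numpy as np

rng = np.random.default_rng(4)
t = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))  # stand-in matrix
t *= 0.3 / np.linalg.norm(t, 2)     # scale so every probability is <= 1

tt = t.conj().T @ t                 # Hermitian, positive semi-definite
probabilities = np.linalg.eigvalsh(tt)   # one eigenvalue per eigenchannel
open_channels = int(np.sum(probabilities > 1e-12))
total_transmission = probabilities.sum() # equals Tr(t^dagger t)
print(open_channels, round(float(total_transmission), 6))
```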

  4. Alcoholism detection in magnetic resonance imaging by Haar wavelet transform and back propagation neural network

    NASA Astrophysics Data System (ADS)

    Yu, Yali; Wang, Mengxia; Lima, Dimas

    2018-04-01

    In order to develop a novel alcoholism detection method, we proposed a magnetic resonance imaging (MRI)-based computer vision approach. We first use contrast equalization to increase the contrast of brain slices. Then, we perform Haar wavelet transform and principal component analysis. Finally, we use back propagation neural network (BPNN) as the classification tool. Our method yields a sensitivity of 81.71±4.51%, a specificity of 81.43±4.52%, and an accuracy of 81.57±2.18%. The Haar wavelet gives better performance than db4 wavelet and sym3 wavelet.

  5. On Organizing Quick Change-Over Mass Production

    NASA Astrophysics Data System (ADS)

    Petrushin, S. I.; Gubaidulina, R. H.; Gruby, S. V.; Nosirsoda, Sh C.

    2016-04-01

    The terms "type of production" and "coefficient of assigning operations" are analyzed. A new method of calculating the optimum production plan based on profit projections is suggested. We recommend using the cycle time values as initial data for designing and developing technology. On the basis of existing techniques used to convert productions, we suggest a new approach to production change-over in which the service life of manufacturing facilities equals the time to product obsolescence. The factors that maximize profits under this change-over method are indicated, with maximum profit being a condition for organizing quick change-over mass production.

  6. Beyond one-size-fits-all: Tailoring diversity approaches to the representation of social groups.

    PubMed

    Apfelbaum, Evan P; Stephens, Nicole M; Reagans, Ray E

    2016-10-01

    When and why do organizational diversity approaches that highlight the importance of social group differences (vs. equality) help stigmatized groups succeed? We theorize that social group members' numerical representation in an organization, compared with the majority group, influences concerns about their distinctiveness, and consequently, whether diversity approaches are effective. We combine laboratory and field methods to evaluate this theory in a professional setting, in which White women are moderately represented and Black individuals are represented in very small numbers. We expect that focusing on differences (vs. equality) will lead to greater performance and persistence among White women, yet less among Black individuals. First, we demonstrate that Black individuals report greater representation-based concerns than White women (Study 1). Next, we observe that tailoring diversity approaches to these concerns yields greater performance and persistence (Studies 2 and 3). We then manipulate social groups' perceived representation and find that highlighting differences (vs. equality) is more effective when groups' representation is moderate, but less effective when groups' representation is very low (Study 4). Finally, we content-code the diversity statements of 151 major U.S. law firms and find that firms that emphasize differences have lower attrition rates among White women, whereas firms that emphasize equality have lower attrition rates among racial minorities (Study 5). (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  7. Digital halftoning methods for selectively partitioning error into achromatic and chromatic channels

    NASA Technical Reports Server (NTRS)

    Mulligan, Jeffrey B.

    1990-01-01

    A method is described for reducing the visibility of artifacts arising in the display of quantized color images on CRT displays. The method is based on the differential spatial sensitivity of the human visual system to chromatic and achromatic modulations. Because the visual system has the highest spatial and temporal acuity for the luminance component of an image, a technique which will reduce luminance artifacts at the expense of introducing high-frequency chromatic errors is sought. A method based on controlling the correlations between the quantization errors in the individual phosphor images is explored. The luminance component is greatest when the phosphor errors are positively correlated, and is minimized when the phosphor errors are negatively correlated. The greatest effect of the correlation is obtained when the intensity quantization step sizes of the individual phosphors have equal luminances. For the ordered dither algorithm, a version of the method can be implemented by simply inverting the matrix of thresholds for one of the color components.
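    The threshold-inversion trick mentioned at the end of the abstract can be sketched in a few lines. This is an illustrative assumption-laden toy, not the authors' code: a standard 4x4 Bayer matrix dithers each channel, and the thresholds are inverted for one color component (green here, chosen arbitrarily) so that its quantization errors are negatively correlated with the others.

    ```python
    import numpy as np

    # Classic 4x4 Bayer ordered-dither thresholds, normalized to [0, 1).
    BAYER4 = np.array([[ 0,  8,  2, 10],
                       [12,  4, 14,  6],
                       [ 3, 11,  1,  9],
                       [15,  7, 13,  5]]) / 16.0

    def ordered_dither(channel, thresholds):
        """Binary ordered dither of a float image in [0, 1]."""
        h, w = channel.shape
        tiled = np.tile(thresholds, (h // 4 + 1, w // 4 + 1))[:h, :w]
        return (channel > tiled).astype(float)

    img = np.random.default_rng(1).uniform(size=(8, 8, 3))
    out = np.empty_like(img)
    out[..., 0] = ordered_dither(img[..., 0], BAYER4)        # red
    out[..., 1] = ordered_dither(img[..., 1], 1.0 - BAYER4)  # green: inverted thresholds
    out[..., 2] = ordered_dither(img[..., 2], BAYER4)        # blue
    ```

    With the inverted matrix, where red and blue round up, green tends to round down, pushing the error into the chromatic rather than the luminance channel.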

  8. An Improved Teaching-Learning-Based Optimization with the Social Character of PSO for Global Optimization.

    PubMed

    Zou, Feng; Chen, Debao; Wang, Jiangtao

    2016-01-01

    An improved teaching-learning-based optimization combined with the social character of PSO (TLBO-PSO), which considers the influence of the teacher's behavior on the students and the mean grade of the class, is proposed in this paper to find global solutions of function optimization problems. In this method, the teacher phase of TLBO is modified: the new position of an individual is determined by its old position, the mean position, and the best position of the current generation. The method overcomes the disadvantage that the evolution of the original TLBO might stop when the mean position of the students equals the position of the teacher. To decrease the computational cost of the algorithm, the process of removing duplicate individuals in the original TLBO is not adopted in the improved algorithm. Moreover, the probability of local convergence of the improved method is decreased by a mutation operator. The effectiveness of the proposed method is tested on benchmark functions, and the results are competitive with respect to some other methods.
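    A minimal sketch of a modified teacher phase of the kind the abstract describes, where each student's update uses its old position, the class mean, and the current best (teacher). The coefficients, random factors, and test function are assumptions for illustration, not the paper's exact formulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sphere(x):
        """Benchmark objective: sum of squares (minimum 0 at the origin)."""
        return np.sum(x ** 2, axis=-1)

    def teacher_phase(pop, fitness):
        """One teacher-phase step using old position, mean, and best position."""
        mean = pop.mean(axis=0)
        teacher = pop[np.argmin(fitness)]
        tf = rng.integers(1, 3)                      # teaching factor in {1, 2}
        r1, r2 = rng.random(pop.shape), rng.random(pop.shape)
        new_pop = pop + r1 * (teacher - tf * mean) + r2 * (teacher - pop)
        new_fit = sphere(new_pop)
        keep = new_fit < fitness                     # greedy selection
        pop[keep], fitness[keep] = new_pop[keep], new_fit[keep]
        return pop, fitness

    pop = rng.uniform(-5, 5, size=(20, 4))
    fit = sphere(pop)
    init_best = fit.min()
    for _ in range(50):
        pop, fit = teacher_phase(pop, fit)
    # Greedy selection guarantees the best fitness never worsens.
    ```

    The greedy `keep` mask is what prevents stagnation from degrading the population; a mutation operator, as in the paper, would be layered on top of this step.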

  9. Time-domain digital pre-equalization for band-limited signals based on receiver-side adaptive equalizers.

    PubMed

    Zhang, Junwen; Yu, Jianjun; Chi, Nan; Chien, Hung-Chang

    2014-08-25

    We theoretically and experimentally investigate a time-domain digital pre-equalization (DPEQ) scheme for bandwidth-limited optical coherent communication systems, which is based on feedback of channel characteristics from receiver-side blind and adaptive equalizers, such as the least-mean-squares (LMS) algorithm and the constant- and multi-modulus algorithms (CMA, MMA). Based on the proposed DPEQ scheme, we theoretically and experimentally study its performance under various channel conditions and channel-estimation resolutions, such as filtering bandwidth, tap length, and OSNR. Using a high-speed 64-GSa/s DAC together with the proposed DPEQ technique, we successfully synthesize band-limited 40-Gbaud signals in polarization-division multiplexed (PDM) quadrature phase shift keying (QPSK), 8-quadrature amplitude modulation (QAM), and 16-QAM formats, and significant improvements in both back-to-back and transmission BER performance are also demonstrated.

  10. 40 CFR 60.4213 - What test methods and other procedures must I use if I am an owner or operator of a stationary CI...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... displacement of greater than or equal to 30 liters per cylinder? 60.4213 Section 60.4213 Protection of... displacement of greater than or equal to 30 liters per cylinder? Owners and operators of stationary CI ICE with a displacement of greater than or equal to 30 liters per cylinder must conduct performance tests...

  11. 40 CFR 60.4213 - What test methods and other procedures must I use if I am an owner or operator of a stationary CI...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... displacement of greater than or equal to 30 liters per cylinder? 60.4213 Section 60.4213 Protection of... displacement of greater than or equal to 30 liters per cylinder? Owners and operators of stationary CI ICE with a displacement of greater than or equal to 30 liters per cylinder must conduct performance tests...

  12. 40 CFR 60.4213 - What test methods and other procedures must I use if I am an owner or operator of a stationary CI...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... displacement of greater than or equal to 30 liters per cylinder? 60.4213 Section 60.4213 Protection of... displacement of greater than or equal to 30 liters per cylinder? Owners and operators of stationary CI ICE with a displacement of greater than or equal to 30 liters per cylinder must conduct performance tests...

  13. 40 CFR 60.4213 - What test methods and other procedures must I use if I am an owner or operator of a stationary CI...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... displacement of greater than or equal to 30 liters per cylinder? 60.4213 Section 60.4213 Protection of... displacement of greater than or equal to 30 liters per cylinder? Owners and operators of stationary CI ICE with a displacement of greater than or equal to 30 liters per cylinder must conduct performance tests...

  14. 40 CFR 60.4213 - What test methods and other procedures must I use if I am an owner or operator of a stationary CI...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... displacement of greater than or equal to 30 liters per cylinder? 60.4213 Section 60.4213 Protection of... displacement of greater than or equal to 30 liters per cylinder? Owners and operators of stationary CI ICE with a displacement of greater than or equal to 30 liters per cylinder must conduct performance tests...

  15. Development of a New Paradigm for Analysis of Disdrometric Data

    NASA Astrophysics Data System (ADS)

    Larsen, Michael L.; Kostinski, Alexander B.

    2017-04-01

    A number of disdrometers currently on the market are able to characterize hydrometeors on a drop-by-drop basis, with an arrival timestamp associated with each arriving hydrometeor. This allows an investigator to parse a time series into disjoint intervals that contain equal numbers of drops, instead of the traditional subdivision into equal time intervals. Such a "fixed-N" partitioning of the data can provide several advantages over the traditional equal-time binning method, especially within the context of quantifying measurement uncertainty (which typically scales with the number of hydrometeors in each sample). An added bonus is the natural elimination of measurements that are devoid of all drops. This analysis method is investigated by utilizing data from a dense array of disdrometers located near Charleston, South Carolina, USA. Implications for the usefulness of this method in future studies are explored.
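    The "fixed-N" partitioning described above reduces to a simple grouping of the sorted per-drop timestamps. A minimal sketch (the timestamps and group size are made-up illustrative values):

    ```python
    def fixed_n_partition(timestamps, n):
        """Split drop arrival timestamps into consecutive groups of exactly
        n drops each (any incomplete tail group is dropped)."""
        ts = sorted(timestamps)
        return [ts[i:i + n] for i in range(0, len(ts) - n + 1, n)]

    # Eleven drop arrival times (seconds); groups of 4 drops each.
    drops = [0.1, 0.4, 0.5, 1.2, 1.3, 2.7, 3.1, 3.2, 3.3, 9.9, 10.0]
    groups = fixed_n_partition(drops, 4)
    print(len(groups))   # -> 2 complete groups of 4 drops
    print(groups[0])     # -> [0.1, 0.4, 0.5, 1.2]
    ```

    Note that each group spans a variable time interval (the first covers 1.1 s, the second 1.9 s), which is exactly the trade the method makes: fixed counting statistics, variable integration time.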

  16. Applied Chaos Level Test for Validation of Signal Conditions Underlying Optimal Performance of Voice Classification Methods.

    PubMed

    Liu, Boquan; Polce, Evan; Sprott, Julien C; Jiang, Jack J

    2018-05-17

    The purpose of this study is to introduce a chaos level test to evaluate linear and nonlinear voice type classification method performances under varying signal chaos conditions without subjective impression. Voice signals were constructed with differing degrees of noise to model signal chaos. Within each noise power, 100 Monte Carlo experiments were applied to analyze the output of jitter, shimmer, correlation dimension, and spectrum convergence ratio. The computational output of the 4 classifiers was then plotted against signal chaos level to investigate the performance of these acoustic analysis methods under varying degrees of signal chaos. A diffusive behavior detection-based chaos level test was used to investigate the performances of different voice classification methods. Voice signals were constructed by varying the signal-to-noise ratio to establish differing signal chaos conditions. Chaos level increased sigmoidally with increasing noise power. Jitter and shimmer performed optimally when the chaos level was less than or equal to 0.01, whereas correlation dimension was capable of analyzing signals with chaos levels of less than or equal to 0.0179. Spectrum convergence ratio demonstrated proficiency in analyzing voice signals with all chaos levels investigated in this study. The results of this study corroborate the performance relationships observed in previous studies and, therefore, demonstrate the validity of the validation test method. The presented chaos level validation test could be broadly utilized to evaluate acoustic analysis methods and establish the most appropriate methodology for objective voice analysis in clinical practice.

  17. A method for normalizing pathology images to improve feature extraction for quantitative pathology.

    PubMed

    Tam, Allison; Barker, Jocelyn; Rubin, Daniel

    2016-01-01

    With the advent of digital slide scanning technologies and the potential proliferation of large repositories of digital pathology images, many research studies can leverage these data for biomedical discovery and to develop clinical applications. However, quantitative analysis of digital pathology images is impeded by batch effects generated by varied staining protocols and staining conditions of pathological slides. To overcome this problem, this paper proposes a novel, fully automated stain normalization method to reduce batch effects and thus aid research in digital pathology applications. The method, intensity centering and histogram equalization (ICHE), normalizes a diverse set of pathology images by first scaling the centroids of the intensity histograms to a common point and then applying a modified version of contrast-limited adaptive histogram equalization. Normalization was performed on two datasets of digitized hematoxylin and eosin (H&E) slides of different tissue slices from the same lung tumor, and one immunohistochemistry dataset of digitized slides created by restaining one of the H&E datasets. The ICHE method was evaluated based on image intensity values, quantitative features, and the effect on downstream applications, such as computer-aided diagnosis. For comparison, three methods from the literature were reimplemented and evaluated using the same criteria. The authors found that ICHE not only improved performance compared with un-normalized images, but in most cases showed improvement compared with previous methods for correcting batch effects in the literature. ICHE may be a useful preprocessing step in a digital pathology image processing pipeline.
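    The two ICHE stages can be roughed out as follows. This is a simplified sketch, not the paper's implementation: plain global histogram equalization stands in for the modified CLAHE step, the target centroid value of 128 is an assumption, and the synthetic grayscale "slides" merely mimic a batch effect via shifted intensity distributions.

    ```python
    import numpy as np

    def center_intensity(img, target=128.0):
        """Stage 1: shift intensities so the image mean (histogram
        centroid) lands on a common target value."""
        shifted = img.astype(float) + (target - img.mean())
        return np.clip(shifted, 0, 255)

    def equalize(img):
        """Stage 2 (simplified): global histogram equalization of an
        8-bit grayscale image via the cumulative distribution function."""
        levels = img.astype(np.uint8)
        hist, _ = np.histogram(levels, bins=256, range=(0, 256))
        cdf = hist.cumsum() / hist.sum()
        return (cdf[levels] * 255).astype(np.uint8)

    rng = np.random.default_rng(0)
    # Three synthetic "slides" whose mean intensities differ (a batch effect).
    batch = [rng.normal(loc, 20, size=(64, 64)) for loc in (90, 140, 180)]
    normalized = [equalize(center_intensity(im)) for im in batch]
    means = [im.mean() for im in normalized]
    print(max(means) - min(means))   # centroids now nearly coincide
    ```

    After centering, the three images share a common intensity centroid, so the equalization step operates on comparable inputs across the batch.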

  18. Fast polarimetric dehazing method for visibility enhancement in HSI colour space

    NASA Astrophysics Data System (ADS)

    Zhang, Wenfei; Liang, Jian; Ren, Liyong; Ju, Haijuan; Bai, Zhaofeng; Wu, Zhaoxin

    2017-09-01

    Image haze removal has attracted much attention in the optics and computer vision fields in recent years due to its wide applications. In particular, fast and real-time dehazing methods are of significance. In this paper, we propose a fast dehazing method in the hue, saturation, and intensity (HSI) colour space based on the polarimetric imaging technique. We implement polarimetric dehazing in the intensity channel, and the colour distortion of the image is corrected using the white patch retinex method. This approach not only retains the capacity to restore detailed information, but also improves the efficiency of the polarimetric dehazing method. Comparison studies with state-of-the-art methods demonstrate that the proposed method obtains results of equal or better quality while running much faster. The proposed method is promising for real-time image and video haze removal applications.

  19. Advanced linear and nonlinear compensations for 16QAM SC-400G unrepeatered transmission system

    NASA Astrophysics Data System (ADS)

    Zhang, Junwen; Yu, Jianjun; Chien, Hung-Chang

    2018-02-01

    Digital signal processing (DSP) with both linear equalization and nonlinear compensation is studied in this paper for a single-carrier 400G system based on 65-GBaud 16-quadrature amplitude modulation (QAM) signals. The 16-QAM signals are generated and pre-processed with pre-equalization (Pre-EQ) and look-up-table (LUT) based pre-distortion (Pre-DT) at the transmitter (Tx) side. The implementation principle of training-based equalization and pre-distortion is presented with experimental studies. At the receiver (Rx) side, fiber-nonlinearity compensation based on digital backward propagation (DBP) is also utilized to further improve transmission performance. With joint LUT-based Pre-DT and DBP-based post-compensation to mitigate impairments from opto-electronic components and fiber nonlinearity, we demonstrate unrepeatered transmission of 1.6 Tb/s based on 4-lane 400G single-carrier PDM-16QAM over 205-km SSMF without a distributed amplifier.

  20. Ideal-observer detectability in photon-counting differential phase-contrast imaging using a linear-systems approach

    PubMed Central

    Fredenberg, Erik; Danielsson, Mats; Stayman, J. Webster; Siewerdsen, Jeffrey H.; Åslund, Magnus

    2012-01-01

    Purpose: To provide a cascaded-systems framework based on the noise-power spectrum (NPS), modulation transfer function (MTF), and noise-equivalent number of quanta (NEQ) for quantitative evaluation of differential phase-contrast imaging (Talbot interferometry) in relation to conventional absorption contrast under equal-dose, equal-geometry, and, to some extent, equal-photon-economy constraints. The focus is a geometry for photon-counting mammography. Methods: Phase-contrast imaging is a promising technology that may emerge as an alternative or adjunct to conventional absorption contrast. In particular, phase contrast may increase the signal-difference-to-noise ratio compared to absorption contrast because the difference in phase shift between soft-tissue structures is often substantially larger than the absorption difference. We have developed a comprehensive cascaded-systems framework to investigate Talbot interferometry, which is a technique for differential phase-contrast imaging. Analytical expressions for the MTF and NPS were derived to calculate the NEQ and a task-specific ideal-observer detectability index under assumptions of linearity and shift invariance. Talbot interferometry was compared to absorption contrast at equal dose, and using either a plane wave or a spherical wave in a conceivable mammography geometry. The impact of source size and spectrum bandwidth was included in the framework, and the trade-off with photon economy was investigated in some detail. Wave-propagation simulations were used to verify the analytical expressions and to generate example images. Results: Talbot interferometry inherently detects the differential of the phase, which led to a maximum in NEQ at high spatial frequencies, whereas the absorption-contrast NEQ decreased monotonically with frequency. 
Further, phase contrast detects differences in density rather than atomic number, and the optimal imaging energy was found to be a factor of 1.7 higher than for absorption contrast. Talbot interferometry with a plane wave increased detectability for 0.1-mm tumor and glandular structures by a factor of 3–4 at equal dose, whereas absorption contrast was the preferred method for structures larger than ∼0.5 mm. Microcalcifications are small, but differ from soft tissue in atomic number more than density, which is favored by absorption contrast, and Talbot interferometry was barely beneficial at all within the resolution limit of the system. Further, Talbot interferometry favored detection of “sharp” as opposed to “smooth” structures, and discrimination tasks by about 50% compared to detection tasks. The technique was relatively insensitive to spectrum bandwidth, whereas the projected source size was more important. If equal photon economy was added as a restriction, phase-contrast efficiency was reduced so that the benefit for detection tasks almost vanished compared to absorption contrast, but discrimination tasks were still improved close to a factor of 2 at the resolution limit. Conclusions: Cascaded-systems analysis enables comprehensive and intuitive evaluation of phase-contrast efficiency in relation to absorption contrast under requirements of equal dose, equal geometry, and equal photon economy. The benefit of Talbot interferometry was highly dependent on task, in particular detection versus discrimination tasks, and target size, shape, and material. Requiring equal photon economy weakened the benefit of Talbot interferometry in mammography. PMID:22957600

  1. High brightness laser-diode device emitting 160 watts from a 100 μm/NA 0.22 fiber.

    PubMed

    Yu, Junhong; Guo, Linui; Wu, Hualing; Wang, Zhao; Tan, Hao; Gao, Songxin; Wu, Deyong; Zhang, Kai

    2015-11-10

    A practical method for achieving a high-brightness, high-power fiber-coupled laser-diode device is demonstrated both by experiment and by ZEMAX software simulation. The device is obtained by a beam transformation system, free-space beam combining, and polarization beam combining based on a mini-bar laser-diode chip. Using this method, the fiber-coupled laser-diode module output power from a multimode fiber with 100 μm core diameter and 0.22 numerical aperture (NA) reaches 174 W, with a brightness of 14.2 MW/(cm2·sr). With this method, much wider applications of fiber-coupled laser diodes are anticipated.

  2. Determination of fluorine in organic compounds: Microcombustion method

    USGS Publications Warehouse

    Clark, H.S.

    1951-01-01

    A reliable and widely applicable means of determining fluorine in organic compounds has long been needed. Increased interest in this field of research in recent years has intensified the need. Fluorine in organic combinations may be determined by combustion at 900 °C in a quartz tube with a platinum catalyst, followed by an acid-base titration of the combustion products. Certain necessary precautions and known limitations are discussed in some detail. Milligram samples suffice, and the accuracy of the method is about that usually associated with the other halogen determinations. Use of this method has facilitated the work upon organic fluorine compounds in this laboratory and it should prove to be equally valuable to others.

  3. Methods of Improving Speech Intelligibility for Listeners with Hearing Resolution Deficit

    PubMed Central

    2012-01-01

    Methods developed for real-time time-scale modification (TSM) of the speech signal are presented. They are based on the non-uniform, speech-rate-dependent SOLA (Synchronous Overlap and Add) algorithm. The influence of the proposed methods on the intelligibility of speech was investigated for two separate groups of listeners, i.e., hearing-impaired children and elderly listeners. It was shown that for speech with an average rate equal to or higher than 6.48 vowels/s, all of the proposed methods have a statistically significant impact on the improvement of speech intelligibility for hearing-impaired children with reduced hearing resolution, and one of the proposed methods significantly improves comprehension of speech in the group of elderly listeners with reduced hearing resolution. PMID:23009662

  4. Evaluating the Coda Phase Delay Method for Determining Temperature Ratios in Windy Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albert, Sarah; Bowman, Daniel; Rodgers, Arthur

    2017-07-01

    We evaluate the acoustic coda phase delay method for estimating changes in atmospheric phenomena in realistic environments. Previous studies verifying the method took place in an environment with negligible wind. The equation for effective sound speed, on which the method is based, shows that the influence of wind is equal to the square of temperature. Under normal conditions, wind is significant and therefore cannot be ignored. Results from this study confirm this: the acoustic coda phase delay method breaks down in non-ideal environments, namely those where wind speed and direction vary across small distances. We suggest that future studies make use of gradiometry to better understand the effect of wind on the acoustic coda and subsequent phase delays.

  5. Gender equality in couples and self-rated health - A survey study evaluating measurements of gender equality and its impact on health.

    PubMed

    Sörlin, Ann; Lindholm, Lars; Ng, Nawi; Ohman, Ann

    2011-08-26

    Men and women have different patterns of health. These differences between the sexes present a challenge to the field of public health. The question of why women experience more health problems than men despite their longevity has been discussed extensively, with both social and biological theories offered as plausible explanations. In this article, we focus on how gender equality in a partnership might be associated with the respondents' perceptions of health. This study was a cross-sectional survey with 1400 respondents. We measured gender equality using two different measures: 1) a self-reported gender equality index, and 2) a self-perceived gender equality question. The comparison of the self-reported gender equality index with the self-perceived gender equality question aimed to reveal possible disagreements between the normative discourse on gender equality and daily practice in couple relationships. We then evaluated the association with health, measured as self-rated health (SRH). With SRH dichotomized into 'good' and 'poor', logistic regression was used to assess factors associated with the outcome. For the comparison between the self-reported gender equality index and self-perceived gender equality, kappa statistics were used. The associations between gender equality and health found in this study vary with the type of gender equality measurement. Overall, we found little agreement between the self-reported gender equality index and self-perceived gender equality. Further, the patterns of agreement between self-perceived and self-reported gender equality were quite different for men and women: men perceived greater gender equality than they reported in the index, while women perceived less gender equality than they reported. The associations with health depended on the gender equality measurement used. Men and women perceive and report gender equality differently. This means that it is necessary not only to be conscious of the methods and measurements used to quantify men's and women's opinions of gender equality, but also to be aware of the implications for health outcomes.

  6. Comparability of river suspended-sediment sampling and laboratory analysis methods

    USGS Publications Warehouse

    Groten, Joel T.; Johnson, Gregory D.

    2018-03-06

    Accurate measurements of suspended sediment, a leading water-quality impairment in many Minnesota rivers, are important for managing and protecting water resources; however, water-quality standards for suspended sediment in Minnesota are based on grab field sampling and total suspended solids (TSS) laboratory analysis methods that have underrepresented concentrations of suspended sediment in rivers compared to U.S. Geological Survey equal-width-increment or equal-discharge-increment (EWDI) field sampling and suspended sediment concentration (SSC) laboratory analysis methods. Because of this underrepresentation, the U.S. Geological Survey, in collaboration with the Minnesota Pollution Control Agency, collected concurrent grab and EWDI samples at eight sites to compare results obtained using different combinations of field sampling and laboratory analysis methods. Study results determined that grab field sampling and TSS laboratory analysis results were biased substantially low compared to EWDI sampling and SSC laboratory analysis results, respectively. Differences in both field sampling and laboratory analysis methods caused grab and TSS methods to be biased substantially low. The difference in laboratory analysis methods was slightly greater than that in field sampling methods. Sand-sized particles had a strong effect on the comparability of the field sampling and laboratory analysis methods. These results indicated that grab field sampling and TSS laboratory analysis methods fail to capture most of the sand being transported by the stream. The results indicate there is less of a difference among samples collected with grab field sampling and analyzed for TSS and concentration of fines in SSC. Even though differences are present, the presence of strong correlations between SSC and TSS concentrations provides the opportunity to develop site-specific relations to address transport processes not captured by grab field sampling and TSS laboratory analysis methods.

  7. Fair Equality of Opportunity in Our Actual World

    ERIC Educational Resources Information Center

    Sachs, Benjamin

    2016-01-01

    Fair equality of opportunity, a principle that governs the competition for desirable jobs, can seem irrelevant in our actual world, for two reasons. First, parents have broad liberty to raise their children as they see fit, which seems to undermine the fair equality of opportunity-based commitment to eliminating the effects of social circumstances…

  8. Computational compliance criteria in water hammer modelling

    NASA Astrophysics Data System (ADS)

    Urbanowicz, Kamil

    2017-10-01

    Among the many numerical methods (finite difference, finite element, finite volume, etc.) used to solve the system of partial differential equations describing unsteady pipe flow, the method of characteristics (MOC) is the most appreciated. With its help, it is possible to examine the effect of numerical discretisation carried out over the pipe length. It was noticed, based on the tests performed in this study, that convergence of the calculation results occurred on a rectangular grid with each pipe of the analysed system divided into at least 10 elements. Therefore, it is advisable to introduce computational compliance criteria (CCC), which will be responsible for optimal discretisation of the examined system. The results of this study, based on the assumption of various values of the Courant-Friedrichs-Lewy (CFL) number, also indicate that the CFL number should be equal to one for optimum computational results. Application of the CCC criterion to in-house and commercial computer programmes based on the method of characteristics will guarantee fast simulations and the necessary computational coherence.
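    The two numerical rules above (at least 10 elements per pipe, Courant number equal to one) can be sketched as a back-of-envelope grid calculation. The pipe length and wave speed are illustrative values, not data from the study.

    ```python
    def moc_grid(length_m, wave_speed_ms, min_reaches=10):
        """Return (n_reaches, dx, dt) for a rectangular MOC grid with
        at least min_reaches elements per pipe and Courant number
        CFL = a * dt / dx exactly equal to 1."""
        n = min_reaches
        dx = length_m / n
        dt = dx / wave_speed_ms      # choosing dt = dx / a gives CFL = 1
        return n, dx, dt

    # A 500 m pipe with a pressure-wave speed of 1000 m/s:
    n, dx, dt = moc_grid(length_m=500.0, wave_speed_ms=1000.0)
    print(n, dx, dt)    # -> 10 50.0 0.05
    ```

    In a multi-pipe system, the shared time step forces each pipe's dx (and hence its element count) to be adjusted so that every pipe satisfies CFL close to 1, which is the kind of consistency the proposed CCC would check.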

  9. Time-Delay Interferometry for Space-based Gravitational Wave Searches

    NASA Technical Reports Server (NTRS)

    Armstrong, J.; Estabrook, F.; Tinto, M.

    1999-01-01

    Ground-based, equal-arm-length laser interferometers are being built to measure high-frequency astrophysical gravitational waves. Because of the arm-length equality, laser light experiences the same delay in each arm, and thus phase or frequency noise from the laser itself precisely cancels at the photodetector.

  10. Calculation of Absorbed Dose in Target Tissue and Equivalent Dose in Sensitive Tissues of Patients Treated by BNCT Using MCNP4C

    NASA Astrophysics Data System (ADS)

    Zamani, M.; Kasesaz, Y.; Khalafi, H.; Pooya, S. M. Hosseini

    Boron Neutron Capture Therapy (BNCT) is used for the treatment of many diseases, including brain tumors, in many medical centers. In this method, a target area (e.g., the head of the patient) is irradiated by an optimized, suitable neutron field, such as that of a research nuclear reactor. To protect the healthy tissues located in the vicinity of the irradiated tissue, and based on the ALARA principle, unnecessary exposure of these vital organs must be prevented. In this study, using a numerical simulation method (the MCNP4C code), the absorbed dose in the target tissue and the equivalent dose in different sensitive tissues of a patient treated by BNCT are calculated. For this purpose, we have used the parameters of the MIRD standard phantom. The equivalent dose in 11 sensitive organs located in the vicinity of the target, and the total equivalent dose in the whole body, have been calculated. The results show that the absorbed doses in the tumor and in normal brain tissue equal 30.35 Gy and 0.19 Gy, respectively. Also, the total equivalent dose in the 11 sensitive organs, other than the tumor and normal brain tissue, equals 14 mGy. The maximum equivalent doses in organs other than the brain and tumor appear in the lungs and thyroid and equal 7.35 mSv and 3.00 mSv, respectively.

  11. Recommended GIS Analysis Methods for Global Gridded Population Data

    NASA Astrophysics Data System (ADS)

    Frye, C. E.; Sorichetta, A.; Rose, A.

    2017-12-01

    When using geographic information systems (GIS) to analyze gridded, i.e., raster, population data, analysts need a detailed understanding of several factors that affect raster data processing and, thus, the accuracy of the results. Global raster data are most often provided in an unprojected state, usually in the WGS 1984 geographic coordinate system. Most GIS functions and tools evaluate data based on overlay relationships (area) or proximity (distance). Area and distance for global raster data can be calculated either directly on the various earth ellipsoids or after transforming the data to equal-area/equidistant projected coordinate systems so that all locations are analyzed equally. However, unlike when projecting vector data, not all projected coordinate systems can support such analyses equally, and the process of transforming raster data from one coordinate space to another often results in unmanaged loss of data through a process called resampling. Resampling determines which values to use in the result dataset given an imperfect locational match in the input dataset(s). Cell size or resolution, registration, resampling method, statistical type, and whether the raster represents continuous or discrete information all potentially influence the quality of the result. Gridded population data represent estimates of population in each raster cell, and this presentation will provide guidelines for accurately transforming population rasters for analysis in GIS. Resampling impacts the display of high-resolution global gridded population data, and we will discuss how to properly handle pyramid creation using the Aggregate tool with the sum option to create overviews for mosaic datasets.
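
    The sum option matters because block aggregation by summing is the only resampling that preserves population totals; a minimal numpy sketch (illustrative, not ArcGIS code):

```python
import numpy as np

def aggregate_sum(grid: np.ndarray, factor: int) -> np.ndarray:
    """Block-aggregate a population raster by summing factor x factor
    windows of cells, so the total population count is preserved."""
    rows, cols = grid.shape
    assert rows % factor == 0 and cols % factor == 0
    return grid.reshape(rows // factor, factor,
                        cols // factor, factor).sum(axis=(1, 3))

# a 4x4 raster of population counts, aggregated to 2x2
pop = np.arange(16, dtype=float).reshape(4, 4)
coarse = aggregate_sum(pop, 2)
# coarse.sum() == pop.sum() == 120: the population total is preserved,
# whereas mean or nearest-neighbor resampling would not preserve it
```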

  12. Dynamic mesh adaptation for front evolution using discontinuous Galerkin based weighted condition number relaxation

    DOE PAGES

    Greene, Patrick T.; Schofield, Samuel P.; Nourgaliev, Robert

    2017-01-27

    A new mesh smoothing method designed to cluster cells near a dynamically evolving interface is presented. The method is based on weighted condition number mesh relaxation with the weight function computed from a level set representation of the interface. The weight function is expressed as a Taylor series based discontinuous Galerkin projection, which makes the computation of the derivatives of the weight function needed during the condition number optimization process a trivial matter. For cases when a level set is not available, a fast method for generating a low-order level set from discrete cell-centered fields, such as a volume fraction or index function, is provided. Results show that the low-order level set works equally well as the actual level set for mesh smoothing. Meshes generated for a number of interface geometries are presented, including cases with multiple level sets. Lastly, dynamic cases with moving interfaces show the new method is capable of maintaining a desired resolution near the interface with an acceptable number of relaxation iterations per time step, which demonstrates the method's potential to be used as a mesh relaxer for arbitrary Lagrangian Eulerian (ALE) methods.

  13. Slope angle estimation method based on sparse subspace clustering for probe safe landing

    NASA Astrophysics Data System (ADS)

    Li, Haibo; Cao, Yunfeng; Ding, Meng; Zhuang, Likui

    2018-06-01

    To avoid planetary probes landing on steep slopes, where they may slip or tip over, a new method of slope angle estimation based on sparse subspace clustering is proposed to improve accuracy. First, a coordinate system is defined and established to describe the measured data of light detection and ranging (LIDAR). Second, these data are processed and expressed with a sparse representation. Third, on this basis, the data points are clustered to determine which subspace each belongs to. Fourth, after eliminating outliers in each subspace, the remaining data points are used to fit planes. Finally, the vectors normal to the planes are obtained from the plane model, and the angle between the normal vectors is calculated; based on the geometric relationship, this angle is equal in value to the slope angle. The proposed method was tested in a series of experiments. The experimental results show that this method can effectively estimate the slope angle, overcome the influence of noise, and obtain an accurate estimate. Compared with other methods, it minimizes measurement errors and further improves the estimation accuracy of the slope angle.
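
    The final plane-fitting and angle steps can be sketched with numpy (a toy least-squares illustration, not the paper's sparse-clustering pipeline; the sample points are invented):

```python
import numpy as np

def fit_plane_normal(points: np.ndarray) -> np.ndarray:
    """Least-squares fit of z = a*x + b*y + c to Nx3 points; return the
    unit normal vector of the fitted plane."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    (a, b, _), *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    n = np.array([-a, -b, 1.0])
    return n / np.linalg.norm(n)

def slope_angle_deg(n1: np.ndarray, n2: np.ndarray) -> float:
    """Angle between two plane normals; equal in value to the slope angle."""
    cosang = np.clip(abs(float(np.dot(n1, n2))), 0.0, 1.0)
    return float(np.degrees(np.arccos(cosang)))

ground = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], float)
slope = np.array([[0, 0, 0], [1, 0, 1], [0, 1, 0], [1, 1, 1]], float)  # z = x
angle = slope_angle_deg(fit_plane_normal(ground), fit_plane_normal(slope))
# angle of a unit-slope plane against horizontal ground, approximately 45 deg
```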

  14. 41 CFR 60-2.11 - Organizational profile.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Contracts OFFICE OF FEDERAL CONTRACT COMPLIANCE PROGRAMS, EQUAL EMPLOYMENT OPPORTUNITY, DEPARTMENT OF LABOR... establishment. It is one method contractors use to determine whether barriers to equal employment opportunity... that may assist in identifying organizational units where women or minorities are underrepresented or...

  15. Synchronized manufacture of composites knowledge study (SMACKS)

    NASA Astrophysics Data System (ADS)

    Strickland, B.; Oliver, M.

    1990-06-01

    The need for a competitive manufacturing knowledge base in the composites industry encompasses a change from a 'functionally' organized factory to a product-based organization, a change that has led to major reductions in inventories, manufacturing costs, and cycle times. The net effect was that products became more price- and delivery-competitive. It is believed that composite manufacturers have an equal need to improve their competitive edge, particularly as demand for composite products grows and more manufacturers enter the marketplace. SMACKS has begun to establish these needs and market trends, with a view to establishing the advantages offered to composite manufacturers by synchronized manufacturing methods.

  16. Detection of bacteriuria and pyuria by URISCREEN a rapid enzymatic screening test.

    PubMed Central

    Pezzlo, M T; Amsterdam, D; Anhalt, J P; Lawrence, T; Stratton, N J; Vetter, E A; Peterson, E M; de la Maza, L M

    1992-01-01

    A multicenter study was performed to evaluate the ability of the URISCREEN (Analytab Products, Plainview, N.Y.), a 2-min catalase tube test, to detect bacteriuria and pyuria. This test was compared with the Chemstrip LN (BioDynamics, Division of Boehringer Mannheim Diagnostics, Indianapolis, Ind.), a 2-min enzyme dipstick test; a semiquantitative plate culture method was used as the reference test for bacteriuria, and the Gram stain or a quantitative chamber count method was used as the reference test for pyuria. Each test was evaluated for its ability to detect probable pathogens at greater than or equal to 10(2) CFU/ml and/or greater than or equal to 1 leukocyte per oil immersion field, as determined by the Gram stain method, or greater than 10 leukocytes per microliter, as determined by the quantitative count method. A total of 1,500 urine specimens were included in this evaluation. There were 298 specimens with greater than or equal to 10(2) CFU/ml and 451 specimens with pyuria. Of the 298 specimens with probable pathogens isolated at various colony counts, 219 specimens had colony counts of greater than or equal to 10(5) CFU/ml, 51 specimens had between 10(4) and 10(5) CFU/ml, and 28 specimens had between 10(2) and less than 10(4) CFU/ml. Both the URISCREEN and the Chemstrip LN detected 93% (204 of 219) of the specimens with probable pathogens at greater than or equal to 10(5) CFU/ml. For the specimens with probable pathogens at greater than or equal to 10(2) CFU/ml, the sensitivities of the URISCREEN and the Chemstrip LN were 86% (256 of 298) and 81% (241 of 298), respectively. Of the 451 specimens with pyuria, the URISCREEN detected 88% (398 of 451) and the Chemstrip LN detected 78% (350 of 451). There were 204 specimens with both greater than or equal to 10(2) CFU/ml and pyuria; the sensitivities of both methods were 95% (193 of 204) for these specimens. Overall, there were 545 specimens with probable pathogens at greater than or equal to 10(2) CFU/ml and/or pyuria. The URISCREEN detected 85% (461 of 545), and the Chemstrip LN detected 73% (398 of 545). A majority (76%) of the false-negative results obtained with either method were for specimens without leukocytes in the urine. There were 955 specimens with no probable pathogens or leukocytes. Of these, 28% (270 of 955) were found positive by the URISCREEN and 13% (122 of 955) were found positive by the Chemstrip LN. A majority of the false-positive results were probably due, in part, to the detection of enzymes present in both bacterial and somatic cells by each of the test systems. Overall, the URISCREEN is a rapid, manual, easy-to-perform enzymatic test that yields findings similar to those yielded by the Chemstrip LN for specimens with both greater than or equal to 10(2) CFU/ml and pyuria or for specimens with greater than or equal to 10(5) CFU/ml and with or without pyuria. However, when the data were analyzed for either probable pathogens at less than 10(5) CFU/ml or pyuria, the sensitivity of the URISCREEN was higher (P less than 0.05). PMID:1551986
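
    The reported sensitivities are simple proportions of detected specimens over reference positives; the counts below come straight from the abstract:

```python
# Sensitivity = detected positives / reference positives, reproducing the
# percentages reported above (counts taken from the abstract).
def sensitivity(detected: int, total: int) -> float:
    return 100.0 * detected / total

assert round(sensitivity(256, 298)) == 86  # URISCREEN, >= 10(2) CFU/ml
assert round(sensitivity(241, 298)) == 81  # Chemstrip LN, >= 10(2) CFU/ml
assert round(sensitivity(398, 451)) == 88  # URISCREEN, pyuria
assert round(sensitivity(350, 451)) == 78  # Chemstrip LN, pyuria
```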

  17. Methods to achieve accurate projection of regional and global raster databases

    USGS Publications Warehouse

    Usery, E.L.; Seong, J.C.; Steinwand, D.R.; Finn, M.P.

    2002-01-01

    This research aims at building a decision support system (DSS) for selecting an optimum projection considering various factors, such as pixel size, areal extent, number of categories, spatial pattern of categories, resampling methods, and error correction methods. Specifically, this research will investigate three goals theoretically and empirically and, using the already developed empirical base of knowledge with these results, develop an expert system for map projection of raster data for regional and global database modeling. The three theoretical goals are as follows: (1) The development of a dynamic projection that adjusts projection formulas for latitude on the basis of raster cell size to maintain equal-sized cells. (2) The investigation of the relationships between the raster representation and the distortion of features, number of categories, and spatial pattern. (3) The development of an error correction and resampling procedure that is based on error analysis of raster projection.

  18. A new approach based on off-line coupling of high-performance liquid chromatography with gas chromatography-mass spectrometry to determine acrylamide in coffee brew.

    PubMed

    Blanch, Gracia Patricia; Morales, Francisco José; Moreno, Fernando de la Peña; del Castillo, María Luisa Ruiz

    2013-01-01

    A new method based on the off-line coupling of LC with GC, replacing conventional sample preparation techniques, is proposed to analyze acrylamide in coffee brews. The method involves the preseparation of the sample by LC, the collection of the selected fraction, its concentration under nitrogen, and subsequent analysis by GC coupled with MS. The composition of the LC mobile phase and the flow rate were studied to select conditions that allowed separation of acrylamide without coeluting compounds. Under the selected conditions, recoveries close to 100% were achieved, while LODs and LOQs equal to 5 and 10 μg/L for acrylamide in brewed coffee were obtained. The method developed enabled the reliable detection of acrylamide in spiked coffee beverage samples without further clean-up steps or sample manipulation. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. A novel fault location scheme for power distribution system based on injection method and transient line voltage

    NASA Astrophysics Data System (ADS)

    Huang, Yuehua; Li, Xiaomin; Cheng, Jiangzhou; Nie, Deyu; Wang, Zhuoyuan

    2018-02-01

    This paper presents a novel fault location method based on injecting a travelling wave current. The methodology relies on time difference of arrival (TDOA) measurements between the injection point and the end node of the main radial; the TDOA is taken as the lag of maximum correlation between the injected signal and its fault-reflected wave. The fault distance is then calculated as the wave velocity multiplied by the TDOA. Furthermore, when transformers are connected at the end of the feeder, the method must be combined with a comparison of transient line-voltage amplitudes. Finally, in order to verify the effectiveness of the method, several simulations were undertaken using the MATLAB/SIMULINK software packages. The proposed fault location method shortens the positioning time while ensuring accuracy; the location errors in the simulated cases are 5.1% and 13.7%.
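
    The TDOA step can be sketched as a cross-correlation: the lag of maximum correlation between the injected pulse and the measured signal gives the delay, and distance follows from the wave velocity. The sample rate, wave speed, pulse shape, and round-trip factor below are illustrative assumptions, not values from the paper:

```python
import numpy as np

fs = 1e6                                   # sample rate [Hz] (assumed)
v = 1.5e8                                  # travelling-wave speed [m/s] (assumed)
true_delay = 40                            # round-trip delay [samples]

rng = np.random.default_rng(0)
pulse = np.exp(-0.5 * ((np.arange(20) - 10) / 3.0) ** 2)   # injected pulse
rx = np.zeros(200)
rx[true_delay:true_delay + 20] += 0.6 * pulse              # fault reflection
rx += 0.01 * rng.standard_normal(200)                      # measurement noise

corr = np.correlate(rx, pulse, mode="full")
tdoa = (np.argmax(corr) - (len(pulse) - 1)) / fs           # lag of max correlation [s]
distance = v * tdoa / 2.0                                  # one-way distance [m]
```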

  20. Batch effects in single-cell RNA-sequencing data are corrected by matching mutual nearest neighbors.

    PubMed

    Haghverdi, Laleh; Lun, Aaron T L; Morgan, Michael D; Marioni, John C

    2018-06-01

    Large-scale single-cell RNA sequencing (scRNA-seq) data sets that are produced in different laboratories and at different times contain batch effects that may compromise the integration and interpretation of the data. Existing scRNA-seq analysis methods incorrectly assume that the composition of cell populations is either known or identical across batches. We present a strategy for batch correction based on the detection of mutual nearest neighbors (MNNs) in the high-dimensional expression space. Our approach does not rely on predefined or equal population compositions across batches; instead, it requires only that a subset of the population be shared between batches. We demonstrate the superiority of our approach compared with existing methods by using both simulated and real scRNA-seq data sets. Using multiple droplet-based scRNA-seq data sets, we demonstrate that our MNN batch-effect-correction method can be scaled to large numbers of cells.
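
    The MNN criterion itself is easy to state: a pair of cells from two batches are mutual nearest neighbors when each is among the other's k nearest neighbors in expression space. A brute-force numpy sketch of the concept (not the published implementation):

```python
import numpy as np

def mutual_nearest_neighbors(X, Y, k=1):
    """Return index pairs (i, j) with Y[j] among the k nearest neighbors of
    X[i] AND X[i] among the k nearest neighbors of Y[j]."""
    d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)
    nn_xy = np.argsort(d, axis=1)[:, :k]       # k NNs in Y for each row of X
    nn_yx = np.argsort(d, axis=0)[:k, :].T     # k NNs in X for each row of Y
    return [(i, j) for i in range(len(X)) for j in nn_xy[i] if i in nn_yx[j]]

X = np.array([[0.0], [10.0]])                  # batch 1 "cells"
Y = np.array([[0.1], [9.9], [100.0]])          # batch 2 "cells"
pairs = mutual_nearest_neighbors(X, Y, k=1)
# cells near 0 and near 10 pair up; the outlier at 100 has no mutual partner,
# which is how MNN tolerates populations present in only one batch
```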

  1. Methods and apparatuses for deoxygenating pyrolysis oil

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baird, Lance Awender; Brandvold, Timothy A.; Frey, Stanley Joseph

    Methods and apparatuses are provided for deoxygenating pyrolysis oil. A method includes contacting a pyrolysis oil with a deoxygenation catalyst in a first reactor at deoxygenation conditions to produce a first reactor effluent. The first reactor effluent has a first oxygen concentration and a first hydrogen concentration, based on hydrocarbons in the first reactor effluent, and the first reactor effluent includes an aromatic compound. The first reactor effluent is contacted with a dehydrogenation catalyst in a second reactor at conditions that deoxygenate the first reactor effluent while preserving the aromatic compound to produce a second reactor effluent. The second reactor effluent has a second oxygen concentration lower than the first oxygen concentration and a second hydrogen concentration that is equal to or lower than the first hydrogen concentration, where the second oxygen concentration and the second hydrogen concentration are based on the hydrocarbons in the second reactor effluent.

  2. Neighboring extremals of dynamic optimization problems with path equality constraints

    NASA Technical Reports Server (NTRS)

    Lee, A. Y.

    1988-01-01

    Neighboring extremals of dynamic optimization problems with path equality constraints and with an unknown parameter vector are considered in this paper. With some simplifications, the problem is reduced to solving a linear, time-varying two-point boundary-value problem with integral path equality constraints. A modified backward sweep method is used to solve this problem. Two example problems are solved to illustrate the validity and usefulness of the solution technique.

  3. Groundwater Evapotranspiration from Diurnal Water Table Fluctuation: a Modified White Based Method Using Drainable and Fillable Porosity

    NASA Astrophysics Data System (ADS)

    Acharya, S.; Mylavarapu, R.; Jawitz, J. W.

    2012-12-01

    In shallow unconfined aquifers, the water table usually shows a distinct diurnal fluctuation pattern corresponding to the twenty-four hour solar radiation cycle. This diurnal water table fluctuation (DWTF) signal can be used to estimate the groundwater evapotranspiration (ETg) by vegetation, a method known as the White [1932] method. Water table fluctuations in shallow phreatic aquifers are controlled by two distinct storage parameters, the drainable porosity (or specific yield) and the fillable porosity. Yet it is implicitly assumed in most studies that these two parameters are equal, unless a hysteresis effect is considered. The White-based method available in the literature is also based on a single drainable porosity parameter to estimate ETg. In this study, we present a modification of the White-based method to estimate ETg from the DWTF using separate drainable (λd) and fillable porosity (λf) parameters. Separate analytical expressions based on successive steady-state moisture profiles are used to estimate λd and λf, instead of the commonly employed hydrostatic moisture profile approach. The modified method is then applied to estimate ETg using the DWTF data observed in a field in northeast Florida, and the results are compared with ET estimates from the standard Penman-Monteith equation. It is found that the modified method resulted in significantly better estimates of ETg than the previously available method that used only a single, hydrostatic-moisture-profile based λd. Furthermore, the modified method is also used to estimate ETg during rainfall events, which produced significantly better estimates of ETg as compared to the single-λd-parameter method.
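
    As a rough illustration of why two storage parameters matter, a White-type daily budget can charge the nighttime recovery to the fillable porosity and the net daily decline to the drainable porosity. The formula below is an illustrative assumption in that spirit, not the authors' steady-state-profile expressions:

```python
# White-type groundwater ET budget with two storage parameters (sketch).
# Assumption for illustration: the nighttime recovery rate r [m/h] fills
# pores at the fillable porosity lam_f, while the net daily decline s [m]
# drains at the drainable porosity lam_d.
def etg_white_modified(r: float, s: float, lam_f: float, lam_d: float) -> float:
    """Daily groundwater evapotranspiration ETg [m/day]."""
    return 24.0 * lam_f * r + lam_d * s

# with lam_f == lam_d this collapses to the classic single-parameter
# White (1932) estimate Sy * (24 r + s)
etg = etg_white_modified(r=0.002, s=0.01, lam_f=0.05, lam_d=0.08)
```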

  4. The nearest neighbor and the bayes error rates.

    PubMed

    Loizou, G; Maybank, S J

    1987-02-01

    The (k, l) nearest neighbor method of pattern classification is compared to the Bayes method. If the two acceptance rates are equal, then the asymptotic error rates satisfy the inequalities E_{k,l+1} ≤ E*(λ) ≤ E_{k,l} ≤ dE*(λ), where d is a function of k, l, and the number of pattern classes, and λ is the reject threshold for the Bayes method. An explicit expression for d is given which is optimal in the sense that for some probability distributions E_{k,l} and dE*(λ) are equal.

  5. Interior point techniques for LP and NLP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evtushenko, Y.

    By using surjective mapping the initial constrained optimization problem is transformed to a problem in a new space with only equality constraints. For the numerical solution of the latter problem we use the generalized gradient-projection method and Newton's method. After inverse transformation to the initial space we obtain the family of numerical methods for solving optimization problems with equality and inequality constraints. In the linear programming case after some simplification we obtain Dikin's algorithm, affine scaling algorithm and generalized primal dual interior point linear programming algorithm.

  6. Contact-free palm-vein recognition based on local invariant features.

    PubMed

    Kang, Wenxiong; Liu, Yang; Wu, Qiuxia; Yue, Xishun

    2014-01-01

    Contact-free palm-vein recognition is one of the most challenging and promising areas in hand biometrics. In view of the existing problems in contact-free palm-vein imaging, including projection transformation, uneven illumination and difficulty in extracting exact ROIs, this paper presents a novel recognition approach for contact-free palm-vein recognition that performs feature extraction and matching on all vein textures distributed over the palm surface, including finger veins and palm veins, to minimize the loss of feature information. First, a hierarchical enhancement algorithm, which combines a DOG filter and histogram equalization, is adopted to alleviate uneven illumination and to highlight vein textures. Second, RootSIFT, a more stable local invariant feature extraction method in comparison to SIFT, is adopted to overcome the projection transformation in contact-free mode. Subsequently, a novel hierarchical mismatching removal algorithm based on neighborhood searching and LBP histograms is adopted to improve the accuracy of feature matching. Finally, we rigorously evaluated the proposed approach using two different databases and obtained 0.996% and 3.112% Equal Error Rates (EERs), respectively, which demonstrate the effectiveness of the proposed approach.
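
    The first enhancement stage can be sketched as a separable difference-of-Gaussians (DOG) band-pass, which suppresses slowly varying illumination while keeping vein-scale detail (the sigmas are assumptions; histogram equalization would follow in the full pipeline):

```python
import numpy as np

def gauss_kernel(sigma: float) -> np.ndarray:
    """Normalized 1-D Gaussian kernel truncated at 3 sigma."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def dog_filter(img: np.ndarray, s1: float = 1.0, s2: float = 2.0) -> np.ndarray:
    """Difference-of-Gaussians band-pass: subtract a wide blur from a narrow
    one, removing slow illumination gradients but keeping fine ridges."""
    def blur(a: np.ndarray, sigma: float) -> np.ndarray:
        k = gauss_kernel(sigma)
        a = np.apply_along_axis(np.convolve, 0, a, k, mode="same")
        return np.apply_along_axis(np.convolve, 1, a, k, mode="same")
    f = img.astype(float)
    return blur(f, s1) - blur(f, s2)
```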

  7. Contact-Free Palm-Vein Recognition Based on Local Invariant Features

    PubMed Central

    Kang, Wenxiong; Liu, Yang; Wu, Qiuxia; Yue, Xishun

    2014-01-01

    Contact-free palm-vein recognition is one of the most challenging and promising areas in hand biometrics. In view of the existing problems in contact-free palm-vein imaging, including projection transformation, uneven illumination and difficulty in extracting exact ROIs, this paper presents a novel recognition approach for contact-free palm-vein recognition that performs feature extraction and matching on all vein textures distributed over the palm surface, including finger veins and palm veins, to minimize the loss of feature information. First, a hierarchical enhancement algorithm, which combines a DOG filter and histogram equalization, is adopted to alleviate uneven illumination and to highlight vein textures. Second, RootSIFT, a more stable local invariant feature extraction method in comparison to SIFT, is adopted to overcome the projection transformation in contact-free mode. Subsequently, a novel hierarchical mismatching removal algorithm based on neighborhood searching and LBP histograms is adopted to improve the accuracy of feature matching. Finally, we rigorously evaluated the proposed approach using two different databases and obtained 0.996% and 3.112% Equal Error Rates (EERs), respectively, which demonstrate the effectiveness of the proposed approach. PMID:24866176

  8. Palladium and platinum-based nanoparticle functional sensor layers for selective H2 sensing

    DOEpatents

    Ohodnicki, Jr., Paul R.; Baltrus, John P.; Brown, Thomas D.

    2017-07-04

    The disclosure relates to a plasmon resonance-based method for H2 sensing in a gas stream utilizing a hydrogen sensing material. The hydrogen sensing material comprises Pd-based or Pt-based nanoparticles with an average diameter of less than about 100 nanometers dispersed in an inert matrix having a bandgap greater than or equal to 5 eV and an oxygen ion conductivity less than approximately 10^-7 S/cm at a temperature of 700 °C. Exemplary inert matrix materials include SiO2, Al2O3, and Si3N4, as well as modifications of the effective refractive indices through combinations and/or doping of such materials. The hydrogen sensing material utilized in the method of this disclosure may be prepared using means known in the art for the production of nanoparticles dispersed within a supporting matrix, including sol-gel based wet chemistry techniques, impregnation techniques, implantation techniques, sputtering techniques, and others.

  9. Steel Rack Connections: Identification of Most Influential Factors and a Comparison of Stiffness Design Methods.

    PubMed

    Shah, S N R; Sulong, N H Ramli; Shariati, Mahdi; Jumaat, M Z

    2015-01-01

    Steel pallet rack (SPR) beam-to-column connections (BCCs) are largely responsible for preventing sway failure of frames in the down-aisle direction. The overall geometry of the beam end connectors commercially used in SPR BCCs varies and does not allow a generalized analytic approach for all types of beam end connectors; however, identifying the effects of the configuration, profile and sizes of the connection components could be a suitable approach for practical design engineers in order to predict the generalized behavior of any SPR BCC. This paper describes the experimental behavior of SPR BCCs tested using a double cantilever test set-up. Eight sets of specimens were identified based on the variation in column thickness, beam depth and number of tabs in the beam end connector in order to investigate the most influential factors affecting the connection performance. Four tests were performed for each set to bring uniformity to the results, taking the total number of tests to thirty-two. The moment-rotation (M-θ) behavior, load-strain relationship, major failure modes and the influence of the selected parameters on connection performance were investigated. A comparative study to calculate the connection stiffness was carried out using the initial stiffness method, the slope to half-ultimate moment method and the equal area method. To find the most appropriate method, the mean stiffness of all the tested connections and the variance in the values of mean stiffness according to all three methods were calculated. The initial stiffness method is considered to overestimate the stiffness values when compared to the other two methods. The equal area method provided more consistent values of stiffness and the lowest variance in the data set as compared to the other two methods.
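
    The three stiffness measures can be contrasted on a synthetic moment-rotation curve, using common semi-rigid-connection definitions (the paper's exact procedures may differ in detail); note that the initial tangent gives the largest value, consistent with the overestimation noted above:

```python
import numpy as np

theta = np.linspace(0.0, 0.1, 201)            # rotation [rad]
m_ult = 10.0                                  # ultimate moment [kN m]
M = m_ult * (1.0 - np.exp(-80.0 * theta))     # synthetic M-theta curve

# (1) initial stiffness: tangent slope at the origin
k_initial = (M[1] - M[0]) / (theta[1] - theta[0])

# (2) secant slope at half the ultimate moment
i_half = int(np.argmin(np.abs(M - 0.5 * m_ult)))
k_half = M[i_half] / theta[i_half]

# (3) equal-area: line k*theta enclosing the same area (energy) as the curve,
# i.e. k * theta_u**2 / 2 == area under the curve up to theta_u
area = float(np.sum((M[1:] + M[:-1]) / 2.0 * np.diff(theta)))
k_equal_area = 2.0 * area / theta[-1] ** 2

assert k_initial > k_half > k_equal_area      # initial tangent is the largest
```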

  10. Time Reversal Acoustic Communication Using Filtered Multitone Modulation

    PubMed Central

    Sun, Lin; Chen, Baowei; Li, Haisen; Zhou, Tian; Li, Ruo

    2015-01-01

    The multipath spread in underwater acoustic channels is severe and, therefore, when the symbol rate of the time reversal (TR) acoustic communication using single-carrier (SC) modulation is high, the large intersymbol interference (ISI) span caused by multipath reduces the performance of the TR process and needs to be removed using a long adaptive equalizer as the post-processor. In this paper, a TR acoustic communication method using filtered multitone (FMT) modulation is proposed in order to reduce the residual ISI in the TR-processed signal. In the proposed method, FMT modulation is exploited to modulate information symbols onto separate subcarriers with high spectral containment, and the TR technique, together with adaptive equalization, is adopted at the receiver to suppress ISI and noise. The performance of the proposed method is assessed through simulation and real data from a trial in an experimental pool. The proposed method was compared with TR acoustic communication using SC modulation with the same spectral efficiency. Results demonstrate that the proposed method can improve the performance of the TR process and reduce the computational complexity of adaptive equalization for post-processing. PMID:26393586

  11. Time Reversal Acoustic Communication Using Filtered Multitone Modulation.

    PubMed

    Sun, Lin; Chen, Baowei; Li, Haisen; Zhou, Tian; Li, Ruo

    2015-09-17

    The multipath spread in underwater acoustic channels is severe and, therefore, when the symbol rate of the time reversal (TR) acoustic communication using single-carrier (SC) modulation is high, the large intersymbol interference (ISI) span caused by multipath reduces the performance of the TR process and needs to be removed using a long adaptive equalizer as the post-processor. In this paper, a TR acoustic communication method using filtered multitone (FMT) modulation is proposed in order to reduce the residual ISI in the TR-processed signal. In the proposed method, FMT modulation is exploited to modulate information symbols onto separate subcarriers with high spectral containment, and the TR technique, together with adaptive equalization, is adopted at the receiver to suppress ISI and noise. The performance of the proposed method is assessed through simulation and real data from a trial in an experimental pool. The proposed method was compared with TR acoustic communication using SC modulation with the same spectral efficiency. Results demonstrate that the proposed method can improve the performance of the TR process and reduce the computational complexity of adaptive equalization for post-processing.

  12. Proceedings of the Conference on Moments and Signal

    NASA Astrophysics Data System (ADS)

    Purdue, P.; Solomon, H.

    1992-09-01

    The focus of this paper is (1) to describe systematic methodologies for selecting nonlinear transformations for blind equalization algorithms (and thus new types of cumulants), and (2) to give an overview of the existing blind equalization algorithms and point out their strengths as well as their weaknesses. It is shown that all blind equalization algorithms belong in one of the following three categories, depending on where the nonlinear transformation is applied to the data: (1) the Bussgang algorithms, where the nonlinearity is at the output of the adaptive equalization filter; (2) the polyspectra (or higher-order spectra) algorithms, where the nonlinearity is at the input of the adaptive equalization filter; and (3) the algorithms where the nonlinearity is inside the adaptive filter, i.e., the nonlinear filter or neural network. We describe methodologies for selecting nonlinear transformations based on various optimality criteria such as MSE or MAP. We illustrate that such existing algorithms as Sato, Benveniste-Goursat, Godard or CMA, Stop-and-Go, and Donoho are indeed special cases of the Bussgang family of techniques when the nonlinearity is memoryless. We present results demonstrating that the polyspectra-based algorithms exhibit a faster convergence rate than the Bussgang algorithms; however, this improved performance comes at the expense of more computations per iteration. We also show that blind equalizers based on nonlinear filters or neural networks are better suited for channels that have nonlinear distortions.
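
    The Bussgang family is easy to demonstrate with the classic Godard/CMA update, in which the memoryless nonlinearity y(y^2 - R2) acts on the equalizer output; the BPSK source, 2-tap channel, step size, and tap count below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
s = rng.choice([-1.0, 1.0], size=5000)                   # BPSK symbols
x = np.convolve(s, [1.0, 0.4], mode="full")[: len(s)]    # channel with ISI

L, mu, R2 = 11, 0.01, 1.0          # equalizer taps, step size, target modulus
w = np.zeros(L)
w[L // 2] = 1.0                    # center-spike initialization
for n in range(L, len(x)):
    u = x[n - L:n][::-1]           # regressor (most recent sample first)
    y = w @ u                      # equalizer output
    w -= mu * y * (y * y - R2) * u # stochastic gradient on E[(y^2 - R2)^2]

# dispersion of the equalized output over the last 500 samples; it shrinks
# as the output is driven toward the constant +/-1 modulus
y_out = np.array([w @ x[n - L:n][::-1] for n in range(len(x) - 500, len(x))])
dispersion = float(np.mean((y_out ** 2 - R2) ** 2))
```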

  13. Composite superconducting wires obtained by high-rate tinning in molten Bi-Pb-Sr-Ca-Cu-O system

    NASA Technical Reports Server (NTRS)

    Grosav, A. D.; Konopko, L. A.; Leporda, N. I.

    1991-01-01

    Long lengths of metal superconductor composites were prepared by passing a copper wire through the bismuth based molten oxide system at a constant speed. The key to successful composite preparation is the high pulling speed involved, which permits minimization of the severe interaction between the unbuffered metal surface and the oxide melt. Depending on the temperature of the melt and the pulling speed, a coating with different thickness and microstructure appeared. The nonannealed thick coatings contained a Bi2(Sr,Ca)2Cu1O6 phase as a major component. After relatively short time annealing at 800 C, both resistivity and initial magnetization versus temperature measurements show superconducting transitions beginning in the 110 to 115 K region. The effects of annealing and composition on obtained results are discussed. This method of manufacture led to the fabrication of wire with a copper core in a dense covering with uniform thickness of about h approximately equal to 5 to 50 microns. Composite wires with h approximately equal to 10 microns (h/d approximately equal to 0.1) sustained bending on a 15 mm radius frame without cracking during flexing.

  14. A novel quantified bitterness evaluation model for traditional Chinese herbs based on an animal ethology principle.

    PubMed

    Han, Xue; Jiang, Hong; Han, Li; Xiong, Xi; He, Yanan; Fu, Chaomei; Xu, Runchun; Zhang, Dingkun; Lin, Junzhi; Yang, Ming

    2018-03-01

    Traditional Chinese herbs (TCH) are currently gaining attention in disease prevention and health care plans. However, their general bitter taste hinders their use. Despite the development of a variety of taste evaluation methods, it is still a major challenge to establish a quantitative detection technique that is objective, authentic and sensitive. Based on the two-bottle preference test (TBP), we proposed a novel quantitative strategy using a standardized animal test and a unified quantitative benchmark. To reduce the difference of results, the methodology of TBP was optimized. The relationship between the concentration of quinine and animal preference index (PI) was obtained. Then the PI of TCH was measured through TBP, and bitterness results were converted into a unified numerical system using the relationship of concentration and PI. To verify the authenticity and sensitivity of quantified results, human sensory testing and electronic tongue testing were applied. The quantified results showed a good discrimination ability. For example, the bitterness of Coptidis Rhizoma was equal to 0.0579 mg/mL quinine, and Nelumbinis Folium was equal to 0.0001 mg/mL. The validation results proved that the new assessment method for TCH was objective and reliable. In conclusion, this study provides an option for the quantification of bitterness and the evaluation of taste masking effects.
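
    The unified benchmark amounts to inverting a quinine dose-response curve: a measured preference index (PI) is mapped to the quinine concentration with the same PI. The calibration table below is invented for illustration; the paper measures the actual relationship via the two-bottle preference test:

```python
import numpy as np

# Invented calibration: PI measured for known quinine concentrations;
# PI falls as bitterness rises.
quinine_conc = np.array([0.0001, 0.001, 0.01, 0.05, 0.1])   # mg/mL
pi_calib = np.array([0.95, 0.80, 0.55, 0.30, 0.15])

def bitterness_equivalent(pi: float) -> float:
    """Quinine concentration (mg/mL) whose PI matches the measured PI."""
    # np.interp needs increasing x-coordinates, so reverse both arrays
    return float(np.interp(pi, pi_calib[::-1], quinine_conc[::-1]))
```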

  15. A method for normalizing pathology images to improve feature extraction for quantitative pathology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tam, Allison; Barker, Jocelyn; Rubin, Daniel

    Purpose: With the advent of digital slide scanning technologies and the potential proliferation of large repositories of digital pathology images, many research studies can leverage these data for biomedical discovery and to develop clinical applications. However, quantitative analysis of digital pathology images is impeded by batch effects generated by varied staining protocols and staining conditions of pathological slides. Methods: To overcome this problem, this paper proposes a novel, fully automated stain normalization method to reduce batch effects and thus aid research in digital pathology applications. The method, intensity centering and histogram equalization (ICHE), normalizes a diverse set of pathology images by first scaling the centroids of the intensity histograms to a common point and then applying a modified version of contrast-limited adaptive histogram equalization. Normalization was performed on two datasets of digitized hematoxylin and eosin (H&E) slides of different tissue slices from the same lung tumor, and one immunohistochemistry dataset of digitized slides created by restaining one of the H&E datasets. Results: The ICHE method was evaluated based on image intensity values, quantitative features, and the effect on downstream applications, such as computer-aided diagnosis. For comparison, three methods from the literature were reimplemented and evaluated using the same criteria. The authors found that ICHE not only improved performance compared with un-normalized images, but in most cases showed improvement compared with previous methods for correcting batch effects in the literature. Conclusions: ICHE may be a useful preprocessing step in a digital pathology image processing pipeline.
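    A minimal sketch of the two ICHE stages described above, assuming 8-bit grayscale input. The second stage substitutes plain global histogram equalization for the paper's modified contrast-limited adaptive histogram equalization, so it only illustrates the idea, not the published method.

```python
import numpy as np

def intensity_center(img, target_mean=128.0):
    """Stage 1 (sketch): shift the intensity histogram so its centroid sits
    at a common target point, then clip back into the 8-bit range."""
    shifted = img.astype(float) + (target_mean - img.mean())
    return np.clip(shifted, 0.0, 255.0)

def global_hist_eq(img, nbins=256):
    """Stage 2 (simplified stand-in): plain histogram equalization via the
    cumulative distribution function, in place of the paper's modified CLAHE."""
    hist, bin_edges = np.histogram(img.ravel(), bins=nbins, range=(0, 255))
    cdf = hist.cumsum() / img.size
    return np.interp(img.ravel(), bin_edges[:-1], cdf * 255.0).reshape(img.shape)
```

Running `global_hist_eq(intensity_center(img))` over a batch of slides pushes every image's histogram toward a common centroid and spread, which is the batch-effect reduction the abstract describes.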

  16. Electron-phonon scattering rates in complex polar crystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prange, M. P.; Campbell, L. W.; Kerisit, S.

    2017-09-01

    The thermalization of fast electrons by phonons is studied in CsI, NaI, SrI2, and YAP. This numerical study uses an improvement to a recently developed ab initio method based on a density functional perturbation theoretical description of the phonon modes that provides a way to go beyond widely used phonon models based on binary crystals. Improvements to this method are described, and scattering rates are presented and discussed. The results here treat polar and nonpolar scattering on equal footing and allow an assessment of the relative importance of the two types of scattering. The relative activity of the numerous phonon modes in materials with complicated structures is discussed, and a simple criterion for finding the modes that scatter strongly is presented.

  17. Weighted-MSE based on saliency map for assessing video quality of H.264 video streams

    NASA Astrophysics Data System (ADS)

    Boujut, H.; Benois-Pineau, J.; Hadar, O.; Ahmed, T.; Bonnet, P.

    2011-01-01

    The human visual system is very complex and has been studied for many years, specifically for the purpose of efficient encoding of visual content, e.g. video content from digital TV. There is physiological and psychological evidence indicating that viewers do not pay equal attention to all exposed visual information, but only focus on certain areas known as focus of attention (FOA) or saliency regions. In this work, we propose a novel saliency-based objective quality assessment metric for assessing the perceptual quality of decoded video sequences affected by transmission errors and packet losses. The proposed method weights the Mean Square Error (MSE), yielding a Weighted MSE (WMSE), according to the calculated saliency map at each pixel. Our method was validated through subjective quality experiments.
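    The per-pixel weighting described above can be sketched in a few lines. Normalizing the saliency map so the weights sum to one is an assumption about how the pooling is done, not a detail stated in the abstract.

```python
import numpy as np

def weighted_mse(ref, deg, saliency):
    """Saliency-weighted MSE (sketch): the squared error at each pixel is
    weighted by the normalized saliency map, so distortions inside
    focus-of-attention regions dominate the score."""
    w = saliency / saliency.sum()                      # weights sum to 1
    err = (ref.astype(float) - deg.astype(float)) ** 2 # per-pixel squared error
    return float((w * err).sum())
```

With a uniform saliency map this reduces to the ordinary MSE, which makes the metric easy to sanity-check against the unweighted baseline.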

  18. Geometric Model of Induction Heating Process of Iron-Based Sintered Materials

    NASA Astrophysics Data System (ADS)

    Semagina, Yu V.; Egorova, M. A.

    2018-03-01

    The article studies the issue of building multivariable dependencies based on experimental data. A constructive method for solving the issue is presented in the form of equations of (n-1)-surface compartments of the extended Euclidean space E+n. The dimension of the space is taken to be equal to the sum of the number of parameters and factors of the model of the system being studied. The basis for building multivariable dependencies is the generalized approach to n-space used for the surface compartments of 3D space. The surface is designed on the basis of the kinematic method, moving one geometric object along a certain trajectory. The proposed approach simplifies the process of building the multifactorial empirical dependencies that describe the process being investigated.

  19. Web-based versus traditional lecture: are they equally effective as a flexible bronchoscopy teaching method?

    PubMed

    Mata, Caio Augusto Sterse; Ota, Luiz Hirotoshi; Suzuki, Iunis; Telles, Adriana; Miotto, Andre; Leão, Luiz Eduardo Vilaça

    2012-01-01

    This study compares the traditional live lecture to a web-based approach in the teaching of bronchoscopy and evaluates the positive and negative aspects of both methods. We developed a web-based bronchoscopy curriculum, which integrates texts, images and animations. It was applied to first-year interns, who were later administered a multiple-choice test. Another group of eight first-year interns received the traditional teaching method and took the same test. The two groups were compared using Student's t-test. The mean score (± SD) of students who used the website was 14.63 ± 1.41 (range 13-17). The test scores of the other group had the same range, with a mean score of 14.75 ± 1. Student's t-test showed no difference between the test results. The common positive point noted was the presence of multimedia content. The web group cited as positive the ability to review the pages, and the lecture group the role of the teacher. Web-based bronchoscopy education showed effectiveness similar to that of the traditional live lecture.

  20. Human Rights and Cosmopolitan Democratic Education

    ERIC Educational Resources Information Center

    Snauwaert, Dale T.

    2009-01-01

    The foundation upon which this discussion is based is the basic nature of democracy as both a political and moral ideal. Democracy can be understood as a system of rights premised upon the logic of equality. At its core is a fundamental belief in moral equality, a belief that all human beings possess an equal inherent dignity or worth. The ideal…

  1. Equality of Opportunities, Divergent Conceptualisations and Their Implications for Early Childhood Care and Education Policies

    ERIC Educational Resources Information Center

    Morabito, Christian; Vandenbroeck, Michel

    2015-01-01

    This article aims to explore the relations between equality of opportunity and early childhood. By referring to the work of contemporary philosophers, i.e. Rawls, Sen, Dworkin, Cohen and Roemer, we argue for different possible interpretations, based on political discussions, concerning how to operationalize equality of opportunities. We represent…

  2. Open Minds to Equality: A Source Book of Learning Activities to Affirm Diversity and Promote Equity. Third Edition

    ERIC Educational Resources Information Center

    Schniedewind, Nancy; Davidson, Ellen

    2006-01-01

    "Open Minds to Equality" is an educator's sourcebook of activities to help students understand and change inequalities based on: race, gender, class, age, language, sexual orientation, physical/mental ability, and religion. The activities promote respect for diversity and interpersonal equality among students, fostering a classroom that is…

  3. 29 CFR 1620.13 - “Equal Work”-What it means.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... sex in the wages paid for “equal work on jobs the performance of which requires equal skill, effort... practices indicate a pay practice of discrimination based on sex. It should also be noted that it is an... “female” unless sex is a bona fide occupational qualification for the job. (2) The EPA prohibits...

  4. 29 CFR 1620.13 - “Equal Work”-What it means.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... sex in the wages paid for “equal work on jobs the performance of which requires equal skill, effort... practices indicate a pay practice of discrimination based on sex. It should also be noted that it is an... “female” unless sex is a bona fide occupational qualification for the job. (2) The EPA prohibits...

  5. A Scorecard on Gender Equality and Girls' Education in Asia, 1990-2000. Advocacy Brief

    ERIC Educational Resources Information Center

    Unterhalter, Elaine; Rajagopalan, Rajee; Challender, Chloe

    2005-01-01

    Background: Existing measures for access to and efficiency in the school system are very limited as measures of gender equality, even though there have been marked improvements in sex-disaggregated data. A methodology for developing a scorecard which measures gender equality in schooling and education partly based on Amartya Sen's capability…

  6. Data pieces-based parameter identification for lithium-ion battery

    NASA Astrophysics Data System (ADS)

    Gao, Wei; Zou, Yuan; Sun, Fengchun; Hu, Xiaosong; Yu, Yang; Feng, Sen

    2016-10-01

    Battery characteristics vary with temperature and aging, so it is necessary to identify battery parameters periodically for electric vehicles to ensure reliable state-of-charge (SoC) estimation, battery equalization and safe operation. Aiming at on-board applications, this paper proposes a data-pieces-based parameter identification (DPPI) method to identify comprehensive battery parameters, including capacity, the OCV (open circuit voltage)-Ah relationship and the impedance-Ah relationship, simultaneously and based only on battery operation data. First, a vehicle field test was conducted and battery operation data were recorded; the DPPI method is then elaborated on these test data, and the parameters of all 97 cells of the battery package are identified and compared. To evaluate the adaptability of the proposed DPPI method, it is used to identify battery parameters at different aging levels and different temperatures based on battery aging experiment data. A concept of an "OCV-Ah aging database" is then proposed, based on which battery capacity can be identified even if the battery is never fully charged or discharged. Finally, to further examine the effectiveness of the identified battery parameters, they are used to perform SoC estimation for the test vehicle with an adaptive extended Kalman filter (AEKF). The result shows good accuracy and reliability.
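    The idea behind the "OCV-Ah aging database", identifying capacity without a full charge or discharge, can be sketched as below. The linear OCV curve and the function name are illustrative assumptions; the paper's actual database and identification procedure are more elaborate.

```python
import numpy as np

# Hypothetical OCV reference curve from an "OCV-Ah aging database":
# state of charge (0..1) versus open-circuit voltage (V). The linear shape
# is a toy assumption for illustration only.
soc_grid = np.linspace(0.0, 1.0, 11)
ocv_ref = 3.0 + 1.2 * soc_grid

def capacity_from_partial_cycle(ocv_start, ocv_end, ah_throughput):
    """Sketch: capacity = Ah counted between two rest points divided by the
    SoC span read off the OCV reference curve, so a partial cycle suffices."""
    soc_start = np.interp(ocv_start, ocv_ref, soc_grid)
    soc_end = np.interp(ocv_end, ocv_ref, soc_grid)
    return ah_throughput / (soc_end - soc_start)
```

For example, 30 Ah counted between rest voltages spanning half the SoC range implies a 60 Ah capacity, with no full cycle required.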

  7. Free-space optics mode-wavelength division multiplexing system using LG modes based on decision feedback equalization

    NASA Astrophysics Data System (ADS)

    Amphawan, Angela; Ghazi, Alaan; Al-dawoodi, Aras

    2017-11-01

    A free-space optics mode-wavelength division multiplexing (MWDM) system using Laguerre-Gaussian (LG) modes is designed using decision feedback equalization for controlling mode coupling and combating inter-symbol interference, so as to increase channel diversity. In this paper, a data rate of 24 Gbps is achieved for an FSO MWDM channel 2.6 km in length using decision feedback equalization. Simulation results comparing performance before and after decision feedback equalization show significant improvement in eye diagrams and bit-error rates.

  8. Preliminary Structural Design Using Topology Optimization with a Comparison of Results from Gradient and Genetic Algorithm Methods

    NASA Technical Reports Server (NTRS)

    Burt, Adam O.; Tinker, Michael L.

    2014-01-01

    In this paper, genetic-algorithm-based and gradient-based topology optimization are presented in application to a real hardware design problem. Preliminary design of a planetary lander mockup structure is accomplished using these methods, which prove to provide major weight savings by addressing structural efficiency during the design cycle. This paper presents two alternative formulations of the topology optimization problem. The first is the widely used gradient-based implementation using commercially available algorithms. The second is formulated using genetic algorithms and internally developed capabilities. These two approaches are applied to a practical design problem for hardware that has been built, tested and proven to be functional. Both formulations converged on similar solutions and were therefore shown to be equally valid implementations of the process. This paper discusses both of these formulations at a high level.

  9. A time and frequency synchronization method for CO-OFDM based on CMA equalizers

    NASA Astrophysics Data System (ADS)

    Ren, Kaixuan; Li, Xiang; Huang, Tianye; Cheng, Zhuo; Chen, Bingwei; Wu, Xu; Fu, Songnian; Ping, Perry Shum

    2018-06-01

    In this paper, an efficient time and frequency synchronization method based on a new training symbol structure is proposed for polarization division multiplexing (PDM) coherent optical orthogonal frequency division multiplexing (CO-OFDM) systems. The coarse timing synchronization is achieved by exploiting the correlation property of the first training symbol, and the fine timing synchronization is accomplished by using the time-domain symmetric conjugate of the second training symbol. Furthermore, based on these training symbols, a constant modulus algorithm (CMA) is proposed for carrier frequency offset (CFO) estimation. Theoretical analysis and simulation results indicate that the algorithm has the advantages of robustness to poor optical signal-to-noise ratio (OSNR) and chromatic dispersion (CD). The frequency offset estimation range can achieve [-Nsc/2·ΔfN, +Nsc/2·ΔfN] GHz with the mean normalized estimation error below 12 × 10^-3 even under the condition of OSNR as low as 10 dB.
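    Coarse timing from the correlation property of a training symbol, as described above, can be sketched with a Schmidl-Cox-style sliding correlator over a symbol built from two identical halves. This is an assumed stand-in for the paper's correlator, whose exact training structure is not given in the abstract.

```python
import numpy as np

def coarse_timing(rx, half_len):
    """Sketch of coarse timing sync: slide a window over the received samples,
    correlate its two halves, and return the offset where they match best.
    The metric is normalized so it equals 1 only when the halves are identical."""
    n = len(rx) - 2 * half_len
    metric = np.empty(n)
    for d in range(n):
        a = rx[d:d + half_len]
        b = rx[d + half_len:d + 2 * half_len]
        num = abs(np.vdot(a, b)) ** 2                       # vdot conjugates a
        den = np.vdot(a, a).real * np.vdot(b, b).real + 1e-12
        metric[d] = num / den
    return int(np.argmax(metric))
```

At the true symbol start the two half-windows contain the same samples, so the normalized metric peaks there regardless of the channel gain.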

  10. Surface and allied studies in silicon solar cells

    NASA Technical Reports Server (NTRS)

    Lindholm, F. A.

    1984-01-01

    Measuring small-signal admittance versus frequency and forward bias voltage, together with a new transient measurement, apparently provides the most reliable and flexible method available for determining the back surface recombination velocity and low-injection lifetime of the quasineutral base region of silicon solar cells. The new transient measurement reported here is called short-circuit-current decay (SCCD). In this method, a forward voltage equal to about the open-circuit or the maximum-power voltage establishes excess holes and electrons in the junction transition region and in the quasineutral regions. The sudden application of a short circuit causes the excess holes and electrons in the transition region to exit within about ten picoseconds. From the slope and intercept of the subsequent current decay, the base lifetime and surface recombination velocity can be determined. The admittance measurement mentioned previously then serves to increase accuracy, particularly for devices in which the diffusion length exceeds the base thickness.
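    Extracting the slope and intercept of the current decay, as described above, amounts to a log-linear fit. The sketch below assumes an idealized single-exponential decay I(t) = I0·exp(-t/τ); relating τ and I0 back to base lifetime and surface recombination velocity requires the device model from the paper, which is not reproduced here.

```python
import numpy as np

def decay_fit(t, current):
    """Sketch of the SCCD readout: fit a line to ln(I) versus t and return
    the decay constant tau (from the slope) and the prefactor I0 (from the
    intercept), assuming a single-exponential decay."""
    slope, intercept = np.polyfit(t, np.log(current), 1)
    return -1.0 / slope, np.exp(intercept)
```

On clean exponential data the fit recovers the decay constant exactly; on measured data, the fit window is restricted to the post-transient portion of the decay.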

  11. Quality and Equality in Internet-Based Higher Education: Benchmarks for Success.

    ERIC Educational Resources Information Center

    Merisotis, Jamie P.

    The Institute for Higher Education Policy reviewed the research on quality and equality in Internet-based higher education and found a relative paucity of original research dedicated to explaining or predicting phenomena related to distance learning. The research that does exist has tended to emphasize student outcomes for individual courses,…

  12. An Examination of Alternative Poverty Measures for the Wisconsin Equalization Aid Formula.

    ERIC Educational Resources Information Center

    Cibulka, James G.

    1986-01-01

    Wisconsin's guaranteed tax base equalization formula has no direct adjustment for the additional costs of educating poverty level pupils. This paper establishes the need for an adjustment and examines three measures (based on varying poverty definitions) to determine which provides the most equitable funding formula for educating poor children. (9…

  13. Analysis and iterative equalization of transient and adiabatic chirp effects in DML-based OFDM transmission systems.

    PubMed

    Wei, Chia-Chien

    2012-11-05

    This work theoretically studies the transmission performance of a DML-based OFDM system by small-signal approximation, and the model considers both the transient and adiabatic chirps. The dispersion-induced distortion is modeled as subcarrier-to-subcarrier intermixing interference (SSII), and the theoretical SSII agrees with the distortion obtained from large-signal simulation statistically and deterministically. The analysis shows that the presence of the adiabatic chirp will ease power fading or even provide gain, but will increase the SSII to deteriorate OFDM signals after dispersive transmission. Furthermore, this work also proposes a novel iterative equalization to eliminate the SSII. From the simulation, the distortion could be effectively mitigated by the proposed equalization such that the maximum transmission distance of the DML-based OFDM signal is significantly improved. For instance, the transmission distance of a 30-Gbps DML-based OFDM signal can be extended from 10 km to more than 100 km. Besides, since the dispersion-induced distortion could be effectively mitigated by the equalization, negative power penalties are observed at some distances due to chirp-induced power gain.

  14. Developing a Tool for Increasing the Awareness about Gendered and Intersectional Processes in the Clinical Assessment of Patients – A Study of Pain Rehabilitation

    PubMed Central

    Hammarström, Anne; Wiklund, Maria; Stålnacke, Britt-Marie; Lehti, Arja; Haukenes, Inger; Fjellman-Wiklund, Anncristine

    2016-01-01

    Objective There is a need for tools addressing gender inequality in everyday clinical work in health care. The aim of our paper was to develop a tool for increasing awareness of gendered and intersectional processes in the clinical assessment of patients, based on a study of pain rehabilitation. Methods In the overarching project named "Equal care in rehabilitation" we used multiple methods (both quantitative and qualitative) in five sub-studies. With a novel approach, we used Grounded Theory to synthesize the results from our sub-studies in order to develop the gender equality tool. The tool described and developed in this article is thus based on results from sub-studies about the processes of assessment and selection of patients in pain rehabilitation. Inspired by some questions in earlier tools, we posed open-ended questions and inductively searched for findings and concepts relating to gendered and social selection processes in pain rehabilitation in each of our sub-studies. Through this process, the gender equality tool was developed as 15 questions about the process of assessing and selecting patients for pain rehabilitation. As a more comprehensive way of understanding the tool, we performed a final step of the GT analysis, synthesizing the results of the tool into a comprehensive model with two dimensions in relation to several possible axes of discrimination. Results The process of assessing and selecting patients was visualized as a funnel, a top-down process governed by gendered attitudes, rules and structures. We found that the clinicians judged the inner and outer characteristics and status of patients in a gendered and intersectional way in the process of clinical decision-making, which can thus be regarded as (potentially) biased with regard to gender, socio-economic status, ethnicity and age. Implications The tool can be included in the systematic routine of clinical assessment of patients, both for awareness raising and as a basis for avoiding gender bias in clinical decision-making. It could also be used in team education for health professionals as an instrument for critical reflection on gender bias. Conclusions Tools for clinical assessment can thus be developed from empirical studies in various clinical settings. However, such a micro-level approach must be understood from a broader societal perspective including gender relations at both the macro- and meso-levels. PMID:27055029

  15. Complexation of copper by aquatic humic substances from different environments

    USGS Publications Warehouse

    McKnight, Diane M.; Feder, Gerald L.; Thurman, E. Michael; Wershaw, Robert L.

    1983-01-01

    The copper-complexing properties of aquatic humic substances isolated from eighteen different environments were characterized by potentiometric titration, using a cupric ion selective electrode. Potentiometric data were analyzed using FITEQL, a computer program for the determination of chemical equilibrium constants from experimental data. All the aquatic humic substances could be modelled as having two types of Cu(II)-binding sites: one with K equal to about 10^6 and a concentration of 1.0 ± 0.4 × 10^-6 M (mg C)^-1, and another with K equal to about 10^8 and a concentration of 2.6 ± 1.6 × 10^-7 M (mg C)^-1. A method is described for estimating the Cu(II)-binding sites associated with dissolved humic substances in natural water based on a measurement of dissolved organic carbon, which may be helpful in evaluating chemical processes controlling the speciation of Cu and the bioavailability of Cu to aquatic organisms.
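    The DOC-based estimate described above follows directly from the two-site model: each milligram of dissolved organic carbon per litre contributes a fixed concentration of weak and strong binding sites. The sketch below uses the mean site densities from the abstract; the function name and interface are illustrative.

```python
# Mean site densities from the two-site model (per mg C per litre):
SITE_WEAK = 1.0e-6    # mol/L of weaker sites (K ~ 1e6) per (mg C)/L
SITE_STRONG = 2.6e-7  # mol/L of stronger sites (K ~ 1e8) per (mg C)/L

def binding_sites(doc_mg_per_l):
    """Estimate total Cu(II)-binding-site concentrations (M) in a natural
    water sample from its dissolved organic carbon measurement."""
    return SITE_WEAK * doc_mg_per_l, SITE_STRONG * doc_mg_per_l
```

For a water sample with 5 mg C/L, this gives about 5 × 10^-6 M weak sites and 1.3 × 10^-6 M strong sites, the quantities needed to evaluate Cu speciation.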

  16. Taiwanese consumer survey data for investigating the role of information on equivalence of organic standards in directing food choice.

    PubMed

    Yeh, Ching-Hua; Hartmann, Monika; Hirsch, Stefan

    2018-06-01

    The presentation of credence attributes such as a product's origin or production method has a significant influence on consumers' food purchase decisions. The dataset includes survey responses from a discrete choice experiment with 1309 food shoppers in Taiwan, using the example of sweet pepper. The survey was carried out in 2014 in the three largest Taiwanese cities. It evaluates the impact of providing information on the equality of organic standards on consumers' preferences. Equality of organic standards implies that, regardless of a product's country-of-origin (COO), organic certifications are based on the same production regulations and managerial processes. Respondents were randomly allocated to an information treatment group and a control group. The dataset contains the product choices of participants in both groups, as well as their sociodemographic information.

  17. A new constitutive analysis of hexagonal close-packed metal in equal channel angular pressing by crystal plasticity finite element method

    NASA Astrophysics Data System (ADS)

    Li, Hejie; Öchsner, Andreas; Yarlagadda, Prasad K. D. V.; Xiao, Yin; Furushima, Tsuyoshi; Wei, Dongbin; Jiang, Zhengyi; Manabe, Ken-ichi

    2018-01-01

    Most hexagonal close-packed (HCP) metals are lightweight. With the increasing application of light metal products, their production is attracting increasing attention from researchers worldwide. To obtain a better understanding of the deformation mechanism of HCP metals (especially Mg and its alloys), a new constitutive analysis was carried out based on previous research. In this study, combining the theories of strain gradient and continuum mechanics, the equal channel angular pressing process is analyzed and an HCP crystal plasticity constitutive model is developed, especially for Mg and its alloys. The influence of elevated temperature on the deformation mechanisms of the Mg alloy (slip and twinning) is newly introduced into the crystal plasticity constitutive model. The solution for the newly developed constitutive model is established on the basis of Lagrangian iterations and a Newton-Raphson simplification.

  18. Polarization holograms in a bifunctional amorphous polymer exhibiting equal values of photoinduced linear and circular birefringences.

    PubMed

    Provenzano, Clementina; Pagliusi, Pasquale; Cipparrone, Gabriella; Royes, Jorge; Piñol, Milagros; Oriol, Luis

    2014-10-09

    Light-controlled molecular alignment is a flexible and useful strategy introducing novelty in the fields of mechanics, self-organized structuring, mass transport, optics, and photonics and addressing the development of smart optical devices. Azobenzene-containing polymers are well-known photocontrollable materials with large and reversible photoinduced optical anisotropies. The vectorial holography applied to these materials enables peculiar optical devices whose properties strongly depend on the relative values of the photoinduced birefringences. Here is reported a polarization holographic recording based on the interference of two waves with orthogonal linear polarization on a bifunctional amorphous polymer that, exceptionally, exhibits equal values of linear and circular birefringence. The peculiar photoresponse of the material coupled with the holographic technique demonstrates an optical device capable of decomposing the light into a set of orthogonally polarized linear components. The holographic structures are theoretically described by the Jones matrices method and experimentally investigated.

  19. Doppler-shift estimation of flat underwater channel using data-aided least-square approach

    NASA Astrophysics Data System (ADS)

    Pan, Weiqiang; Liu, Ping; Chen, Fangjiong; Ji, Fei; Feng, Jing

    2015-06-01

    In this paper we propose a data-aided Doppler estimation method for underwater acoustic communication. The training sequence is non-dedicated; hence it can be designed for Doppler estimation as well as channel equalization. We assume the channel has been equalized and consider only a flat-fading channel. First, the theoretical received sequence is composed based on the training symbols. Next, the least-squares principle is applied to build the objective function, which minimizes the error between the composed and the actual received signal. An iterative approach is then applied to solve the least-squares problem. The proposed approach involves an outer loop and an inner loop, which resolve the channel gain and the Doppler coefficient, respectively. The theoretical performance bound, i.e. the Cramer-Rao Lower Bound (CRLB) of the estimation, is also derived. Computer simulation results show that the proposed algorithm achieves the CRLB in medium- to high-SNR cases.
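    The outer/inner structure described above can be sketched as a grid search: for each candidate Doppler coefficient the channel gain has a closed-form least-squares solution, and the candidate minimizing the residual is kept. Modeling the Doppler purely as a carrier frequency shift, and the grid-search outer loop itself, are simplifying assumptions in place of the paper's iterative scheme.

```python
import numpy as np

def estimate_doppler(received, t, training, doppler_grid, fc):
    """Sketch of data-aided LS Doppler estimation over a flat channel:
    for each candidate coefficient a (outer loop), compose the expected
    received training signal, solve the complex channel gain in closed
    form (inner step), and keep the candidate with the smallest residual."""
    best_a, best_err = None, np.inf
    for a in doppler_grid:
        # Doppler modeled as a carrier shift of a*fc on the training signal.
        model = training * np.exp(2j * np.pi * fc * a * t)
        # Closed-form LS gain: <model, received> / <model, model>.
        gain = np.vdot(model, received) / np.vdot(model, model)
        err = np.linalg.norm(received - gain * model) ** 2
        if err < best_err:
            best_a, best_err = a, err
    return best_a
```

In the paper the two unknowns are refined iteratively rather than by exhaustive search, but the residual being minimized is the same least-squares objective.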

  20. Two algorithms for neural-network design and training with application to channel equalization.

    PubMed

    Sweatman, C Z; Mulgrew, B; Gibson, G J

    1998-01-01

    We describe two algorithms for designing and training neural-network classifiers. The first, the linear programming slab algorithm (LPSA), is motivated by the problem of reconstructing digital signals corrupted by passage through a dispersive channel and by additive noise. It constructs a multilayer perceptron (MLP) to separate two disjoint sets by using linear programming methods to identify network parameters. The second, the perceptron learning slab algorithm (PLSA), avoids the computational costs of linear programming by using an error-correction approach to identify parameters. Both algorithms operate in highly constrained parameter spaces and are able to exploit symmetry in the classification problem. Using these algorithms, we develop a number of procedures for the adaptive equalization of a complex linear 4-quadrature amplitude modulation (QAM) channel, and compare their performance in a simulation study. Results are given for both stationary and time-varying channels, the latter based on the COST 207 GSM propagation model.

  1. [Participatory research : Meaning, concept, objectives and methods].

    PubMed

    Brütt, Anna Levke; Buschmann-Steinhage, Rolf; Kirschning, Silke; Wegscheider, Karl

    2016-09-01

    Shaping one's own life and feeling equal in society is an essential aspect of participation. Based on the UN Convention on the Rights of Persons with Disabilities, the Social Security Code IX and the International Classification of Functioning, Disability and Health (ICF), participation is relevant for the German health system. The cross-sectional discipline of participation research investigates conditions for self-determined and equal participation in society. Research results can reinforce and promote the participation of humans with disabilities. Participation research uses established quantitative and qualitative approaches. Moreover, participatory research is a relevant approach that demands involving persons with disabilities in decisions in the research process. In the future, it will be important to concentrate findings and to connect researchers. The participation research action alliance (Aktionsbündnis Teilhabeforschung), which was established in 2015, aims to make funding accessible as well as strengthen and profile participation research.

  2. Gender discrimination and nursing: α literature review.

    PubMed

    Kouta, Christiana; Kaite, Charis P

    2011-01-01

    This article aims to examine gender stereotypes in relation to men in nursing, discuss cases of gender discrimination in nursing, and explore methods used for promoting equal educational opportunities during nursing studies. The literature review was based on related databases, such as CINAHL, Science Direct, MEDLINE, and EBSCO. Legal case studies are included in order to provide a more practical example of the barriers facing men pursuing nursing, as well as statistical data concerning gender discrimination and male attrition from nursing schools in relation to those barriers. These strengthen the validity of the manuscript. The literature review showed that gender discrimination is still prevalent within the nursing profession. Nursing faculty should prepare male nursing students to interact effectively with female clients as well. Role modeling the therapeutic relationship with clients is one strategy that may help male students. In general, faculty should provide equal learning opportunities to all nursing students. Copyright © 2011 Elsevier Inc. All rights reserved.

  3. An Examination of Selected Geomagnetic Indices in Relation to the Sunspot Cycle

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.; Hathaway, David H.

    2006-01-01

    Previous studies have shown geomagnetic indices to be useful for providing early estimates of the size of the following sunspot cycle several years in advance. Examined in this study are various precursor methods for predicting the minimum and maximum amplitude of the following sunspot cycle, these precursors being based on the aa and Ap geomagnetic indices and the number of disturbed days (NDD), i.e., days when the daily Ap index equaled or exceeded 25. Also examined are the yearly peak of the daily Ap index (Apmax), the number of days when Ap was greater than or equal to 100, cyclic averages of sunspot number R, aa, Ap, NDD, and the number of sudden storm commencements (NSSC), as well as the cyclic sums of NDD and NSSC. The analysis yields 90-percent prediction intervals for both the minimum and maximum amplitudes for cycle 24, the next sunspot cycle. In terms of yearly averages, the best regressions give Rmin = 9.8+/-2.9 and Rmax = 153.8+/-24.7, equivalent to Rm = 8.8+/-2.8 and RM = 159+/-5.5, based on the 12-mo moving average (or smoothed monthly mean sunspot number). Hence, cycle 24 is expected to be above average in size, similar to cycles 21 and 22, producing more than 300 sudden storm commencements and more than 560 disturbed days, of which about 25 will have Ap greater than or equal to 100. On the basis of annual averages, the sunspot minimum year for cycle 24 will be either 2006 or 2007.

  4. Closed-form solution for static pull-in voltage of electrostatically actuated clamped-clamped micro/nano beams under the effect of fringing field and van der Waals force

    NASA Astrophysics Data System (ADS)

    Bhojawala, V. M.; Vakharia, D. P.

    2017-12-01

    This investigation provides an accurate prediction of static pull-in voltage for clamped-clamped micro/nano beams based on a distributed model. The Euler-Bernoulli beam theory is used, incorporating geometric non-linearity of the beam, internal (residual) stress, van der Waals force, distributed electrostatic force, and fringing field effects in deriving the governing differential equation. The Galerkin discretisation method is used to build a reduced-order model of the governing differential equation. A regime plot is presented in the current work for determining the number of modes required in the reduced-order model to obtain a fully converged pull-in voltage for micro/nano beams. A closed-form relation is developed based on the relationship obtained from curve fitting of pull-in instability plots and subsequent non-linear regression for the proposed relation. The output of the regression analysis gives a Chi-square (χ²) tolerance value equal to 1 × 10⁻⁹, an adjusted R-square value equal to 0.99929, and a P-value equal to zero; these statistical parameters indicate the convergence of the non-linear fit, the accuracy of the fitted data, and the significance of the proposed model, respectively. The closed-form equation is validated using available experimental and numerical data. The relative maximum error of 4.08% in comparison to several available experimental and numerical data sets proves the reliability of the proposed closed-form equation.

  5. Gender Inequality in the Couple Relationship and Leisure-Based Physical Exercise

    PubMed Central

    Annandale, Ellen; Hammarström, Anne

    2015-01-01

    Aims To analyse whether gender inequality in the couple relationship was related to leisure-based physical activity, after controlling for earlier physical activity and confounders. Methods Data were drawn from the Northern Swedish Cohort of all pupils in their final year of compulsory schooling in a town in the North of Sweden. The sample consisted of 772 respondents (n = 381 men, n = 391 women) in the 26-year follow-up (in 2007, aged 42) who were either married or cohabiting. Ordinal regression, for men and women separately, was used to assess the association between gender inequality (measured as self-perceived equality in the couple relationship using dummy variables) and a measure of exercise frequency, controlling for prior exercise frequency, socioeconomic status, the presence of children in the home, and longer than usual hours in paid work. Results The perception of greater gender equality in the couple relationship was associated with higher levels of physical activity for both men and women. This remained significant when the other variables were controlled for. Amongst men the confidence intervals were wide. Conclusions The results point to the potential of perceived gender equality in the couple relationship to counteract the general time poverty and household burden that often arises from the combination of paid work and responsibility for children and the home, especially for women. The wide confidence intervals among men indicate the need for more research within the field with larger samples. PMID:26196280

  6. Viability PCR, a Culture-Independent Method for Rapid and Selective Quantification of Viable Legionella pneumophila Cells in Environmental Water Samples▿

    PubMed Central

    Delgado-Viscogliosi, Pilar; Solignac, Lydie; Delattre, Jean-Marie

    2009-01-01

    PCR-based methods have been developed to rapidly screen for Legionella pneumophila in water as an alternative to time-consuming culture techniques. However, these methods fail to discriminate between live and dead bacteria. Here, we report a viability assay (viability PCR [v-PCR]) for L. pneumophila that combines ethidium monoazide bromide with quantitative real-time PCR (qPCR). The ability of v-PCR to differentiate viable from nonviable L. pneumophila cells was confirmed with permeabilizing agents, toluene, or isopropanol. v-PCR suppressed more than 99.9% of the L. pneumophila PCR signal in nonviable cultures and was able to discriminate viable cells in mixed samples. A wide range of physiological states, from culturable to dead cells, was observed with 64 domestic hot-water samples after simultaneous quantification of L. pneumophila cells by v-PCR, conventional qPCR, and culture methods. v-PCR counts were equal to or higher than those obtained by culture and lower than or equal to conventional qPCR counts. v-PCR was used to successfully monitor in vitro the disinfection efficacy of heating to 70°C and glutaraldehyde and chlorine curative treatments. The v-PCR method appears to be a promising and rapid technique for enumerating L. pneumophila bacteria in water and, in comparison with conventional qPCR techniques used to monitor Legionella, has the advantage of selectively amplifying only viable cells. PMID:19363080

  7. On the stability of projection methods for the incompressible Navier-Stokes equations based on high-order discontinuous Galerkin discretizations

    NASA Astrophysics Data System (ADS)

    Fehn, Niklas; Wall, Wolfgang A.; Kronbichler, Martin

    2017-12-01

    The present paper deals with the numerical solution of the incompressible Navier-Stokes equations using high-order discontinuous Galerkin (DG) methods for discretization in space. For DG methods applied to the dual splitting projection method, instabilities have recently been reported that occur for small time step sizes. Since the critical time step size depends on the viscosity and the spatial resolution, these instabilities limit the robustness of the Navier-Stokes solver in the case of complex engineering applications characterized by coarse spatial resolutions and small viscosities. By means of numerical investigation we give evidence that these instabilities are related to the discontinuous Galerkin formulation of the velocity divergence term and the pressure gradient term that couple velocity and pressure. Integration by parts of these terms with a suitable definition of boundary conditions is required in order to obtain a stable and robust method. Since the intermediate velocity field does not fulfill the boundary conditions prescribed for the velocity, a consistent boundary condition is derived from the convective step of the dual splitting scheme to ensure high-order accuracy with respect to the temporal discretization. This new formulation is stable in the limit of small time steps for both equal-order and mixed-order polynomial approximations. Although the dual splitting scheme itself includes inf-sup stabilizing contributions, we demonstrate that spurious pressure oscillations appear for equal-order polynomials and small time steps, highlighting the necessity to consider inf-sup stability explicitly.

  8. Fast mass spectrometry-based enantiomeric excess determination of proteinogenic amino acids.

    PubMed

    Fleischer, Heidi; Thurow, Kerstin

    2013-03-01

    A rapid determination of the enantiomeric excess of proteinogenic amino acids is of great importance in various fields of chemical and biologic research and industries. Owing to their different biologic effects, enantiomers are interesting research subjects in drug development for the design of new and more efficient pharmaceuticals. Usually, the enantiomeric composition of amino acids is determined by conventional analytical methods such as liquid or gas chromatography or capillary electrophoresis. These analytical techniques do not fulfill the requirements of high-throughput screening due to their relatively long analysis times. The method presented allows a fast analysis of chiral amino acids without a previous, time-consuming chromatographic separation. The analytical measurements are based on parallel kinetic resolution with pseudoenantiomeric mass-tagged auxiliaries and were carried out by mass spectrometry with electrospray ionization. All 19 chiral proteinogenic amino acids were tested, and Pro, Ser, Trp, His, and Glu were selected as model substrates for verification measurements. The enantiomeric excesses of amino acids with non-polar and aliphatic side chains, as well as Trp and Phe (aromatic side chains), were determined with maximum deviations from the expected value of less than or equal to 10% ee. Ser, Cys, His, Glu, and Asp were determined with deviations of less than or equal to 14% ee, and the enantiomeric excess of Tyr was calculated with 17% ee deviation. The total screening process is fully automated, from the sample pretreatment to the data processing. The method presented enables fast measurement times of about 1.38 min per sample and is applicable in the scope of high-throughput screenings.

  9. Histogram-based adaptive gray level scaling for texture feature classification of colorectal polyps

    NASA Astrophysics Data System (ADS)

    Pomeroy, Marc; Lu, Hongbing; Pickhardt, Perry J.; Liang, Zhengrong

    2018-02-01

    Texture features have played an ever increasing role in computer aided detection (CADe) and diagnosis (CADx) methods since their inception. Texture features are often used as a method of false positive reduction for CADe packages, especially for detecting colorectal polyps and distinguishing them from falsely tagged residual stool and healthy colon wall folds. While texture features have shown great success there, the performance of texture features for CADx has lagged behind, primarily because of the more similar features among different polyp types. In this paper, we present an adaptive gray level scaling and compare it to the conventional equal-spacing of gray level bins. We use a dataset taken from computed tomography colonography patients, with 392 polyp regions of interest (ROIs) identified, each with a diagnosis confirmed through pathology. Using the histogram information from the entire ROI dataset, we generate the gray level bins such that each bin contains roughly the same number of voxels. Each image ROI is then scaled down to two different numbers of gray levels, using both an equal spacing of Hounsfield units for each bin and our adaptive method. We compute a set of texture features from the scaled images, including 30 gray level co-occurrence matrix (GLCM) features and 11 gray level run length matrix (GLRLM) features. Using a random forest classifier to distinguish between hyperplastic polyps and all others (adenomas and adenocarcinomas), we find that the adaptive gray level scaling can improve performance, based on the area under the receiver operating characteristic curve, by up to 4.6%.
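    The adaptive scaling described above, bin edges chosen so each gray level holds roughly the same number of voxels, amounts to quantile-based binning. The sketch below is a hedged illustration on synthetic, skewed intensities, not the authors' implementation; the 8-level setting is an arbitrary choice.

```python
import numpy as np

def equal_width_bins(volume, n_levels):
    """Conventional scaling: each bin spans an equal intensity range."""
    edges = np.linspace(volume.min(), volume.max(), n_levels + 1)
    return np.clip(np.digitize(volume, edges[1:-1]), 0, n_levels - 1)

def adaptive_bins(volume, n_levels):
    """Adaptive scaling: bin edges are histogram quantiles, so every
    gray level ends up holding roughly the same number of voxels."""
    qs = np.linspace(0.0, 1.0, n_levels + 1)[1:-1]
    return np.digitize(volume, np.quantile(volume, qs))

rng = np.random.default_rng(0)
vol = rng.exponential(scale=50.0, size=10000)  # skewed, CT-like intensities
adaptive_counts = np.bincount(adaptive_bins(vol, 8), minlength=8)
width_counts = np.bincount(equal_width_bins(vol, 8), minlength=8)
```

    For skewed distributions the equal-width bins pile most voxels into a few gray levels, while the quantile-based bins stay balanced, which is what gives the downstream GLCM/GLRLM features more to discriminate on.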

  10. Method for control of NOx emission from combustors using fuel dilution

    DOEpatents

    Schefer, Robert W [Alamo, CA; Keller, Jay O [Oakland, CA

    2007-01-16

    A method of controlling NOx emission from combustors. The method involves the controlled addition of a diluent such as nitrogen or water vapor, to a base fuel to reduce the flame temperature, thereby reducing NOx production. At the same time, a gas capable of enhancing flame stability and improving low temperature combustion characteristics, such as hydrogen, is added to the fuel mixture. The base fuel can be natural gas for use in industrial and power generation gas turbines and other burners. However, the method described herein is equally applicable to other common fuels such as coal gas, biomass-derived fuels and other common hydrocarbon fuels. The unique combustion characteristics associated with the use of hydrogen, particularly faster flame speed, higher reaction rates, and increased resistance to fluid-mechanical strain, alter the burner combustion characteristics sufficiently to allow operation at the desired lower temperature conditions resulting from diluent addition, without the onset of unstable combustion that can arise at lower combustor operating temperatures.

  11. Decision feedback equalizer for holographic data storage.

    PubMed

    Kim, Kyuhwan; Kim, Seung Hun; Koo, Gyogwon; Seo, Min Seok; Kim, Sang Woo

    2018-05-20

    Holographic data storage (HDS) has attracted much attention as a next-generation storage medium. Because HDS suffers from two-dimensional (2D) inter-symbol interference (ISI), the partial-response maximum-likelihood (PRML) method has been studied to reduce 2D ISI. However, the PRML method has various drawbacks. To solve the problems, we propose a modified decision feedback equalizer (DFE) for HDS. To prevent the error propagation problem, which is a typical problem in DFEs, we also propose a reliability factor for HDS. Various simulations were executed to analyze the performance of the proposed methods. The proposed methods showed fast processing speed after training, superior bit error rate performance, and consistency.
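    The abstract does not spell out the modified DFE or the reliability factor, but the core decision-feedback idea can be shown in a minimal one-dimensional sketch: subtract the ISI contributed by past decisions before slicing. The two-tap channel h = [1, 0.5] and the BPSK alphabet are assumptions for illustration only.

```python
import numpy as np

# Assumed two-tap ISI channel: r[n] = s[n] + 0.5*s[n-1], BPSK symbols.
h_post = 0.5
rng = np.random.default_rng(1)
symbols = rng.choice([-1.0, 1.0], size=200)
received = symbols + h_post * np.concatenate(([0.0], symbols[:-1]))

# DFE principle: cancel the ISI contributed by the previous *decision*
# before slicing; with correct decisions the postcursor vanishes exactly.
equalized = np.empty_like(received)
decisions = np.empty_like(received)
prev_decision = 0.0
for n, r in enumerate(received):
    y = r - h_post * prev_decision      # feedback cancellation
    equalized[n] = y
    prev_decision = 1.0 if y >= 0 else -1.0
    decisions[n] = prev_decision

symbol_errors = int(np.sum(decisions != symbols))
```

    The sketch also makes the error-propagation risk visible: a single wrong decision would feed a wrong cancellation term into the next symbol, which is exactly what the paper's reliability factor is designed to guard against.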

  12. Adaptively combined FIR and functional link artificial neural network equalizer for nonlinear communication channel.

    PubMed

    Zhao, Haiquan; Zhang, Jiashu

    2009-04-01

    This paper proposes a novel, computationally efficient adaptive nonlinear equalizer based on a combination of a finite impulse response (FIR) filter and a functional link artificial neural network (CFFLANN) to compensate for linear and nonlinear distortions in a nonlinear communication channel. This convex nonlinear combination improves the convergence speed while retaining the lower steady-state error. In addition, since the CFFLANN does not need the hidden layers that exist in conventional neural-network-based equalizers, it exhibits a simpler structure than traditional neural networks (NNs) and requires less computational burden during the training mode. Moreover, an appropriate adaptation algorithm for the proposed equalizer is derived from the modified least mean square (MLMS) algorithm. Results obtained from the simulations clearly show that the proposed equalizer using the MLMS algorithm can effectively eliminate linear and nonlinear distortions of varying intensity and provides better anti-jamming performance. Furthermore, comparisons of the mean squared error (MSE), the bit error rate (BER), and the effect of the eigenvalue ratio (EVR) of the input correlation matrix are presented.
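    The FIR/FLANN combination can be pictured as two filter branches mixed convexly. The sketch below shows a typical trigonometric functional-link expansion and the convex mix; in the paper both weight vectors and the mixing parameter are adapted by the MLMS algorithm, whereas here they are fixed, assumed values purely for illustration.

```python
import numpy as np

def functional_link(x):
    """Trigonometric functional-link expansion of one input sample:
    the FLANN replaces hidden layers with these fixed nonlinear features."""
    return np.array([x, np.sin(np.pi * x), np.cos(np.pi * x),
                     np.sin(2 * np.pi * x), np.cos(2 * np.pi * x)])

def combined_output(x_taps, w_fir, w_flann, lam):
    """Convex combination of a linear FIR branch and a FLANN branch;
    0 <= lam <= 1 is itself adapted in the full algorithm."""
    y_lin = w_fir @ x_taps                       # linear branch
    y_nl = w_flann @ functional_link(x_taps[0])  # nonlinear branch
    return lam * y_lin + (1.0 - lam) * y_nl

# Illustrative fixed weights (assumed, not trained values)
x_taps = np.array([0.5, -0.2])
w_fir = np.array([1.0, 2.0])
w_flann = np.ones(5)
y_linear = combined_output(x_taps, w_fir, w_flann, 1.0)  # pure FIR branch
y_flann = combined_output(x_taps, w_fir, w_flann, 0.0)   # pure FLANN branch
y_mixed = combined_output(x_taps, w_fir, w_flann, 0.5)
```

    The convexity is what lets the combination track whichever branch is currently better: when the channel is nearly linear the mix leans on the FIR branch, and as nonlinearity grows it shifts weight to the functional-link branch.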

  13. Performance of DBS-Radio using concatenated coding and equalization

    NASA Technical Reports Server (NTRS)

    Gevargiz, J.; Bell, D.; Truong, L.; Vaisnys, A.; Suwitra, K.; Henson, P.

    1995-01-01

    The Direct Broadcast Satellite-Radio (DBS-R) receiver is being developed for operation in a multipath Rayleigh channel. This receiver uses equalization and concatenated coding, in addition to open loop and closed loop architectures for carrier demodulation and symbol synchronization. Performance test results of this receiver are presented in both AWGN and multipath Rayleigh channels. Simulation results show that the performance of the receiver operating in a multipath Rayleigh channel is significantly improved by using equalization. These results show that fractional-symbol equalization offers a performance advantage over full symbol equalization. Also presented is the base-line performance of the DBS-R receiver using concatenated coding and interleaving.

  14. Stochastic HKMDHE: A multi-objective contrast enhancement algorithm

    NASA Astrophysics Data System (ADS)

    Pratiher, Sawon; Mukhopadhyay, Sabyasachi; Maity, Srideep; Pradhan, Asima; Ghosh, Nirmalya; Panigrahi, Prasanta K.

    2018-02-01

    This contribution proposes a novel extension of the existing `Hyper Kurtosis based Modified Duo-Histogram Equalization' (HKMDHE) algorithm, for multi-objective contrast enhancement of biomedical images. A novel modified objective function has been formulated by joint optimization of the individual histogram equalization objectives. The optimal adequacy of the proposed methodology with respect to image quality metrics such as brightness preserving abilities, peak signal-to-noise ratio (PSNR), Structural Similarity Index (SSIM) and universal image quality metric has been experimentally validated. The performance analysis of the proposed Stochastic HKMDHE with existing histogram equalization methodologies like Global Histogram Equalization (GHE) and Contrast Limited Adaptive Histogram Equalization (CLAHE) has been given for comparative evaluation.
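    As a baseline for the comparisons above, classic global histogram equalization (GHE) maps each gray level through the normalized cumulative histogram; a minimal sketch on a synthetic low-contrast image:

```python
import numpy as np

def global_hist_eq(img, levels=256):
    """Classic GHE: map each gray level through the normalized CDF.
    (Assumes a non-constant image, so the CDF has nonzero spread.)"""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = np.cumsum(hist).astype(float)
    cdf_min = cdf[cdf > 0].min()
    norm = np.clip((cdf - cdf_min) / (cdf[-1] - cdf_min), 0.0, 1.0)
    lut = np.round(norm * (levels - 1)).astype(np.uint8)
    return lut[img]

rng = np.random.default_rng(2)
low_contrast = rng.integers(100, 140, size=(64, 64)).astype(np.uint8)
equalized = global_hist_eq(low_contrast)
```

    GHE stretches the occupied levels across the full range, which is exactly the brightness-shifting behavior that segment-wise variants such as HKMDHE and CLAHE are designed to temper.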

  15. A natural-color mapping for single-band night-time image based on FPGA

    NASA Astrophysics Data System (ADS)

    Wang, Yilun; Qian, Yunsheng

    2018-01-01

    An FPGA-based natural-color mapping method for single-band night-time images transfers the color of a reference image to the night-time image, producing results that are consistent with human visual habits and can help observers identify targets. This paper introduces the processing of the natural-color mapping algorithm based on FPGA. Firstly, the image is transformed by histogram equalization, and the intensity features and standard deviation features of the reference image are stored in SRAM. Then, the intensity features and standard deviation features of the real-time digital images are calculated by the FPGA. Finally, the FPGA completes the color mapping by matching pixels between images using the features in the luminance channel.
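    Statistics-based color transfer of this kind matches the mean and standard deviation of the night-time image to those of the reference in the luminance channel. A minimal software sketch of that matching step (the actual design computes these features on the FPGA; the array values here are assumed):

```python
import numpy as np

def match_mean_std(night, reference):
    """Shift and scale the night-time image so its luminance statistics
    match those of the colored reference image."""
    n_mu, n_sigma = night.mean(), night.std()
    r_mu, r_sigma = reference.mean(), reference.std()
    return (night - n_mu) / n_sigma * r_sigma + r_mu

rng = np.random.default_rng(3)
night = rng.normal(30.0, 5.0, size=(32, 32))       # dim single-band image
ref_luma = rng.normal(120.0, 40.0, size=(32, 32))  # reference luminance
mapped = match_mean_std(night, ref_luma)
```

    After the transform, the mapped image's mean and standard deviation equal the reference statistics exactly, which is what makes the subsequent per-pixel color lookup stable.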

  16. Adaptive frequency-domain equalization for the transmission of the fundamental mode in a few-mode fiber.

    PubMed

    Bai, Neng; Xia, Cen; Li, Guifang

    2012-10-08

    We propose and experimentally demonstrate single-carrier adaptive frequency-domain equalization (SC-FDE) to mitigate multipath interference (MPI) for the transmission of the fundamental mode in a few-mode fiber. The FDE approach reduces computational complexity significantly compared to the time-domain equalization (TDE) approach while maintaining the same performance. Both FDE and TDE methods are evaluated by simulating long-haul fundamental-mode transmission using a few-mode fiber. For the fundamental mode operation, the required tap length of the equalizer depends on the differential mode group delay (DMGD) of a single span rather than DMGD of the entire link.
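    The per-bin simplicity that makes FDE cheaper than TDE is visible in a toy sketch: with a cyclic (circular-convolution) channel model, equalization is one complex division per FFT bin. The three-tap channel below is an assumed example, and a zero-forcing tap stands in for the adaptive update used in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 64
h = np.array([1.0, 0.4, 0.2])        # assumed multipath impulse response
s = rng.choice([-1.0, 1.0], size=N)  # one block of BPSK symbols

# A cyclic prefix turns linear convolution into circular convolution,
# so the channel acts as a pointwise product in the frequency domain.
H = np.fft.fft(h, N)
r = np.fft.ifft(np.fft.fft(s) * H)

# Frequency-domain equalization: one complex tap per FFT bin
# (zero-forcing here; the adaptive version updates these taps online).
s_hat = np.fft.ifft(np.fft.fft(r) / H).real
```

    The block length N, not the channel memory, sets the FFT cost, which is why the required equalizer length scales with the DMGD of a single span rather than of the whole link.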

  17. Perspective: Quantum mechanical methods in biochemistry and biophysics.

    PubMed

    Cui, Qiang

    2016-10-14

    In this perspective article, I discuss several research topics relevant to quantum mechanical (QM) methods in biophysical and biochemical applications. Due to the immense complexity of biological problems, the key is to develop methods that are able to strike the proper balance of computational efficiency and accuracy for the problem of interest. Therefore, in addition to the development of novel ab initio and density functional theory based QM methods for the study of reactive events that involve complex motifs such as transition metal clusters in metalloenzymes, it is equally important to develop inexpensive QM methods and advanced classical or quantal force fields to describe different physicochemical properties of biomolecules and their behaviors in complex environments. Maintaining a solid connection of these more approximate methods with rigorous QM methods is essential to their transferability and robustness. Comparison to diverse experimental observables helps validate computational models and mechanistic hypotheses as well as driving further development of computational methodologies.

  18. All-fiber dynamic gain equalizer based on a twisted long-period grating written by high-frequency CO2 laser pulses.

    PubMed

    Zhu, T; Rao, Y J; Wang, J L

    2007-01-20

    A novel dynamic gain equalizer for flattening Er-doped fiber amplifiers based on a twisted long-period fiber grating (LPFG) induced by high-frequency CO2 laser pulses is reported for the first time to our knowledge. Experimental results show that its transverse-load sensitivity is up to 0.34 dB/(g·mm⁻¹) when the twist ratio of the twisted LPFG is approximately 20 rad/m, which is 7 times higher than that of a torsion-free LPFG. In addition, it is found that the strong orientation dependence of the transverse-load sensitivity of the torsion-free LPFG reported previously is weakened considerably. Therefore, such a dynamic gain equalizer based on the unique transverse-load characteristics of the twisted LPFG provides a much larger adjustable range and makes packaging of the gain equalizer much easier. A demonstration has been carried out to flatten an Er-doped fiber amplifier to +/-0.5 dB over a 32 nm bandwidth.

  19. Autoclave decomposition method for metals in soils and sediments.

    PubMed

    Navarrete-López, M; Jonathan, M P; Rodríguez-Espinosa, P F; Salgado-Galeana, J A

    2012-04-01

    Leaching of partially leached metals (Fe, Mn, Cd, Co, Cu, Ni, Pb, and Zn) was done using an autoclave technique modified from the EPA 3051A digestion technique. The autoclave method was developed as an alternative to the regular digestion procedure; it passed the safety norms for partial extraction of metals in polytetrafluoroethylene (PFA) vessels at a low constant temperature (119.5° ± 1.5°C), and the recovery of elements was also precise. The autoclave method was validated using two Standard Reference Materials (SRMs: Loam Soil B and Loam Soil D), and the recoveries were comparable to those of the traditionally established digestion methods. The autoclave method was applied to samples from different natural environments (beach, mangrove, river, and city soil) to reproduce the recovery of elements during subsequent analysis.

  20. Teaching the Economics of Equal Opportunities.

    ERIC Educational Resources Information Center

    Ownby, Arnola C.; Rhea, Jeanine N.

    1990-01-01

    Focuses on equal opportunities--for education, pay, and with gender bias for individuals and business organizations. Suggests that business educators can expand the implications to include ethnic-based inequalities as well. (JOW)

  1. Method for the simultaneous preparation of Radon-211, Xenon-125, Xenon-123, Astatine-211, Iodine-125 and Iodine-123

    DOEpatents

    Mirzadeh, Saed; Lambrecht, Richard M.

    1987-01-01

    A method for simultaneously preparing Radon-211, Astatine-211, Xenon-125, Xenon-123, Iodine-125, and Iodine-123 in a process that includes irradiating a fertile metal material and then using a one-step chemical procedure to collect a first mixture of about equal amounts of Radon-211 and Xenon-125, and a separate second mixture of about equal amounts of Iodine-123 and Astatine-211.

  2. An Overview of Internal and External Fixation Methods for the Diabetic Charcot Foot and Ankle.

    PubMed

    Ramanujam, Crystal L; Zgonis, Thomas

    2017-01-01

    Diabetic Charcot neuroarthropathy (DCN) of the foot and ankle is a challenging disease with regard to clinical presentation, pathogenesis, and prognosis. Its surgical management is equally difficult to interpret based on the wide array of options available. In the presence of an ulceration or concomitant osteomyelitis, internal fixation by means of screws, plates, or intramedullary nailing needs to be avoided when feasible. External fixation becomes a great surgical tool when managing DCN with concomitant osteomyelitis. This article describes internal and external fixation methods along with available literature to enlighten surgeons faced with treating this complex condition. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. ART Or AGR: Deciphering Which Reserve Program is Best Suited for Today’s Total Force Structure

    DTIC Science & Technology

    2016-02-01

    opportunities, it also should address equal treatment based on not only race, gender, ethnicity, and sexual orientation, but also the employee's status...Equality and Standardization...commanders and Airmen to be able to seamlessly work, manage, and be treated equally, in order to accomplish the mission. This paper analyzed the AFR full

  4. Teachers Negotiating Discourses of Gender (In) Equality: The Case of Equal Opportunities Reform in Andalusia

    ERIC Educational Resources Information Center

    Cubero, Mercedes; Santamaría, Andrés; Rebollo, Mª Ángeles; Cubero, Rosario; García, Rafael; Vega, Luisa

    2015-01-01

    This article is focused on the analysis of the narratives produced by a group of teachers, experts in coeducation, while they were discussing their everyday activities. They are responsible for the implementation of a Plan for Gender Equality in public secondary schools in Andalusia (Spain). This study is based on contributions about doing gender…

  5. Properties of an adaptive feedback equalization algorithm.

    PubMed

    Engebretson, A M; French-St George, M

    1993-01-01

    This paper describes a new approach to feedback equalization for hearing aids. The method involves the use of an adaptive algorithm that estimates and tracks the characteristic of the hearing aid feedback path. The algorithm is described and the results of simulation studies and bench testing are presented.
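    The central idea, adaptively estimating the feedback path, is in essence FIR system identification. Below is a minimal LMS sketch under assumed conditions (a known three-tap path, a white probe signal, no measurement noise); it is not the authors' hearing-aid algorithm, only the underlying principle.

```python
import numpy as np

rng = np.random.default_rng(5)
true_path = np.array([0.5, -0.3, 0.1])  # assumed feedback-path FIR taps
n_taps = len(true_path)
w = np.zeros(n_taps)                    # adaptive estimate of the path
mu = 0.05                               # LMS step size

x = rng.normal(size=4000)               # probe / receiver output signal
for n in range(n_taps - 1, len(x)):
    frame = x[n - n_taps + 1:n + 1][::-1]  # newest sample first
    d = true_path @ frame                  # signal returned via feedback
    e = d - w @ frame                      # estimation error
    w += mu * e * frame                    # LMS update
```

    Once the estimate converges, the hearing aid can subtract the predicted feedback signal from its input, which is what allows higher stable gain before whistling begins.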

  6. Economic Analysis of Equal Educational Opportunity Programs.

    ERIC Educational Resources Information Center

    Mela, Ken

    1997-01-01

    Presents methods for assessing the impact and economic viability of federal equal-educational-opportunity programs, particularly in higher education. Techniques for gathering needed data and analyzing them are offered in the context of a hypothetical community college Veterans Upward Bound (VUB) program and two real VUB programs. (MSE)

  7. Using a Euclid distance discriminant method to find protein coding genes in the yeast genome.

    PubMed

    Zhang, Chun-Ting; Wang, Ju; Zhang, Ren

    2002-02-01

    The Euclid distance discriminant method is used to find protein coding genes in the yeast genome, based on the single nucleotide frequencies at the three codon positions in the ORFs. The method is extremely simple and may be extended to find genes in prokaryotic genomes or in eukaryotic genomes with fewer introns. Six-fold cross-validation tests have demonstrated that the accuracy of the algorithm is better than 93%. Based on this, it is found that the total number of protein coding genes in the yeast genome is less than or equal to 5579, about 3.8-7.0% less than the currently widely accepted figure of 5800-6000. The base compositions at the three codon positions are analyzed in detail using a graphic method. The result shows that the preferred codons adopted by yeast genes are of the RGW type, where R, G and W indicate purine, non-G and A/T bases respectively, whereas the 'codons' in the intergenic sequences are of the form NNN, where N denotes any base. This fact constitutes the basis of the algorithm for distinguishing between coding and non-coding ORFs in the yeast genome. The names of putative non-coding ORFs are listed here in detail.
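    The discriminant itself is simple enough to sketch: build the 12-dimensional vector of A/C/G/T frequencies at the three codon positions and assign the ORF to the nearer class centroid. The centroid values below are illustrative assumptions (loosely following the RGW preference described above), not the trained centroids from the paper.

```python
import numpy as np

def position_frequencies(orf):
    """12-dim feature: frequencies of A, C, G, T at each codon position."""
    feats = []
    for pos in range(3):
        column = orf[pos::3]
        feats += [column.count(base) / len(column) for base in "ACGT"]
    return np.array(feats)

def classify(orf, coding_centroid, noncoding_centroid):
    """Euclid distance discriminant: assign the ORF to the nearer centroid."""
    f = position_frequencies(orf)
    if np.linalg.norm(f - coding_centroid) < np.linalg.norm(f - noncoding_centroid):
        return "coding"
    return "noncoding"

# Illustrative centroids (assumed, shaped by the RGW codon preference:
# purine at position 1, non-G at position 2, A/T at position 3).
coding_centroid = np.array([0.35, 0.10, 0.35, 0.20,   # position 1: A C G T
                            0.30, 0.30, 0.10, 0.30,   # position 2
                            0.40, 0.10, 0.10, 0.40])  # position 3
noncoding_centroid = np.full(12, 0.25)                # NNN: no preference
```

    In practice the centroids would be estimated from known coding and intergenic training sequences; the cross-validation accuracy quoted above comes from exactly this nearest-centroid rule.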

  8. A new algorithm for real-time optimal dispatch of active and reactive power generation retaining nonlinearity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roy, L.; Rao, N.D.

    1983-04-01

    This paper presents a new method for optimal dispatch of real and reactive power generation which is based on a cartesian coordinate formulation of the economic dispatch problem and a reclassification of the state and control variables associated with generator buses. The voltage and power at these buses are classified as parametric and functional inequality constraints, and are handled by the reduced gradient technique and the penalty factor approach, respectively. The advantage of this classification is the reduction in the size of the equality constraint model, leading to less storage requirement. The rectangular coordinate formulation results in an exact equality constraint model in which the coefficient matrix is real, sparse, diagonally dominant, smaller in size, and need be computed and factorized only once in each gradient step. In addition, Lagrangian multipliers are calculated using a new efficient procedure. A natural outcome of these features is the solution of the economic dispatch problem faster than other methods available to date in the literature. Rapid and reliable convergence is an additional desirable characteristic of the method. Digital simulation results are presented on several IEEE test systems to illustrate the range of application of the method vis-à-vis the popular Dommel-Tinney (DT) procedure. It is found that the proposed method is more reliable, 3-4 times faster, and requires 20-30 percent less storage compared to the DT algorithm, while being just as general. Thus, owing to its exactness, robust mathematical model and lower computational requirements, the method developed in the paper is shown to be a practically feasible algorithm for on-line optimal power dispatch.

  9. Method Of Dispensing Microdoses Of Aqueous Solutions Of Substances Onto A Carrier And A Device For Carrying Out Said Method

    DOEpatents

    Ershov, Gennady Moiseevich; Kirillov, Eugenii Vladislavovich; Mirzabekov, Andrei Darievich

    1999-10-05

    A method and a device for dispensing microdoses of aqueous solutions are provided, whereby the substance is transferred by the free surface end of a rodlike transferring element; the temperature of the transferring element is maintained at essentially the dew point of the ambient air during the transfer. The device may comprise a plate-like base to which are affixed a plurality of rods; the unfixed butt ends of the rods are coplanar. The device further comprises a means for maintaining the temperature of the unfixed butt ends of the rods essentially equal to the dew point of the ambient air during transfer of the aqueous substance.

  10. Power-output regularization in global sound equalization.

    PubMed

    Stefanakis, Nick; Sarris, John; Cambourakis, George; Jacobsen, Finn

    2008-01-01

    The purpose of equalization in room acoustics is to compensate for the undesired modification that an enclosure introduces to signals such as audio or speech. In this work, equalization in a large part of the volume of a room is addressed. The multiple point method is employed with an acoustic power-output penalty term instead of the traditional quadratic source effort penalty term. Simulation results demonstrate that this technique gives a smoother decline of the reproduction performance away from the control points.
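    Both the classical source-effort penalty and a power-output penalty fit the same regularized least-squares template, differing only in the penalty matrix A. A sketch with an assumed random plant matrix (source-to-control-point transfer functions) and desired field:

```python
import numpy as np

rng = np.random.default_rng(6)
M, S = 12, 4  # control points, sources
G = rng.normal(size=(M, S)) + 1j * rng.normal(size=(M, S))  # assumed plant
d = rng.normal(size=M) + 1j * rng.normal(size=M)            # desired field

def regularized_sources(G, d, beta, A=None):
    """Minimize ||G q - d||^2 + beta * q^H A q.  A = I is the classical
    source-effort penalty; a power-output penalty uses a different
    (source-coupling) matrix A in the same formula."""
    if A is None:
        A = np.eye(G.shape[1])
    return np.linalg.solve(G.conj().T @ G + beta * A, G.conj().T @ d)

q_small = regularized_sources(G, d, beta=1e-6)  # near plain least squares
q_large = regularized_sources(G, d, beta=10.0)  # strongly penalized
```

    Raising beta trades reproduction accuracy at the control points for smaller source effort (or radiated power), which is what produces the smoother decline in performance away from the control points reported above.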

  11. The error and bias of supplementing a short, arid climate, rainfall record with regional vs. global frequency analysis

    NASA Astrophysics Data System (ADS)

    Endreny, Theodore A.; Pashiardis, Stelios

    2007-02-01

    Robust and accurate estimates of rainfall frequencies are difficult to make with short, arid-climate rainfall records; however, new regional and global methods were used to supplement such a constrained 15-34 yr record in Cyprus. The impact of supplementing rainfall frequency analysis with the regional and global approaches was measured with relative bias and root mean square error (RMSE) values. Analysis considered 42 stations with 8 time intervals (5-360 min) in four regions delineated by proximity to the sea and elevation. Regional statistical algorithms found that the sites passed discordancy tests of the coefficient of variation, skewness, and kurtosis, while heterogeneity tests revealed the regions to be homogeneous to mildly heterogeneous. Rainfall depths were simulated in the regional analysis method 500 times, and goodness-of-fit tests then identified the best candidate distribution as the general extreme value (GEV) Type II. In the regional analysis, the method of L-moments was used to estimate the location, shape, and scale parameters. In the global-based analysis, the distribution was a priori prescribed as GEV Type II, the shape parameter was a priori set to 0.15, and a time interval term was constructed to use one set of parameters for all time intervals. Relative RMSE values were approximately equal at 10% for the regional and global methods when regions were compared, but when time intervals were compared the global method RMSE had a parabolic-shaped time interval trend. Relative bias values were also approximately equal for both methods when regions were compared, but again a parabolic-shaped time interval trend was found for the global method. The global method's relative RMSE and bias trended with time interval, which may be caused by fitting a single scale value for all time intervals.
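    The prescribed GEV Type II distribution with shape parameter 0.15 yields return levels in closed form from the GEV quantile function; a sketch with illustrative (not fitted) location and scale parameters:

```python
import numpy as np

def gev_return_level(mu, sigma, xi, T):
    """Rainfall depth exceeded on average once every T events for a
    GEV(mu, sigma, xi) distribution; xi > 0 is the heavy-tailed
    (Type II) case.  Quantile at non-exceedance probability p = 1 - 1/T."""
    p = 1.0 - 1.0 / T
    return mu + sigma / xi * ((-np.log(p)) ** (-xi) - 1.0)

# Illustrative parameters (assumed, not fitted to the Cyprus data):
xi = 0.15  # shape fixed a priori, as in the global method
levels = [gev_return_level(20.0, 8.0, xi, T) for T in (2, 10, 50, 100)]
```

    With the shape fixed, only the location and scale vary by station and time interval, which is what lets the global method share one parameter set across all durations.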

  12. Using a Regression Method for Estimating Performance in a Rapid Serial Visual Presentation Target-Detection Task

    DTIC Science & Technology

    2017-12-01

    values designating each stimulus as a target ( true ) or nontarget (false). Both stim_time and stim_label should have length equal to the number of...position unless so designated by other authorized documents. Citation of manufacturer’s or trade names does not constitute an official endorsement or...depend strongly on the true values of hit rate and false-alarm rate. Based on its better estimation of hit rate and false-alarm rate, the regression

  13. Cost optimization for buildings with hybrid ventilation systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ji, Kun; Lu, Yan

    A method including: computing a total cost for a first zone in a building, wherein the total cost is equal to an actual energy cost of the first zone plus a thermal discomfort cost of the first zone; and heuristically optimizing the total cost to identify temperature setpoints for a mechanical heating/cooling system and a start time and an end time of the mechanical heating/cooling system, based on external weather data and occupancy data of the first zone.
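    A toy sketch of the idea, with a hypothetical cost model and a plain grid search standing in for the patent's heuristic optimizer (all temperatures, prices, and schedules are illustrative):

    ```python
    import random

    def total_cost(setpoint, start, end, weather, occupancy,
                   comfort=21.0, price=0.1):
        """Actual energy cost plus thermal discomfort cost for one zone (toy model)."""
        # energy: conditioning effort while the mechanical system runs
        energy = sum(abs(setpoint - w) * price for w in weather[start:end])
        # discomfort: deviation from a comfort temperature during occupied hours
        indoor = [setpoint if start <= h < end else weather[h] for h in range(24)]
        discomfort = sum(abs(t - comfort) * occ for t, occ in zip(indoor, occupancy))
        return energy + discomfort

    random.seed(0)
    weather = [12 + 8 * random.random() for _ in range(24)]   # hourly outdoor temps
    occupancy = [1 if 8 <= h < 18 else 0 for h in range(24)]  # occupied 08:00-18:00

    # grid search over setpoint, start time, and end time of the system
    best = min(((sp, s, e) for sp in range(18, 25)
                for s in range(0, 12) for e in range(12, 24)),
               key=lambda c: total_cost(*c, weather, occupancy))
    ```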

  14. Least-squares finite element solutions for three-dimensional backward-facing step flow

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Hou, Lin-Jun; Lin, Tsung-Liang

    1993-01-01

    Comprehensive numerical solutions of the steady state incompressible viscous flow over a three-dimensional backward-facing step up to Re = 800 are presented. The results are obtained by the least-squares finite element method (LSFEM), which is based on the velocity-pressure-vorticity formulation. The computed model is of the same size as that of Armaly's experiment. Three-dimensional phenomena are observed even at low Reynolds number. The calculated values of the primary reattachment length are in good agreement with experimental results.

  15. Achieving equal pay for comparable worth through arbitration.

    PubMed

    Wisniewski, S C

    1982-01-01

    Traditional "women's jobs" often pay relatively low wages because of the effects of institutionalized stereotypes concerning women and their role in the work place. One way of dealing with sex discrimination that results in job segregation is to narrow the existing wage differential between "men's jobs" and "women's jobs." Where the jobs are dissimilar on their face, this narrowing of pay differences involves implementing the concept of "equal pay for jobs of comparable worth." Some time in the future, far-reaching, perhaps even industrywide, reductions in male-female pay differentials may be achieved by pursuing legal remedies based on equal pay for comparable worth. However, as the author demonstrates, immediate, albeit more limited, relief for sex-based pay inequities found in specific work places can be obtained by implementing equal pay for jobs of comparable worth through the collective bargaining and arbitration processes.

  16. Ex-ante and ex-post measurement of equality of opportunity in health: a normative decomposition.

    PubMed

    Donni, Paolo Li; Peragine, Vito; Pignataro, Giuseppe

    2014-02-01

    This paper proposes and discusses two different approaches to the definition of inequality in health: the ex-ante and the ex-post approach. It proposes strategies for measuring inequality of opportunity in health based on the path-independent Atkinson inequality index. The proposed methodology is illustrated using data from the British Household Panel Survey; the results suggest that in the period 2000-2005, at least one-third of the observed health inequalities in the UK were inequalities of opportunity. Copyright © 2013 John Wiley & Sons, Ltd.

  17. A novel joint-processing adaptive nonlinear equalizer using a modular recurrent neural network for chaotic communication systems.

    PubMed

    Zhao, Haiquan; Zeng, Xiangping; Zhang, Jiashu; Liu, Yangguang; Wang, Xiaomin; Li, Tianrui

    2011-01-01

    To eliminate nonlinear channel distortion in chaotic communication systems, a novel joint-processing adaptive nonlinear equalizer based on a pipelined recurrent neural network (JPRNN) is proposed, using a modified real-time recurrent learning (RTRL) algorithm. Furthermore, an adaptive amplitude RTRL algorithm is adopted to overcome the deteriorating effect introduced by the nesting process. Computer simulations illustrate that the proposed equalizer outperforms the pipelined recurrent neural network (PRNN) and recurrent neural network (RNN) equalizers. Copyright © 2010 Elsevier Ltd. All rights reserved.

  18. Nonlinear filter based decision feedback equalizer for optical communication systems.

    PubMed

    Han, Xiaoqi; Cheng, Chi-Hao

    2014-04-07

    Nonlinear impairments in optical communication system have become a major concern of optical engineers. In this paper, we demonstrate that utilizing a nonlinear filter based Decision Feedback Equalizer (DFE) with error detection capability can deliver a better performance compared with the conventional linear filter based DFE. The proposed algorithms are tested in simulation using a coherent 100 Gb/sec 16-QAM optical communication system in a legacy optical network setting.

  19. Microarray image analysis: background estimation using quantile and morphological filters.

    PubMed

    Bengtsson, Anders; Bengtsson, Henrik

    2006-02-28

    In a microarray experiment the difference in expression between genes on the same slide is up to 10^3-fold or more. At low expression, even a small error in the estimate will have great influence on the final test and reference ratios. In addition to the true spot intensity, the scanned signal consists of different kinds of noise referred to as background. In order to assess the true spot intensity, background must be subtracted. The standard approach to estimate background intensities is to assume they are equal to the intensity levels between spots. In the literature, morphological opening is suggested to be one of the best methods for estimating background this way. This paper examines fundamental properties of rank and quantile filters, which include morphological filters at the extremes, with focus on their ability to estimate between-spot intensity levels. The bias and variance of these filter estimates are driven by the number of background pixels used and their distributions. A new rank-filter algorithm is implemented and compared to methods available in Spot by CSIRO and GenePix Pro by Axon Instruments. Spot's morphological opening has a mean bias between -47 and -248, compared to a bias between 2 and -2 for the rank filter, and the variability of the morphological opening estimate is 3 times higher than for the rank filter. The mean bias of Spot's second method, morph.close.open, is between -5 and -16 and the variability is approximately the same as for morphological opening. The variability of GenePix Pro's region-based estimate is more than ten times higher than the variability of the rank-filter estimate, with slightly more bias. The large variability is because the size of the background window changes with spot size. To overcome this, a non-adaptive region-based method is implemented. Its bias and variability are comparable to those of the rank filter. The performance of more advanced rank filters is equal to the best region-based methods. However, in order to get unbiased estimates these filters have to be implemented with great care. The performance of morphological opening is in general poor, with a substantial spatially dependent bias.
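    The two estimators under comparison are easy to prototype. Below is an illustrative numpy sketch (window sizes and intensities are invented, not Spot's or GenePix Pro's settings) in which morphological opening with a flat structuring element is built as a max-filter of a min-filter, i.e., the two extremes of the same rank filter:

    ```python
    import numpy as np

    def rank_filter(img, size, q):
        """Sliding-window quantile filter; q=0 is erosion, q=1 is dilation."""
        pad = size // 2
        padded = np.pad(img, pad, mode='reflect')
        out = np.empty(img.shape)
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                out[i, j] = np.quantile(padded[i:i + size, j:j + size], q)
        return out

    rng = np.random.default_rng(0)
    img = 100 + 10 * rng.standard_normal((48, 48))   # noisy background, true level 100
    img[20:23, 20:23] += 500                          # one bright "spot"

    median_bg = rank_filter(img, 15, 0.5)             # rank-filter background estimate
    opened_bg = rank_filter(rank_filter(img, 15, 0.0), 15, 1.0)  # morphological opening
    ```

    On this toy image the median-based estimate stays near the true background level while the opening is biased low, mirroring the negative bias the paper reports for morphological opening.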

  20. Wavelet imaging cleaning method for atmospheric Cherenkov telescopes

    NASA Astrophysics Data System (ADS)

    Lessard, R. W.; Cayón, L.; Sembroski, G. H.; Gaidos, J. A.

    2002-07-01

    We present a new method of image cleaning for imaging atmospheric Cherenkov telescopes. The method is based on the utilization of wavelets to identify noise pixels in images of gamma-ray and hadronic induced air showers. This method selects more signal pixels with Cherenkov photons than traditional image processing techniques. In addition, the method is equally efficient at rejecting pixels with noise alone. The inclusion of more signal pixels in an image of an air shower allows for a more accurate reconstruction, especially at lower gamma-ray energies that produce low levels of light. We present the results of Monte Carlo simulations of gamma-ray and hadronic air showers which show improved angular resolution using this cleaning procedure. Data from the Whipple Observatory's 10-m telescope are utilized to show the efficacy of the method for extracting a gamma-ray signal from the background of hadronic generated images.
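    The wavelet idea can be sketched in one dimension: a single-level Haar transform separates a signal into pairwise averages and details, small details (noise) are zeroed, and the signal is rebuilt. This illustrative hard-threshold scheme is far simpler than the multiscale pixel selection applied to Cherenkov images:

    ```python
    import numpy as np

    def haar_denoise(x, threshold):
        """Single-level Haar wavelet hard-thresholding (len(x) must be even)."""
        s2 = np.sqrt(2.0)
        approx = (x[0::2] + x[1::2]) / s2          # pairwise averages
        detail = (x[0::2] - x[1::2]) / s2          # pairwise differences
        detail[np.abs(detail) < threshold] = 0.0   # kill small (noise) details
        out = np.empty_like(x)
        out[0::2] = (approx + detail) / s2         # inverse transform
        out[1::2] = (approx - detail) / s2
        return out

    signal = np.full(16, 5.0)
    signal[3] += 0.4      # small fluctuation: treated as noise
    signal[8] += 10.0     # large pulse: genuine signal
    cleaned = haar_denoise(signal, threshold=0.5)
    ```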

  1. On time discretizations for the simulation of the batch settling-compression process in one dimension.

    PubMed

    Bürger, Raimund; Diehl, Stefan; Mejías, Camilo

    2016-01-01

    The main purpose of the recently introduced Bürger-Diehl simulation model for secondary settling tanks was to resolve spatial discretization problems when both hindered settling and the phenomena of compression and dispersion are included. Straightforward time integration unfortunately means long computational times. The next step in the development is to introduce and investigate time-integration methods for more efficient simulations, while equally considering other aspects such as implementation complexity and robustness. This is done for batch settling simulations. The key contributions are a new time-discretization method and its comparison with other specially tailored and standard methods. Several advantages and disadvantages of each method are given. One conclusion is that the new linearly implicit method is easier to implement than another (the semi-implicit method), but less efficient based on two types of batch sedimentation tests.

  2. Finding New Ways To Finance Public Education.

    ERIC Educational Resources Information Center

    Wugalter, Harry

    This paper discusses alternative methods for financing education including sales and compensating taxes, mineral leasing, and land income. The author discusses the problem of local control under a full State funding system. He warns that merely allocating money to school districts on an equal basis will fail to accomplish equal education unless…

  3. Equal Employment Opportunity and ADA Implications of Screening and Selection.

    ERIC Educational Resources Information Center

    Norton, Steven D.; Hundley, John R.

    1995-01-01

    The process of screening and selecting new employees is viewed as one having discrete steps, each with implications concerning Equal Employment Opportunity and the Americans with Disabilities Act. Several screening and selection methods are examined, with questionnaire forms used by Indiana University, South Bend provided for illustration. Typical…

  4. Variable-Metric Algorithm For Constrained Optimization

    NASA Technical Reports Server (NTRS)

    Frick, James D.

    1989-01-01

    Variable Metric Algorithm for Constrained Optimization (VMACO) is nonlinear computer program developed to calculate least value of function of n variables subject to general constraints, both equality and inequality. First set of constraints equality and remaining constraints inequalities. Program utilizes iterative method in seeking optimal solution. Written in ANSI Standard FORTRAN 77.

  5. 21 CFR 165.110 - Bottled water.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ..., when a composite of analytical units of equal volume from a sample is examined by the method described...)(A) Bottled water shall, when a composite of analytical units of equal volume from a sample is..., and Cosmetic Act, the Food and Drug Administration has determined that bottled water, when a composite...

  6. Law and Equal Rights for Educational Opportunity.

    ERIC Educational Resources Information Center

    White, Sharon

    Arguing violation of the equal protection clause of Federal and State constitutions, court actions in several States have challenged the method of financing public education. The issues raised concern interdistrict differentials in assessed valuation of properties. These differentials result in lower per-pupil funds for urban and rural districts…

  7. 75 FR 10505 - Office of Apprenticeship, Notice of Town Hall Meeting on Federal Regulations for Equal Employment...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-08

    ..., including women and minorities, have equal opportunities in registered apprenticeship programs. Revisions to... particularly effective in recruiting women and minorities for registered apprenticeship programs, such as... in retaining women and minorities in registered apprenticeship programs; The methods sponsors use for...

  8. Diversity, Inclusion, and Equal Opportunity in the Armed Services: Background and Issues for Congress

    DTIC Science & Technology

    2016-07-01

    from unlawful discrimination based on race, color, national origin, religion, sex (including pregnancy , gender identity, and sexual orientation when...range of online resources for diversity management and equal opportunity programming. DEOMI’s Research Directorate administers a survey called the...Defense Equal Opportunity Climate Survey (DEOCS). This survey is intended to be a tool for commanders to improve their organizational culture. It

  9. Implications of Changes in Households and Living Arrangements for Future Home-based Care Needs and Costs of Disabled Elders in China1

    PubMed Central

    Zeng, Yi; Chen, Huashuai; Wang, Zhenglian; Land, Kenneth C.

    2016-01-01

    Objectives: To understand future home-based care needs and costs for disabled elders in China. Method: Further development and application of the ProFamy extended cohort-component method. Results: (1) The number of Chinese disabled elders, and the percentage of national GDP devoted to their home-based care costs, will increase much more quickly than the total elderly population; (2) home-based care needs/costs for the disabled oldest-old aged 80+ will increase much faster than those for the disabled young-old aged 65–79 after 2030; (3) the number of disabled unmarried elders living alone, and their home-based care costs, will increase substantially faster than for disabled unmarried elders living with children; (4) sensitivity analyses showed that possible changes in mortality and elderly disability status are the major factors affecting home-based care needs and costs; (5) caregiver resources under a two-child policy will be substantially better than if the current fertility policy remains unchanged. Discussion: Policy recommendations concern reductions in the prevalence of disability, gender equality, the two-child policy, encouraging elders' residential proximity to their adult children, etc. PMID:25213460

  10. Comparison of blood flow models and acquisitions for quantitative myocardial perfusion estimation from dynamic CT

    NASA Astrophysics Data System (ADS)

    Bindschadler, Michael; Modgil, Dimple; Branch, Kelley R.; La Riviere, Patrick J.; Alessio, Adam M.

    2014-04-01

    Myocardial blood flow (MBF) can be estimated from dynamic contrast enhanced (DCE) cardiac CT acquisitions, leading to quantitative assessment of regional perfusion. The need for low radiation dose and the lack of consensus on MBF estimation methods motivate this study to refine the selection of acquisition protocols and models for CT-derived MBF. DCE cardiac CT acquisitions were simulated for a range of flow states (MBF = 0.5, 1, 2, 3 ml (min g)-1, cardiac output = 3, 5, 8 L min-1). Patient kinetics were generated by a mathematical model of iodine exchange incorporating numerous physiological features including heterogeneous microvascular flow, permeability, and capillary contrast gradients. CT acquisitions were simulated for multiple realizations of realistic x-ray flux levels. CT acquisitions that reduce radiation exposure were implemented by varying both temporal sampling (1, 2, and 3 s sampling intervals) and tube currents (140, 70, and 25 mAs). For all acquisitions, we compared three quantitative MBF estimation methods (a two-compartment model, an axially distributed model, and the adiabatic approximation to the tissue homogeneous model) and a qualitative slope-based method. In total, over 11 000 time attenuation curves were used to evaluate MBF estimation in multiple patient and imaging scenarios. After iodine-based beam hardening correction, the slope method consistently underestimated flow, by 47.5% on average, while the quantitative models provided estimates with less than 6.5% average bias and increasing variance with increasing dose reductions. The three quantitative models performed equally well, offering estimates with essentially identical root mean squared error (RMSE) for matched acquisitions. MBF estimates using the qualitative slope method were inferior in terms of bias and RMSE compared to the quantitative methods. MBF estimate error was equal at matched dose reductions for all quantitative methods across the range of techniques evaluated. This suggests that there is no particular advantage among the quantitative estimation methods, nor to performing dose reduction via tube current reduction compared to temporal sampling reduction. These data are important for optimizing implementation of cardiac dynamic CT in clinical practice and in prospective CT MBF trials.
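    The downward bias of the slope method can be illustrated with a toy one-compartment simulation (all parameters and the arterial input shape are invented, not the study's models or values): for a tissue curve obeying dC/dt = F·c_a − k·C, the maximum tissue upslope divided by the arterial peak can never exceed, and generally underestimates, the true flow F.

    ```python
    import math

    F, V = 1.5, 0.3                  # toy flow and distribution volume
    k = F / V                        # washout rate of the one-compartment model
    dt, T = 0.1, 60.0
    n = int(T / dt)

    # gamma-variate-shaped arterial input function (illustrative)
    ca = [(i * dt) ** 3 * math.exp(-(i * dt) / 2.0) for i in range(n)]

    # Euler integration of dC/dt = F*ca - k*C
    C = [0.0]
    for i in range(n - 1):
        C.append(C[-1] + dt * (F * ca[i] - k * C[-1]))

    # qualitative slope method: max tissue upslope over peak arterial level
    upslope = max((C[i + 1] - C[i]) / dt for i in range(n - 1))
    F_slope = upslope / max(ca)
    ```

    Since the upslope equals F·c_a − k·C at every instant, dividing its maximum by max(c_a) is guaranteed to come out below F, consistent with the underestimation reported above.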

  11. Inequality in fertility rate and modern contraceptive use among Ghanaian women from 1988–2008

    PubMed Central

    2013-01-01

    Background In most resource poor countries, particularly sub-Saharan Africa, modern contraceptive use and prevalence is unusually low and fertility is very high resulting in rapid population growth and high maternal mortality and morbidity. Current evidence shows slow progress in expanding the use of contraceptives by women of low socioeconomic status and insufficient financial commitment to family planning programs. We examined gaps and trends in modern contraceptive use and fertility within different socio-demographic subgroups in Ghana between 1988 and 2008. Methods We constructed a database using the Women’s Questionnaire from the Ghana Demographic and Health Survey (GDHS) 1988, 1993, 1998, 2003 and 2008. We applied regression-based Total Attributable Fraction (TAF); we also calculated the Relative and Slope Indices of Inequality (RII and SII) to complement the TAF in our investigation. Results Equality in use of modern contraceptives increased from 1988 to 2008. In contrast, inequality in fertility rate increased from 1988 to 2008. It was also found that rural–urban residence gap in the use of modern contraceptive methods had almost disappeared in 2008, while education and income related inequalities remained. Conclusions One obvious observation is that the discrepancy between equality in use of contraceptives and equality in fertility must be addressed in a future revision of policies related to family planning. Otherwise this could be a major obstacle for attaining further progress in achieving the Millennium Development Goal (MDG) 5. More research into the causes of the unfortunate discrepancy is urgently needed. There still exist significant education and income related inequalities in both parameters that need appropriate action. PMID:23718745

  12. Parallel SOR methods with a parabolic-diffusion acceleration technique for solving an unstructured-grid Poisson equation on 3D arbitrary geometries

    NASA Astrophysics Data System (ADS)

    Zapata, M. A. Uh; Van Bang, D. Pham; Nguyen, K. D.

    2016-05-01

    This paper presents a parallel algorithm for the finite-volume discretisation of the Poisson equation on three-dimensional arbitrary geometries. The proposed method is formulated by using a 2D horizontal block domain decomposition and interprocessor data communication techniques with message passing interface. The horizontal unstructured-grid cells are reordered according to the neighbouring relations and decomposed into blocks using a load-balanced distribution to give all processors an equal amount of elements. In this algorithm, two parallel successive over-relaxation methods are presented: a multi-colour ordering technique for unstructured grids based on distributed memory and a block method using reordering index following similar ideas of the partitioning for structured grids. In all cases, the parallel algorithms are implemented with a combination of an acceleration iterative solver. This solver is based on a parabolic-diffusion equation introduced to obtain faster solutions of the linear systems arising from the discretisation. Numerical results are given to evaluate the performances of the methods showing speedups better than linear.
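    As a minimal illustration of the successive over-relaxation building block (serial, 1D, structured grid; the paper's setting is parallel, unstructured, and 3D), consider solving −u″ = 1 on [0, 1] with zero boundary values:

    ```python
    def sor_poisson_1d(n, omega, iters):
        """SOR sweeps for -u'' = 1 on [0,1], u(0)=u(1)=0, n interior points."""
        h = 1.0 / (n + 1)
        u = [0.0] * (n + 2)                 # includes the two boundary values
        for _ in range(iters):
            for i in range(1, n + 1):
                gs = 0.5 * (u[i - 1] + u[i + 1] + h * h)   # Gauss-Seidel value
                u[i] = (1.0 - omega) * u[i] + omega * gs   # over-relax
        return u, h

    u, h = sor_poisson_1d(19, omega=1.7, iters=400)
    ```

    The discrete solution of this system is exactly u(x) = x(1 − x)/2, so the remaining error measures only the iteration, not the discretization.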

  13. Thermal Stress Analysis of a Continuous and Pulsed End-Pumped Nd:YAG Rod Crystal Using Non-Classic Conduction Heat Transfer Theory

    NASA Astrophysics Data System (ADS)

    Mojahedi, Mahdi; Shekoohinejad, Hamidreza

    2018-02-01

    In this paper, the temperature distribution in a continuous and pulsed end-pumped Nd:YAG rod crystal is determined using nonclassical and classical heat conduction theories. To find the temperature distribution in the crystal, the heat transfer differential equations, with their boundary conditions, are derived based on the non-Fourier model, and the temperature distribution of the crystal is obtained by an analytical method. Then, by transferring the non-Fourier differential equations to matrix equations using the finite element method, the temperature and stress at every point of the crystal are calculated in the time domain. Based on the results, a comparison between classical and nonclassical theories is presented to investigate rupture power values. In continuous end pumping with equal input powers, non-Fourier theory predicts greater temperature and stress than Fourier theory. It also shows that with an increase in relaxation time, crystal rupture power decreases. In contrast, under single rectangular pulsed end-pumping with equal input power, Fourier theory indicates higher temperature and stress than non-Fourier theory. It is also observed that, as the relaxation time increases, the maximum temperature and stress decrease.

  14. Metabolic alterations in pregnant women: gestational diabetes.

    PubMed

    Oliveira, Daniela; Pereira, Joana; Fernandes, Rúben

    2012-01-01

    Gestational diabetes mellitus (GDM) and controversy are old friends. The impact of GDM on maternal and fetal health has been increasingly recognized. Nevertheless, universal consensus on the diagnostic methods and thresholds has long been lacking. Published guidelines from major societies differ significantly from one another, with recommendations ranging from aggressive screening to no routine screening at all. As a result, real-world practice is equally varied. This article recaps the latest evidence-based recommendations for the diagnosis and classification of GDM. It reviews the current evidence base for intensive multidisciplinary treatment of GDM and provides recommendations for postpartum management to delay and/or prevent progression to type 2 diabetes.

  15. Using a Root Cause Analysis Curriculum for Practice-Based Learning and Improvement in General Surgery Residency.

    PubMed

    Ramanathan, Rajesh; Duane, Therese M; Kaplan, Brian J; Farquhar, Doris; Kasirajan, Vigneshwar; Ferrada, Paula

    2015-01-01

    To describe and evaluate a root cause analysis (RCA)-based educational curriculum for quality improvement (QI) practice-based learning and implementation in general surgery residency. A QI curriculum was designed using RCA and spaced-learning approaches to education. The program included a didactic session about the RCA methodology. Resident teams comprising multiple postgraduate years then selected a personal complication, completed an RCA, and presented the findings to the Department of Surgery. Mixed methods consisting of quantitative assessment of performance and qualitative feedback about the program were used to assess the value, strengths, and limitations of the program. Urban tertiary academic medical center. General surgery residents, faculty, and medical students. An RCA was completed by 4 resident teams for the following 4 adverse outcomes: postoperative neck hematoma, suboptimal massive transfusion for trauma, venous thromboembolism, and decubitus ulcer complications. Quantitative peer assessment of their performance revealed proficiency in selecting an appropriate case, defining the central problem, identifying root causes, and proposing solutions. During the qualitative feedback assessment, residents noted value of the course, with the greatest limitation being time constraints and equal participation. An RCA-based curriculum can provide general surgery residents with QI exposure and training that they value. Barriers to successful implementation include time restrictions and equal participation from all involved members. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  16. Five preference-based indexes in cataract and heart failure patients were not equally responsive to change.

    PubMed

    Kaplan, Robert M; Tally, Steven; Hays, Ron D; Feeny, David; Ganiats, Theodore G; Palta, Mari; Fryback, Dennis G

    2011-05-01

    To compare the responsiveness to clinical change of five widely used preference-based health-related quality-of-life indexes in two longitudinal cohorts. Five generic instruments were simultaneously administered to 376 adults undergoing cataract surgery and 160 adults in heart failure management programs. Patients were assessed at baseline and reevaluated after 1 and 6 months. The measures were the Short Form (SF)-6D (based on responses scored from SF-36v2), Self-Administered Quality of Well-being Scale (QWB-SA), the EuroQol-5D developed by the EuroQol Group, the Health Utilities Indexes Mark 2 (HUI2) and Mark 3 (HUI3). Cataract patients completed the National Eye Institute Visual Functioning Questionnaire-25, and heart failure patients completed the Minnesota Living with Heart Failure Questionnaire. Responsiveness was estimated by the standardized response mean. For cataract patients, mean changes between baseline and 1-month follow-up for the generic indices ranged from 0.00 (SF-6D) to 0.052 (HUI3) and were statistically significant for all indexes except the SF-6D. For heart failure patients, only the SF-6D showed significant change from baseline to 1 month, whereas only the QWB-SA change was significant between 1 and 6 months. Preference-based methods for measuring health outcomes are not equally responsive to change. Copyright © 2011 Elsevier Inc. All rights reserved.
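    Responsiveness via the standardized response mean is simple to compute: the mean within-person change divided by the standard deviation of that change. A minimal sketch with made-up utility scores (not data from this study):

    ```python
    def standardized_response_mean(baseline, followup):
        """SRM = mean(change) / sd(change), using the sample (n-1) SD."""
        change = [f - b for b, f in zip(baseline, followup)]
        n = len(change)
        mean = sum(change) / n
        var = sum((c - mean) ** 2 for c in change) / (n - 1)
        return mean / var ** 0.5

    # hypothetical preference-based index scores before and after treatment
    baseline = [0.61, 0.70, 0.55, 0.64, 0.58]
    followup = [0.68, 0.74, 0.66, 0.69, 0.65]
    srm = standardized_response_mean(baseline, followup)
    ```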

  17. Scattering and absorption control in biocompatible fibers towards equalized photobiomodulation.

    PubMed

    George, J; Haghshenas, H; d'Hemecourt, D; Zhu, W; Zhang, L; Sorger, V

    2017-03-01

    Transparent tissue scaffolds enable illumination of growing tissue to accelerate cell proliferation and improve other cell functions through photobiomodulation. The biphasic dose response of cells exposed to photobiomodulating light dictates that the illumination be evenly distributed across the scaffold so that the cells are neither under- nor overexposed to light. However, equalized illumination has not been sufficiently addressed. Here we analyze and experimentally demonstrate spatially equalized illumination by three methods, namely: engineered surface scattering, reflection by a gold mirror, and traveling waves in a ring mesh. Our results show that nearly equalized illumination is achievable by controlling the light scattering-to-loss ratio. This demonstration furthers opportunities for dose-optimized photobiomodulation in tissue regeneration.

  18. Adaptation of Decoy Fusion Strategy for Existing Multi-Stage Search Workflows

    NASA Astrophysics Data System (ADS)

    Ivanov, Mark V.; Levitsky, Lev I.; Gorshkov, Mikhail V.

    2016-09-01

    A number of proteomic database search engines implement multi-stage strategies aiming at increasing the sensitivity of proteome analysis. These approaches often employ a subset of the original database for the secondary stage of analysis. However, if target-decoy approach (TDA) is used for false discovery rate (FDR) estimation, the multi-stage strategies may violate the underlying assumption of TDA that false matches are distributed uniformly across the target and decoy databases. This violation occurs if the numbers of target and decoy proteins selected for the second search are not equal. Here, we propose a method of decoy database generation based on the previously reported decoy fusion strategy. This method allows unbiased TDA-based FDR estimation in multi-stage searches and can be easily integrated into existing workflows utilizing popular search engines and post-search algorithms.
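    The TDA estimate itself is straightforward once the target and decoy search spaces are balanced; here is a minimal sketch of score-sorted FDR and q-value computation under the uniform-decoy assumption (toy scores, not any search engine's output):

    ```python
    def tda_qvalues(matches):
        """matches: (score, is_decoy) pairs, higher score = better.
        Returns matches sorted by score and their TDA q-values."""
        ordered = sorted(matches, key=lambda m: -m[0])
        targets = decoys = 0
        fdr = []
        for _, is_decoy in ordered:
            decoys += is_decoy
            targets += not is_decoy
            fdr.append(decoys / max(targets, 1))   # decoy/target ratio so far
        qvals = fdr[:]
        for i in range(len(qvals) - 2, -1, -1):    # enforce monotonicity
            qvals[i] = min(qvals[i], qvals[i + 1])
        return ordered, qvals

    matches = [(10.0, False), (9.0, False), (8.0, True), (7.0, False), (6.0, True)]
    ordered, q = tda_qvalues(matches)
    ```

    The uniform-decoy assumption this estimate relies on is exactly what an unbalanced second-stage search violates, which is what decoy fusion repairs.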

  19. Perceptron Genetic to Recognize Openning Strategy Ruy Lopez

    NASA Astrophysics Data System (ADS)

    Azmi, Zulfian; Mawengkang, Herman

    2018-01-01

    The perceptron method is not effective for coding on hardware-based systems because its learning is not real-time. With a genetic algorithm approach to computing and searching for the best weights (fitness values), the system performs learning in only one iteration. The results of this analysis were tested on recognition of the Ruy Lopez chess opening pattern. The analysis combines a perceptron model, from the artificial neural network family, with a genetic algorithm approach. The data are drawn from a chess opening database, using the positions of the white pawns at the eighth move of the opening. The perceptron takes many inputs and one output, processing many weights and biases until the output equals the goal. The data were trained and tested with Matlab, and the system can recognize in real time whether an opening is the Ruy Lopez or not.
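    For reference, a minimal perceptron learner (the standard update rule, not the paper's genetic-algorithm variant) trained on a toy 8-bit board-pattern task; the encodings and labels are hypothetical:

    ```python
    def train_perceptron(samples, epochs=50, lr=1.0):
        """samples: (input_bits, label) with label 1 = 'Ruy Lopez-like', 0 = other."""
        n = len(samples[0][0])
        w, b = [0.0] * n, 0.0
        for _ in range(epochs):
            for x, target in samples:
                y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
                err = target - y                       # 0 when prediction is right
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
        return w, b

    def predict(w, b, x):
        return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

    # toy, linearly separable patterns (hypothetical pawn-position encodings)
    samples = [
        ([1, 0, 1, 1, 0, 0, 1, 0], 1),
        ([1, 1, 1, 1, 0, 0, 0, 0], 1),
        ([0, 0, 0, 1, 1, 1, 0, 1], 0),
        ([0, 1, 0, 0, 1, 1, 1, 1], 0),
    ]
    w, b = train_perceptron(samples)
    ```

    The iterative loop above is the very cost the genetic-algorithm variant aims to avoid.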

  20. Omani Girls' Conceptions of Gender Equality: Addressing Socially Constructed Sexist Attitudes through Educational Intervention

    ERIC Educational Resources Information Center

    Al Sadi, Fatma H.; Basit, Tehmina N.

    2017-01-01

    This paper is based on a quasi-experimental study which examines the effects of a school-based intervention on Omani girls' attitudes towards the notion of gender equality. A questionnaire was administered before and after the intervention to 241 girls (116 in the experimental group; 125 in the control group). A semi-structured interview was…

  1. Adaptive filtering with the self-organizing map: a performance comparison.

    PubMed

    Barreto, Guilherme A; Souza, Luís Gustavo M

    2006-01-01

    In this paper we provide an in-depth evaluation of the SOM as a feasible tool for nonlinear adaptive filtering. A comprehensive survey of existing SOM-based and related architectures for learning input-output mappings is carried out and the application of these architectures to nonlinear adaptive filtering is formulated. Then, we introduce two simple procedures for building RBF-based nonlinear filters using the Vector-Quantized Temporal Associative Memory (VQTAM), a recently proposed method for learning dynamical input-output mappings using the SOM. The aforementioned SOM-based adaptive filters are compared with standard FIR/LMS and FIR/LMS-Newton linear transversal filters, as well as with powerful MLP-based filters in nonlinear channel equalization and inverse modeling tasks. The obtained results in both tasks indicate that SOM-based filters can consistently outperform powerful MLP-based ones.
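    The FIR/LMS baseline mentioned above is a few lines of code; here is a generic LMS sketch identifying a known 3-tap channel (hypothetical coefficients, noise-free), rather than any of the paper's equalization setups:

    ```python
    import random

    random.seed(0)
    h = [0.5, 0.3, 0.2]                       # "unknown" channel to identify
    x = [random.gauss(0.0, 1.0) for _ in range(3000)]
    d = [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
         for n in range(len(x))]              # desired signal: channel output

    w = [0.0] * len(h)                        # adaptive FIR taps
    mu = 0.05                                 # LMS step size
    for n in range(len(h) - 1, len(x)):
        u = [x[n - k] for k in range(len(h))] # current input vector
        y = sum(wk * uk for wk, uk in zip(w, u))
        e = d[n] - y                          # a-priori error
        w = [wk + mu * e * uk for wk, uk in zip(w, u)]
    ```

    In the noise-free case the taps converge to the channel coefficients; the SOM- and MLP-based filters in the paper address the nonlinear channels this linear update cannot model.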

  2. Statistical methods for detecting and comparing periodic data and their application to the nycthemeral rhythm of bodily harm: A population based study

    PubMed Central

    2010-01-01

    Background Animals, including humans, exhibit a variety of biological rhythms. This article describes a method for the detection and simultaneous comparison of multiple nycthemeral rhythms. Methods A statistical method for detecting periodic patterns in time-related data via harmonic regression is described. The method is particularly capable of detecting nycthemeral rhythms in medical data. Additionally a method for simultaneously comparing two or more periodic patterns is described, which derives from the analysis of variance (ANOVA). This method statistically confirms or rejects equality of periodic patterns. Mathematical descriptions of the detecting method and the comparing method are displayed. Results Nycthemeral rhythms of incidents of bodily harm in Middle Franconia are analyzed in order to demonstrate both methods. Every day of the week showed a significant nycthemeral rhythm of bodily harm. These seven patterns of the week were compared to each other revealing only two different nycthemeral rhythms, one for Friday and Saturday and one for the other weekdays. PMID:21059197
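    The harmonic-regression idea above, fitting sine and cosine terms of a 24 h period and reading off the rhythm's mean level, amplitude, and peak time, can be sketched as follows. The data, period, and single-harmonic model are illustrative assumptions; the paper's method also supports higher harmonics and an ANOVA-style comparison of several fitted patterns:

    ```python
    import numpy as np

    # Sketch of harmonic regression for a 24 h (nycthemeral) rhythm with a
    # single first-order harmonic. Simulated hourly counts over one week.
    rng = np.random.default_rng(0)
    t = np.arange(0, 24 * 7, 1.0)
    y = 5 + 2 * np.cos(2 * np.pi * (t - 20) / 24) + rng.normal(0, 0.3, t.size)

    # Design matrix: intercept + cos + sin terms of the 24 h period.
    X = np.column_stack([np.ones_like(t),
                         np.cos(2 * np.pi * t / 24),
                         np.sin(2 * np.pi * t / 24)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)

    mesor = beta[0]                             # rhythm-adjusted mean level
    amplitude = np.hypot(beta[1], beta[2])      # strength of the rhythm
    acrophase = np.arctan2(beta[2], beta[1]) * 24 / (2 * np.pi) % 24  # peak hour
    print(round(mesor, 2), round(amplitude, 2), round(acrophase, 1))
    # recovers roughly mesor 5, amplitude 2, and a peak near hour 20
    ```

    A significant rhythm corresponds to the cos/sin coefficients being jointly nonzero, which is what the paper's detection test formalizes.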

  3. A comparison of confidence/credible interval methods for the area under the ROC curve for continuous diagnostic tests with small sample size.

    PubMed

    Feng, Dai; Cortese, Giuliana; Baumgartner, Richard

    2017-12-01

    The receiver operating characteristic (ROC) curve is frequently used as a measure of accuracy of continuous markers in diagnostic tests. The area under the ROC curve (AUC) is arguably the most widely used summary index for the ROC curve. Although the small sample size scenario is common in medical tests, a comprehensive study of small sample size properties of various methods for the construction of the confidence/credible interval (CI) for the AUC has been by and large missing in the literature. In this paper, we describe and compare 29 non-parametric and parametric methods for the construction of the CI for the AUC when the number of available observations is small. The methods considered include not only those that have been widely adopted, but also those that have been less frequently mentioned or, to our knowledge, never applied to the AUC context. To compare different methods, we carried out a simulation study with data generated from binormal models with equal and unequal variances and from exponential models with various parameters and with equal and unequal small sample sizes. We found that the larger the true AUC value and the smaller the sample size, the larger the discrepancy among the results of different approaches. When the model is correctly specified, the parametric approaches tend to outperform the non-parametric ones. Moreover, in the non-parametric domain, we found that a method based on the Mann-Whitney statistic is in general superior to the others. We further elucidate potential issues and provide possible solutions, along with general guidance on CI construction for the AUC when the sample size is small. Finally, we illustrate the utility of different methods through real life examples.
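    The Mann-Whitney connection mentioned above can be made concrete: the non-parametric AUC estimate equals the fraction of (case, control) pairs in which the case's marker value is higher, with ties counting one half. The marker values below are invented for illustration:

    ```python
    import itertools

    # Non-parametric AUC as the normalized Mann-Whitney U statistic:
    # fraction of (case, control) pairs where the case scores higher,
    # with ties counted as one half. Toy marker values, made up.
    def auc_mann_whitney(cases, controls):
        wins = 0.0
        for c, h in itertools.product(cases, controls):
            if c > h:
                wins += 1.0
            elif c == h:
                wins += 0.5
        return wins / (len(cases) * len(controls))

    cases = [2.1, 3.4, 2.8, 4.0]     # marker values, diseased group
    controls = [1.0, 2.1, 1.7, 2.5]  # marker values, healthy group
    print(auc_mann_whitney(cases, controls))  # 0.90625
    ```

    With only four observations per group the estimate is coarse (steps of 1/32), which is exactly why small-sample CI construction for the AUC is delicate.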

  4. MRI-determined liver proton density fat fraction, with MRS validation: Comparison of regions of interest sampling methods in patients with type 2 diabetes.

    PubMed

    Vu, Kim-Nhien; Gilbert, Guillaume; Chalut, Marianne; Chagnon, Miguel; Chartrand, Gabriel; Tang, An

    2016-05-01

    To assess the agreement between published magnetic resonance imaging (MRI)-based regions of interest (ROI) sampling methods using liver mean proton density fat fraction (PDFF) as the reference standard. This retrospective, internal review board-approved study was conducted in 35 patients with type 2 diabetes. Liver PDFF was measured by magnetic resonance spectroscopy (MRS) using a stimulated-echo acquisition mode sequence and MRI using a multiecho spoiled gradient-recalled echo sequence at 3.0T. ROI sampling methods reported in the literature were reproduced and liver mean PDFF obtained by whole-liver segmentation was used as the reference standard. Intraclass correlation coefficients (ICCs), Bland-Altman analysis, repeated-measures analysis of variance (ANOVA), and paired t-tests were performed. The ICC between MRS and MRI-PDFF was 0.916. Bland-Altman analysis showed excellent intermethod agreement with a bias of -1.5 ± 2.8%. The repeated-measures ANOVA found no systematic variation of PDFF among the nine liver segments. The correlation between liver mean PDFF and ROI sampling methods was very good to excellent (0.873 to 0.975). Paired t-tests revealed significant differences (P < 0.05) with ROI sampling methods that exclusively or predominantly sampled the right lobe. Significant correlations with mean PDFF were found with sampling methods that included a higher number of segments, a total area equal to or larger than 5 cm², or sampled both lobes (P = 0.001, 0.023, and 0.002, respectively). MRI-PDFF quantification methods should sample each liver segment in both lobes and include a total surface area equal to or larger than 5 cm² to provide a close estimate of the liver mean PDFF. © 2015 Wiley Periodicals, Inc.

  5. Advancing Research on Racial–Ethnic Health Disparities: Improving Measurement Equivalence in Studies with Diverse Samples

    PubMed Central

    Landrine, Hope; Corral, Irma

    2014-01-01

    To conduct meaningful, epidemiologic research on racial–ethnic health disparities, racial–ethnic samples must be rendered equivalent on other social status and contextual variables via statistical controls of those extraneous factors. The racial–ethnic groups must also be equally familiar with and have similar responses to the methods and measures used to collect health data, must have equal opportunity to participate in the research, and must be equally representative of their respective populations. In the absence of such measurement equivalence, studies of racial–ethnic health disparities are confounded by a plethora of unmeasured, uncontrolled correlates of race–ethnicity. Those correlates render the samples, methods, and measures incomparable across racial–ethnic groups, and diminish the ability to attribute health differences discovered to race–ethnicity vs. to its correlates. This paper reviews the non-equivalent yet normative samples, methodologies and measures used in epidemiologic studies of racial–ethnic health disparities, and provides concrete suggestions for improving sample, method, and scalar measurement equivalence. PMID:25566524

  6. Thermo-electric modular structure and method of making same

    DOEpatents

    Freedman, N.S.; Horsting, C.W.; Lawrence, W.F.; Carrona, J.J.

    1974-01-29

    A method is presented for making a thermoelectric module with the aid of an insulating wafer having opposite metallized surfaces, a pair of similar equalizing sheets of metal, a hot-junction strap of metal, a thermoelectric element having hot- and cold-junction surfaces, and a radiator sheet of metal. The method comprises the following steps: brazing said equalizing sheets to said opposite metallized surfaces, respectively, of said insulating wafer with pure copper in a non-oxidizing ambient; brazing one surface of said hot-junction strap to one of the surfaces of said equalizing sheet with a nickel-gold alloy in a non-oxidizing ambient; and diffusion bonding said hot-junction surface of said thermoelectric element to the other surface of said hot-junction strap and said radiator sheet to said cold-junction surface of said thermoelectric element, said diffusion bonding being carried out in a non-oxidizing ambient, under compressive loading, at a temperature of about 550 °C, and for about one-half hour. (Official Gazette)

  7. Application of Response Surface Methods To Determine Conditions for Optimal Genomic Prediction

    PubMed Central

    Howard, Réka; Carriquiry, Alicia L.; Beavis, William D.

    2017-01-01

    An epistatic genetic architecture can have a significant impact on prediction accuracies of genomic prediction (GP) methods. Machine learning methods predict traits comprised of epistatic genetic architectures more accurately than statistical methods based on additive mixed linear models. The differences between these types of GP methods suggest a diagnostic for revealing genetic architectures underlying traits of interest. In addition to genetic architecture, the performance of GP methods may be influenced by the sample size of the training population, the number of QTL, and the proportion of phenotypic variability due to genotypic variability (heritability). The possible values for these factors and the number of combinations of factor levels that influence the performance of GP methods can be large. Thus, efficient methods for identifying combinations of factor levels that produce the most accurate GPs are needed. Herein, we employ response surface methods (RSMs) to find the experimental conditions that produce the most accurate GPs. We illustrate RSM with an example of simulated doubled haploid populations and identify the combination of factors that maximizes the difference between prediction accuracies of best linear unbiased prediction (BLUP) and support vector machine (SVM) GP methods. The greatest impact on the response is due to the genetic architecture of the population, the heritability of the trait, and the sample size. When epistasis is responsible for all of the genotypic variance, heritability is equal to one, and the sample size of the training population is large, the advantage of using the SVM method over the BLUP method is greatest. However, except for values close to the maximum, most of the response surface shows little difference between the methods. We also determined that the conditions resulting in the greatest prediction accuracy for BLUP occurred when the genetic architecture consists solely of additive effects and heritability is equal to one. PMID:28720710

  8. Multimodal biometric method that combines veins, prints, and shape of a finger

    NASA Astrophysics Data System (ADS)

    Kang, Byung Jun; Park, Kang Ryoung; Yoo, Jang-Hee; Kim, Jeong Nyeo

    2011-01-01

    Multimodal biometrics provides high recognition accuracy and population coverage by using various biometric features. A single finger contains finger veins, fingerprints, and finger geometry features; by using multimodal biometrics, information on these multiple features can be simultaneously obtained in a short time and their fusion can outperform the use of a single feature. This paper proposes a new finger recognition method based on the score-level fusion of finger veins, fingerprints, and finger geometry features. This research is novel in the following four ways. First, the performances of the finger-vein and fingerprint recognition are improved by using a method based on a local derivative pattern. Second, the accuracy of the finger geometry recognition is greatly increased by combining a Fourier descriptor with principal component analysis. Third, a fuzzy score normalization method is introduced; its performance is better than the conventional Z-score normalization method. Fourth, finger-vein, fingerprint, and finger geometry recognitions are combined by using three support vector machines and a weighted SUM rule. Experimental results showed that the equal error rate of the proposed method was 0.254%, which was lower than those of the other methods.
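    A conventional score-level fusion baseline of the kind this record compares against, Z-score normalization followed by a weighted SUM rule, can be sketched as follows. The scores and weights are made up; the paper's fuzzy normalization and SVM-based combination are not reproduced here:

    ```python
    import numpy as np

    # Score-level fusion sketch: Z-score normalize each matcher's scores,
    # then combine with a weighted SUM rule. All numbers are invented.
    def zscore(s):
        s = np.asarray(s, dtype=float)
        return (s - s.mean()) / s.std()

    vein   = [0.62, 0.80, 0.55, 0.91]   # per-probe finger-vein matcher scores
    prints = [0.40, 0.75, 0.30, 0.88]   # fingerprint matcher scores
    shape  = [0.58, 0.60, 0.52, 0.70]   # finger-geometry matcher scores
    weights = [0.5, 0.3, 0.2]           # hypothetical modality weights

    fused = (weights[0] * zscore(vein) +
             weights[1] * zscore(prints) +
             weights[2] * zscore(shape))
    print(int(fused.argmax()))  # index of the best-matching probe: 3
    ```

    Because Z-scoring is a monotonic transform per modality, the probe that scores highest in every modality also wins after fusion; the interesting cases are the disagreements, which the weights arbitrate.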

  9. Maximizing the Spread of Influence via Generalized Degree Discount.

    PubMed

    Wang, Xiaojie; Zhang, Xue; Zhao, Chengli; Yi, Dongyun

    2016-01-01

    It is a crucial and fundamental issue to identify a small subset of influential spreaders that can control the spreading process in networks. In previous studies, a degree-based heuristic called DegreeDiscount has been shown to effectively identify multiple influential spreaders and has served as a benchmark method. However, the basic assumption of DegreeDiscount is not adequate, because it treats all the nodes equally, without any differences. To handle the general situation found in real-world networks, a novel heuristic method named GeneralizedDegreeDiscount is proposed in this paper as an effective extension of the original method. In our method, the status of a node is defined as its probability of not being influenced by any of its neighbors, and an index, the generalized discounted degree of a node, is presented to measure the expected number of nodes it can influence. The spreaders are then selected sequentially according to their generalized discounted degrees in the current network. Empirical experiments are conducted on four real networks, and the results show that the spreaders identified by our approach are more influential than those found by several benchmark methods. Finally, we analyze the relationship between our method and three common degree-based methods.
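    The original DegreeDiscount heuristic that this work extends can be sketched as follows, using Chen et al.'s discount rule with propagation probability p on a hypothetical toy graph; the paper's generalized, probability-based index is not reproduced here:

    ```python
    # Sketch of the DegreeDiscount benchmark heuristic (Chen et al.): pick the
    # node with the highest discounted degree, then discount its neighbors'
    # degrees to account for already-selected seeds. Graph and p are made up.
    def degree_discount(adj, k, p=0.1):
        deg = {v: len(nbrs) for v, nbrs in adj.items()}
        dd = dict(deg)                       # discounted degree per node
        t = {v: 0 for v in adj}              # selected neighbors seen so far
        seeds = []
        for _ in range(k):
            u = max((v for v in adj if v not in seeds), key=lambda v: dd[v])
            seeds.append(u)
            for w in adj[u]:
                if w not in seeds:
                    t[w] += 1
                    dd[w] = deg[w] - 2 * t[w] - (deg[w] - t[w]) * t[w] * p
        return seeds

    # Toy graph: two hubs (0 and 5) joined through a bridge node 4.
    adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0, 5],
           5: [4, 6, 7, 8], 6: [5], 7: [5], 8: [5]}
    print(degree_discount(adj, 2))  # picks the two hubs: [0, 5]
    ```

    The discount step is what distinguishes this from plain highest-degree selection: after seeding one hub, its neighbors become less attractive, steering the second pick to the other hub.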

  10. Maximizing the Spread of Influence via Generalized Degree Discount

    PubMed Central

    Wang, Xiaojie; Zhang, Xue; Zhao, Chengli; Yi, Dongyun

    2016-01-01

    It is a crucial and fundamental issue to identify a small subset of influential spreaders that can control the spreading process in networks. In previous studies, a degree-based heuristic called DegreeDiscount has been shown to effectively identify multiple influential spreaders and has served as a benchmark method. However, the basic assumption of DegreeDiscount is not adequate, because it treats all the nodes equally, without any differences. To handle the general situation found in real-world networks, a novel heuristic method named GeneralizedDegreeDiscount is proposed in this paper as an effective extension of the original method. In our method, the status of a node is defined as its probability of not being influenced by any of its neighbors, and an index, the generalized discounted degree of a node, is presented to measure the expected number of nodes it can influence. The spreaders are then selected sequentially according to their generalized discounted degrees in the current network. Empirical experiments are conducted on four real networks, and the results show that the spreaders identified by our approach are more influential than those found by several benchmark methods. Finally, we analyze the relationship between our method and three common degree-based methods. PMID:27732681

  11. Vertical-cavity surface-emitting lasers come of age

    NASA Astrophysics Data System (ADS)

    Morgan, Robert A.; Lehman, John A.; Hibbs-Brenner, Mary K.

    1996-04-01

    This manuscript reviews our efforts in demonstrating state-of-the-art planar, batch-fabricable, high-performance vertical-cavity surface-emitting lasers (VCSELs). All performance requirements for short-haul data communication applications are clearly established. We concentrate on the flexibility of the established proton-implanted AlGaAs-based (emitting near 850 nm) technology platform, focusing on a standard device design. This structure is shown to meet or exceed performance and producibility requirements. These include > 99% device yield across 3-in-dia. metal-organic vapor phase epitaxy (MOVPE)-grown wafers and wavelength operation across a > 100-nm range. Recent progress in device performance [low threshold voltage (Vth = 1.53 V), threshold current (Ith = 0.68 mA), continuous-wave (CW) power (Pcw = 59 mW), maximum and minimum CW lasing temperatures (T = 200 °C and 10 K), and wall-plug efficiency (ηwp = 28%)] should enable great advances in VCSEL-based technologies. We also discuss the viability of VCSELs in cryogenic and avionic/military environments. Also reviewed is a novel technique, modifying this established platform, to engineer low-threshold, high-speed, single-mode VCSELs.

  12. A Social Recognition Approach to Autonomy: The Role of Equality-Based Respect.

    PubMed

    Renger, Daniela; Renger, Sophus; Miché, Marcel; Simon, Bernd

    2017-04-01

    Inspired by philosophical reasoning about the connection between equality and freedom, we examined whether experiences of (equality-based) respect increase perceived autonomy. This link was tested with generalized experiences of respect and autonomy people make in their daily lives (Study 1) and with more specific experiences of employees at the workplace (Study 2). In both studies, respect strongly and independently contributed to perceived autonomy over and above other forms of social recognition (need-based care and achievement-based social esteem) and further affected (life/work) satisfaction. Study 3 experimentally confirmed the hypothesized causal influence of respect on perceived autonomy and demonstrated that this effect further translates into social cooperation. The respect-cooperation link was simultaneously mediated by perceived autonomy and superordinate collective identification. We discuss how the recognition approach, which differentiates between respect, care, and social esteem, can enrich research on autonomy.

  13. Robust Skull-Stripping Segmentation Based on Irrational Mask for Magnetic Resonance Brain Images.

    PubMed

    Moldovanu, Simona; Moraru, Luminița; Biswas, Anjan

    2015-12-01

    This paper proposes a new method for simple, efficient, and robust removal of the non-brain tissues in MR images based on an irrational mask for filtration within a binary morphological operation framework. The proposed skull-stripping segmentation is based on two irrational 3 × 3 and 5 × 5 masks whose weights sum to the transcendental number π, as given by the Gregory-Leibniz infinite series. This keeps the rate of useful-pixel loss low. The proposed method has been tested in two ways. First, it has been validated as a binary method by comparing and contrasting with Otsu's, Sauvola's, Niblack's, and Bernsen's binary methods. Secondly, its accuracy has been verified against three state-of-the-art skull-stripping methods: the graph cuts method, the method based on the Chan-Vese active contour model, and the simplex mesh and histogram analysis skull stripping. The performance of the proposed method has been assessed using the Dice scores, overlap and extra fractions, and sensitivity and specificity as statistical methods. The gold standard has been provided by two expert neurologists. The proposed method has been tested and validated on 26 image series which contain 216 images from two publicly available databases: the Whole Brain Atlas and the Internet Brain Segmentation Repository, which include a highly variable sample population (with reference to age, sex, healthy/diseased). The approach performs accurately on both standardized databases. The main advantages of the proposed method are its robustness and speed.
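    The mask's defining property, weights summing to π as given by the Gregory-Leibniz series π = 4(1 − 1/3 + 1/5 − 1/7 + …), can be illustrated as follows. The uniform 3 × 3 weights are a hypothetical stand-in; the paper's actual mask weights are not reproduced:

    ```python
    import math

    # Partial sum of the Gregory-Leibniz series for pi, and a hypothetical
    # 3x3 mask normalized so its nine weights add up to that sum.
    def gregory_leibniz_pi(n_terms):
        return 4.0 * sum((-1) ** k / (2 * k + 1) for k in range(n_terms))

    pi_approx = gregory_leibniz_pi(1_000_000)        # error below 2e-6
    mask = [[pi_approx / 9.0] * 3 for _ in range(3)]  # uniform stand-in weights
    total = sum(sum(row) for row in mask)
    print(abs(total - math.pi) < 1e-5)  # True: the weights sum to (nearly) pi
    ```

    The series converges slowly (error on the order of 1/N after N terms), which is why a large number of terms is needed even for modest precision.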

  14. European validation of Real-Time PCR method for detection of Salmonella spp. in pork meat.

    PubMed

    Delibato, Elisabetta; Rodriguez-Lazaro, David; Gianfranceschi, Monica; De Cesare, Alessandra; Comin, Damiano; Gattuso, Antonietta; Hernandez, Marta; Sonnessa, Michele; Pasquali, Frédérique; Sreter-Lancz, Zuzsanna; Saiz-Abajo, María-José; Pérez-De-Juan, Javier; Butrón, Javier; Prukner-Radovcic, Estella; Horvatek Tomic, Danijela; Johannessen, Gro S; Jakočiūnė, Džiuginta; Olsen, John E; Chemaly, Marianne; Le Gall, Francoise; González-García, Patricia; Lettini, Antonia Anna; Lukac, Maja; Quesne, Segolénè; Zampieron, Claudia; De Santis, Paola; Lovari, Sarah; Bertasi, Barbara; Pavoni, Enrico; Proroga, Yolande T R; Capuano, Federico; Manfreda, Gerardo; De Medici, Dario

    2014-08-01

    The classical microbiological method for detection of Salmonella spp. requires more than five days for final confirmation, and consequently there is a need for an alternative methodology for detection of this pathogen, particularly in those food categories with a short shelf-life. This study presents an international (European-level) ISO 16140-based validation study of a non-proprietary Real-Time PCR-based method that can generate final results the day following sample analysis. It is based on an ISO-compatible enrichment coupled to an easy and inexpensive DNA extraction and a consolidated Real-Time PCR assay. Thirteen laboratories from seven European countries participated in this trial, and pork meat was selected as the food model. The limit of detection observed was down to 10 CFU per 25 g of sample, showing excellent concordance and accordance values between samples and laboratories (100%). In addition, excellent values were obtained for relative accuracy, specificity and sensitivity (100%) when the results obtained for the Real-Time PCR-based method were compared to those of the ISO 6579:2002 standard method. The results of this international trial demonstrate that the evaluated Real-Time PCR-based method represents an excellent alternative to the ISO standard. In fact, it performs equally well and robustly, dramatically shortens the analytical process, and can easily be implemented routinely by Competent Authorities and Food Industry laboratories. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. Validated stability-indicating spectrophotometric methods for the determination of Silodosin in the presence of its degradation products.

    PubMed

    Boltia, Shereen A; Abdelkawy, Mohammed; Mohammed, Taghreed A; Mostafa, Nahla N

    2018-09-05

    Five simple, rapid, accurate, and precise spectrophotometric methods are developed for the determination of Silodosin (SLD) in the presence of its acid-induced and oxidative-induced degradation products. Method A is based on the dual wavelength (DW) method; two wavelengths are selected at which the absorbance of the oxidative-induced degradation product is the same, so wavelengths 352 and 377 nm are used to determine SLD in the presence of its oxidative-induced degradation product. Method B depends on induced dual wavelength (IDW) theory, which is based on selecting two wavelengths on the zero-order spectrum of SLD where the difference in absorbance between them for the spectrum of the acid-induced degradation products is not equal to zero; by multiplying by the equality factor, the absorbance difference is made zero for the acid-induced degradation product while it remains significant for SLD. Method C is first derivative (¹D) spectrophotometry of SLD and its degradation products; peak amplitudes are measured at 317 and 357 nm. Method D is ratio difference spectrophotometry (RD), where the drug is determined by the difference in amplitude between two selected wavelengths: at 350 and 277 nm for the ratio spectrum of SLD and its acid-induced degradation products, while for the ratio spectrum of SLD and its oxidative-induced degradation products the difference in amplitude is measured at 345 and 292 nm. Method E depends on measuring peak amplitudes of the first derivative of the ratio (¹DD), where peak amplitudes are measured at 330 nm in the presence of the acid-induced degradation product and by the peak-to-peak technique at 326 and 369 nm in the presence of the oxidative-induced degradation product. The proposed methods are validated according to ICH recommendations. The calibration curves for all the proposed methods are linear over a concentration range of 5-70 μg/mL. The selectivity of the proposed methods was tested using different laboratory-prepared mixtures of SLD with either its acid-induced or oxidative-induced degradation products, showing specificity of SLD with accepted recovery values. The proposed methods have been successfully applied to the analysis of SLD in pharmaceutical dosage forms without interference from additives. Copyright © 2018 Elsevier B.V. All rights reserved.
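    The induced dual wavelength (IDW) idea in Method B can be illustrated numerically: choose two wavelengths, compute an equality factor F from the degradant's spectrum so that its contribution cancels, and read the drug signal as the scaled absorbance difference. All absorbance values below are invented for illustration:

    ```python
    # IDW sketch: the equality factor F is chosen from the degradant's own
    # spectrum so that A_deg(l1) - F * A_deg(l2) = 0; applied to a mixture,
    # the degradant cancels and only the drug signal remains. Toy numbers.
    A_deg = {"l1": 0.30, "l2": 0.20}     # degradant absorbances at l1, l2
    F = A_deg["l1"] / A_deg["l2"]        # equality factor, here 1.5

    A_mix = {"l1": 0.75, "l2": 0.40}     # mixture of drug + degradant
    signal = A_mix["l1"] - F * A_mix["l2"]   # degradant cancels by construction
    print(round(signal, 3))  # 0.15, attributable to the drug alone
    ```

    The cancellation holds at any degradant concentration c, since c·A_deg(l1) − F·c·A_deg(l2) = 0 by the choice of F.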

  16. A Local DCT-II Feature Extraction Approach for Personal Identification Based on Palmprint

    NASA Astrophysics Data System (ADS)

    Choge, H. Kipsang; Oyama, Tadahiro; Karungaru, Stephen; Tsuge, Satoru; Fukumi, Minoru

    Biometric applications based on the palmprint have recently attracted increased attention from various researchers. In this paper, a method is presented that differs from the commonly used global statistical and structural techniques by extracting and using local features instead. The middle palm area is extracted after preprocessing for rotation, position and illumination normalization. The segmented region of interest is then divided into blocks of either 8×8 or 16×16 pixels in size. The type-II Discrete Cosine Transform (DCT) is applied to transform the blocks into DCT space. A subset of coefficients that encode the low to medium frequency components is selected using the JPEG-style zigzag scanning method. Features from each block are subsequently concatenated into a compact feature vector and used in palmprint verification experiments with palmprints from the PolyU Palmprint Database. Results indicate that this approach outperforms many conventional transform-based methods, with an excellent recognition accuracy above 99% and an Equal Error Rate (EER) of less than 1.2% in palmprint verification.
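    The per-block pipeline described above, a 2-D DCT-II followed by a JPEG-style zigzag selection of low-to-medium-frequency coefficients, can be sketched as follows. The block contents and the number of retained coefficients are illustrative assumptions:

    ```python
    import numpy as np

    # Per-block feature extraction sketch: 2-D DCT-II of an 8x8 block, then a
    # JPEG-style zigzag scan keeping the first few coefficients. Toy data.
    def dct_matrix(n):
        # Orthonormal DCT-II basis: C[k, i] = a_k * cos(pi * (2i + 1) * k / (2n))
        c = np.array([[np.cos(np.pi * (2 * i + 1) * k / (2 * n))
                       for i in range(n)] for k in range(n)])
        c[0] *= np.sqrt(1 / n)
        c[1:] *= np.sqrt(2 / n)
        return c

    def zigzag_indices(n):
        # Traverse anti-diagonals, alternating direction, as in JPEG.
        return sorted(((i, j) for i in range(n) for j in range(n)),
                      key=lambda ij: (ij[0] + ij[1],
                                      ij[0] if (ij[0] + ij[1]) % 2 else ij[1]))

    def block_features(block, n_coeffs=10):
        C = dct_matrix(block.shape[0])
        coeffs = C @ block @ C.T                 # 2-D DCT-II
        order = zigzag_indices(block.shape[0])
        return np.array([coeffs[i, j] for i, j in order[:n_coeffs]])

    block = np.arange(64, dtype=float).reshape(8, 8)   # stand-in for palm pixels
    feat = block_features(block)
    print(feat.shape)  # (10,) -- one compact feature vector per block
    ```

    The per-block vectors would then be concatenated across the region of interest to form the final palmprint descriptor, as the record describes.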

  17. Risk Assessment and Hierarchical Risk Management of Enterprises in Chemical Industrial Parks Based on Catastrophe Theory

    PubMed Central

    Chen, Yu; Song, Guobao; Yang, Fenglin; Zhang, Shushen; Zhang, Yun; Liu, Zhenyu

    2012-01-01

    According to risk systems theory and the characteristics of the chemical industry, an index system was established for risk assessment of enterprises in chemical industrial parks (CIPs) based on the inherent risk of the source, effectiveness of the prevention and control mechanism, and vulnerability of the receptor. A comprehensive risk assessment method based on catastrophe theory was then proposed and used to analyze the risk levels of ten major chemical enterprises in the Songmu Island CIP, China. According to the principle of equal distribution function, the chemical enterprise risk level was divided into the following five levels: 1.0 (very safe), 0.8 (safe), 0.6 (generally recognized as safe, GRAS), 0.4 (unsafe), 0.2 (very unsafe). The results revealed five enterprises (50%) with an unsafe risk level, and another five enterprises (50%) at the generally recognized as safe risk level. This method solves the multi-objective evaluation and decision-making problem. Additionally, this method involves simple calculations and provides an effective technique for risk assessment and hierarchical risk management of enterprises in CIPs. PMID:23208298

  18. An upstream burst-mode equalization scheme for 40 Gb/s TWDM PON based on optimized SOA cascade

    NASA Astrophysics Data System (ADS)

    Sun, Xiao; Chang, Qingjiang; Gao, Zhensen; Ye, Chenhui; Xiao, Simiao; Huang, Xiaoan; Hu, Xiaofeng; Zhang, Kaibin

    2016-02-01

    We present a novel upstream burst-mode equalization scheme based on an optimized SOA cascade for 40 Gb/s TWDM PON. The power equalizer is placed at the OLT and consists of two SOAs, two circulators, an optical NOT gate, and a variable optical attenuator. The first SOA operates in the linear region and acts as a pre-amplifier to drive the second SOA into the saturation region. The upstream burst signals are equalized through the second SOA via nonlinear amplification. Theoretical analysis shows that this scheme provides dynamic range suppression of up to 16.7 dB without any dynamic control or signal degradation. In addition, a total power budget extension of 9.3 dB for loud packets and 26 dB for soft packets has been achieved, allowing longer transmission distance and an increased splitting ratio.

  19. A Summary Score for the Framingham Heart Study Neuropsychological Battery.

    PubMed

    Downer, Brian; Fardo, David W; Schmitt, Frederick A

    2015-10-01

    To calculate three summary scores of the Framingham Heart Study neuropsychological battery and determine which score best differentiates between subjects classified as having normal cognition, test-based impaired learning and memory, test-based multidomain impairment, and dementia. The final sample included 2,503 participants. Three summary scores were assessed: (a) composite score that provided equal weight to each subtest, (b) composite score that provided equal weight to each cognitive domain assessed by the neuropsychological battery, and (c) abbreviated score comprised of subtests for learning and memory. Receiver operating characteristic analysis was used to determine which summary score best differentiated between the four cognitive states. The summary score that provided equal weight to each subtest best differentiated between the four cognitive states. A summary score that provides equal weight to each subtest is an efficient way to utilize all of the cognitive data collected by a neuropsychological battery. © The Author(s) 2015.

  20. Active magnetic refrigerants based on Gd-Si-Ge material and refrigeration apparatus and process

    DOEpatents

    Gschneidner, Jr., Karl A.; Pecharsky, Vitalij K.

    1998-04-28

    Active magnetic regenerator and method using Gd5(SixGe1-x)4, where x is equal to or less than 0.5, as a magnetic refrigerant that exhibits a reversible ferromagnetic/antiferromagnetic or ferromagnetic-II/ferromagnetic-I first order phase transition and extraordinary magneto-thermal properties, such as a giant magnetocaloric effect, that render the refrigerant more efficient and useful than existing magnetic refrigerants for commercialization of magnetic regenerators. The reversible first order phase transition is tunable from approximately 30 K to approximately 290 K (near room temperature) and above by compositional adjustments. The active magnetic regenerator and method can function for refrigerating, air conditioning, and liquefying low temperature cryogens with significantly improved efficiency and operating temperature range from approximately 10 K to 300 K and above. Also an active magnetic regenerator and method using Gd5(SixGe1-x)4, where x is equal to or greater than 0.5, as a magnetic heater/refrigerant that exhibits a reversible ferromagnetic/paramagnetic second order phase transition with large magneto-thermal properties, such as a large magnetocaloric effect that permits the commercialization of a magnetic heat pump and/or refrigerant. This second order phase transition is tunable from approximately 280 K (near room temperature) to approximately 350 K by composition adjustments. The active magnetic regenerator and method can function for low-level heating for climate control of buildings, homes, and automobiles, and for chemical processing.
